Lambert W function
From Wikipedia, the free encyclopedia

[Figure: the graph of W(x) for W > −4 and x < 6. The upper branch with W ≥ −1 is the principal branch W0; the lower branch with W ≤ −1 is W−1.]

In mathematics, the Lambert W function, also called the omega function or product logarithm, is a set of functions, namely the branches of the inverse relation of the function f(z) = ze^z, where e^z is the exponential function and z is any complex number. In other words, w = W(z) holds precisely when z = we^w. Substituting w = W(z) into that relation gives the defining equation for the W function (and for the W relation in general): W(z)e^(W(z)) = z, for any complex number z.

Since the function f is not injective, the relation W is multivalued (except at 0). If we restrict attention to real-valued W, the complex variable z is replaced by the real variable x; the relation is then defined only for x ≥ −1/e, and it is double-valued on (−1/e, 0). The additional constraint W ≥ −1 defines a single-valued function W0(x), the principal branch, with W0(0) = 0 and W0(−1/e) = −1. The lower branch has W ≤ −1, is denoted W−1(x), and decreases from W−1(−1/e) = −1 to W−1(0⁻) = −∞.

The Lambert W relation cannot be expressed in terms of elementary functions.[1] It is useful in combinatorics, for instance in the enumeration of trees. It can be used to solve various equations involving exponentials (e.g. the maxima of the Planck, Bose–Einstein, and Fermi–Dirac distributions) and also occurs in the solution of delay differential equations, such as y′(t) = a y(t − 1). In biochemistry, and in particular enzyme kinetics, a closed-form solution for the time-course analysis of Michaelis–Menten kinetics is described in terms of the Lambert W function.

[Figure: the principal branch of the Lambert W function in the complex plane, with a branch cut along the negative real axis ending at −1/e; the hue of a point z encodes the argument of W(z) and the brightness its absolute value. A companion plot shows the modulus of the principal branch colored according to arg(W(z)), and a plot on the real line shows the two main branches W0 and W−1.]

The Lambert W function is named after Johann Heinrich Lambert. The main branch W0 is denoted Wp in the Digital Library of Mathematical Functions, and the branch W−1 is denoted Wm there. The notation convention chosen here (with W0 and W−1) follows the canonical reference on the Lambert W function by Corless, Gonnet, Hare, Jeffrey and Knuth.[2] Lambert first considered the related Lambert's transcendental equation in 1758,[3] which led to a paper by Leonhard Euler in 1783[4] that discussed the special case of we^w. The Lambert W function was "re-discovered" every decade or so in specialized applications.[citation needed] In 1993, when it was reported that the Lambert W function provides an exact solution to the quantum-mechanical double-well Dirac delta function model for equal charges, a fundamental problem in physics, Corless and developers of the Maple computer algebra system made a library search and found that this function was ubiquitous in nature.[2][5]

By implicit differentiation, one can show that all branches of W satisfy the differential equation z(1 + W) dW/dz = W for z ≠ −1/e (W is not differentiable at z = −1/e). As a consequence, we get the following formula for the derivative of W: W′(z) = W(z) / (z(1 + W(z))) for z ∉ {0, −1/e}. Using the identity e^(W(z)) = z/W(z), we get the following equivalent formula, which holds for all z ≠ −1/e: W′(z) = 1 / (z + e^(W(z))). The function W(x), and many expressions involving W(x), can be integrated using the substitution w = W(x), i.e.
x = w ew: (The last equation is more common in the literature but does not hold at .) One consequence of which (using the fact that ) is the identity: Asymptotic expansions[edit] The Taylor series of around 0 can be found using the Lagrange inversion theorem and is given by The radius of convergence is 1/e, as may be seen by the ratio test. The function defined by this series can be extended to a holomorphic function defined on all complex numbers with a branch cut along the interval (−∞, −1/e]; this holomorphic function defines the principal branch of the Lambert W function. For large values of x, W0 is asymptotic to where , and is a non-negative Stirling number of the first kind.[6] Keeping only the first two terms of the expansion, The other real branch, , defined in the interval [−1/e, 0), has an approximation of the same form as x approaches zero, with in this case and . In [7] it is shown that the following bound holds for : In [8] it was proven that branch can be bounded as follows: for . Integer and complex powers[edit] Integer powers of also admit simple Taylor (or Laurent) series expansions at More generally, for the Lagrange inversion formula gives which is, in general, a Laurent series of order r. Equivalently, the latter can be written in the form of a Taylor expansion of powers of which holds for any and . A few identities follow from definition: Note that, since f(x) = x⋅ex is not injective, not always W(f(x)) = x. For fixed x < 0 and x ≠ 1 the equation x⋅ex = y⋅ey has two solutions in y, one of which is of course y = x. Then, for i = 0 and x < −1 as well as for i = −1 and x ∈ (−1, 0), Wi(x⋅ex) is the other solution of the equation x⋅ex = y⋅ey. (which can be extended to other n and x if the right branch is chosen) From inverting f(ln(x)): With Euler's iterated exponential h(x): Special values[edit] For any non-zero algebraic number x, W(x) is a transcendental number. Indeed, if W(x) is zero then x must be zero as well, and if W(x) is non-zero and algebraic, then by the Lindemann–Weierstrass theorem, eW(x) must be transcendental, implying that x=W(x)eW(x) must also be transcendental. (the Omega constant) Other formulas[edit] Definite integrals[edit] There are several useful definite integral formulas involving the W function, including the following: The first identity can be found by writing the Gaussian integral in polar coordinates. The second identity can be derived by making the substitution which gives The third identity may be derived from the second by making the substitution and the first can also be derived from the third by the substitution . Except for z along the branch cut (where the integral does not converge), the principal branch of the Lambert W function can be computed by the following integral: where the two integral expressions are equivalent due to the symmetry of the integrand. Indefinite integrals[edit] Many equations involving exponentials can be solved using the W function. The general strategy is to move all instances of the unknown to one side of the equation and make it look like Y = XeX at which point the W function provides the value of the variable in X. In other words : Example 1[edit] More generally, the equation can be transformed via the substitution which yields the final solution Example 2[edit] or, equivalently, by definition. 
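Before continuing with the examples, here is a small numerical check of the Taylor series quoted earlier in this section. The following minimal Python sketch (not part of the original article) sums W0(x) = sum over n ≥ 1 of (−n)^(n−1) x^n / n! at a few sample points inside the radius of convergence 1/e and verifies the defining property w e^w ≈ x; the number of terms and the sample points are arbitrary choices.

import math

def lambert_w0_series(x, terms=40):
    # Partial sum of the Taylor series of the principal branch W0 about 0.
    # Only meaningful for |x| < 1/e, the radius of convergence quoted above.
    return sum((-n) ** (n - 1) / math.factorial(n) * x ** n
               for n in range(1, terms + 1))

for x in (0.1, 0.25, -0.2, 0.3):
    w = lambert_w0_series(x)
    # The defining equation of the W relation: w * exp(w) should give back x.
    print(f"x = {x:+.2f}   series W0(x) = {w:+.8f}   w*e^w = {w * math.exp(w):+.8f}")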
Example 3[edit] taking the n-th root let : then Example 4[edit] Whenever the complex infinite exponential tetration converges, the Lambert W function provides the actual limit value as where ln(z) denotes the principal branch of the complex log function. This can be shown by observing that if c exists, so which is the result which was to be found. Example 5[edit] Solutions for have the form[5] Example 6[edit] The solution for the current in a series diode/resistor circuit can also be written in terms of the Lambert W. See diode modeling. Example 7[edit] The delay differential equation has characteristic equation , leading to and , where is the branch index. If , only need be considered. Example 8[edit] The Lambert-W function has been recently (2013) shown to be the optimal solution for the required magnetic field of a Zeeman slower.[12] Example 9[edit] Granular and debris flow fronts and deposits, and the fronts of viscous fluids in natural events and in the laboratory experiments can be described by using the Lambert–Euler omega function as follows: where H(x) is the debris flow height, x is the channel downstream position, L is the unified model parameter consisting of several physical and geometrical parameters of the flow, flow height and the hydraulic pressure gradient. Example 10[edit] The Lambert-W function was employed in the field of Neuroimaging for linking cerebral blood flow and oxygen consumption changes within a brain voxel, to the corresponding Blood Oxygenation Level Dependent (BOLD) signal.[13] Example 11[edit] The Lambert-W function was employed in the field of Chemical Engineering for modelling the porous electrode film thickness in a glassy carbon based supercapacitor for electrochemical energy storage. The Lambert "W" function turned out to be the exact solution for a gas phase thermal activation process where growth of carbon film and combustion of the same film compete with each other.[14][15] Example 12[edit] The Lambert-W function was employed in the field of epitaxial film growth for the determination of the critical dislocation onset film thickness. This is the calculated thickness of an epitaxial film, where due to thermodynamic principles the film will develop crystallographic dislocations in order to minimise the elastic energy stored in the films. Prior to application of Lambert "W" for this problem, the critical thickness had to be determined via solving an implicit equation. Lambert "W" turns it in an explicit equation for analytical handling with ease.[16] Example 13[edit] The Lambert-W function has been employed in the field of fluid flow in porous media to model the tilt of an interface separating two gravitationally segregated fluids in a homogeneus tilted porous bed of constant dip and thickness where the heavier fluid, injected at the bottom end, displaces the lighter fluid that is produced at the same rate from the top end. The principal branch of the solution corresponds to stable displacements while the -1 branch applies if the displacement is unstable with the heavier fluid running underneath the ligther fluid.[17] Example 14[edit] The equation (linked with the generating functions of Bernoulli numbers and Todd genus): can be solved by means of the two real branches and : This application shows in evidence that the branch difference of the W function can be employed in order to solve other trascendental equations. See : D. J. Jeffrey and J. E. 
Jankowski, "Branch differences and Lambert W" Example 15[edit] The centroid of a set of histograms defined with respect to the symmetrized Kullback-Leibler divergence (also called the Jeffreys divergence) is in closed form using the Lambert function. See : F. Nielsen, "Jeffreys Centroids: A Closed-Form Expression for Positive Histograms and a Guaranteed Tight Approximation for Frequency Histograms" Example 16[edit] The Lambert-W function appears in a quantum-mechanical potential (see The Lambert-W step-potential) which affords the fifth – next to those of the harmonic oscillator plus centrifugal, the Coulomb plus inverse square, the Morse, and the inverse square root potential – exact solution to the stationary one-dimensional Schrödinger equation in terms of the confluent hypergeometric functions. The potential is given as A peculiarity of the solution is that each of the two fundamental solutions that compose the general solution of the Schrödinger equation is given by a combination of two confluent hypergeometric functions of an argument proportional to . See : A.M. Ishkhanyan, "The Lambert W-barrier – an exactly solvable confluent hypergeometric potential" The standard Lambert-W function expresses exact solutions to transcendental algebraic equations (in x) of the form: where a0, c and r are real constants. The solution is . Generalizations of the Lambert W function[18][19][20] include: and where r1 and r2 are real distinct constants, the roots of the quadratic polynomial. Here, the solution is a function has a single argument x but the terms like ri and ao are parameters of that function. In this respect, the generalization resembles the hypergeometric function and the Meijer G-function but it belongs to a different class of functions. When r1 = r2, both sides of (2) can be factored and reduced to (1) and thus the solution reduces to that of the standard W function. Eq. (2) expresses the equation governing the dilaton field, from which is derived the metric of the R=T or lineal two-body gravity problem in 1+1 dimensions (one spatial dimension and one time dimension) for the case of unequal (rest) masses, as well as, the eigenenergies of the quantum-mechanical double-well Dirac delta function model for unequal charges in one dimension. • Analytical solutions of the eigenenergies of a special case of the quantum mechanical three-body problem, namely the (three-dimensional) hydrogen molecule-ion.[22] Here the right-hand-side of (1) (or (2)) is now a ratio of infinite order polynomials in x: where ri and si are distinct real constants and x is a function of the eigenenergy and the internuclear distance R. Eq. (3) with its specialized cases expressed in (1) and (2) is related to a large class of delay differential equations. Hardy's notion of a "false derivative" provides exact multiple roots to special cases of (3).[23] Applications of the Lambert "W" function in fundamental physical problems are not exhausted even for the standard case expressed in (1) as seen recently in the area of atomic, molecular, and optical physics.[24] Numerical evaluation[edit] The W function may be approximated using Newton's method, with successive approximations to (so ) being The W function may also be approximated using Halley's method, given in Corless et al. to compute W. 
The Lambert-W function is implemented as LambertW in Maple, lambertw in GP (and glambertW in PARI), lambertw in MATLAB,[25] also lambertw in octave with the 'specfun' package, as lambert_w in Maxima,[26] as ProductLog (with a silent alias LambertW) in Mathematica,[27] as lambertw in Python scipy's special function package,[28] as LambertW in Perl's ntheory module,[29] and as gsl_sf_lambert_W0 and gsl_sf_lambert_Wm1 functions in special functions section of the GNU Scientific Library – GSL. See also[edit] 1. ^ Chow, Timothy Y. (1999), "What is a closed-form number?", American Mathematical Monthly, 106 (5): 440–448, doi:10.2307/2589148, MR 1699262 . 3. ^ Lambert JH, "Observationes variae in mathesin puram", Acta Helveticae physico-mathematico-anatomico-botanico-medica, Band III, 128–168, 1758 (facsimile) 5. ^ a b Corless, R. M.; Gonnet, G. H.; Hare, D. E. G.; Jeffrey, D. J. (1993). "Lambert's W function in Maple". The Maple Technical Newsletter. MapleTech. 9: 12–22. CiteSeerX accessible.  6. ^ Approximation of the Lambert W function and the hyperpower function, Hoorfar, Abdolhossein; Hassani, Mehdi. 7. ^ 8. ^ Chatzigeorgiou, I. (2013). "Bounds on the Lambert function and their Application to the Outage Analysis of User Cooperation". IEEE Communications Letters. 17 (8): 1505–1508. arXiv:1601.04895Freely accessible. doi:10.1109/LCOMM.2013.070113.130972.  9. ^ 10. ^ 11. ^ "The Lambert W Function". Ontario Research Centre.  12. ^ B Ohayon., G Ron. (2013). "New approaches in designing a Zeeman Slower". Journal of Instrumentation. 8 (02): P02016. doi:10.1088/1748-0221/8/02/P02016.  13. ^ Sotero, Roberto C.; Iturria-Medina, Yasser (2011). "From Blood oxygenation level dependent (BOLD) signals to brain temperature maps". Bull Math Biol. 73 (11): 2731–47. doi:10.1007/s11538-011-9645-5. PMID 21409512.  14. ^ Braun, Artur; Wokaun, Alexander; Hermanns, Heinz-Guenter (2003). "Analytical Solution to a Growth Problem with Two Moving Boundaries". Appl Math Model. 27 (1): 47–52. doi:10.1016/S0307-904X(02)00085-9.  15. ^ Braun, Artur; Baertsch, Martin; Schnyder, Bernhard; Koetz, Ruediger (2000). "A Model for the film growth in samples with two moving boundaries – An Application and Extension of the Unreacted-Core Model.". Chem Eng Sci. 55 (22): 5273–5282. doi:10.1016/S0009-2509(00)00143-3.  16. ^ Braun, Artur; Briggs, Keith M.; Boeni, Peter (2003). "Analytical solution to Matthews' and Blakeslee's critical dislocation formation thickness of epitaxially grown thin films". J Cryst Growth. 241 (1/2): 231–234. Bibcode:2002JCrGr.241..231B. doi:10.1016/S0022-0248(02)00941-7.  17. ^ Colla, Pietro (2014). "A New Analytical Method for the Motion of a Two-Phase Interface in a Tilted Porous Medium". PROCEEDINGS,Thirty-Eighth Workshop on Geothermal Reservoir Engineering,Stanford University. SGP-TR-202. ([1]) 18. ^ Scott, T. C.; Mann, R. B.; Martinez Ii, Roberto E. (2006). "General Relativity and Quantum Mechanics: Towards a Generalization of the Lambert W Function". AAECC (Applicable Algebra in Engineering, Communication and Computing). 17 (1): 41–47. arXiv:math-ph/0607011Freely accessible. doi:10.1007/s00200-006-0196-1.  19. ^ Scott, T. C.; Fee, G.; Grotendorst, J. (2013). "Asymptotic series of Generalized Lambert W Function". SIGSAM (ACM Special Interest Group in Symbolic and Algebraic Manipulation). 47 (185): 75–83. doi:10.1145/2576802.2576804.  20. ^ Scott, T. C.; Fee, G.; Grotendorst, J.; Zhang, W.Z. (2014). "Numerics of the Generalized Lambert W Function". SIGSAM. 48 (1/2): 42–56. doi:10.1145/2644288.2644298.  21. 
^ Farrugia, P. S.; Mann, R. B.; Scott, T. C. (2007). "N-body Gravity and the Schrödinger Equation". Class. Quantum Grav. 24 (18): 4647–4659. arXiv:gr-qc/0611144Freely accessible. doi:10.1088/0264-9381/24/18/006.  22. ^ Scott, T. C.; Aubert-Frécon, M.; Grotendorst, J. (2006). "New Approach for the Electronic Energies of the Hydrogen Molecular Ion". Chem. Phys. 324 (2–3): 323–338. arXiv:physics/0607081Freely accessible. doi:10.1016/j.chemphys.2005.10.031.  23. ^ Maignan, Aude; Scott, T. C. (2016). "Fleshing out the Generalized Lambert W Function". SIGSAM. 50 (2): 45–60. doi:10.1145/2992274.2992275.  24. ^ Scott, T. C.; Lüchow, A.; Bressanini, D.; Morgan, J. D. III (2007). "The Nodal Surfaces of Helium Atom Eigenfunctions". Phys. Rev. A. 75 (6): 060101. doi:10.1103/PhysRevA.75.060101.  25. ^ lambertw – MATLAB 26. ^ Maxima, a Computer Algebra System 27. ^ ProductLog at WolframAlpha 28. ^ [2] 29. ^ ntheory at MetaCPAN External links[edit]
Erwin Schrödinger: The Schrödinger Equation
Final Answers © 2000-2016, Gérard P. Michon, Ph.D.

"... but to think what nobody has yet thought, about that which everybody sees."  Arthur Schopenhauer (1788-1860)

Justifying Schrödinger's Equation

The celebrated Schrödinger equation is merely what the most ordinary wave equation becomes when the celerity of a wave (i.e., the product u = λν of its wavelength by its frequency) is somehow equated to the ratio (E/p) of the energy to the momentum of an "associated" nonrelativistic particle. This surprising relation is essentially what the (relativistic) de Broglie principle postulates. In an introductory course, it might be more pedagogical and more economical to invoke the de Broglie principle in order to derive Schrödinger's equation... However, it's enlightening to present how Erwin Schrödinger himself introduced the subject: following Hamilton, he showed how the relation u = E/p can be obtained by equating the classical principles previously stated by Fermat for waves (least time) and Maupertuis for particles (least "action"). This is the idea which made the revolutionary concepts of wave mechanics acceptable to physicists of a bygone era, including Erwin Schrödinger himself. Also, the more recent "sum over histories" formulation of quantum mechanics by Richard Feynman is arguably based on the same variational principles.

(2002-11-02)   Hamilton's Analogy: Paths to the Schrödinger Equation
Equating the principles of Fermat and Maupertuis yields the celerity u.

Schrödinger took seriously an analogy attributed to William Rowan Hamilton (1805-1865) which bridges the gap between well-known features of two aspects of physical reality: classical mechanics and wave theory. Hamilton's analogy states that, whenever waves conspire to create the illusion of traveling along a definite path (like "light rays" in geometrical optics), they are analogous to a classical particle: the Fermat principle for waves may then be equated with the Maupertuis principle for particles. Equating also the velocity of a particle with the group speed of a wave, Schrödinger drew the mathematical consequences of combining it all with Planck's quantum hypothesis (E = hν). These ideas were presented (March 5, 1928) at the Royal Institution of London, to start a course of "Four Lectures on Wave Mechanics" which Schrödinger dedicated to his late teacher, Fritz Hasenöhrl.

Maupertuis' Principle of Least "Action" (1744, 1750)
Pierre-Louis Moreau de Maupertuis (1698-1759)

"Adding up the masses of all bodies multiplied by their respective speeds and the distances they travel yields the quantity called action, which is always the least possible in any natural motion."  Pierre-Louis Moreau de Maupertuis, "Sur les lois du mouvement" (1746).
When a point of mass m moves at a speed v in a force field described by a potential energy V (which depends on the position), its kinetic energy is T = ½mv² (the total energy E = T + V remains constant). The actual trajectory from a point A to a point B turns out to be such as to minimize the quantity that Maupertuis (1698-1759) dubbed action, namely the integral ∫ 2T dt. (Maupertuis' principle is thus also called the least action principle.) Now, introducing the curvilinear abscissa (s) along the trajectory, we have:

2T  =  mv²  =  m (ds/dt)²  =  2(E − V)

Multiply the last two quantities by m and take their square roots to obtain an expression for m (ds/dt), which you may plug back into the whole thing to get an interesting value for 2T:

2T  =  (ds/dt) √(2m(E − V)),   so the action is   ∫ √(2m(E − V)) ds

The time variable (t) has thus disappeared from the integral to be minimized, which is now a purely static function of the spatial path from A to B.

Fermat's Principle: Least Time (c. 1655)
Pierre de Fermat

When some quantity φ propagates in 3 dimensions at some celerity u (also called phase speed), it verifies the well-known wave equation:

(1/u²) ∂²φ/∂t²  =  ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z²  =  Δφ   [Δ is the Laplacian operator]

The speed u may depend on the properties of the medium in which the "thing" propagates, and it may thus vary from place to place. When light goes through some nonhomogeneous medium with a varying refractive index (n > 1), it propagates at a speed u = c/n and will travel along a path (a "ray", in the approximation of geometrical optics) which is always such that the time (∫ dt) it takes to go from point A to point B is minimal [among "nearby" paths]. This is Fermat's principle, first stated by Pierre de Fermat (1601-1665) for light in the context of geometrical optics, where it implies both the law of reflection and Snell's law for refraction. This principle applies quite generally to any type of wave, in those circumstances where some path of propagation can be defined.

If we introduce a curvilinear abscissa s for a wave that follows some path, in the same way light propagates along rays [in a smooth enough medium], we have u = ds/dt. This allows us to express the time it takes to go from A to B as an integral of ds/u. The conclusion is that a wave will [roughly speaking] take a "path" from A to B along which the following integral is minimal:

∫ (1/u) ds

Hamilton's Analogy: The above shows that, when a wave appears to propagate along a path, this path satisfies a condition of the same mathematical form as that obeyed by the trajectory of a particle. In both cases, a static integral along the path has to be minimized. If the same type of "mechanics" is relevant, it seems the quantities to integrate should be proportional. The coefficient of proportionality cannot depend on the position, but it may very well depend on the total energy E (which is constant in the whole discussion). In other words, the proportionality between the integrand of the principle of Maupertuis and its Fermat counterpart (1/u) implies that the following quantity is a function of the energy E alone:

f(E)  =  u √(2m(E − V))

Combined with Planck's formula, the next assumption implies f(E) = E ...
Schrödinger's Argument: Schrödinger assumed that the wave equivalent of the speed v of a particle had to be the so-called group velocity, given by the following expression:

v  =  dν / d(ν/u)

We enter the quantum realm by postulating Planck's formula: E = hν. This proportionality of energy and frequency turns the previous equation into:

v  =  dE / d(E/u)

On the other hand, since ½mv² = E − V, the following relation also holds:

v  =  dE / d√(2m(E − V))

Recognizing the square root as the quantity we denoted f(E)/u in the above context of Hamilton's analogy [it's actually the momentum p, if you must know], the equality of the right-hand sides of the last two equations implies that the following quantity C does not depend on E:

(f(E) − E) / u   =   C   =   [1 − E/f(E)] √(2m(E − V))

This means f(E) = E / (1 − C [2m(E − V)]^(−1/2)), which is, in general, a function of E alone only if C vanishes (as V depends on the space coordinates). Therefore f(E) = E, as advertised, which can be expressed by the relation:

u   =   E / √(2m(E − V))

Mathematically, this equation and Planck's relation (E = hν) turn the general wave equation into the stationary equation of Schrödinger, discussed below. In 1928, Schrödinger quoted only as "worth mentioning" the fact that the above relation boils down to u = E/p, without identifying it as the nonrelativistic counterpart of the formally identical relation for the celerity, u = λν, obtained from the 1923 expression of a de Broglie wave's momentum (p = h/λ) using E = hν. Nowadays, it's simpler to merely invoke de Broglie's principle to establish mathematically the formal stationary equation of Schrödinger, given below.

English translations of the 9 papers and 4 lectures that Erwin Schrödinger published about his own approach to quantum theory ("wave mechanics") between 1926 and 1928 have been compiled in "Collected Papers on Wave Mechanics" by E. Schrödinger (Chelsea Publishing Co., NY, 1982).

Schrödinger's Stationary Equation:    Δψ + (8π²m/h²)(E − V) ψ  =  0

(2005-07-08)   Partial Confinement in a Box by a Finite Potential
Solutions for a single dimension yield the three-dimensional solutions.

Consider a particle confined within a rectangular box by a finite potential, so that (8π²m/h²)(V − E) is −1/λ² inside the box and 1/μ² outside of it.

Finite one-dimensional well: for a single dimension, we'd be looking at a box with boundaries at x = ±L and a bounded and continuous solution ψ of the following type:

ψ(x)  =  [A cos(L/λ) − B sin(L/λ)] exp([L + x]/μ)    for x < −L
       =  A cos(x/λ) + B sin(x/λ)                     for |x| < L
       =  [A cos(L/λ) + B sin(L/λ)] exp([L − x]/μ)    for x > L

The continuity of the derivative of ψ at x = ±L translates into the relations:

(A/λ) sin(L/λ) + (B/λ) cos(L/λ)   =   (1/μ) [A cos(L/λ) − B sin(L/λ)]
(−A/λ) sin(L/λ) + (B/λ) cos(L/λ)  =   (1/μ) [−A cos(L/λ) − B sin(L/λ)]

We may replace these by their sum and their difference, which boil down to:
• B = 0   or   μ cos(L/λ) = −λ sin(L/λ)
• A = 0   or   μ sin(L/λ) = λ cos(L/λ)

Since λμ does not vanish, either A or B does (not both).
A nonzero solution is thus  either  even (B=0) or odd (A=0) with the matching condition derived from the above, which is dubbed "quantization" in the following table: Single-Dimensional Well of Width  2L  and   Energy Depth  V 1 / l2  +  1 / m2   =   (8p2 m / h2 )  V Symmetry y(-x) = y(x) y(-x) = -y(x) Quantization l / m  =  tan (L / l) m / l  =  -tan (L / l) y(x) x < -L cos (L/l)  exp ( [L+x] / m ) -sin (L/l)  exp ( [L+x] / m ) -L < x < L cos ( x / l ) sin ( x / l ) L < x cos (L/l)  exp ( [L-x] / m ) sin (L/l)  exp ( [L-x] / m ) ò  |y| 2 dx m cos2 (L/l)  + L  +  ½ l sin (2L/l) m sin2 (L/l)  + L  -  ½ l sin (2L/l) m + L Any solution is proportional to the function expressed in either of the above columns.  The last line indicates that (because of their respective quantization conditions) the norms of both tabulated functions have a unified expression.  This is just a coincidence, since we merely took  a priori  the simplest choices among proportional expressions...  Normalized functions are thus obtained by multiplying the above expressions by  e / Ö(m+L)  for some complex unit  e  ( |e| = 1 ). The probability  P ( |x| > L )  to find the particle outside the box also has a unified expression, valid for either parity of the wavefunction: ( m2 + l2 ) (m + L) Wavefunctions for a 3-dimensional box of dimensions  a, b, c  are obtained as products of the above types of functions of  x, y or z, respectively. Come back later, we're still working on this one... (2005-07-10)   Harmonic Oscillator Quantization of energy in a parabolic well  (Hooke's law). Come back later, we're still working on this one... Hermite Polynomials   |   Charles Hermite (1822-1901; X1842) (2005-07-10)   Angular Momentum The angular momentum of a rotator is quantized. Come back later, we're still working on this one... (2005-07-10)   Coulomb Potential Classification of the orbitals corresponding to a Coulomb potential. Come back later, we're still working on this one... Legendre Polynomials   |   Laguerre Polynomials (2015-11-22)   The Wallis Formula for p   (John Wallis, 1655). A quantum derivation by  Tamar Friedmann  and  C. R. Hagen  (2015). Come back later, we're still working on this one... Tamar Friedmann  &  C. R. Hagen,   AIP Journal of Mathematical Physics56, 112101 (2015). New derivation of pi links quantum physics and pure math  (AIP, 2015-11-10). (2016-01-16)   How tough is Schrödinger's equation, really? Any homogeneous second-order linear differential equation reduces to it. In one dimension, an second-order linear differential equation can be tranformed into a Schrödinger equation or a Ricatti equation, and vice-versa. Come back later, we're still working on this one... visits since January 15, 2007
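Returning to the finite one-dimensional well treated above, the quantization conditions can be solved numerically. The Python sketch below (added here as an illustration, not part of the original page) works in units where 8π²m/h² = 1, so that 1/λ² = E and 1/μ² = V − E; the well parameters V and L are arbitrary choices. Writing k = 1/λ and κ = 1/μ, the even condition λ/μ = tan(L/λ) becomes k sin(kL) = κ cos(kL), and the odd condition μ/λ = −tan(L/λ) becomes k cos(kL) = −κ sin(kL), which are bracketed and bisected below.

import math

def bound_state_energies(V, L, parity="even", samples=2000):
    # Bound-state energies E = k^2 of the well of half-width L and depth V,
    # in units where 8*pi^2*m/h^2 = 1 (so 1/lambda^2 = E, 1/mu^2 = V - E).
    def f(k):
        kappa = math.sqrt(max(V - k * k, 0.0))
        if parity == "even":
            return k * math.sin(k * L) - kappa * math.cos(k * L)
        return k * math.cos(k * L) + kappa * math.sin(k * L)

    energies = []
    ks = [i * math.sqrt(V) / samples for i in range(1, samples + 1)]
    for a, b in zip(ks, ks[1:]):
        if f(a) * f(b) < 0:                 # sign change brackets a root
            lo, hi = a, b
            for _ in range(60):             # plain bisection
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            k = 0.5 * (lo + hi)
            energies.append(k * k)
    return energies

print("even-parity E:", bound_state_energies(V=50.0, L=1.0, parity="even"))
print("odd-parity  E:", bound_state_energies(V=50.0, L=1.0, parity="odd"))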
The Anderson Institute Logo     Where history is becoming an experimental science       Quantum Tunneling An Overview and Comparison by Dr. David Lewis Anderson Quantum Tunneling is an evanescent wave coupling effect that occurs in quantum mechanics. The correct wavelength combined with the proper tunneling barrier makes it possible to pass signals faster than light, backwards in time. Quantum Tunneling Time Control In the diagram above light pulses consisting of waves of various frequencies are shot toward a 10 centimeter chamber containing cesium vapor. All information about the incoming pulse is contained in the leading edge of its waves. This information is all the cesium atoms need to replicate the pulse and send it out the other side. At the same time it is believed an opposite wave rebounds inside the chamber cancelling out the main part of the incoming pulse as it enters the chamber. By this time the new pulse, moving faster than the speed of light, has traveled about 60 feet beyond the chamber. Essentially the pulse has left the chamber before it finished entering, traveling backwards in time. The key characteristics of the application of quantum tunneling for time control and time travel are presented in the picture below. This is followed by more detail describing the phenomenon below. Quantum Tunneling Time Control and Time Travel Wave-mechanical tunneling (also called quantum-mechanical tunneling, quantum tunneling, and the tunnel effect) is an evanescent wave coupling effect that occurs in the context of quantum mechanics because the behavior of particles is governed by Schrödinger's wave-equation. All wave equations exhibit evanescent wave coupling effects if the conditions are right. Wave coupling effects mathematically equivalent to those called "tunneling" in quantum mechanics can occur with Maxwell's wave-equation (both with light and with microwaves), and with the common non-dispersive wave-equation often applied (for example) to waves on strings and to acoustics. For these effects to occur there must be a situation where a thin region of "medium type 2" is sandwiched between two regions of "medium type 1", and the properties of these media have to be such that the wave equation has "traveling-wave" solutions in medium type 1, but "real exponential solutions" (rising and falling) in medium type 2. In optics, medium type 1 might be glass, medium type 2 might be vacuum. In quantum mechanics, in connection with motion of a particle, medium type 1 is a region of space where the particle total energy is greater than its potential energy, medium type 2 is a region of space (known as the "barrier") where the particle total energy is less than its potential energy. If conditions are right, amplitude from a traveling wave, incident on medium type 2 from medium type 1, can "leak through" medium type 2 and emerge as a traveling wave in the second region of medium type 1 on the far side. If the second region of medium type 1 is not present, then the traveling wave incident on medium type 2 is totally reflected, although it does penetrate into medium type 2 to some extent. 
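To put a scale on the "real exponential solutions" inside medium type 2, the sketch below (an added illustration; the formula for the decay constant is the standard Schrödinger-equation result and is not spelled out in the text above) computes κ = √(2m(V − E))/ħ for an electron and the attenuation factor exp(−2κd) across a barrier of thickness d; the barrier height, energy and thicknesses are illustrative values only.

import math

HBAR = 1.054571817e-34      # J*s
M_E = 9.1093837015e-31      # kg
EV = 1.602176634e-19        # J per eV

def decay_constant(V_eV, E_eV):
    # Decay constant of the evanescent wave inside the barrier (valid for E < V).
    return math.sqrt(2.0 * M_E * (V_eV - E_eV) * EV) / HBAR

V, E = 5.0, 1.0             # illustrative barrier height and electron energy, in eV
kappa = decay_constant(V, E)
print(f"decay length 1/kappa = {1e9 / kappa:.3f} nm")
for d_nm in (0.5, 1.0, 2.0):
    # Rough |psi|^2 attenuation across a barrier of thickness d (prefactors ignored):
    print(f"d = {d_nm} nm   exp(-2*kappa*d) = {math.exp(-2 * kappa * d_nm * 1e-9):.2e}")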
Depending on the wave equation being used, the leaked amplitude is interpreted physically as traveling energy or as a traveling particle, and, numerically, the ratio of the square of the leaked amplitude to the square of the incident amplitude gives the proportion of incident energy transmitted out the far side, or (in the case of the Schrödinger equation) the probability that the particle "tunnels" through the barrier. Quantum Tunneling Introduction quantum tunneling Quantum Tunneling The scale on which these "tunneling-like phenomena" occur depends on the wavelength of the traveling wave. For electrons the thickness of "medium type 2" (called in this context "the tunneling barrier") is typically a few nanometers; for alpha-particles tunneling out of a nucleus the thickness is very much less; for the analogous phenomenon involving light the thickness is very much greater. With Schrödinger's wave-equation, the characteristic that defines the two media discussed above is the kinetic energy of the particle if it is considered as an object that could be located at a point. In medium type 1 the kinetic energy would be positive, in medium type 2 the kinetic energy would be negative. There is no inconsistency in this, because particles cannot physically be located at a point: they are always spread out ("delocalized") to some extent, and the kinetic energy of the delocalized object is always positive. What is true is that it is sometimes mathematically convenient to treat particles as behaving like points, particular in the context of Newton's Second Law and classical mechanics generally. In the past, people thought that the success of classical mechanics meant that particles could always and in all circumstances be treated as if they were located at points. But there never was any convincing experimental evidence that this was true when very small objects and very small distances are involved, and we now know that this viewpoint was mistaken. However, because it is still traditional to teach students early in their careers that particles behave like points, it sometimes comes as a big surprise for people to discover that it is well established that traveling physical particles always physically obey a wave-equation (even when it is convenient to use the mathematics of moving points). Clearly, a hypothetical classical point particle analyzed according to Newton's Laws could not enter a region where its kinetic energy would be negative. But, a real delocalized object, that obeys a wave-equation and always has positive kinetic energy, can leak through such a region if conditions are right. An approach to tunneling that avoids mention of the concept of "negative kinetic energy" is set out below in the section on "Schrödinger equation tunneling basics". Quantum Tunneling Effect Reflection and tunneling of an electron wave packet directed at a potential barrier. The bright spot moving to the left is the reflected part of the wave packet. A very dim spot can be seen moving to the right of the barrier. This is the small fraction of the wave packet that tunnels through the classically forbidden barrier. Also notice the interference fringes between the incoming and reflected waves. An electron approaching a barrier has to be represented as a wave-train. This wave-train can sometimes be quite long – electrons in some materials can be 10 to 20 nm long. This makes animations difficult. 
If it were legitimate to represent the electron by a short wave-train, then tunneling could be represented as in the animation alongside. It is sometimes said that tunneling occurs only in quantum mechanics. Unfortunately, this statement is a bit of linguistic conjuring trick. As indicated above, "tunneling-type" evanescent-wave phenomena occur in other contexts too. But, until recently, it has only been in quantum mechanics that evanescent wave coupling has been called "tunneling". (However, there is an increasing tendency to use the label "tunneling" in other contexts too, and the names "photon tunneling" and "acoustic tunneling" are now used in the research literature.) With regards to the mathematics of tunneling, a special problem arises. For simple tunneling-barrier models, such as the rectangular barrier, the Schrödinger equation can be solved exactly to give the value of the tunneling probability (sometimes called the "transmission coefficient"). Calculations of this kind make the general physical nature of tunneling clear. One would also like to be able to calculate exact tunneling probabilities for barrier models that are physically more realistic. However, when appropriate mathematical descriptions of barriers are put into the Schrödinger equation, then the result is an awkward non-linear differential equation. Usually, the equation is of a type where it is known to be mathematically impossible in principle to solve the equation exactly in terms of the usual functions of mathematical physics, or in any other simple way. Mathematicians and mathematical physicists have been working on this problem since at least 1813, and have been able to develop special methods for solving equations of this kind approximately. In physics these are known as "semi-classical" or "quasi-classical" methods. A common semi-classical method is the so-called WKB approximation (also known as the "JWKB approximation"). The first known attempt to use such methods to solve a tunneling problem in physics was made in 1928, in the context of field electron emission. It is sometimes considered that the first people to get the mathematics of applying this kind of approximation to tunneling fully correct (and to give reasonable mathematical proof that they had done so) were N. Fröman and P.O. Fröman, in 1965. Their complex ideas have not yet made it into theoretical-physics textbooks, which tend to give simpler (but slightly more approximate) versions of the theory. An outline of one particular semi-classical method is given below. quantum tunneling Three notes may be helpful. In general, students taking physics courses in quantum mechanics are presented with problems (such as the quantum mechanics of the hydrogen atom) for which exact mathematical solutions to the Schrödinger equation exist. Tunneling through a realistic barrier is a reasonably basic physical phenomenon. So it is sometimes the first problem that students encounter where it is mathematically impossible in principle to solve the Schrödinger equation exactly in any simple way. Thus, it may also be the first occasion on which they encounter the "semi-classical-method" mathematics needed to solve the Schrödinger equation approximately for such problems. Not surprisingly, this mathematics is likely to be unfamiliar, and may feel "odd". Unfortunately, it also comes in several different variants, which doesn't help. 
Also, some accounts of tunneling seem to be written from a philosophical viewpoint that a particle is "really" point-like, and just has wave-like behavior. There is very little experimental evidence to support this viewpoint. A preferable philosophical viewpoint is that the particle is "really" delocalized and wave-like, and always exhibits wave-like behavior, but that in some circumstances it is convenient to use the mathematics of moving points to describe its motion. This second viewpoint is used in this section. The precise nature of this wave-like behavior is, however, a much deeper matter, beyond the scope of this article on tunneling. Although the phenomenon under discussion here is usually called "quantum tunneling" or "quantum-mechanical tunneling", it is the wave-like aspects of particle behavior that are important in tunneling theory, rather than effects relating to the quantization of the particle's energy states. For this reason, some writers prefer to call the phenomenon "wave-mechanical tunneling. By 1928, George Gamow had solved the theory of the alpha decay of a nucleus via tunneling. Classically, the particle is confined to the nucleus because of the high energy requirement to escape the very strong potential. Under this system, it takes an enormous amount of energy to pull apart the nucleus. In quantum mechanics, however, there is a probability the particle can tunnel through the potential and escape. Gamow solved a model potential for the nucleus and derived a relationship between the half-life of the particle and the energy of the emission. Alpha decay via tunneling was also solved concurrently by Ronald Gurney and Edward Condon. Shortly thereafter, both groups considered whether particles could also tunnel into the nucleus. After attending a seminar by Gamow, Max Born recognized the generality of quantum-mechanical tunneling. He realized that the tunneling phenomenon was not restricted to nuclear physics, but was a general result of quantum mechanics that applies to many different systems. Today the theory of tunneling is even applied to the early cosmology of the universe. Quantum tunneling was later applied to other situations, such as the cold emission of electrons, and perhaps most importantly semiconductor and superconductor physics. Phenomena such as field emission, important to flash memory, are explained by quantum tunneling. Tunneling is a source of major current leakage in Very-large-scale integration (VLSI) electronics, and results in the substantial power drain and heating effects that plague high-speed and mobile technology. Another major application is in electron-tunneling microscopes which can resolve objects that are too small to see using conventional microscopes. Electron tunneling microscopes overcome the limiting effects of conventional microscopes (optical aberrations, wavelength limitations) by scanning the surface of an object with tunneling electrons. Quantum tunneling has been shown to be a mechanism used by enzymes to enhance reaction rates. It has been demonstrated that enzymes use tunneling to transfer both electrons and nuclei such as hydrogen and deuterium. It has even been shown, in the enzyme glucose oxidase, that oxygen nuclei can tunnel under physiological conditions.
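For the rectangular barrier mentioned above, the exactly solvable case has a well-known closed form for the transmission coefficient. The sketch below (added here; the expression is the standard textbook result for E < V0, not derived in the page itself) evaluates T = 1 / [1 + V0² sinh²(κa) / (4E(V0 − E))] with κ = √(2m(V0 − E))/ħ, for an electron and a few illustrative barrier widths.

import math

HBAR = 1.054571817e-34    # J*s
M_E = 9.1093837015e-31    # kg
EV = 1.602176634e-19      # J per eV

def transmission(E_eV, V0_eV, a_m, m=M_E):
    # Exact transmission coefficient of a rectangular barrier for E < V0.
    E, V0 = E_eV * EV, V0_eV * EV
    kappa = math.sqrt(2.0 * m * (V0 - E)) / HBAR
    return 1.0 / (1.0 + V0 ** 2 * math.sinh(kappa * a_m) ** 2 / (4.0 * E * (V0 - E)))

# Illustrative numbers: a 1 eV electron against a 2 eV barrier of various widths.
for a_nm in (0.2, 0.5, 1.0):
    T = transmission(E_eV=1.0, V0_eV=2.0, a_m=a_nm * 1e-9)
    print(f"width {a_nm} nm -> T = {T:.3e}")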
Why the least action: a fact or a meaning?

Jun 28, 2006 #1
Have some people tried to find a meaning for the principle of least action that apparently underlies the whole of physics? I know of one attempt, but it is not convincing to me (°). A convincing attempt, even a modest one, should suggest why the principle occurs, what is or could be behind the scene, and how it might lead us to new discoveries. The link from QM/Schrödinger to CM/Newton is a clear explanation for the classical least action. But the surprise is that least action can be found nearly everywhere, even as a basis for QFT (isn't it?). (°) This is how I understood the book by Roy Frieden, "Science from Fisher Information".

Jun 28, 2006 #2
Feynman gave a beautiful "justification" or explanation of this principle when dealing with the path integral. If you have [tex] \int D[\phi]\, e^{iS[\phi]/\hbar} [/tex] then, in the classical limit ħ → 0, only the configurations for which the phase is stationary (a maximum or a minimum) contribute to the integration; in our case the stationary points are given by the equation [tex] \delta S = 0 [/tex], which is precisely the principle of least action... Unfortunately, following Feynman, there are no variational principles within quantum mechanics itself.

Jun 30, 2006 #3
The Schrödinger equation also has a Lagrangian and can be derived from a least action principle. Other systems surprisingly have a Lagrangian and a least action principle as well: the classical damped oscillator and the diffusion equation, for example! Clearly the pictorial explanation of the classical least action as the stationary-phase limit of QM covers only that one case. Least action is seen nearly everywhere; this is why I asked PF whether there is an explanation or a meaning behind that. Would it be possible that a very wide range of differential equations can be reformulated as a least action principle? Then the explanation would be general mathematics, and the meaning would not be much of physics. That would translate my question into something like "why is physics based on differential equations?". Or is there more to learn about physics from the least action principle?
Last edited: Jun 30, 2006
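As a small numerical illustration of the stationarity condition δS = 0 discussed in this thread (a sketch added here, not part of the original posts), the script below discretizes the action S = ∫(½mẋ² − mgx) dt for a particle in uniform gravity and checks that perturbing the classical trajectory, while keeping the endpoints fixed, only increases the discretized action; the time span, step count and perturbation sizes are arbitrary choices.

import random

def action(path, dt, m=1.0, g=9.81):
    # Discretized action: sum of (0.5*m*v^2 - m*g*x) * dt over the steps,
    # for the Lagrangian L = kinetic - potential of a particle in uniform gravity.
    S = 0.0
    for a, b in zip(path, path[1:]):
        v = (b - a) / dt
        x_mid = 0.5 * (a + b)
        S += (0.5 * m * v * v - m * g * x_mid) * dt
    return S

N, T, g = 200, 1.0, 9.81
dt = T / N
t = [i * dt for i in range(N + 1)]
# Classical path from x=0 back to x=0 in time T under gravity: x(t) = 0.5*g*t*(T - t).
classical = [0.5 * g * ti * (T - ti) for ti in t]
S_cl = action(classical, dt)

random.seed(0)
for eps in (0.01, 0.05, 0.2):
    # Random perturbation that vanishes at both endpoints (fixed boundary conditions).
    perturbed = [x + eps * random.uniform(-1, 1) * ti * (T - ti)
                 for x, ti in zip(classical, t)]
    print(f"eps = {eps:4.2f}   S_perturbed - S_classical = {action(perturbed, dt) - S_cl:+.6f}")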
Equations of motion From Wikipedia, the free encyclopedia   (Redirected from Equation of motion) Jump to: navigation, search In mathematical physics, equations of motion are equations that describe the behaviour of a physical system in terms of its motion as a function of time.[1] More specifically, the equations of motion describe the behaviour of a physical system as a set of mathematical functions in terms of dynamic variables: normally spatial coordinates and time are used, but others are also possible, such as momentum components and time. The most general choice are generalized coordinates which can be any convenient variables characteristic of the physical system.[2] The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations are the solutions to the differential equations describing the motion of the dynamics. There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler–Lagrange equations), and sometimes to the solutions to those equations. However, kinematics is simpler as it concerns only variables derived from the positions of objects, and time. In circumstances of constant acceleration, these simpler equations of motion are usually referred to as the "SUVAT" equations, arising from the definitions of kinematic quantities: displacement (S), initial velocity (U), final velocity (V), acceleration (A), and time (T). (see below). Equations of motion can therefore be grouped under these main classifiers of motion. In all cases, the main types of motion are translations, rotations, oscillations, or any combinations of these. A differential equation of motion, usually identified as some physical law and applying definitions of physical quantities, is used to set up an equation for the problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a family of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants. To state this formally, in general an equation of motion M is a function of the position r of the object, its velocity (the first time derivative of r, v = dr/dt), and its acceleration (the second derivative of r, a = d2r/dt2), and time t. Euclidean vectors in 3d are denoted throughout in bold. This is equivalent to saying an equation of motion in r is a second order ordinary differential equation (ODE) in r, where t is time, and each overdot denotes one time derivative. The initial conditions are given by the constant values at t = 0, \mathbf{r}(0) \,, \quad \mathbf{\dot{r}}(0) \,. The solution r(t) to the equation of motion, with specified initial values, describes the system for all times t after t = 0. Other dynamical variables like the momentum p of the object, or quantities derived from r and p like angular momentum, can be used in place of r as the quantity to solve for from some equation of motion, although the position of the object at time t is by far the most sought-after quantity. Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear, and cannot be solved exactly so a variety of approximations must be used. 
The solutions to nonlinear equations may show chaotic behavior depending on how sensitive the system is to the initial conditions. Historically, equations of motion first appeared in classical mechanics to describe the motion of massive objects, a notable application was to celestial mechanics to predict the motion of the planets as if they orbit like clockwork (this was how Neptune was predicted before its discovery), and also investigate the stability of the solar system. It is important to observe that the huge body of work involving kinematics, dynamics and the mathematical models of the universe developed in baby steps - faltering, getting up and correcting itself - over three millennia and included contributions of both known names and others who have since faded from the annals of history. In antiquity, notwithstanding the success of priests, astrologers and astronomers in predicting solar and lunar eclipses, the solstices and the equinoxes of the Sun and the period of the moon, there was nothing other than a set of algorithms to help them. Despite the great strides made in the development of geometry in the Ancient Greece and surveys in Rome, we were to wait for another thousand years before the first equations of motion arrive. The exposure of Europe to the collected works by the Muslims of the Greeks, the Indians and the Islamic scholars, such as Euclid’s Elements, the works of Archimedes, and Al-Khwārizmī’s treatises [3] began in Spain, and scholars from all over Europe went to Spain, read, copied and translated the learning into Latin. The exposure of Europe to Indo-Arabic numerals and their ease in computations encouraged first the scholars to learn them and then the merchants and envigorated the spread of knowledge throughout Europe. By the 13th century the universities of Oxford and Paris had come up, and the scholars were now studying mathematics and philosophy with lesser worries about mundane chores of life—the fields were not as clearly demarcated as they are in the modern times. Of these, compendia and redactions, such as those of Johannes Campanus, of Euclid and Aristotle, confronted scholars with ideas about infinity and the ratio theory of elements as a means of expressing relations between various quantities involved with moving bodies. These studies led to a new body of knowledge that is now known as physics. Of these institutes Merton College sheltered a group of scholars devoted to natural science, mainly physics, astronomy and mathematics, of similar in stature to the intellectuals at the University of Paris. Thomas Bradwardine, one of those scholars, extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested an exponential law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. The Merton school proved the that the quantity of motion of a body undergoing a uniformly accelerated motion is equal to the quantity of a uniform motion at the speed achieved halfway through the accelerated motion. For writers on kinematics before Galileo, since small time intervals could not be measured, the affinity between time and motion was obscure. They used time as a function of distance, and in free fall, greater velocity as a result of greater elevation. 
Only Domingo de Soto, a Spanish Theologean, in his commentary on Aristotle's Physics published in 1545, after defining "uniform difform" motion (which is uniformly accelerated motion) - the word velocity wasn't used - as proportional to time, declared correctly that this kind of motion was identifiable with freely falling bodies and projectiles, without his proving these propositions or suggesting a formula relating time, velocity and distance. de Soto's comments are shockingly correct regarding the definitions of acceleration (acceleration was a rate of change of motion (velocity) in time) and the observation that during the violent motion of ascent acceleration would be negative. Discourses such as these spread throughout the Europe and definitely influenced Galileo and others, and helped in laying the foundation of kinematics.[4] Galileo deduced the equation \begin{align} s & = \frac{1}{2} gt^2 \quad \\ \end{align} in his work geometrically,[5] using Merton's rule, now known as a special case of one of the equations of Kinematics. He couldn't use the now-familiar mathematical reasoning. The relationships between speed, distance, time and acceleration was not known at the time. Galileo was the first to show that the path of a projectile is a parabola. Galileo had an understanding of centrifugal force and gave a correct definition of momentum. This emphasis of momentum as a fundamental quantity in dynamics is of prime importance. He measured momentum by the product of velocity and weight; mass is a later concept, developed by Huygens and Newton. In the swinging of a simple pendulum, Galileo says in Discourses[6] that "every momentum acquired in the descent along an arc is equal to that which causes the same moving body to ascend through the same arc." His analysis on projectiles indicates that Galileo had grasped the first law and the second law of motion. He did not generalize and make them applicable to bodies not subject to the earth's gravitation. That step was Newton's contribution. The term "inertia" was used by Kepler who applied it to bodies at rest.The first law of motion is now often called the law of inertia. Galileo did not fully grasp the third law of motion, the law of the equality of action and reaction, though he corrected some errors of Aristotle. With Stevin and others Galileo also wrote on statics. He formulated the principle of the parallelogram of forces, but he did not fully recognize its scope. Galileo also was interested by the laws of the pendulum, his first observations was when he was a young man. In 1583, while he was praying in the cathedral at Pisa, his attention was arrested by the motion of the great lamp lighted and left swinging, referencing his own pulse for time keeping. To him the period appeared the same, even after the motion had greatly diminished, discovering the isochronism of the pendulum. More careful experiments carried out by him later, and described in his Discourses, revealed the period of oscillation to be independent of the mass and material of the pendulum and as the square root of its length. Thus we arrive at Rene Descartes, Isaac Newton, Leibniz, et al; and the evolved forms of the equations of motion that begin to be recognized as the modern ones. Later the Equations of Motion also appeared in electrodynamics, when describing the motion of charged particles in electric and magnetic fields, the Lorentz force is the general equation which serves as the definition of what is meant by an electric field and magnetic field. 
With the advent of special relativity and general relativity, the theoretical modifications to spacetime meant the classical equations of motion were also modified to account for the finite speed of light, and curvature of spacetime. In all these cases the differential equations were in terms of a function describing the particle's trajectory in terms of space and time coordinates, as influenced by forces or energy transformations.[7] However, the equations of quantum mechanics can also be considered "equations of motion", since they are differential equations of the wavefunction, which describes how a quantum state behaves analogously using the space and time coordinates of the particles. There are analogs of equations of motion in other areas of physics, for collections of physical phenomena that can be considered waves, fluids, or fields. Kinematic equations for one particle[edit] Kinematic quantities[edit] From the instantaneous position r = r(t), instantaneous meaning at an instant value of time t, the instantaneous velocity v = v(t) and acceleration a = a(t) have the general, coordinate-independent definitions;[8] \mathbf{v} = \frac{d \mathbf{r}}{d t} \,, \quad \mathbf{a} = \frac{d \mathbf{v}}{d t} = \frac{d^2 \mathbf{r}}{d t^2} \,\! Notice that velocity always points in the direction of motion, in other words for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature. The rotational analogues are the "angular vector" (angle the particle rotates about some axis) θ = θ(t), angular velocity ω = ω(t), and angular acceleration α = α(t): \boldsymbol{\theta} = \theta \hat{\mathbf{n}} \,,\quad \boldsymbol{\omega} = \frac{d \boldsymbol{\theta}}{d t} \,, \quad \boldsymbol{\alpha}= \frac{d \boldsymbol{\omega}}{d t} \,, where n is a unit vector in the direction of the axis of rotation, and θ is the angle the object turns through about the axis. The following relation holds for a point-like particle, orbiting about some axis with angular velocity ω:[9] \mathbf{v} = \boldsymbol{\omega}\times \mathbf{r} \,\! where r is the position vector of the particle (radial from the rotation axis) and v the tangential velocity of the particle. For a rotating continuum rigid body, these relations hold for each point in the rigid body. Uniform acceleration[edit] The differential equation of motion for a particle of constant or uniform acceleration in a straight line is simple: the acceleration is constant, so the second derivative of the position of the object is constant. The results of this case are summarized below. Constant translational acceleration in a straight line[edit] These equations apply to a particle moving linearly, in three dimensions in a straight line with constant acceleration.[10] Since the position, velocity, and acceleration are collinear (parallel, and lie on the same line) - only the magnitudes of these vectors are necessary, and because the motion is along a straight line, the problem effectively reduces from three dimensions to one. 
\begin{align} v & = at+v_0 & [1]\\ r & = r_0 + v_0 t + \tfrac{1}{2}{a}t^2 & [2]\\ r & = r_0 + \left( \frac{v+v_0}{2} \right)t & [3]\\ v^2 & = v_0^2 + 2a\left( r - r_0 \right) & [4]\\ r & = r_0 + vt - \tfrac{1}{2}{a}t^2 & [5] \end{align}

Here a is the constant acceleration, or, in the case of bodies moving under the influence of gravity, the standard gravity g. Note that each of the equations contains four of the five variables, so in this situation it is sufficient to know three of the five variables to calculate the remaining two. In elementary physics the same formulae are frequently written in different notation as:

\begin{align} v & = u + at & [1] \\ s & = ut + \tfrac{1}{2} at^2 & [2] \\ s & = \tfrac{1}{2}(u + v)t & [3] \\ v^2 & = u^2 + 2as & [4] \\ s & = vt - \tfrac{1}{2}at^2 & [5] \end{align}

where u has replaced v0, s replaces r, and s0 = 0. They are often referred to as the "SUVAT" equations, where "SUVAT" is an acronym from the variables: s = displacement (s0 = initial displacement), u = initial velocity, v = final velocity, a = acceleration, t = time.[11][12]

Constant linear acceleration in any direction[edit]

Trajectory of a particle with initial position vector r0 and velocity v0, subject to constant acceleration a, all three quantities in any direction, and the position r(t) and velocity v(t) after time t.

The initial position, initial velocity, and acceleration vectors need not be collinear, and the equations take an almost identical form. The only difference is that the square magnitudes of the velocities require the dot product. The derivations are essentially the same as in the collinear case:

\begin{align} \mathbf{v} & = \mathbf{a}t+\mathbf{v}_0 & [1]\\ \mathbf{r} & = \mathbf{r}_0 + \mathbf{v}_0 t + \tfrac{1}{2}{\mathbf{a}}t^2 & [2]\\ \mathbf{r} & = \mathbf{r}_0 + \left( \frac{\mathbf{v}+\mathbf{v}_0}{2} \right)t & [3]\\ v^2 & = v_0^2 + 2\mathbf{a}\cdot\left( \mathbf{r} - \mathbf{r}_0 \right) & [4]\\ \mathbf{r} & = \mathbf{r}_0 + \mathbf{v}t - \tfrac{1}{2}{\mathbf{a}}t^2 & [5] \end{align}

The Torricelli equation [4] can be derived using the distributive property of the dot product as follows:

v^{2} = \mathbf{v}\cdot\mathbf{v} = (\mathbf{v}_0+\mathbf{a}t)\cdot(\mathbf{v}_0+\mathbf{a}t) = v_0^{2}+2t(\mathbf{a}\cdot\mathbf{v}_0)+a^{2}t^{2}

(2\mathbf{a})\cdot(\mathbf{r}-\mathbf{r}_0) = (2\mathbf{a})\cdot\left(\mathbf{v}_0t+\tfrac{1}{2}\mathbf{a}t^{2}\right) = 2t(\mathbf{a}\cdot\mathbf{v}_0)+a^{2}t^{2} = v^{2} - v_0^{2}

\therefore v^{2} = v_0^{2} + 2\,\mathbf{a}\cdot(\mathbf{r}-\mathbf{r}_0)

Elementary and frequent examples in kinematics involve projectiles, for example a ball thrown upwards into the air. Given an initial speed u, one can calculate how high the ball will travel before it begins to fall. The acceleration is the local acceleration of gravity g. At this point one must remember that while these quantities appear to be scalars, the direction of displacement, speed and acceleration is important. They could in fact be considered as unidirectional vectors. Choosing s to measure up from the ground, the acceleration a must in fact be −g, since the force of gravity acts downwards and therefore so does the acceleration of the ball due to it. At the highest point the ball will be at rest, therefore v = 0. Using equation [4] in the set above, we have:

0 = u^2 - 2gs.

Substituting and cancelling minus signs gives:

s = \frac{u^2}{2g}.

Constant circular acceleration[edit]

The analogues of the above equations can be written for rotation.
Again these axial vectors must all be parallel to the axis of rotation, so only the magnitudes of the vectors are necessary:

\begin{align} \omega & = \omega_0 + \alpha t \\ \theta &= \theta_0 + \omega_0 t + \tfrac12\alpha t^2 \\ \theta & = \theta_0 + \tfrac12(\omega_0 + \omega)t \\ \omega^2 & = \omega_0^2 + 2\alpha(\theta - \theta_0) \\ \theta & = \theta_0 + \omega t - \tfrac12\alpha t^2 \end{align}

where α is the constant angular acceleration, ω is the angular velocity, ω0 is the initial angular velocity, θ is the angle turned through (angular displacement), θ0 is the initial angle, and t is the time taken to rotate from the initial state to the final state.

General planar motion[edit]
Main article: General planar motion

The position vector r always points radially from the origin; the velocity vector v is always tangent to the path of motion. These are the kinematic equations for a particle traversing a path in a plane, described by the position r = r(t).[13] They are simply the time derivatives of the position vector in plane polar coordinates, using the definitions of physical quantities above for the angular velocity ω and angular acceleration α. The position, velocity and acceleration of the particle are respectively:

\begin{align} \mathbf{r} & =\mathbf{r}\left ( r(t),\theta(t) \right ) = r \mathbf{\hat{e}}_r \\ \mathbf{v} & = \mathbf{\hat{e}}_r \frac{d r}{dt} + r \omega \mathbf{\hat{e}}_\theta \\ \mathbf{a} & =\left ( \frac{d^2 r}{dt^2} - r\omega^2\right )\mathbf{\hat{e}}_r + \left ( r \alpha + 2 \omega \frac{dr}{dt} \right )\mathbf{\hat{e}}_\theta \end{align}

where \mathbf{\hat{e}}_r, \mathbf{\hat{e}}_\theta are the polar unit vectors. For the velocity v, dr/dt is the component of velocity in the radial direction, and rω is the additional component due to the rotation. For the acceleration a, −rω² is the centripetal acceleration and 2ω dr/dt the Coriolis acceleration, in addition to the radial acceleration d²r/dt² and the angular acceleration term rα. Special cases of motion described by these equations are summarized qualitatively in the table below. Two have already been discussed above, in the cases that either the radial components or the angular components are zero, and the non-zero component of motion describes uniform acceleration.
State of motion | Constant r | r linear in t | r quadratic in t | r non-linear in t
Constant θ | Stationary | Uniform translation (constant translational velocity) | Uniform translational acceleration | Non-uniform translation
θ linear in t | Uniform angular motion in a circle (constant angular velocity) | Uniform angular motion in a spiral, constant radial velocity | Angular motion in a spiral, constant radial acceleration | Angular motion in a spiral, varying radial acceleration
θ quadratic in t | Uniform angular acceleration in a circle | Uniform angular acceleration in a spiral, constant radial velocity | Uniform angular acceleration in a spiral, constant radial acceleration | Uniform angular acceleration in a spiral, varying radial acceleration
θ non-linear in t | Non-uniform angular acceleration in a circle | Non-uniform angular acceleration in a spiral, constant radial velocity | Non-uniform angular acceleration in a spiral, constant radial acceleration | Non-uniform angular acceleration in a spiral, varying radial acceleration

General 3d motion[edit]

In 3d space, in spherical coordinates (r, θ, ϕ) with corresponding unit vectors \mathbf{\hat{e}}_r, \mathbf{\hat{e}}_\theta, \mathbf{\hat{e}}_\phi, the position, velocity, and acceleration generalize respectively to

\begin{align} \mathbf{r} & =\mathbf{r}\left ( t \right ) = r \mathbf{\hat{e}}_r\\ \mathbf{v} & = v \mathbf{\hat{e}}_r + r\,\frac{d\theta}{dt}\mathbf{\hat{e}}_\theta + r\,\frac{d\phi}{dt}\,\sin\theta \,\mathbf{\hat{e}}_\phi \\ \mathbf{a} & = \left( a - r\left(\frac{d\theta}{dt}\right)^2 - r\left(\frac{d\phi}{dt}\right)^2\sin^2\theta \right)\mathbf{\hat{e}}_r \\ & + \left( r \frac{d^2 \theta}{dt^2 } + 2v\frac{d\theta}{dt} - r\left(\frac{d\phi}{dt}\right)^2\sin\theta\cos\theta \right) \mathbf{\hat{e}}_\theta \\ & + \left( r\frac{d^2 \phi}{dt^2 }\,\sin\theta + 2v\,\frac{d\phi}{dt}\,\sin\theta + 2 r\,\frac{d\theta}{dt}\,\frac{d\phi}{dt}\,\cos\theta \right) \mathbf{\hat{e}}_\phi \end{align}

in which v = dr/dt and a = d²r/dt² denote the radial velocity and radial acceleration. In the case of a constant ϕ this reduces to the planar equations above.

Dynamic equations of motion[edit]
Newtonian mechanics[edit]
Main article: Newtonian mechanics

The first general equation of motion developed was Newton's second law of motion, which in its most general form states that the rate of change of momentum p = p(t) = mv(t) of an object equals the force F = F(x(t), v(t), t) acting on it:[14]

\mathbf{F} = \frac{d\mathbf{p}}{dt}

The force in the equation is the force acting on the object, not a force the object itself exerts on other bodies. Replacing momentum by mass times velocity, the law is also written more famously as

\mathbf{F} = m\mathbf{a}

since m is a constant in Newtonian mechanics. Newton's second law applies to point-like particles, and to all points in a rigid body. It also applies to each point in a mass continuum, like deformable solids or fluids, but then the motion of the system must be accounted for; see material derivative. In the case that the mass is not constant, it is not sufficient to use the product rule for the time derivative of the mass and velocity, and Newton's second law requires some modification consistent with conservation of momentum; see variable-mass system. It may be simple to write down the equations of motion in vector form using Newton's laws of motion, but the components may vary in complicated ways with spatial coordinates and time, and solving them is not easy. Often there is an excess of variables to solve for the problem completely, so Newton's laws are not always the most efficient way to determine the motion of a system.
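The constant-acceleration (SUVAT) formulas given earlier are easy to sanity-check by integrating Newton's second law directly. The following is a minimal sketch (Python; the mass, force, initial speed and time step are arbitrary illustrative values, not numbers from the text) that steps F = ma forward in time and compares the result with SUVAT equations [2] and [4].

```python
import math

# Check the constant-acceleration (SUVAT) formulas against a direct numerical
# integration of Newton's second law, F = m a.  All numbers are illustrative.
m, F = 2.0, 6.0          # mass (kg) and constant applied force (N)
a = F / m                # constant acceleration (m/s^2)
u, r0 = 1.5, 0.0         # initial velocity (m/s) and initial position (m)
T, dt = 4.0, 1e-4        # total time (s) and time step (s)

v, r = u, r0
for _ in range(int(T / dt)):     # semi-implicit Euler: dv/dt = a, dr/dt = v
    v += a * dt
    r += v * dt

s_suvat = u * T + 0.5 * a * T**2             # equation [2]
v_suvat = math.sqrt(u**2 + 2 * a * s_suvat)  # equation [4]

print(f"displacement: numerical {r - r0:.4f} m, SUVAT {s_suvat:.4f} m")
print(f"final speed : numerical {v:.4f} m/s, SUVAT {v_suvat:.4f} m/s")
```

The two results agree to within the discretization error of the time stepping, which shrinks as dt is reduced.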
In simple cases of rectangular geometry, Newton's laws work fine in Cartesian coordinates, but in other coordinate systems the equations can become dramatically complex. The momentum form is preferable since it is readily generalized to more complex systems and carries over to special and general relativity (see four-momentum).[14] It can also be used together with momentum conservation. However, Newton's laws are not more fundamental than momentum conservation, because Newton's laws are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies the momentum is not constant. Momentum conservation is always true for an isolated system not subject to resultant forces.

For a number of particles (see many body problem), the equation of motion for one particle i influenced by the other particles is[8][15]

\frac{d\mathbf{p}_i}{dt} = \mathbf{F}_{E} + \sum_{j \neq i} \mathbf{F}_{ij}

where pi is the momentum of particle i, Fij is the force on particle i due to particle j, and FE is the resultant external force due to any agent not part of the system. Particle i does not exert a force on itself.

Euler's laws of motion are similar to Newton's laws, but they are applied specifically to the motion of rigid bodies. The Newton–Euler equations combine the forces and torques acting on a rigid body into a single equation. Newton's second law for rotation takes a similar form to the translational case,[16]

\boldsymbol{\tau} = \frac{d\mathbf{L}}{dt} \,,

by equating the torque acting on the body to the rate of change of its angular momentum L. Analogous to mass times acceleration, the moment of inertia tensor I depends on the distribution of mass about the axis of rotation, and the angular acceleration is the rate of change of angular velocity,

\boldsymbol{\tau} = \mathbf{I} \cdot \boldsymbol{\alpha}.

Again, these equations apply to point-like particles, or to each point of a rigid body. Likewise, for a number of particles, the equation of motion for one particle i is[17]

\frac{d\mathbf{L}_i}{dt} = \boldsymbol{\tau}_E + \sum_{j \neq i} \boldsymbol{\tau}_{ij} \,,

where Li is the angular momentum of particle i, τij is the torque on particle i due to particle j, and τE is the resultant external torque (due to any agent not part of the system). Particle i does not exert a torque on itself.

Some examples[18] of Newton's law include describing the motion of a simple pendulum,

- mg\sin\theta = m\frac{d^2 (\ell\theta)}{d t^2} \quad \Rightarrow \quad \frac{d^2 \theta}{d t^2} = - \frac{g}{\ell}\sin\theta \,,

and a damped, sinusoidally driven harmonic oscillator,

F_0 \sin(\omega t) = m\left(\frac{d^2x}{dt^2} + 2\zeta\omega_0\frac{dx}{dt} + \omega_0^2 x \right)\,.

For describing the motion of masses due to gravity, Newton's law of gravity can be combined with Newton's second law. For example, a ball of mass m thrown in the air, in air currents (such as wind) described by a vector field of resistive forces R = R(r, t), obeys

- \frac{GmM}{|\mathbf{r}|^2} \mathbf{\hat{e}}_r + \mathbf{R} = m\frac{d^2 \mathbf{r}}{d t^2} \quad \Rightarrow \quad \frac{d^2 \mathbf{r}}{d t^2} = - \frac{GM}{|\mathbf{r}|^2} \mathbf{\hat{e}}_r + \mathbf{A}

where G is the gravitational constant, M the mass of the Earth, and A = R/m is the acceleration of the projectile due to the air currents at position r and time t.
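The simple-pendulum equation quoted above, d²θ/dt² = −(g/ℓ) sin θ, has no elementary closed-form solution for large amplitudes, but it is straightforward to integrate numerically. A minimal sketch follows (Python; the length, initial angle and step size are illustrative assumptions, not values from the text); it also compares the measured period with the small-angle result 2π√(ℓ/g).

```python
import math

# Integrate d^2(theta)/dt^2 = -(g/ell) * sin(theta) and estimate the period.
# Length, initial angle and time step are arbitrary illustrative values.
g, ell = 9.81, 1.0            # gravity (m/s^2), pendulum length (m)
theta, omega = 0.5, 0.0       # initial angle (rad), initial angular velocity (rad/s)
dt, t_end = 1e-4, 10.0

t, prev_theta, crossings = 0.0, theta, []
while t < t_end:
    omega += -(g / ell) * math.sin(theta) * dt   # semi-implicit Euler step
    theta += omega * dt
    t += dt
    if prev_theta < 0.0 <= theta:                # upward zero crossing: once per period
        crossings.append(t)
    prev_theta = theta

T_numeric = crossings[-1] - crossings[-2]
T_small_angle = 2 * math.pi * math.sqrt(ell / g)
print(f"measured period {T_numeric:.4f} s vs small-angle formula {T_small_angle:.4f} s")
```

For a half-radian swing the measured period comes out slightly longer than the small-angle value, as expected for the nonlinear equation.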
The classical N-body problem for N particles each interacting with each other due to gravity is a set of N nonlinear coupled second order ODEs,

\frac{d^2\mathbf{r}_i}{dt^2} = G\sum_{j\neq i}\frac{m_j}{|\mathbf{r}_j - \mathbf{r}_i|^3} (\mathbf{r}_j - \mathbf{r}_i)

where i = 1, 2, ..., N labels the quantities (mass, position, etc.) associated with each particle.

Analytical mechanics[edit]

As the system evolves, q traces a path through configuration space (only some paths are shown). The path taken by the system (red) has a stationary action (δS = 0) under small changes in the configuration of the system (δq).[19]

Using all three coordinates of 3d space is unnecessary if there are constraints on the system. If the system has N degrees of freedom, then one can use a set of N generalized coordinates q(t) = [q1(t), q2(t), ..., qN(t)] to define the configuration of the system. They can be in the form of arc lengths or angles. They considerably simplify the description of motion, since they take advantage of the intrinsic constraints that limit the system's motion, and the number of coordinates is reduced to a minimum. The time derivatives of the generalized coordinates are the generalized velocities

\mathbf{\dot{q}} = d\mathbf{q}/dt \,.

The Euler–Lagrange equations are[2][20]

\frac{d}{d t} \left ( \frac{\partial L}{\partial \mathbf{\dot{q}} } \right ) = \frac{\partial L}{\partial \mathbf{q}} \,,

where the Lagrangian is a function of the configuration q and its time rate of change dq/dt (and possibly time t),

L = L\left [ \mathbf{q}(t), \mathbf{\dot{q}}(t), t \right ] \,.

Setting up the Lagrangian of the system, then substituting into the equations, evaluating the partial derivatives and simplifying, a set of N coupled second order ODEs in the coordinates is obtained.

Hamilton's equations are[2][20]

\mathbf{\dot{p}} = -\frac{\partial H}{\partial \mathbf{q}} \,, \quad \mathbf{\dot{q}} = + \frac{\partial H}{\partial \mathbf{p}} \,,

where the Hamiltonian

H = H\left [ \mathbf{q}(t), \mathbf{p}(t), t \right ]

is a function of the configuration q, the conjugate "generalized" momenta

\mathbf{p} = \partial L/\partial \mathbf{\dot{q}} \,,

and possibly time t. Here ∂/∂q = (∂/∂q1, ∂/∂q2, ..., ∂/∂qN) is a shorthand notation for a vector of partial derivatives with respect to the indicated variables (see for example matrix calculus for this denominator notation). Setting up the Hamiltonian of the system, then substituting into the equations, evaluating the partial derivatives and simplifying, a set of 2N coupled first order ODEs in the coordinates qi and momenta pi is obtained.

The Hamilton–Jacobi equation is[2]

- \frac{\partial S(\mathbf{q},t)}{\partial t} = H\left(\mathbf{q}, \mathbf{p}, t \right) \,,

where

S[\mathbf{q},t] = \int_{t_1}^{t_2}L(\mathbf{q}, \mathbf{\dot{q}}, t)\,dt

is Hamilton's principal function, also called the classical action, a functional of L. In this case, the momenta are given by

\mathbf{p} = \partial S /\partial \mathbf{q}\,.

Although the equation has a simple general form, for a given Hamiltonian it is actually a single first order non-linear PDE in N + 1 variables. The action S allows identification of conserved quantities for mechanical systems, even when the mechanical problem itself cannot be solved fully, because any differentiable symmetry of the action of a physical system has a corresponding conservation law, a theorem due to Emmy Noether.
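The coupled N-body gravitational equations quoted at the start of this section are straightforward to integrate numerically for small N. The sketch below (Python with NumPy; G = 1 and all masses, positions and velocities are illustrative values chosen to give a circular two-body orbit, not data from the text) advances the system with a leapfrog "kick-drift-kick" step, a standard choice because it respects the time-symmetry of the equations far better than naive Euler integration.

```python
import numpy as np

G = 1.0                                    # gravitational constant (arbitrary units)
m = np.array([1.0, 1.0])                   # two equal masses
r = np.array([[-0.5, 0.0], [0.5, 0.0]])    # initial positions, separation d = 1
v_circ = np.sqrt(G * m[0] / 2.0)           # speed giving a circular orbit for this pair
v = np.array([[0.0, -v_circ], [0.0, v_circ]])

def accelerations(r):
    """a_i = G * sum_{j != i} m_j (r_j - r_i) / |r_j - r_i|^3  (the ODEs above)."""
    a = np.zeros_like(r)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return a

dt, steps = 1e-3, 20000
a = accelerations(r)
for _ in range(steps):                     # leapfrog: kick - drift - kick
    v += 0.5 * dt * a
    r += dt * v
    a = accelerations(r)
    v += 0.5 * dt * a

# For a circular orbit the separation should stay very close to 1.
print("final separation:", np.linalg.norm(r[1] - r[0]))
```

The same loop works unchanged for any N; only the arrays of masses, positions and velocities grow.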
All classical equations of motion can be derived from the variational principle known as Hamilton's principle of least action,

\delta S = 0 \,,

stating that the path the system takes through configuration space is the one with the least action S.

In electrodynamics, the force on a charged particle of charge q is the Lorentz force:[21]

\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right)

Combining this with Newton's second law gives a second order differential equation of motion in terms of the position of the particle:

m\frac{d^2 \mathbf{r}}{dt^2} = q\left(\mathbf{E} + \frac{d \mathbf{r}}{dt} \times \mathbf{B}\right)

or a first order equation in terms of its momentum:

\frac{d\mathbf{p}}{dt} = q\left(\mathbf{E} + \frac{\mathbf{p} \times \mathbf{B}}{m}\right)

The same equation can be obtained using the Lagrangian (and applying Lagrange's equations above) for a charged particle of mass m and charge q:[22]

L = \tfrac{1}{2} m\, \dot{\mathbf{r}}\cdot\dot{\mathbf{r}} + q\,\dot{\mathbf{r}}\cdot\mathbf{A} - q\phi

where A and ϕ are the electromagnetic vector and scalar potential fields. The Lagrangian indicates an additional detail: the canonical momentum in Lagrangian mechanics is given by

\mathbf{P} = \frac{\partial L}{\partial \dot{\mathbf{r}}} = m \dot{\mathbf{r}} + q \mathbf{A}

instead of just mv, implying that the canonical momentum of a charged particle contains a contribution from the electromagnetic field as well as from the particle's own motion. Applying Lagrange's equations to this Lagrangian recovers the force equation above. Alternatively, the Hamiltonian[20]

H = \frac{\left(\mathbf{P} - q \mathbf{A}\right)^2}{2m} + q\phi \,,

substituted into Hamilton's equations, also yields the Lorentz force equation.

General relativity[edit]
Geodesic equation of motion[edit]

Geodesics on a sphere are arcs of great circles (yellow curve). On a 2d manifold (such as the sphere shown), the direction of the accelerating geodesic is uniquely fixed if the separation vector ξ is orthogonal to the "fiducial geodesic" (green curve). As the separation vector ξ0 changes to ξ after a distance s, the geodesics are not parallel (geodesic deviation).[23]

The above equations are valid in flat spacetime. In curved spacetime, things become mathematically more complicated since there is no straight line; the straight line is generalized and replaced by a geodesic of the curved spacetime (the curve of extremal length between two points). For curved manifolds with a metric tensor g, the metric provides the notion of arc length (see line element for details). The differential arc length is given by[24]

ds = \sqrt{g_{\alpha\beta} d x^\alpha dx^\beta}

and the geodesic equation is a second-order differential equation in the coordinates; its general solution is a family of geodesics:[25]

\frac{d^2 x^\mu}{ds^2} = - \Gamma^\mu{}_{\alpha\beta}\frac{d x^\alpha}{ds}\frac{d x^\beta}{ds}

where Γμαβ is a Christoffel symbol of the second kind, which contains the metric (with respect to the coordinate system). Given the mass–energy distribution provided by the stress–energy tensor Tαβ, the Einstein field equations are a set of non-linear second-order partial differential equations in the metric, and imply that the curvature of spacetime is equivalent to a gravitational field (see principle of equivalence). Mass falling in curved spacetime is equivalent to a mass falling in a gravitational field - because gravity is a fictitious force.
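Returning briefly to the Lorentz-force equation of motion given earlier: in a uniform magnetic field with E = 0, the particle gyrates in a circle of radius v/ω_c about a fixed guiding centre, where ω_c = qB/m is the cyclotron frequency. A minimal numerical sketch (Python with NumPy; charge, mass, field and initial velocity are arbitrary illustrative values):

```python
import numpy as np

# Integrate m dv/dt = q (E + v x B) for a charge in a uniform magnetic field.
# Charge, mass, field and initial velocity are arbitrary illustrative values.
q, m = 1.0, 1.0
B = np.array([0.0, 0.0, 2.0])            # uniform field along z
E = np.zeros(3)                          # no electric field in this example
r = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])

# Expected gyration: cyclotron frequency, gyroradius and guiding centre.
omega_c = q * np.linalg.norm(B) / m
radius = np.linalg.norm(v) / omega_c
centre = r + m * np.cross(v, B) / (q * np.linalg.norm(B) ** 2)

dt = 1e-4
for _ in range(100000):                  # simple explicit Euler; adequate for this short run
    a = (q / m) * (E + np.cross(v, B))   # Lorentz force divided by mass
    v += a * dt
    r += v * dt

print("gyroradius (expected):", radius)
print("distance from guiding centre (numerical):", np.linalg.norm(r - centre))
```

The two printed numbers agree closely, confirming the circular gyration; for long integrations a Boris-type push would be preferable, since plain Euler slowly inflates the speed.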
The relative acceleration of one geodesic to another in curved spacetime is given by the geodesic deviation equation:

\frac{D^2\xi^\alpha}{ds^2} = -R^\alpha{}_{\beta\gamma\delta}\frac{dx^\beta}{ds}\xi^\gamma\frac{dx^\delta}{ds}

where ξα = (x2)α − (x1)α is the separation vector between two geodesics, D/ds (not just d/ds) is the covariant derivative, and Rαβγδ is the Riemann curvature tensor, containing the Christoffel symbols. In other words, the geodesic deviation equation is the equation of motion for masses in curved spacetime, analogous to the Lorentz force equation for charges in an electromagnetic field.[26] For flat spacetime, the metric is a constant tensor so the Christoffel symbols vanish, and the geodesic equation has straight lines as its solutions. This is also the limiting case when masses move according to Newton's law of gravity.

Spinning objects[edit]

In general relativity, rotational motion is described by the relativistic angular momentum tensor, including the spin tensor, which enter the equations of motion under covariant derivatives with respect to proper time. The Mathisson–Papapetrou–Dixon equations describe the motion of spinning objects moving in a gravitational field.

Analogues for waves and fields[edit]

Unlike the equations of motion for describing particle mechanics, which are systems of coupled ordinary differential equations, the analogous equations governing the dynamics of waves and fields are always partial differential equations, since the waves or fields are functions of space and time. For a particular solution, boundary conditions along with initial conditions need to be specified. Sometimes in the following contexts, the wave or field equations are also called "equations of motion".

Field equations[edit]

Equations that describe the spatial dependence and time evolution of fields are called field equations. These include, for example, Maxwell's equations for the electromagnetic field and the Einstein field equations for the gravitational field. This terminology is not universal: for example, although the Navier–Stokes equations govern the velocity field of a fluid, they are not usually called "field equations", since in this context they represent the momentum of the fluid and are called the "momentum equations" instead.

Wave equations[edit]

Equations of wave motion are called wave equations. The solutions to a wave equation give the time-evolution and spatial dependence of the amplitude. Boundary conditions determine whether the solutions describe traveling waves or standing waves. From the classical equations of motion and field equations, mechanical, gravitational wave, and electromagnetic wave equations can be derived. The general linear wave equation in 3d is:

\frac{1}{v^2}\frac{\partial^2 X}{\partial t^2} = \nabla^2 X

where X = X(r, t) is any mechanical or electromagnetic field amplitude - say the displacement of a vibrating medium, or a component of the electric or magnetic field[27] - and v is the phase velocity. Non-linear equations model the dependence of phase velocity on amplitude, replacing v by v(X). There are other linear and non-linear wave equations for very specific applications; see for example the Korteweg–de Vries equation.

Quantum theory[edit]

In quantum theory, the wave and field concepts both appear. In quantum mechanics, in which particles also have wave-like properties according to wave–particle duality, the analogue of the classical equations of motion (Newton's law, Euler–Lagrange equation, Hamilton–Jacobi equation, etc.)
is the Schrödinger equation in its most general form: i\hbar\frac{\partial\Psi}{\partial t} = \hat{H}\Psi \,, where Ψ is the wavefunction of the system, \hat{H} is the quantum Hamiltonian operator, rather than a function as in classical mechanics, and ħ is the Planck constant divided by 2π. Setting up the Hamiltonian and inserting it into the equation results in a wave equation, the solution is the wavefunction as a function of space and time. The Schrödinger equation itself reduces to the Hamilton–Jacobi equation in when one considers the correspondence principle, in the limit that ħ becomes zero. Throughout all aspects of quantum theory, relativistic or non-relativistic, there are various formulations alternative to the Schrödinger equation that govern the time evolution and behavior of a quantum system, for instance: See also[edit] 1. ^ Encyclopaedia of Physics (second Edition), R.G. Lerner, G.L. Trigg, VHC Publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1 (VHC Inc.) 0-89573-752-3 2. ^ a b c d Analytical Mechanics, L.N. Hand, J.D. Finch, Cambridge University Press, 2008, ISBN 978-0-521-57572-0 3. ^ See History of Mathematics 4. ^ The Britannica Guide to History of Mathematics, ed. Erik Gregersen 5. ^ Discourses, Galileo 6. ^ Dialogues Concerning Two New Sciences, by Galileo Galilei; translated by Henry Crew, Alfonso De Salvio 7. ^ Halliday, David; Resnick, Robert; Walker, Jearl (2004-06-16). Fundamentals of Physics (7 Sub ed.). Wiley. ISBN 0-471-23231-9.  8. ^ a b Dynamics and Relativity, J.R. Forshaw, A.G. Smith, Wiley, 2009, ISBN 978-0-470-01460-8 9. ^ M.R. Spiegel, S. Lipcshutz, D. Spellman (2009). Vector Analysis. Schaum's Outlines (2nd ed.). McGraw Hill. p. 33. ISBN 978-0-07-161545-7.  10. ^ a b Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, second Edition, 1978, John Murray, ISBN 0-7195-3382-1 11. ^ Hanrahan, Val; Porkess, R (2003). Additional Mathematics for OCR. London: Hodder & Stoughton. p. 219. ISBN 0-340-86960-7.  12. ^ Keith Johnson (2001). Physics for you: revised national curriculum edition for GCSE (4th ed.). Nelson Thornes. p. 135. ISBN 978-0-7487-6236-1. The 5 symbols are remembered by "suvat". Given any three, the other two can be found.  13. ^ 3000 Solved Problems in Physics, Schaum Series, A. Halpern, Mc Graw Hill, 1988, ISBN 978-0-07-025734-4 14. ^ a b An Introduction to Mechanics, D. Kleppner, R.J. Kolenkow, Cambridge University Press, 2010, p. 112, ISBN 978-0-521-19821-9 15. ^ Encyclopaedia of Physics (second Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (VHC Inc.) 0-89573-752-3 16. ^ "Mechanics, D. Kleppner 2010" 17. ^ "Relativity, J.R. Forshaw 2009" 18. ^ The Physics of Vibrations and Waves (3rd edition), H.J. Pain, John Wiley & Sons, 1983, ISBN 0-471-90182-2 19. ^ R. Penrose (2007). The Road to Reality. Vintage books. p. 474. ISBN 0-679-77631-1.  20. ^ a b c Classical Mechanics (second edition), T.W.B. Kibble, European Physics Series, 1973, ISBN 0-07-084018-0 21. ^ Electromagnetism (second edition), I.S. Grant, W.R. Phillips, Manchester Physics Series, 2008 ISBN 0-471-92712-0 22. ^ Classical Mechanics (second Edition), T.W.B. Kibble, European Physics Series, Mc Graw Hill (UK), 1973, ISBN 0-07-084018-0. 23. ^ Misner, Thorne, Wheeler, Gravitation 24. ^ C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (second ed.). p. 1199. ISBN 0-07-051400-3.  25. ^ C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (second ed.). p. 1200. ISBN 0-07-051400-3.  26. ^ J.A. Wheeler, C. Misner, K.S. Thorne (1973). Gravitation. W.H. 
Freeman & Co. pp. 34–35. ISBN 0-7167-0344-0.  27. ^ H.D. Young, R.A. Freedman (2008). University Physics (12th ed.). Addison-Wesley (Pearson International). ISBN 0-321-50130-6.
How to model large atoms

1. Hi! One can easily analyze the hydrogen atom since it is a two-body problem. But how do you apply quantum theory to model atoms (such as iron) which are much larger, and predict their behaviour in an environment? My guess is that you use statistical mechanics, but I have only just started a course and it is basically limited to heat. Thank you.

3. mfb
Staff: Mentor
The first approach is still a two-body problem. Afterwards, interactions between the electrons can be taken into account. To describe the state of the electrons and bonds in a material, this is pure quantum mechanics. If you want to describe things like heat, you don't have to care about those details: you take the "output" of quantum mechanics (crystal structure, energy bands and so on) and apply statistical mechanics to it.

4. Hi mfb! Thanks for your answer. Basically I want to evaluate the effect of electric fields, magnetic fields and magnetic vector potentials on the properties of iron in haemoglobin. How do I go about this?

5. mfb
Staff: Mentor
I guess that will need some protein folding software if you expect effects - the fields influence the whole thing, not just a single small atom inside.

6. By the way, I think the usual quantum-mechanical analysis of the hydrogen atom is a one-body problem, because we assume the nucleus is fixed and just provides a potential for the electron; but in real physics we should also use a wavefunction to describe the nucleus.

7. mfb
Staff: Mentor
Usually the two-body problem is reduced to a one-body problem with a reduced mass, so both electron and nucleus are taken into account. The other degrees of freedom of the two-body system correspond to the overall motion of the atom.

8. cgk
Science Advisor
OP, real atoms (and molecules) are handled with quantum chemistry software. Such programs (e.g., Molpro, Orca) can solve the many-body Schrödinger equation in various approximations to determine the quantitative behavior of the electrons, including their response to external fields. For iron atoms, for example, you would employ approximations like Hartree-Fock and CCSD(T) (a coupled cluster method), or multi-configuration self-consistent field (MCSCF) and multi-reference configuration interaction (MRCI), depending on the application. Understanding and using such approximations (correctly) is not easy, and normally requires some background reading in many-body quantum mechanics and quantum chemistry.
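For anyone who wants to experiment along these lines, the sketch below uses the open-source PySCF package (my addition - the thread itself mentions Molpro and Orca) to run an unrestricted Hartree-Fock calculation on a single iron atom, the kind of mean-field reference that the correlated methods mentioned above (CCSD(T), MCSCF, MRCI) are built on top of. The basis set and spin state are illustrative choices.

```python
# Minimal PySCF sketch (assumes `pip install pyscf`).  This only produces a
# mean-field (UHF) reference for an isolated Fe atom; correlated treatments
# such as CCSD(T) or MCSCF would be layered on top of a reference like this.
from pyscf import gto, scf

fe = gto.M(
    atom="Fe 0 0 0",
    basis="def2-svp",   # illustrative basis set choice
    spin=4,             # four unpaired electrons: the 3d^6 4s^2 quintet ground state
    charge=0,
)

mf = scf.UHF(fe)
energy = mf.kernel()    # converges the self-consistent field; energy in hartree
print("UHF total energy (hartree):", energy)
```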
Complex potential model for low-energy neutron scattering Fiedeldey H.; Frahn W.E. (1961) The optical model for low-energy neutron scattering is treated explicitly by means of a new form of complex potential which permits an exact solution of the S-wave Schrödinger equation. This potential is everywhere continuously differentiable and its imaginary part consists of both a volume and a surface absorption term, which is in close agreement with recent theoretical calculations of the spatial distribution of the imaginary potential. Closed-form expressions are obtained for the logarithmic derivative of the wave function, and hence for the S-wave strength function and scattering length, from which their dependence on all potential parameters can be studied explicitly. In particular, it is shown that concentrating the absorption in the nuclear surface can serve as a remedy for a well-known discrepancy, by lowering the minima of the strength function to more realistic values.
Qualitative Behaviour and Controllability of Partial Differential Equations / Comportement qualitatif et controlabilité des EDP (Org: Holger Teismann, Acadia University)

DAVID AMUNDSEN, Carleton University
Resonant Solutions of the Forced KdV Equation
The forced Korteweg-de Vries (fKdV) equation provides a canonical model for the evolution of weakly nonlinear dispersive waves in the presence of additional effects such as external forcing or variable topography. While the symmetries and integrability of the underlying KdV structure facilitate extensive analysis, in this generalized setting such favourable properties no longer hold. Through physical and numerical experimentation it is known that a rich family of resonant steady solutions exists, yet qualitative analytic insight into them is limited. Based on hierarchical perturbative and matched asymptotic approaches, we present a formal mathematical framework for the construction of solutions in the small dispersion limit. In this way we obtain not only accurate analytic representations but also important a priori insight into the response of the system as it is detuned away from resonance. Specific examples and comparisons in the case of a fundamental periodic resonant mode will be presented. Joint work with M. P. Mortell (UC Cork) and E. A. Cox (UC Dublin).

SEAN BOHUN, Penn State
The Wigner-Poisson System with an External Coulomb Field
This system of equations describes the time evolution of the quantum mechanical behaviour of a large ensemble of particles in a vacuum, where the long range interactions between the particles can be taken into account. The model also facilitates the introduction of external classical effects. As tunneling effects become more pronounced in semiconductor devices, models which are able to bridge the gap between the quantum behaviour and external classical effects become increasingly relevant. The WP system is such a model. Local existence is shown by a contraction mapping argument, which is then extended to a global result using macroscopic control (conservation of probability and energy). Asymptotic behaviour of the WP system and the underlying SP system is established with a priori estimates on the spatial moments. Finally, conditions on the energy are given which (a) ensure that the solutions decay and (b) ensure that the solutions do not decay.

SHAOHUA CHEN, University College of Cape Breton
Boundedness and Blowup for the Solution of an Activator-Inhibitor Model
We consider a general activator-inhibitor model

u_t = \varepsilon \Delta u - \mu u + \frac{u^p}{v^q}, \qquad v_t = D \Delta v - \nu v + \frac{u^r}{v^s}

with Neumann boundary conditions, where rq > (p-1)(s+1). We show that if r > p-1 then the solutions exist globally in time for all initial values, and if r > p-1 and q < s+1 then the solutions are bounded for all initial values. However, if r < p-1 then, for some special initial values, the solutions will blow up.

STEPHEN GUSTAFSON, University of British Columbia, Mathematics Department, 1984 Mathematics Rd., Vancouver, BC V6T 1Z2
Scattering for the Gross-Pitaevskii Equation
The Gross-Pitaevskii equation, a nonlinear Schroedinger equation with non-zero boundary conditions, models superfluids and Bose-Einstein condensates. Recent mathematical work has focused on the finite-time dynamics of vortex solutions, and the existence of vortex-pair traveling waves. However, little seems to be known about the long-time behaviour (e.g. scattering theory, and the asymptotic stability of vortices).
We address the simplest such problem - scattering around the vacuum state - which is already tricky due to the non-self-adjointness of the linearized operator, and the "long-range" nonlinearity. In particular, our present methods are limited to higher dimensions. This is joint work in progress with K. Nakanishi and T.-P. Tsai.

HORST LANGE, Universitaet Köln, Weyertal 86-90, 50931 Köln, Germany
Noncontrollability of the nonlinear Hartree-Schrödinger and Gross-Pitaevskii-Schrödinger equations
We consider the bilinear control problem for the nonlinear Hartree-Schrödinger equation [HS] (which plays a prominent role in quantum chemistry), and for the Gross-Pitaevskii-Schrödinger equation [GPS] (of the theory of Bose-Einstein condensates); for both systems we study the case of a bilinear control term involving the position operator or the momentum operator. A target state uT ∈ L2(R3) is said to be reachable from an initial state u0 ∈ L2(R3) in time T > 0 if there exists a control s.t. the system allows a solution state u(t,x) with u(0,x) = u0(x), u(T,x) = uT(x). We prove that, for any T > 0 and any initial datum u0 ∈ L2(R3) \{0}, the set of non-reachable target states (in time T > 0) is relatively L2-dense in the sphere {u ∈ L2(R3) | ||u||L2 = ||u0||L2} (for both [HS] and [GPS]). The proof uses the Fourier transform, estimates for Riesz potentials for [HS], and estimates for the Schrödinger group associated with the Hamiltonian −Δ + x2 for [GPS].

HAILIANG LI, Department of Pure and Applied Mathematics, Osaka University, Japan
On Well-posedness and Asymptotics of Multi-dimensional Quantum Hydrodynamics
In the modelling of semiconductor devices at the nano-scale, for instance MOSFETs and RTDs where quantum effects (like particle tunnelling through potential barriers and build-up in quantum wells) take place, the quantum hydrodynamical equations are important and dominant in the description of electron and hole transport under the self-consistent electric field. These quantum hydrodynamic equations consist of conservation laws of mass, balance laws of momentum forced by an additional nonlinear dispersion (caused by the quantum (Bohm) potential), and the self-consistent electric field. In this talk, we shall review the recent progress on the multi-dimensional quantum hydrodynamic equations, including the mathematical modelling based on the moment method applied to the Wigner-Boltzmann equation, rigorous analysis of well-posedness for a general, nonconvex pressure-density relation and regular large initial data, long time stability of the steady state under a quantum subsonic condition, the global-in-time relaxation limit from the quantum hydrodynamic equations to the quantum drift-diffusion equations, and so on. Joint with A. Jüngel, P. Marcati, and A. Matsumura.

DONG LIANG, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3
Analysis of the S-FDTD Method for Three-Dimensional Maxwell Equations
The finite-difference time-domain (FDTD) method for Maxwell's equations, first introduced by Yee, is a very popular numerical algorithm in computational electromagnetics. However, the traditional FDTD scheme is only conditionally stable. The computation of three-dimensional problems by the scheme will need much more computer memory or become extremely difficult when the size of the spatial steps becomes very small. Recently, there has been considerable interest in developing efficient schemes for these problems.
In this talk, we will present a new splitting finite-difference time-domain scheme (S-FDTD) for the general three-dimensional Maxwell's equations. Unconditional stability and convergence are proved for the scheme by using the energy method. The technique of reducing perturbation error is further used to derive a high order scheme. Numerical results are given to illustrate the performance of the methods. This research is joint work with L. P. Gao and B. Zhang.

KIRSTEN MORRIS, University of Waterloo
Controller Design for Partial Differential Equations
Many controller design problems of practical interest involve systems modelled by partial differential equations. Typically a numerical approximation is used at some stage in controller design. However, not every scheme that is suitable for simulation is suitable for controller design. Misleading results may be obtained if care is not taken in selecting a scheme. Sufficient conditions for a scheme to be suitable for linear quadratic or H∞ controller design have been obtained. Once a scheme is chosen, the resulting approximation will in general be a large system of ordinary differential equations. Standard control algorithms are only suitable for systems with model order less than 100, and special techniques are required.

KEITH PROMISLOW, Michigan State University
Nonlocal Models of Membrane Hydration in PEM Fuel Cells
Polymer electrolyte membrane (PEM) fuel cells are unique energy conversion devices, efficiently generating useful electric voltage from chemical reactants without combustion. They have recently captured public attention for automotive applications, for which they promise high performance without the pollutants associated with combustion. From a mathematical point of view the device is governed by coupled systems of elliptic, parabolic, and degenerate parabolic equations describing the heat, mass, and ion transport through porous media and polymer electrolyte membranes. This talk will describe the overall functionality of the PEM fuel cell, presenting analysis of the slow, nonlocal propagation of hydration fronts within the polymer electrolyte membrane.

TAI-PENG TSAI, University of British Columbia, Vancouver
Boundary regularity criteria for suitable weak solutions of Navier-Stokes equations
I will present some new regularity criteria for suitable weak solutions of the Navier-Stokes equations near the boundary in space dimension 3. Partial regularity is also analyzed. This is joint work with Stephen Gustafson and Kyungkeun Kang.
Moving gapless indirect excitons in monolayer graphene
Mahmood Mahmoodian and Matvey Entin
Nanoscale Research Letters 2012, 7:599
DOI: 10.1186/1556-276X-7-599
Received: 16 July 2012; Accepted: 11 October 2012; Published: 30 October 2012

The existence of moving indirect excitons in monolayer graphene is established theoretically in the envelope-function approximation. The excitons are formed from electrons and holes near the opposite conic points. The electron-hole binding is conditioned by the trigonal warping of the electron spectrum. It is shown that the exciton exists in some sectors of the exciton momentum space and that its spectrum has strong trigonal warping.

Keywords: monolayer graphene; exciton; energy spectrum; optical absorption; specific heat
PACS: 71.35.-y; 73.22.Lp; 73.22.Pr; 78.67.Wj; 65.80.Ck

An exciton is a familiar two-particle state of semiconductors. The electron-hole attraction decreases the excitation energy compared to independent particles, producing bound states in the bandgap of a semiconductor. The absence of the gap makes this picture inapplicable to graphene, and an immobile exciton becomes impossible in a material with zero gap. However, at a finite total momentum a gap opens, which makes the binding of a moving pair allowable. The purpose of the present paper is an envelope-approximation study of the possibility of Wannier-Mott exciton formation near the conic point in neutral graphene. In the present paper, we use the term 'exciton' in its direct meaning, unlike other papers where this term refers to many-body ('excitonic') effects[1, 2], an excitonic insulator with full spectrum reconstruction, or exciton-like singularities originating from saddle points (van Hove singularities) of the single-particle spectrum[3]. On the contrary, our goal is the pair bound states of electrons and holes. There is a widely accepted opinion that the zero gap in graphene forbids Mott exciton states (see, e.g.,[4]). This statement, which is valid in the conic approximation, proves to be incorrect beyond this approximation. Our aim is to demonstrate that the excitons exist if one takes the deviations from the conic spectrum into consideration. We consider the envelope tight-binding Hamiltonian of monolayer graphene

H_{ex} = \epsilon(\mathbf{p}_e) + \epsilon(\mathbf{p}_h) + V(\mathbf{r}_e - \mathbf{r}_h),

where

\epsilon(\mathbf{p}) = \gamma_0 \sqrt{1 + 4\cos\frac{a p_x}{2}\cos\frac{\sqrt{3}\, a p_y}{2} + 4\cos^2\frac{a p_x}{2}}

is the single-electron energy, a = 0.246 nm is the lattice constant, ħ = 1, and V(r) = −e²/(χr) is the potential energy of the electron-hole interaction. The electron spectrum has conic points νK, ν = ±1, K = (4π/3a, 0), near which ε(p) ≈ s|p − νK|, where s = √3 γ0 a/2 is the electron velocity in the conic approximation. The electron and hole momenta pe,h can be expressed via the pair momentum q = pe + ph and the relative momentum p = pe − ph. The momenta pe,h can be situated near the same conic point (q = k, k ≪ 2K) or near the opposite conic points (q = 2K + k, k ≪ 2K). We assumed that graphene is embedded in an insulator with a relatively large dielectric constant χ, so that the effective dimensionless interaction constant g = e²/(sχħ) ≈ 2/χ ≪ 1 and many-body complications are inessential. In the conic approximation, a classical electron and hole with the same direction of momentum have the same velocity s. The interaction changes their momenta, but not their velocities. The two-particle Hamiltonian contains no terms quadratic in the component of the relative momentum p along k. In quantum language, such an attraction does not result in binding.
Thus, the problem of binding demands accounting for the corrections to the conic spectrum. Two kinds of excitons are potentially allowed in graphene: a direct exciton with k ≪ 1/a (when the pair belongs to the same extremum) and an indirect exciton with q = 2K + k. Assuming p ≪ k (this results from the smallness of g), we arrive at the quadratic Hamiltonian

H_{ex} = sk + \frac{p_1^2}{2 m_1} + \frac{p_2^2}{2 m_2} - \frac{e^2}{\chi r},

where the coordinate system with the basis vectors e1 ∥ k and e2 ⊥ e1 is chosen, r = (x1, x2). In the conic approximation, we have m2 = k/s, m1 = ∞. Thus, this approximation is not sufficient to find m1. Beyond the conic approximation (but near the conic point), we should expand the spectrum (2) with respect to k up to the quadratic terms, which results in the trigonal spectrum warping. As a result, we have for the indirect exciton

\frac{1}{m_1} = \nu\, \frac{s a}{4\sqrt{3}} \cos 3\phi_k,

where ϕk is the angle between k and K. The effective mass m1 ≫ m2 is directly determined by the trigonal spectrum warping, and the large value of m1 follows from the smallness of the warping. The sign of m1 is determined by ν cos3ϕk. If ν cos3ϕk > 0, electrons and holes tend to bind; otherwise they run away from each other. Thus, the binding of an indirect pair is permitted for ν cos3ϕk > 0. Away from the conic point, this condition transforms into the requirement that at least two of the three quantities

1 + u + v_-, \quad 1 + u + v_+, \quad 1 + v_- + v_+

be negative, where u = cos(a k_x) and v_± = cos((k_x ± √3 k_y)a/2). To find the indirect exciton states analytically, we solved the Schrödinger equation with the Hamiltonian (3) using the large ratio of the effective masses. This parameter can be utilized via the adiabatic approximation, similar to the problem of molecular levels. Coordinates 1 and 2 play the roles of heavy 'ion' and light 'electron' coordinates. At the first stage, the ion term in the Hamiltonian is omitted, and the Schrödinger equation is solved with respect to the electron wave function at a fixed ion position. The resulting electron terms are then used to solve the ion equation. This gives the approximate ground level of the exciton ε(k) = sk − εex(k), where the binding energy of the exciton is εex(k) = π⁻¹ s k g² log²(m1/m2) (the coefficient 1/π here is found by a variational method). A similar reasoning for the direct exciton gives a negative mass m1 = −32/(k s a²(7 − cos6ϕk)). As a result, the direct exciton kinetic energy of the electron-hole relative motion is not positive definite, which means that electrons and holes from the same conic point cannot bind.

Results and discussion

Figure 1 shows the domain of indirect exciton existence in momentum space. This domain covers a small part of the Brillouin zone.

Figure 1: Relief of the single-electron spectrum. Domains where exciton states exist are bounded by a thick line.

The quantity εex(k) depends essentially on the momentum via the ratio of effective masses m1/m2. Within the accepted assumptions, εex is less than the energy of the unbound pair sk. However, at a small enough dielectric constant χ, the ratio of the two quantities is not too small. Although we have no right to consider the problem with a large g in the two-particle approach, it is obvious that an increase of the parameter g can only result in growth of the binding energy. Besides, we have studied the exciton problem numerically in the same approximation and by means of a variational approach. Figure 2 represents the dependence of the exciton binding energy on its momentum for χ = 10.
Figure 3 shows the radial sections of the two-dimensional plot. The characteristic exciton binding energies are of the order of 0.2 eV.

Figure 2: Relief map of the indirect exciton ground-state binding energy. The map shows εex (in eV) as a function of the wave vector in units of the reciprocal lattice constant. The exciton exists in the colored sectors.

Figure 3: Radial sections of Figure 2 at fixed angles in degrees (marked). Curves run up to the ends of the exciton spectrum.

All results for embedded graphene are applicable to the free-suspended layer if the interaction constant g is replaced with a smaller quantity g̃, which is renormalized by many-body effects. In this case, the exciton binding energy becomes essentially larger and comparable to the kinetic energy sk. We discuss the possibility of observation of the indirect excitons in graphene. As we saw, their energies are distributed between zero and a few tenths of an eV, which smears out the exciton resonance. The large exciton momentum blocks both direct optical excitation and recombination. However, slow recombination and intervalley relaxation protect the excitons (once generated somehow) from recombination or decay. On the other hand, the absence of a low-energy threshold results in a contribution of excitons to the specific heat and the thermal conductivity even at low temperature. It is found that the exciton contribution to the specific heat at low temperatures at the Dirac point is proportional to (gT/s)² log²(aT/s). It is essentially lower than the electron specific heat ∝(T/s)² and the acoustic phonon contribution ∝(T/c)², where c is the phonon velocity. Nevertheless, the exciton contribution to the electron-hole plasma specific heat is essential for experiments with hot electrons. In conclusion, the exciton states in graphene are gapless and possess a strong angular dependence. This behavior coheres with the angular selectivity of the electron-hole scattering rate[5]. In our opinion, it is reasonable to observe the excitons by means of high-resolution electron energy loss spectroscopy of free-suspended graphene in vacuum. Such energy- and angle-resolving measurements can reproduce the indirect exciton spectrum.

This research has been supported in part by the grants of RFBR nos. 11-02-00730 and 11-02-12142.

Authors' Affiliations
Institute of Semiconductor Physics, Siberian Branch, Russian Academy of Sciences

1. Yang L, Deslippe J, Park CH, Cohen ML, Louie SG: Excitonic effects on the optical response of graphene and bilayer graphene. Phys Rev Lett 2009, 103: 186802.
2. Yang L: Excitons in intrinsic and bilayer graphene. Phys Rev B 2011, 83: 085405.
3. Chae DH, Utikal T, Weisenburger S, Giessen H, von Klitzing K, Lippitz M, Smet JH: Excitonic Fano resonance in free-standing graphene. Nano Lett 2011, 11: 1379. doi:10.1021/nl200040q
4. Ratnikov PV, Silin AP: Size quantization in planar graphene-based heterostructures: pseudospin splitting, interface states, and excitons. Zh Eksp Teor Fiz 2012, 141: 582. [JETP 2012, 114(3): 512]
5. Golub LE, Tarasenko SA, Entin MV, Magarill LI: Valley separation in graphene by polarized light. Phys Rev B 2011, 84: 195408.

© Mahmoodian and Entin; licensee Springer. 2012
Buddy Can You Paradigm?
Reality Check
Victor Stenger
Skeptical Briefs Volume 10.3, September 2000

A common view is that science progresses by a series of abrupt changes in which new scientific theories replace old ones that are "proven wrong" and never again see the light of day. Unless, as John Horgan has suggested, we have reached the "end of science," every theory now in use, such as evolution or gravity, seems destined to be overturned. If this is true, then we cannot interpret any scientific theory as a reliable representation of reality. While this view of science originated with philosopher Karl Popper, its current widespread acceptance is usually imputed to Thomas Kuhn, whose The Structure of Scientific Revolutions (1962) was the best-selling academic book of the twentieth century, and probably also the most cited. Kuhn alleged that science does not progress gradually but rather through a series of revolutions. He characterized these revolutions with the now famous and overworked term paradigm shifts, in which the old problem-solving tools, the "paradigms" of a discipline, are replaced by new ones. In between revolutions, not much is supposed to happen. And after the revolution, the old paradigms are largely forgotten. Being a physicist by training, Kuhn focused mainly on revolutions in physics. One of the most important examples he covered was the transition from classical mechanics to quantum mechanics that occurred in the early 1900s. In quantum mechanics, the physicist calculates probabilities for particles following certain paths, rather than calculating the exact paths themselves as in classical mechanics. True, this constitutes a different procedure. But has classical mechanics become a forgotten tool, like the slide rule? Hardly. Except for computer chips, lasers, and a few other special devices, most of today's high-tech society is fully explicable with classical physics alone. While quantum mechanics is needed to understand basic chemistry, no special quantum effects are evident in biological mechanisms. Thus, most of what is labeled natural science in today's world still rests on a foundation of Newtonian physics that has not changed much, in basic principles and methods, for centuries. Nobel physicist Steven Weinberg, who was a colleague of Kuhn's at Harvard and originally admired his work, has taken a retrospective look at Structures. In an article in the October 8, 1998, New York Review of Books called "The Revolution That Didn't Happen," Weinberg writes:

It is not true that scientists are unable to "switch back and forth between ways of seeing," and that after a scientific revolution they become incapable of understanding the science that went before it. One of the paradigm shifts to which Kuhn gives much attention in Structures is the replacement at the beginning of this century of Newtonian mechanics by the relativistic mechanics of Einstein. But in fact in educating new physicists the first thing that we teach them is still good old Newtonian mechanics, and they never forget how to think in Newtonian terms, even after they learn about Einstein's theory of relativity. Kuhn himself as an instructor at Harvard must have taught Newtonian mechanics to undergraduates.
Weinberg maintains that the last “mega-paradigm shift” in physics occurred with the transition from Aristotle to Newton, which actually took several hundred years: “[N]othing that has happened in our understanding of motion since the transition from Newtonian to Einsteinian mechanics, or from classical to quantum physics fits Kuhn’s description of a ‘paradigm shift.'” While tentative proposals often prove incorrect, I cannot think of a single case in recent times where a major physical theory that for many years has successfully described all the data within a wide domain was later found to be incorrect in the limited circumstances of that domain. Old, standby theories are generally modified, extended, often simplified with excess baggage removed, and always clarified. Rarely, if ever, are such well-established theories shown to be entirely wrong. More often the domain of applicability is refined as we gain greater knowledge or modifications are made that remain consistent with the overall principles. This is certainly the case with Newtonian physics. The advent of relativity and quantum mechanics in the twentieth century established the precise domain for physics that had been constructed up to that point, but did not dynamite that magnificent edifice. While excess baggage such as the aether and phlogiston was cast off, the old methods still exist as smooth extrapolations of the new ones to the classical domain. The continued success and wide application of Newtonian physics must be viewed as strong evidence that it represents true aspects of reality, that it is not simply a human invention. Furthermore, the new theories grew naturally from the old. When you look in depth at the history of quantum mechanics, you have to conclude it was not the abrupt transition from classical mechanics usually portrayed. Heisenberg retained the classical equations of motion and simply represented observables by matrices instead of real numbers. Basically, all he did was make a slight modification to the algebraic rules of mechanics by relaxing the commutative law. Quantization then arose from assumed commutation rules that were chosen based on what seemed to work. Similarly, the Schrödinger equation was derived from the classical Hamilton-Jacobi equation of motion. These were certainly major developments, but I maintain they were more evolutionary than revolutionary. Where else in the history of science to the present can we identify significant paradigm shifts? With Darwin and Mendel, certainly, in biology. But what in biology since then? Discovering the structure of DNA and decoding the genome simply add to the details of the genetic mechanism that are being gradually enlarged without any abrupt change in the basic naturalistic paradigm. A kind of Darwinian graduated evolution characterizes the development of science and technology. That is not to say that change is slow or uniform, in biological or social systems. The growth of science and technology in recent years has been quick but not instantaneous and still represents a relatively smooth extension of what went before. Victor Stenger
New Scientist TV: Lego pirate proves how freak waves can sink ships
Sandrine Ceurstemont, editor, New Scientist TV

A calm sea can sometimes unleash an unexpected weapon: a sudden monster wave that engulfs a large ship. Now Amin Chabchoub from the Hamburg-Harburg Technical University in Germany and colleagues have used a Lego ship to replicate the phenomenon in a wave tank for the first time, giving insight into how it occurs.

To recreate the effect, the team produced waves based on a solution of the non-linear wave equation thought to be the most likely explanation for large freak waves. In this case, a weak oscillation propagates continuously while suddenly increasing in amplitude for a short time. "I programmed the paddle of the wave maker to generate a wave train which is modulated according to theory," says Chabchoub. "This generated small waves as predicted from the equations and we observed the formation of a giant rogue wave during this evolution."

As seen in the video above, the toy boat rides along on gentle waves until suddenly a large wave appears and it capsizes. The experiment proves that the non-linear model provides a possible explanation for the sudden formation of walls of water in the ocean. The team hopes to expand on the research to model more realistic sea conditions involving wind, water currents and two-dimensional wave trains. The results could be used to develop a short-term prediction system for monster waves.

If you enjoyed this video, see how a toy boat was used to recreate the dead water effect or check out a water-bouncing ball that mimics skipping stones.

Reader comments:

I don't understand. Where are the non-linearities? What was the condition that caused the freak wave? How is it possible to have lots and lots of little waves and then a sudden freak wave? What was the deterministic principle that led to this freak wave?

I suggest that from this day, every single new theory must be 'proven,' or at least represented in a model, using Legos.

So the investigator programmed the wave maker to make a big wave, and the big wave sunk the toy ship. Who is paying this guy for this research? I hope they're not using taxpayer money. Why is this nonsense being published in New Scientist?

No, the wave maker produces small regular oscillations. The evolution of these small oscillations in water is described by the Nonlinear Schrödinger equation. If you follow the wiki-link you can read about how this can result in a high-amplitude wave of short duration (aka a 'rogue wave').

The large-amplitude pulse creation is interesting, but the Lego boat, while cute for some, distracts from the importance of this. Put something in there with a realistic CM and righting moment. Or better, show the side view near the creation point of the rogue wave and let us watch it happen.

A recent TV show commented how "experts", using a linear wave model, could not explain freak/monster waves. Does anyone understand that the real oceans are not "linear"? They are an almost infinite series of "linear" axes at each compass heading. Electronic engineers have used "heterodyning" for over a century to create sum and difference frequencies in radio (through cellphone) designs.
It seems to me these freak/monster waves are the result of constructive/destructive interference between the multitude of multi-axis "linear" waves.

David: The Hamburg-Harburg Technical University is in Germany. I think it's safe to say your tax money was not involved. Besides that, I'd bet there is much more to this researcher's work than this video. Odd that someone prone to jumping to conclusions who derides a sliver of research as "nonsense" reads a scientific journal at all.

I have seen a TV show where space-based radar found wave fronts of 30 meters (100 feet) out in the ocean. It seems the sailors were telling the truth when asked what sunk their vessels; they were not lying to cover any supposed incompetence.

wow.... you owe me 48 seconds of my life. plus the 30 seconds it took to load.

The brief period of low-amplitude waves both before and after the freak waves is interesting.

The scientific paper can be downloaded (free): "Super Rogue Waves: Observation of a Higher-Order Breather in Water Waves", A. Chabchoub, N. Hoffmann, M. Onorato, and N. Akhmediev, Phys. Rev. X 2, 011015 (2012).
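The textbook prototype behind this kind of experiment is the Peregrine breather, an exact solution of the focusing nonlinear Schrödinger equation in which a nearly uniform wave train briefly focuses into a peak three times the background height. The following Python sketch is an editorial illustration only (the specific higher-order breather used by Chabchoub and colleagues is described in the Phys. Rev. X paper cited above); it evaluates the Peregrine solution on a grid and confirms the threefold amplification:

```python
import numpy as np

def peregrine(x, t):
    """Peregrine breather of the focusing NLS  i*psi_t + 0.5*psi_xx + |psi|^2 psi = 0
    (dimensionless form, unit background amplitude)."""
    return (1.0 - 4.0 * (1.0 + 2.0j * t) / (1.0 + 4.0 * x**2 + 4.0 * t**2)) * np.exp(1.0j * t)

x = np.linspace(-20, 20, 2001)
for t in (-5.0, -1.0, 0.0, 1.0, 5.0):
    amp = np.abs(peregrine(x, t))
    print(f"t = {t:+5.1f}   max |psi| = {amp.max():.3f}   (background = 1)")

# At t = 0 the amplitude peaks at exactly 3x the background: a "wave from nowhere"
# that grows out of an almost uniform wave train and disappears again.
```

The deep troughs immediately flanking the peak are part of the same solution, which is one reason the event stands out so sharply against the surrounding wave train.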
Researchers in the US have created the first artificial samples of graphene with electronic properties that can be controlled in a way not possible in the natural form of the material. The samples can be used to study the properties of so-called Dirac fermions, which give graphene many of its unique electronic properties. The work may also lead to the creation of a new generation of quantum materials and devices with exotic behaviour. Graphene is a single layer of carbon atoms organized in a honeycomb lattice. Physicists know that particles, such as electrons, moving though such a structure behave as though they have no mass and travel through the material at near light speeds. These particles are called massless Dirac fermions and their behaviour could be exploited in a host of applications, including transistors that are faster than any that exist today. The new "molecular" graphene, as it is has been dubbed, is similar to natural graphene except that its fundamental electronic properties can be tuned much more easily. It was made using a low-temperature scanning tunnelling microscope with a tip – made of iridium atoms – that can be used to individually position carbon-monoxide molecules on a perfectly smooth, conducting copper substrate. The carbon monoxide repels the freely moving electrons on the copper surface and "forces" them into a honeycomb pattern, where they then behave like massless graphene electrons, explains team leader Hari Manoharan of Stanford University. Described by Dirac "We confirmed that the graphene electrons are massless Dirac fermions by measuring the conductance spectrum of the electrons travelling in our material," says Manoharan. "We showed that the results match the two-dimensional Dirac equation for massless particles moving at the speed of light rather than the conventional Schrödinger equation for massive electrons." The researchers then succeeded in tuning the properties of the electrons in the molecular graphene by moving the positions of the carbon-monoxide molecules on the copper surface. This has the effect of distorting the lattice structure so that it looks as though it has been squeezed along several axes – something that makes the electrons behave as though they have been exposed to a strong magnetic or electric field, although no actual such field has been applied. The team was also able to tune the density of the electrons on the copper surface by introducing defects or impurities into the system. "Studying such artificial lattices in this way may certainly lead to technological applications, but they also provide a new level of control over Dirac fermions and allow us to experimentally access a set of phenomena that could only be investigated using theoretical calculations until now," adds Manoharan. "Introducing tunable interactions between the electrons could allow us to make spin liquids in graphene, for instance, and observe the spin quantum Hall effect if we can succeed in introducing spin-orbit interactions between the electrons." He adds that molecular graphene is just the first of this type of "designer" quantum structure and hopes to make other nanoscale materials with such exotic topological properties using similar bottom-up techniques. The work is reported in Nature 483 306.
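For readers wondering where the "massless" behaviour comes from, the standard nearest-neighbour tight-binding model of a honeycomb lattice already shows it: the band energy vanishes at the corner (K point) of the Brillouin zone and grows linearly with distance from it. The short Python sketch below illustrates that textbook model, not the Stanford group's own analysis; the hopping energy and bond length are typical literature values for natural graphene.

```python
import numpy as np

a = 1.42e-10   # carbon-carbon distance in graphene (m)
t = 2.8        # nearest-neighbour hopping energy (eV), a commonly quoted value

# Honeycomb lattice vectors and the K point of the Brillouin zone
a1 = np.array([1.5 * a,  np.sqrt(3) / 2 * a])
a2 = np.array([1.5 * a, -np.sqrt(3) / 2 * a])
K  = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])

def band_energy(k):
    """Upper tight-binding band E_+(k) = t * |1 + exp(i k.a1) + exp(i k.a2)|."""
    f = 1.0 + np.exp(1j * (k @ a1)) + np.exp(1j * (k @ a2))
    return t * abs(f)

print("E at the K point:", band_energy(K), "eV (should be ~0: a Dirac point)")

# Linear (massless) dispersion near K: E ~ (3 t a / 2) * |dk|
for dk in (1e6, 2e6, 4e6):            # small offsets from K, in 1/m
    q = K + np.array([dk, 0.0])
    print(f"|dk| = {dk:.0e} 1/m  ->  E = {band_energy(q):.4e} eV, "
          f"linear estimate = {1.5 * t * a * dk:.4e} eV")
```

The linear growth of energy with wavevector offset is exactly the conical "Dirac cone" dispersion that the conductance measurements on molecular graphene were designed to probe.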
Quantum Mechanics/Waves and Modes

Many misconceptions about quantum mechanics may be avoided if some concepts of field theory and quantum field theory like "normal mode" and "occupation" are introduced right from the start. They are needed for understanding the deepest and most interesting ideas of quantum mechanics anyway. Questions about this approach are welcome on the talk page.

Waves and modes

A wave is a propagating disturbance in a continuous medium or a physical field. By adding waves or multiplying their amplitudes by a scale factor, superpositions of waves are formed. Waves must satisfy the superposition principle, which states that they can go through each other without disturbing each other. It looks as if there were two superimposed realities, each carrying only one wave and not knowing of each other (that is what is assumed if one uses the superposition principle mathematically in the wave equations). Examples are acoustic waves and electromagnetic waves (light), but also electronic orbitals, as explained below.

A standing wave is considered a one-dimensional concept by many students, because of the examples (waves on a spring or on a string) usually provided. In reality, a standing wave is a synchronous oscillation of all parts of an extended object at a definite frequency, in which the oscillation profile (in particular the nodes and the points of maximal oscillation amplitude) doesn't change. This is also called a normal mode of oscillation. The profile can be made visible in Chladni's figures and in vibrational holography. In unconfined systems, i.e. systems without reflecting walls or attractive potentials, traveling waves may also be chosen as normal modes of oscillation (see boundary conditions).

A phase shift of a normal mode of oscillation is a time shift scaled as an angle in terms of the oscillation period, e.g. phase shifts by 90° and 180° (or π/2 and π) are time shifts by a quarter and half of the oscillation period, respectively. This operation is introduced as another operation allowed in forming superpositions of waves (mathematically, it is covered by the phase factors of complex numbers scaling the waves).

• Helmholtz ran an experiment which clearly showed the physical reality of resonances in a box. (He predicted and detected the eigenfrequencies.)
• experiments with standing and propagating waves

Electromagnetic and electronic modes

Max Planck, one of the fathers of quantum mechanics.

Planck was the first to suggest that the electromagnetic modes are not excited continuously but discretely by energy quanta E = hν proportional to the frequency ν. By this assumption, he could explain why the high-frequency modes remain unexcited in a thermal light source: the thermal exchange energy k_BT is just too small to provide an energy quantum hν if ν is too large. Classical physics predicts that all modes of oscillation (2 degrees of freedom each) — regardless of their frequency — carry the average energy k_BT, which amounts to an infinite total energy (called the ultraviolet catastrophe). This idea of energy quanta was the historical basis for the concept of occupations of modes, designated as light quanta by Einstein, also denoted as photons since the introduction of this term in 1926 by Gilbert N. Lewis.

An electron beam (accelerated in a cathode ray tube, similar to a TV tube) is diffracted in a crystal, and diffraction patterns analogous to the diffraction of monochromatic light by a diffraction grating or of X-rays on crystals are observed on the screen.
This observation proved de Broglie's idea that not only light, but also electrons propagate and get diffracted like waves. In the attracting potential of the nucleus, this wave is confined like the acoustic wave in a guitar corpus. That's why in both cases a standing wave (= a normal mode of oscillation) forms. An electron is an occupation of such a mode. An optical cavity. An electronic orbital is a normal mode of oscillation of the electronic quantum field, very similar to a light mode in an optical cavity being a normal mode of oscillation of the electromagnetic field. The electron is said to be an occupation of an orbital. This is the main new idea in quantum mechanics, and it is forced upon us by observations of the states of electrons in multielectron atoms. Certain fields like the electronic quantum field are observed to allow its normal modes of oscillation to be excited only once at a given time, they are called fermionic. If you have more occupations to place in this quantum field, you must choose other modes (the spin degree of freedom is included in the modes), as is the case in a carbon atom, for example. Usually, the lower-energy (= lower-frequency) modes are favoured. If they are already occupied, higher-energy modes must be chosen. In the case of light, the idea that a photon is an occupation of an electromagnetic mode was found much earlier by Planck and Einstein, see below. Processes and particlesEdit All processes in nature can be reduced to the isolated time evolution of modes and to (superpositions of) reshufflings of occupations, as described in the Feynman diagrams (since the isolated time evolution of decoupled modes is trivial, it is sometimes eliminated by a mathematical redefinition which in turn creates a time dependence in the reshuffling operations; this is called Dirac's interaction picture, in which all processes are reduced to (redefined) reshufflings of occupations). For example in an emission of a photon by an electron changing its state, the occupation of one electronic mode is moved to another electronic mode of lower frequency and an occupation of an electromagnetic mode (whose frequency is the difference between the frequencies of the mentioned electronic modes) is created. Electrons and photons become very similar in quantum theory, but one main difference remains: electronic modes cannot be excited/occupied more than once (= Pauli exclusion principle) while photonic/electromagnetic modes can and even prefer to do so (= stimulated emission). This property of electronic modes and photonic modes is called fermionic and bosonic, respectively. Two photons are indistinguishable and two electrons are also indistinguishable, because in both cases, they are only occupations of modes: all that matters is which modes are occupied. The order of the occupations is irrelevant except for the fact that in odd permutations of fermionic occupations, a negative sign is introduced in the amplitude. Of course, there are other differences between electrons and photons: • The electron carries an electric charge and a rest mass while the photon doesn't. • In physical processes (see the Feynman diagrams), a single photon may be created while an electron may not be created without at the same time removing some other fermionic particle or creating some fermionic antiparticle. This is due to the conservation of charge. Mode numbers, Observables and eigenmodesEdit The system of modes to describe the waves can be chosen at will. 
Any arbitrary wave can be decomposed into contributions from each mode in the chosen system. For the mathematically inclined: the situation is analogous to a vector being decomposed into components in a chosen coordinate system. Decoupled modes or, as an approximation, weakly coupled modes are particularly convenient if you want to describe the evolution of the system in time, because each mode evolves independently of the others and you can just add up the time evolutions. In many situations, it is sufficient to consider less complicated weakly coupled modes and describe the weak coupling as a perturbation.

In every system of modes, you must choose some (continuous or discrete) numbering (called "quantum numbers") for the modes in the system. In Chladni's figures, you can just count the number of nodal lines of the standing waves in the different space directions in order to get a numbering, as long as it is unique. For decoupled modes, the energy or, equivalently, the frequency might be a good choice, but usually you need further numbers to distinguish different modes having the same energy/frequency (this is the situation referred to as degenerate energy levels). Usually these additional numbers refer to the symmetry of the modes.

Plane waves, for example — they are decoupled in spatially homogeneous situations — can be characterized by the fact that the only result of shifting (translating) them spatially is a phase shift in their oscillation. Obviously, the phase shifts corresponding to unit translations in the three space directions provide a good numbering for these modes. They are called the wavevector or, equivalently, the momentum of the mode. Spherical waves with an angular dependence according to the spherical harmonics functions — they are decoupled in spherically symmetric situations — are similarly characterized by the fact that the only result of rotating them around the z-axis is a phase shift in their oscillation. Obviously, the phase shift corresponding to a rotation by a unit angle is part of a good numbering for these modes; it is called the magnetic quantum number m (it must be an integer, because a rotation by 360° mustn't have any effect) or, equivalently, the z-component of the orbital angular momentum. If you consider sharp wavepackets as a system of modes, the position of the wavepacket is a good numbering for the system. In crystallography, the modes are usually numbered by their transformation behaviour (called a group representation) under symmetry operations of the crystal; see also symmetry group, crystal system.

The mode numbers thus often refer to physical quantities, called observables, characterizing the modes. For each mode number, you can introduce a mathematical operation, called an operator, that just multiplies a given mode by the mode number value of this mode. This is possible as long as you have chosen a mode system that actually uses and is characterized by the mode number of the operator. Such a system is called a system of eigenmodes, or eigenstates: sharp wavepackets are not eigenmodes of the momentum operator, they are eigenmodes of the position operator. Spherical harmonics are eigenmodes of the magnetic quantum number, decoupled modes are eigenmodes of the energy operator, etc. If you have a superposition of several modes, you just operate the operator on each contribution and add up the results.
If you chose a different modes system that doesn't use the mode number corresponding to the operator, you just decompose the given modes into eigenmodes and again add up the results of the operator operating on the contributions. So if you have a superposition of several eigenmodes, say, a superposition of modes with different frequencies, then you have contributions of different values of the observable, in this case the energy. The superposition is then said to have an indefinite value for the observable, for example in the tone of a piano note, there is a superposition of the fundamental frequency and the higher harmonics being multiples of the fundamental frequency. The contributions in the superposition are usually not equally large, e.g. in the piano note the very high harmonics don't contribute much. Quantitatively, this is characterized by the amplitudes of the individual contributions. If there are only contributions of a single mode number value, the superposition is said to have a definite or sharp value. • the basics of wave-particle duality. If you do a position measurement, the result is the occupation of a very sharp wavepacket being an eigenmode of the position operator. These sharp wavepackets look like pointlike objects, they are strongly coupled to each other, which means that they spread soon. In measurements of such a mode number in a given situation, the result is an eigenmode of the mode number, the eigenmode being chosen at random from the contributions in the given superposition. All the other contributions are supposedly eradicated in the measurement — this is called the wave function collapse and some features of this process are questionable and disputed. The probability of a certain eigenmode to be chosen is equal to the absolute square of the amplitude, this is called Born's probability law. This is the reason why the amplitudes of modes in a superposition are called "probability amplitudes" in quantum mechanics. The mode number value of the resulting eigenmode is the result of the measurement of the observable. Of course, if you have a sharp value for the observable before the measurement, nothing is changed by the measurement and the result is certain. This picture is called the Copenhagen interpretation. A different explanation of the measurement process is given by Everett's many-worlds theory; it doesn't involve any wave function collapse. Instead, a superposition of combinations of a mode of the measured system and a mode of the measuring apparatus (an entangled state) is formed, and the further time evolutions of these superposition components are independent of each other (this is called "many worlds"). As an example: a sharp wavepacket is an eigenmode of the position observable. Thus the result of measurements of the position of such a wavepacket is certain. On the other hand, if you decompose such a wavepacket into contributions of plane waves, i.e. eigenmodes of the wavevector or momentum observable, you get all kinds of contributions of modes with many different momenta, and the result of momentum measurements will be accordingly. 
Intuitively, this can be understood by taking a closer look at a sharp or very narrow wavepacket: Since there are only a few spatial oscillations in the wavepacket, only a very imprecise value for the wavevector can be read off (for the mathematically inclined reader: this is a common behaviour of Fourier transforms, the amplitudes of the superposition in the momentum mode system being the Fourier transform of the amplitudes of the superposition in the position mode system). So in such a state of definite position, the momentum is very indefinite. The same is true the other way round: The more definite the momentum is in your chosen superposition, the less sharp the position will be, and it is called Heisenberg's uncertainty relation. Two different mode numbers (and the corresponding operators and observables) that both occur as characteristic features in the same mode system, e.g. the number of nodal lines in one of Chladni's figures in x direction and the number of nodal lines in y-direction or the different position components in a position eigenmode system, are said to commute or be compatible with each other (mathematically, this means that the order of the product of the two corresponding operators doesn't matter, they may be commuted). The position and the momentum are non-commuting mode numbers, because you cannot attribute a definite momentum to a position eigenmode, as stated above. So there is no mode system where both the position and the momentum (referring to the same space direction) are used as mode numbers. The Schrödinger equation, the Dirac equation etc.Edit As in the case of acoustics, where the direction of vibration, called polarization, the speed of sound and the wave impedance of the media, in which the sound propagates, are important for calculating the frequency and appearance of modes as seen in Chladni's figures, the same is true for electronic or photonic/electromagnetic modes: In order to calculate the modes (and their frequencies or time evolution) exposed to potentials that attract or repulse the waves or, equivalently, exposed to a change in refractive index and wave impedance, or exposed to magnetic fields, there are several equations depending on the polarization features of the modes: • Electronic modes (their polarization features are described by Spin 1/2) are calculated by the Dirac equation, or, to a very good approximation in cases where the theory of relativity is irrelevant, by the Schrödinger equation]] and the Pauli equation. • Photonic/electromagnetic modes (polarization: Spin 1) are calculated by Maxwell's equations (You see, 19th century already found the first quantum-mechanical equation! That's why it's so much easier to step from electromagnetic theory to quantum mechanics than from point mechanics). • Modes of Spin 0 would be calculated by the Klein-Gordon equation. It is much easier and much more physical to imagine the electron in the atom to be not some tiny point jumping from place to place or orbiting around (there are no orbits, there are orbitals), but to imagine the electron being an occupation of an extended orbital and an orbital being a vibrating wave confined to the neighbourhood of the nucleus by its attracting force. That's why Chladni's figures of acoustics and the normal modes of electromagnetic waves in a resonator are such a good analogy for the orbital pictures in quantum physics. Quantum mechanics is a lot less weird if you see this analogy. 
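To make the "orbitals are normal modes of a confined wave" analogy concrete, here is a small numerical sketch (an editorial addition, not part of the original text). It discretizes the one-dimensional Schrödinger equation for a particle in a harmonic well on a grid and obtains the lowest normal modes as eigenvectors of a matrix, exactly as one would compute the standing waves of an acoustic resonator. In units where ħ = m = ω = 1 the mode frequencies should come out close to 0.5, 1.5, 2.5, and so on.

```python
import numpy as np

# Grid for the 1D Schrödinger equation with a harmonic confining potential,
# in units where hbar = m = omega = 1 (exact eigenvalues: n + 1/2).
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic energy: -1/2 d^2/dx^2 as a second-difference matrix
main = np.full(n, 1.0 / dx**2)
off = np.full(n - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Potential energy: V(x) = x^2 / 2 on the diagonal
H += np.diag(0.5 * x**2)

energies, modes = np.linalg.eigh(H)      # normal modes = eigenvectors
print("lowest mode frequencies:", np.round(energies[:5], 4))
# Each column modes[:, k] is a standing-wave profile with k nodes,
# the direct analogue of a Chladni figure or a cavity mode.
```

The eigenvector with no node plays the role of the ground-state orbital, the one with one node the first excited orbital, and so on; nothing about the calculation differs from finding the resonances of a confined classical wave.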
The step from electromagnetic theory (or acoustics) to quantum theory is much easier than the step from point mechanics to quantum theory, because in electromagnetics you already deal with waves and modes of oscillation and solve eigenvalue equations in order to find the modes. You just have to treat a single electron like a wave, in the same way as light is treated in classical electromagnetics. In this picture, the only difference between classical physics and quantum physics is that in classical physics you can excite the modes of oscillation to a continuous degree, called the classical amplitude, while in quantum physics the modes are "occupied" discretely — fermionic modes can be occupied only once at a given time, while bosonic modes can be occupied several times at once. Particles are just occupations of modes, no more, no less. As there are superpositions of modes in classical physics, you get in quantum mechanics quantum superpositions of occupations of modes, and the scaling and phase-shifting factors are called (quantum) amplitudes. In a carbon atom, for example, you have a combination of occupations of 6 electronic modes of low energy (i.e. frequency). Entangled states are just superpositions of combinations of occupations of modes. Even the states of quantum fields can be completely described in this way (except for hypothetical topological defects).

As you can choose different kinds of modes in acoustics and electromagnetics, for example plane waves, spherical harmonics or small wave packets, you can do so in quantum mechanics. The modes chosen will not always be decoupled; for example, if you choose plane waves as the system of acoustic modes in the resonance body of a guitar, you will get reflections at the walls that scatter modes into different modes, i.e. you have coupled oscillators and you have to solve a coupled system of linear equations in order to describe the system. The same is done in quantum mechanics: different systems of eigenfunctions are just a new name for the same concept. Energy eigenfunctions are decoupled modes, while eigenfunctions of the position operator (delta-like wavepackets) or eigenfunctions of the angular momentum operator in a non-spherically symmetric system are usually strongly coupled. What happens in a measurement depends on the interpretation: in the Copenhagen interpretation you need to postulate a collapse of the wavefunction to some eigenmode of the measurement operator, while in Everett's many-worlds theory an entangled state, i.e. a superposition of occupations of modes of the observed system and the observing measurement apparatus, is formed.

The formalism of quantum mechanics and quantum field theory

In Dirac's formalism, superpositions of occupations of modes are designated as state vectors or states, written as |ψ⟩ (ψ being the name of the superposition), and the single occupation of the mode k is written as |1_k⟩ or just |k⟩. The vacuum state, i.e. the situation devoid of any occupations of modes, is written as |0⟩. Since the superposition is a linear operation, i.e. it only involves multiplication by complex numbers and addition, as in α|k⟩ + β|k'⟩ (a superposition of the single occupations of mode k and mode k' with the amplitudes α and β, respectively), the states form a vector space (i.e. they are analogous to vectors in Cartesian coordinate systems). The operation of creating an occupation of a mode k is written as a generator a†_k (for photons) or b†_k (for electrons), and the destruction of the same occupation as a destructor a_k or b_k, respectively.
A sequence of such operations is written from right to left (the order matters): in a†_k b†_2 b_1, an occupation of the electronic mode 1 is moved to the electronic mode 2 and a new occupation of the electromagnetic mode k is created — obviously, this reshuffling formula represents the emission of a photon by an electron changing its state. α a†_k b†_2 b_1 + β a†_k' b†_2 b_1 is the superposition of two such processes differing in the final mode of the photon (k versus k') with the amplitudes α and β, respectively. If the mode numbers are more complex — e.g. in order to describe an electronic mode of a hydrogen atom (i.e. an orbital) you need the 4 mode numbers n, l, m, s — the occupation of such a mode is written as b†_{n,l,m,s}|0⟩ or |1_{n,l,m,s}⟩ (in words: the situation after creating an occupation of mode (n, l, m, s) in the vacuum). If you have two occupations of different orbitals, you might write b†_{n,l,m,s} b†_{n',l',m',s'}|0⟩ or |1_{n,l,m,s} 1_{n',l',m',s'}⟩. It is important to distinguish such a double occupation of two modes from a superposition of single occupations of the same two modes, which is written as (α b†_{n,l,m,s} + β b†_{n',l',m',s'})|0⟩ or α|1_{n,l,m,s}⟩ + β|1_{n',l',m',s'}⟩. But superpositions of multiple occupations are also possible, even superpositions of situations with different numbers or different kinds of particles: for example α|0⟩ + β|1_k⟩, a superposition of the vacuum with a one-photon state.
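As an editorial illustration of this bookkeeping (not part of the original text), the fermionic generators and destructors for a few electronic modes can be written as small matrices; the Jordan-Wigner construction used below is one standard way to do this. The matrices reproduce the Pauli exclusion principle, (b†_k)² = 0, and the sign change under an odd permutation of occupations:

```python
import numpy as np

# Build fermionic creation operators b†_j for n_modes modes via the
# Jordan-Wigner construction (a standard textbook representation).
n_modes = 3
sz = np.diag([1.0, -1.0])          # parity factor carried past earlier modes
sp = np.array([[0.0, 1.0],         # single-mode "create occupation" block
               [0.0, 0.0]])
I2 = np.eye(2)

def kron_chain(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def creator(j):
    """b†_j acting on the 2**n_modes-dimensional occupation space."""
    return kron_chain([sz] * j + [sp] + [I2] * (n_modes - j - 1))

b_dag = [creator(j) for j in range(n_modes)]
vacuum = np.zeros(2 ** n_modes)
vacuum[-1] = 1.0                    # all modes empty

# Pauli exclusion: creating the same occupation twice gives zero
print("||b†_0 b†_0 |0>|| =", np.linalg.norm(b_dag[0] @ b_dag[0] @ vacuum))

# Odd permutations of occupations change the sign of the amplitude
state_01 = b_dag[0] @ b_dag[1] @ vacuum
state_10 = b_dag[1] @ b_dag[0] @ vacuum
print("b†_0 b†_1 |0> = -(b†_1 b†_0 |0>) ?", np.allclose(state_01, -state_10))
```

The all-important facts that a fermionic mode cannot be occupied twice, and that only the set of occupied modes (not their ordering, up to a sign) matters, come out of the matrix algebra automatically.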
Tim Maudlin The Metaphysics Within Physics Tim Maudlin, The Metaphysics Within Physics, Oxford University Press, 2007, 197pp., $49.95 (hbk), ISBN 9780199218219. Reviewed by Richard Healey, University of Arizona This brief but fertile volume develops and defends the basic idea that "metaphysics, in so far as it is concerned with the natural world, can do no better than to reflect on physics." It consists of six essays sandwiched between an introduction and an epilogue. Though written independently over more than fifteen years, in combination they offer a unified blueprint for the construction of a metaphysics based on physics. Maudlin proposes to build on a foundation in which laws of nature and a directed time are assumed as primitives which generate the cosmic pattern of events -- observable or not. Physical modality follows readily, but (he argues) physics does not itself employ a notion of causation. So causal and counterfactual locutions are fit candidates for an analysis that will supplement physical law with pragmatic factors, while metaphysical possibility is suspect beyond the bounds of physical possibility. In the first essay, Maudlin advocates the view that laws of nature should be taken as primitive, and then uses them both to analyze many counterfactual locutions and to ground the fundamental dynamical explanations so prized in science. He defends the superiority of his view over rival proposals of David Lewis and Bas Van Fraassen, among others. Lewis analyzed natural laws as those generalizations that figure in all theoretical systematizations of empirical truths that best combine strength and simplicity. Maudlin objects that this analysis rides roughshod over the intuition that some such generalizations could fail to be laws in worlds that we should follow scientists in deeming physically possible. Van Fraassen argued that laws of nature are of no philosophical significance, and may be eliminated in favor of models in a satisfactory analysis of science. Maudlin counters that this deprives one of the resources to say how cutting down its class of models can enhance a theory's explanatory power, a phenomenon that is readily accounted for when one takes a theory's model class as well as its explanatory power to derive from its constituent laws. Laws of Temporal Evolution (LOTEs) are of special philosophical significance for Maudlin. Besides grounding dynamical explanations (as well as some laws of coexistence), they figure prominently in his accounts of propensities, counterfactuals and causation. He distinguishes some laws of temporal evolution as fundamental (modeled on Newton's second law and the Schrödinger equation) from other special laws that hold only in the absence of interference (such as laws of population biology). Fundamental Laws of Time Evolution (FLOTEs) are involved in a 3-step procedure for the evaluation of many types of counterfactuals. First, one selects a relevant time (technically, a Cauchy surface); then one responds to a command implicit in the antecedent to alter the state of the world at that time in more or less specific ways; finally, one applies FLOTEs to determine a second state of the world at another (usually later) time: the counterfactual is evaluated positively (as true or otherwise acceptable) if and only if the consequent is true in the second state of the world. It is because this procedure involves pragmatic factors and background knowledge in addition to the FLOTEs that its results may be uncertain or even indeterminate. 
If a relevant FLOTE is stochastic rather than deterministic, multiple second states may emerge at the final step. Maudlin introduces a notion of infection to handle these. Roughly, a second state is infected iff the modifications at step one induce alterations in how FLOTEs produce that state. He suggests that evaluation of a counterfactual ignores uninfected second states that differ from the actual world only through differing in the outcome of a stochastic FLOTE (SLOTE), while acknowledging that this suggestion flouts some people's intuitions. As for propensities, he takes these not to ground stochastic laws, but to follow from them: a propensity for a certain outcome exists just in case a SLOTE delivers an appropriately converging sequence of probabilities as one applies it at times closer and closer to the time at which that outcome might occur. Chapter two questions the motivation behind Lewis's influential doctrine of Humean supervenience, according to which the laws of nature, along with everything else, supervene on the local distribution of basic qualities. Maudlin decomposes the doctrine into two subdoctrines he calls Separability and Physical Statism. Separability maintains that the complete physical state of the world is determined by the intrinsic physical state at each spacetime point and the spatio-temporal relations between those points: according to Physical Statism, all facts about the world, including modal and nomological facts, are determined by its total physical state. Physics, not metaphysics, decides the fate of Separability. Maudlin argues (pace Einstein as well as Lewis) that the support it received from classical physics has been decisively withdrawn by quantum mechanics, with the entanglement of systems that Schrödinger called the characteristic trait of that theory. Humean supervenience requires that modal properties, law, causal connections and chance all supervene on the total physical state of the world. Does this much Physical Statism derive support from physics? Not according to Maudlin. He maintains that the total physical state of the world provides a promising supervenience base for physical possibility, counterfactuals, causal connections and chances (insofar as each of these is objective), given the physical laws. But, he argues, while it accords with actual scientific practice to regard them so, it flies in the face of scientific practice to take the laws themselves to be determined by the total physical state of the world. This argument parallels a similar argument from the first essay: they are both subject to the same objection. Here's the argument. Assume that every model of a set of laws represents a possible way for a world governed by those laws to be. Then each of two incompatible sets of laws may have a model that represents the same total physical state of the world as possible. (Indeed, two incompatible stochastic theories may have identical sets of models, agreeing on every possible total physical state of the world, disagreeing only on their constituent probabilities.) Now it is impossible for a single world to be governed by incompatible laws. Symmetry therefore suggests that a world deemed possible by incompatible laws be governed by neither set. But how can one maintain that laws cannot obtain in a world that is a model of those laws, and hence allowed by them? To avoid this threatened reductio, one must admit that which laws obtain at a world is not determined by the total physical state of that world. 
A defender of Physical Statism has a natural reply. By assumption, any laws supervene on the total physical state of some world W. A world W* deemed possible by the laws of W is one whose total physical state determines no regularities that conflict with those laws. But these regularities need not be laws of W*: W*'s laws supervene on its total physical state, not on the total state of W. The metaphor of "governance" is inappropriate: a world deemed possible by laws need not be a world where these are laws, though it must be a world where they "obtain" in the weak sense that their underlying regularities are there respected. Doubtless Maudlin would object that this reply flouts scientific practice. A physicist must abstract the laws from data provided by the actual world, but, once abstracted, regards them as "floating free" of that world, and so holding by fiat in each situation they deem possible. But this attitude may be squared with Physical Statism. For a scientific interest in physical possibility is limited to applications of laws to the actual world. The Schwarzschild solution represents a scientifically interesting possible General Relativistic world because it can be used approximately to model a system like a planet, star, or other local feature of the actual world. In such employment, of course the system's behavior will be "governed" by the laws of general relativity, insofar as these are assumed to hold in the actual world. If asked whether an infinite, empty Minkowski spacetime is "governed" by the laws of Special or General Relativity (or perhaps some other theory), the practicing scientist should decline to answer, on pain of turning metaphysician. In chapter five, Maudlin uses hypothetical FLOTEs to sketch a novel approach to causation, in opposition to counterfactual analyses. He constructs two test cases to argue that knowledge that C caused E need neither yield nor require knowledge that if C had not occurred, then E would not have happened (or other more complex candidates for a counterfactual analysis of causation). Then he sketches an account of how laws enter into the evaluation of causal claims. The key to this account is a basic division between quasi-Newtonian LOTEs and the rest. LOTEs are quasi-Newtonian iff they both prescribe undisturbed behavior and specify how disturbances perturb such behavior: such disturbances then count as the causes of the perturbed behavior. If the applicable laws admit no natural division between disturbed and undisturbed behavior, then we must fall back on a notion of a complete cause -- an earlier state of the world sufficient to prescribe (perhaps stochastically) the subsequent development of a system. For Maudlin, lawlike generalizations of the special sciences apply to systems only by virtue of, and to the extent permitted by, physical laws. But the basic division applies to all LOTEs. "Those special sciences that manage to employ taxonomies with quasi-Newtonian lawlike generalizations can be expected to support particularly robust judgments about causes." But Maudlin uses an example of McDermott to argue that when we can carve up a situation in different ways to apply alternative quasi-Newtonian lawlike generalizations our causal judgments are likely to waver, even though each partition licenses the same counterfactuals. And he despairs of any adequate analysis of remote causation, where nothing less than complete causes could play the role of antecedents to reliable lawlike generalizations. 
Whether the world has a rich causal structure at the fundamental level depends on whether the laws of physics take quasi-Newtonian form. But the physical laws need not fulfill a metaphysician's yearning for causes. In chapter four, Maudlin argues that time passes: along with primitive physical laws, time's passage completes what he calls his anti-Humean metaphysical package. For him the passage of time is neither a mere psychological phenomenon nor an a priori metaphysical truth. Rather, we should believe that time passes because that's what ordinary experience suggests the physical world is like, and nothing in our best physics currently tells us otherwise. But what does this belief amount to? Maudlin tells us that the passage of time is an intrinsic asymmetry in the temporal structure of the world with no spatial counterpart. Given a classical space-time theory (Newtonian or relativistic), one can represent such an asymmetry by assuming a primitive temporal orientation -- a partition of the time-like vectors at each space-time point into two disjoint sets in a way that varies smoothly from point to point (at least locally), together with a designation of one set as future-directed, the other as past-directed. This assumption is consistent with the metaphysics of a B-theorist who believes in a "block universe" (as Maudlin says he does). Metaphysical proponents of a "dynamical" time would likely refuse to accept it as an expression of the robust sense of passage to which they are committed. (Some A-theorists may even have trouble stating the assumption, given their ontological qualms about future events.) And Maudlin seems to join company with them when he writes that "the passage of time connotes more than just an intrinsic asymmetry: not just any asymmetry would produce passing"; and "The passage of time underwrites claims about one state 'coming out of' or 'being produced from' another". But he admits that time flows only in a metaphorical sense, while seemingly committed to the literal truth of time's passage. The subtlety of this distinction has this reviewer scratching his head! Maudlin sets out to refute logical, physical and epistemological objections to the view that time passes, culling many of these from Huw Price's influential Time's Arrow and Archimedes' Point. While he scores a few points in the ensuing philosophical brawl, I would call the contest at best a tie; at worst, it is marred by persistent confusion as to what exactly is being fought over. He then presents a case in favor of time's passage. Even here, the case is partly negative. Where Gödel denied that time could pass in a space-time with no foliation by spacelike hypersurfaces, Maudlin counters that the passage of time entails only a preferred temporal orientation. He objects to attempts to analyze change without the passage of time because they cannot account for the directionality of change: attempts to ground this in entropy increase fail. Besides highlighting the time-asymmetry physicists acknowledge in the laws applicable to esoteric weak-interaction phenomena, Maudlin does offer one interesting physics-based argument for time's passage. Statistical physics explains pervasive asymmetries in our world by postulating an early state that is macroscopically atypical but microscopically typical. Only by supposing that later states are produced by such a state can one explain why later microscopic states are atypical, as statistical physics requires. 
But for a Humean opponent, it is a contingent aspect of the Humean mosaic that it permits such temporally asymmetric explanations, and another contingent fact that it features creatures like us able to exploit them to good physical (but bad metaphysical!) ends. Still, for Maudlin, arguments from physics remain secondary to what he takes to be our manifest experience of the objective passage of time. Doubtless we all experience world history as one damn thing after another: but this seems an unlikely premise on which to base a significant metaphysical conclusion. In chapter three, Maudlin locates suggestions for deep metaphysics in the gauge theories of contemporary science. He argues that while a metaphysics of substance and universals may arise as a natural projection of the structure of language onto the world, theories such as the chromodynamics that high energy physicists use to treat the strong interactions among quarks favor a rival, novel ontology suggested by the way in which they apply the mathematics of fiber bundles. Maudlin first argues that not even spatiotemporal relations (arguably the best candidates for external relations) are what he calls 'metaphysically pure' (which I take to be a synonym of the -- equally tricky -- term 'intrinsic'). The argument is that what distance relations obtain between a pair of points depends on the existence and nature of the continuous paths that link them through other points. Next he uses the example of a plane non-Euclidean geometry modeled by the surface of a sphere to argue that whether a pair of vectors ('arrows') attached at different places point in the same direction depends on how one thinks of transporting one vector to the location of the other along some continuous curve linking the two places. The conclusion -- that pointing in the same direction is not a metaphysically pure internal relation -- is then extended to the abstract vectors that contemporary gauge theories use to represent the matter fields associated with quarks and other leptons. He concludes that to refer to a quark as red (as physicists applying chromodynamics are whimsically wont to do) is not to say that it bears a relation of color similarity to other red quarks, since the theory posits no such metaphysically pure relation. Whether two quarks will count as having the same color depends on what space-time path one chooses to connect the space-time locations associated with them. What physicists call color charge is simply not an intrinsic property of quarks, or anything else. "Fiber bundles provide new mathematical structures for representing physical states, and hence a new way to understand physical ontology." I heartily endorse Maudlin's declaration that "Empirical science has produced more astonishing suggestions about the fundamental structure of the world than philosophers have been able to invent, and we must attend to those suggestions." But if contemporary gauge theories do have any clear suggestions for ontology, Maudlin's is not among them -- or so I have argued in my Gauging What's Real (Oxford: 2007). First, it is not clear how to reconcile the quantum field theories of quantum chromodynamics with a fundamental ontology that includes the quarks whose behavior physicists take them to describe (a point to which some of Maudlin's remarks suggest he is sensitive). 
More importantly, taking a gauge field such as the (quantized) electromagnetic field to be a connection on a fiber bundle is more than just a category mistake of just the kind that Maudlin warns us against in chapter five: it is to ignore the element of conventionality involved in choosing one out of a continuum of gauge-equivalent connections, each grounding a different path-dependent notion of color-similarity. Classical gauge theories, at least, suggest an ontology in which properties are ascribed to extended loops, in violation of Separability, but still in conformity to a substance/universal ontology, though one of a radically unfamiliar kind. This is an elegantly written and enormously stimulating book. It is full of original, provocative, philosophical argumentation. Maudlin shows by example what it is to do the best kind of naturalized metaphysics: one based on thorough acquaintance with real science, but unwilling to accept a superficial analysis of how it bears on deep philosophical problems. Every metaphysician should read it and emulate Maudlin's method, even when disagreeing with his conclusions.
Copenhagen interpretation From Wikiquote Jump to navigation Jump to search The Copenhagen interpretation is a loosely-knit informal collection of axioms or doctrines that attempt to express in everyday language the mathematical formalism of quantum mechanics. The interpretation was largely devised in the years 1925–1927 by Niels Bohr and Werner Heisenberg. • The Copenhagen interpretation is a very ambiguous term. Some people use it just to mean the sort of practical quantum mechanics that you can do — like you can ride a bicycle without really knowing what you're doing. It's the rules for using quantum mechanics and the experience that we have in using it. […] Then there's another side to the Copenhagen interpretation, which is a philosophy of the whole thing. It tries to be very deep and tell you that these ambiguities, which you worry about, are somehow irreducible. It says that ambiguities are in the nature of things. We, the observers, are also part of nature. It's impossible for us to have any sharp conception of what is going on. because we, the observers, are involved. And so there is this philosophy, which was designed to reconcile people to the muddle; You shouldn't strive for clarity— that's naive. • Bohr’s principle of complementarity – the heart of the Copenhagen philosophy – implies that quantum phenomena can only be described by pairs of partial, mutually exclusive, or ‘complementary’ perspectives. Though simultaneously inapplicable, both perspectives are necessary for the exhaustive description of phenomena. Bohr aspired to generalize complementarity into all fields of knowledge, maintaining that new epistemological insights are obtained by adjoining contrary, seemingly incompatible, viewpoints. [...] The value of Bohr’s philosophy for the advancement of physics is controversial. His followers consider complementarity a profound insight into the nature of the quantum realm. Others consider complementarity an illuminating but superfluous addendum to quantum theory. More severe is the opinion that Bohr’s philosophy is an obscure ‘web of words’ and mute on crucial foundational issues. • Mara Beller, "Bohr, Niels (1885-1962)", Routledge Encyclopedia of Philosophy • In recent years the debate on these ideas has reopened, and there are some who question what they call "the Copenhagen interpretation of quantum mechanics"—as if there existed more than one possible interpretation of the theory. • Rudolf Peierls, Surprises in Theoretical Physics (1979), Ch. 1. General Quantum Mechanics If one follows the great difficulty which even eminent scientists like Einstein had in understanding and accepting the Copenhagen interpretation... one can trace the roots... to the Cartesian will take a long time for it [this partition] to be replaced by a really different attitude toward the problem of reality. • Maxel, you know I love you and nothing can change that. But I do need to give you once a thorough head washing. So stand still. The impudence with which you assert time and again that the Copenhagen interpretation is practically universally accepted, assert it without reservation, even before an audience of the laity—who are completely at your mercy—it’s at the limit of the estimable […]. Have you no anxiety about the verdict of history? Are you so convinced that the human race will succumb before long to your own folly? • Erwin Schrödinger, Letter to Max Born (October 10, 1960), quoted in Walter John Moore, A Life of Erwin Schrödinger (1994), p. 
342 • As Bohr acknowledged, in the Copenhagen interpretation a measurement changes the state of a system in a way that cannot itself be described by quantum mechanics. […] In quantum mechanics the evolution of the state vector described by the time-dependent Schrödinger equation is deterministic. If the time-dependent Schrödinger equation described the measurement process, then whatever the details of the process, the end result would be some definite state, not a number of possibilities with different probabilities. This is clearly unsatisfactory. If quantum mechanics applies to everything, then it must apply to a physicist’s measurement apparatus, and to physicists themselves. On the other hand, if quantum mechanics does not apply to everything, then we need to know where to draw the boundary of its area of validity. Does it apply only to systems that are not too large? Does it apply if a measurement is made by some automatic apparatus, and no human reads the result? • Steven Weinberg, Lectures on Quantum Mechanics (2012), Ch. 3 : General Principles of Quantum Mechanics • I have always felt bitter about the way how Bohr’s authority together with Pauli’s sarcasm killed any discussion about the fundamental problems of the quantum. [...] I expect that the Copenhagen interpretation will some time be called the greatest sophism in the history of science, but I would consider it a terrible injustice if—when some day a solution should be found—some people claim that ‘this is of course what Bohr always meant’, only because he was sufficiently vague. External links[edit] Wikipedia has an article about:
Time-independent Schrödinger equation

What is the time-independent Schrödinger equation? The time-independent Schrödinger equation for one dimension is of the form −(ħ²/2m) d²ψ/dx² + U(x)ψ = Eψ, where U(x) is the potential energy and E represents the system energy. It has a number of important physical applications in quantum mechanics.

What is m in the Schrödinger equation? In the time-dependent equation iħ ∂Ψ/∂t = −(ħ²/2m) ∂²Ψ/∂x² + V(x,t)Ψ, m is the mass of the particle, V(x,t) is the potential energy function of the system, i again represents the square root of −1, and the constant ħ is defined as ħ = h/2π. This equation is known as the time-dependent Schrödinger (wave) equation.

What is the Schrödinger equation in chemistry? The Schrödinger equation, sometimes called the Schrödinger wave equation, is a partial differential equation. It uses the concept of energy conservation (kinetic energy + potential energy = total energy) to obtain information about the behavior of an electron bound to a nucleus.

Why is there an i in the Schrödinger equation? The imaginary constant i appears in the original Schrödinger article (I) for positive values of the energy, which therefore are discarded by Schrödinger, who wants real eigenvalues and requires negative energy.

What is Schrödinger's law? In Schrödinger's imaginary experiment, you place a cat in a box with a tiny bit of radioactive substance. Now, the decay of the radioactive substance is governed by the laws of quantum mechanics. This means that the atom starts in a combined state of "going to decay" and "not going to decay".

What are the applications of the Schrödinger equation? Schrödinger's equation offers a simple way to find the previous Zeeman–Lorentz triplet. This proves once more the broad range of applications of this equation for the correct interpretation of various physical phenomena such as the Zeeman effect.

What is the de Broglie equation? In 1924, French scientist Louis de Broglie (1892–1987) derived an equation that described the wave nature of any particle. In particular, the wavelength (λ) of any moving object is given by λ = h/(mv). In this equation, h is Planck's constant, m is the mass of the particle in kg, and v is the velocity of the particle in m/s.

Can the Schrödinger equation be derived? It is not possible to derive it from anything you know. It came out of the mind of Schrödinger. The foundation of the equation is structured to be a linear differential equation based on classical energy conservation, and consistent with the de Broglie relations.

Is the cat alive or dead?

What is the equation for quantum physics? The Schrödinger equation is the fundamental equation of physics for describing quantum mechanical behavior. It is also often called the Schrödinger wave equation, and is a partial differential equation that describes how the wavefunction of a physical system evolves over time.

What is the formula of the wave function? Schrödinger saw that for an object with E = hν (the Planck relation, where E equals energy and h is Planck's constant), and λ = h/p (the de Broglie wavelength, where p is momentum), this equation can be rewritten as a quantum wave function. This is the quantum wave function.
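Since the page quotes the de Broglie relation without a worked number, here is a short Python check (an added example; the 100 eV electron is an arbitrary choice):

```python
import math

h  = 6.62607015e-34   # Planck constant, J*s
me = 9.1093837015e-31 # electron mass, kg
eV = 1.602176634e-19  # joules per electron-volt

E = 100 * eV                      # kinetic energy of the electron
v = math.sqrt(2 * E / me)         # non-relativistic speed
lam = h / (me * v)                # de Broglie wavelength, lambda = h / (m v)

print(f"speed      v = {v:.3e} m/s")
print(f"wavelength λ = {lam:.3e} m  (~{lam * 1e9:.3f} nm)")
# ~0.12 nm, comparable to atomic spacings, which is why electron beams
# diffract from crystals (the Davisson-Germer experiment).
```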
Dedication is a more important sign of integrity than enthusiasm.  It is necessary to have faith in a pathway and clear away doubts to ascertain if they are realistic or merely forms of resistance.  A seeker should have the security and support of inner certainty and firm conviction that are consequent to study, personal research, and investigation. Thus, a pathway should be intrinsically reconfirming by discovery and inner experience.  A true pathway unfolds, is self-revelatory, and is subject to reconfirm action experiential. Daily Reflections from Dr. David R. Hawkins: 365 Contemplations on Surrender, Healing, and  Consciousness, pg. 23. The source of pain is not the belief system itself but one’s attachment to it and the inflation of its imaginary value.  The inner processing of attachments is dependent on the exercise of the will, which alone has the power to undo the mechanism of attachment by the process of surrender.  This may be subjectively experienced or contextualized as sacrifice, although it is actually a liberation.  The emotional pain of loss arises from the attachment itself and not from the “what” that has been lost. Daily Reflections from Dr. David R. Hawkins, pg. 123 This latest book can be purchased through Amazon. Care Instead of Fear Each of us has within us a certain reservoir of suppressed and repressed fear. This quantity of fear spills into all areas of our life, colors all of our experience, decreases our joy in life, and reflects itself in the musculature of the face so as to affect our physical appearance, our physical strength, and the condition of health in all of the organs in the body.  Sustained and chronic fear gradually suppresses the body’s immune system.  … Although we know that it is totally damaging to our relationships, health, and happiness, we still hang on to fear.  Why is that? We have the unconscious fantasy that fear is keeping us alive; this is because fear is associated with our whole set of survival mechanisms. We have the idea that if we were to let go of fear, our main defense mechanism, we would become vulnerable in some way.  In Reality, the truth is just the opposite.  Fear is what blinds us to the real dangers of life.  In fact, fear itself is the greatest danger that the human body faces. It is fear and guilt that bring about disease and failure in every area of our lives. We could take the same protective actions out of love rather than out of fear. Can we not care for our bodies because we appreciate and value them, rather than out of fear of disease and dying? Can we not be of service to others in our life out of love, rather than out of fear of losing them? Can we not be polite and courteous to strangers because we care for our fellow human beings, rather than because we fear of losing their good opinion of us? … Can we not perform our job well because we care about the recipients of our services, rather than just the fear of losing our jobs or pursuing our own ambition?  Can we not accomplish more by cooperation, rather than fearful competition? …On a Spiritual level, isn’t it more effective if, out of compassion and identification with our fellow human beings, we care for them, rather than trying to love them out of fear of God’s punishment if we don’t? Letting Go: The Pathway of Surrender, Ch. 6, pg. 99-100 Truth is Non-Predictive Just on the physical level, we saw from the Heisenberg principle that the state of the universe, as it is now, which we can define by the Schrödinger equation, is changed by merely observing it. 
Because what happens is you collapse the wave function from potentiality to actuality. You now have a new reality. In fact, you have to use different mathematical formulas, like the Dirac equation. So, you’ve gone from potential into actuality. That transformation does not occur without the interjection of consciousness. Consequently, a thing could stand as a potentiality for thousands of years. Along comes somebody who looks at it differently, and bang, it becomes an actuality. So the unmanifest then becomes the manifest as the consequence of creation.
Therefore, predicting the future is impossible because you would have to know the mind of God, because creation is the unfolding of potentiality, depending on local conditions and intention. You have no idea what intention is. Intention can change one second from now. If the future were predictable, there would be no point to human existence because there would be no karmic benefit, no gain or capacity to undo that which is negative. It would be confined to what is called predestination. Predestination and predictions of the future miss the whole purpose of existence and skip over the whole understanding of the evolution of consciousness. There would be no karmic merit nor demerit. There would be no salvation. There would be no heavens. There would be no stratifications of levels of consciousness. We would all just emerge perfectly in a perfect realm. And therefore, there would be no purpose to this life at all.
The Wisdom of Dr. David R. Hawkins, Ch. 6, pg. 102
Note: This book, The Wisdom of Dr. David R. Hawkins: Classic Teachings on Spiritual Truth and Enlightenment (ISBN 9781401964979), is available through Amazon or Hay House, Inc.

Greater Freedom
Spiritual reality is a greater source of pleasure and satisfaction than the world can supply. It is endless and always available in the present instead of the future. It is actually more exciting because one learns to live on the crest of the current moment, instead of on the back of the wave (which is the past) or on the front of the wave (which is the future). There is greater freedom from living on the exciting knife-edge of the moment than being a prisoner of the past or having expectations of the future.
Lie-algebraic discretization of differential equations
Yu. F. Smirnov and Alexander V. Turbiner, Modern Physics Letters A.
A certain representation for the Heisenberg algebra in finite difference operators is established. The Lie algebraic procedure of discretization of differential equations with isospectral property is proposed. Using sl2-algebra based approach, (quasi)-exactly-solvable finite difference equations are described. It is shown that the operators having the Hahn, Charlier and Meissner polynomials as the eigenfunctions are reproduced in the present approach as some particular cases. A discrete version…

Related work:
- Umbral calculus, difference equations and the discrete Schrödinger equation: In this paper, we discuss umbral calculus as a method of systematically discretizing linear differential equations while preserving their point symmetries as well as generalized symmetries. The…
- Discretization of nonlinear evolution equations over associative function algebras: A general approach is proposed for discretizing nonlinear dynamical systems and field theories on suitable functional spaces, defined over a regular lattice of points, in such a way that…
- Linear operators with invariant polynomial space and graded algebra: The irreducible, finite-dimensional representations of the graded algebras osp(j,2) (j=1,2,3) are expressed in terms of differential operators. Some quantum deformations of these algebras are shown…
- Bethe ansatz solutions to quasi exactly solvable difference equations: Bethe ansatz formulation is presented for several explicit examples of quasi exactly solvable difference equations of one degree of freedom which are introduced recently by one of the present…
- Discrete Differential Geometry and Lattice Field Theory: We develop a difference calculus analogous to the differential geometry by translating the forms and exterior derivatives to similar expressions with difference operators, and apply the results to…
- Dolan–Grady relations and noncommutative quasi-exactly solvable systems: We investigate a U(1) gauge invariant quantum mechanical system on a 2D noncommutative space with coordinates generating a generalized deformed oscillator algebra. The Hamiltonian is taken as a…
- Canonical commutation relation preserving maps: We study maps preserving the Heisenberg commutation relation ab - ba = 1. We find a one-parameter deformation of the standard realization of the above algebra in terms of a coordinate and its dual…
- Quasi-Exactly Solvable Hamiltonians related to Root Spaces: sl(2)-Quasi-Exactly-Solvable (QES) generalization of the rational A_n, BC_n, G_2, F_4, E_{6,7,8} Olshanetsky-Perelomov Hamiltonians including many-body Calogero Hamiltonian is found. This…
- Heisenberg algebra, umbral calculus and orthogonal polynomials: Umbral calculus can be viewed as an abstract theory of the Heisenberg commutation relation [P,M]=1. In ordinary quantum mechanics, P is the derivative and M the coordinate operator. Here, we shall…
- A certain notion of canonical equivalence in quantum mechanics is proposed. It is used to relate quantal systems with discrete ones. Discrete systems canonically equivalent to the celebrated harmonic…
- Quasi-exactly-solvable problems and sl(2) algebra: Recently discovered quasi-exactly-solvable problems of quantum mechanics are shown to be related to the existence of the finite-dimensional representations of the group SL(2,Q), where Q=R, C. It is…
- Lie-algebras and linear operators with invariant subspaces: A general classification of linear differential and finite-difference operators possessing a finite-dimensional invariant subspace with a polynomial basis (the generalized Bochner problem) is given…
- Classical Orthogonal Polynomials of a Discrete Variable: The basic properties of the polynomials p_n(x) that satisfy the orthogonality relations $$ \int_a^b p_n(x)\, p_m(x)\, \rho(x)\, dx = 0 \quad (m \ne n) $$ hold also for the polynomials…
- Turbiner, “Quasi-exactly-solvable problems and sl(2,R) algebra”, Comm. Math. Phys.; Journ. Phys. A
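For orientation: the Heisenberg commutation relation [P, M] = 1 that appears in the umbral-calculus abstract above admits a simple realization in finite-difference operators. The sketch below is a standard umbral-calculus construction on a uniform lattice of spacing h, given purely as an illustration; it is not claimed to be the specific representation constructed in the papers listed here.

$$ (P f)(x) = \frac{f(x+h)-f(x)}{h}, \qquad (M f)(x) = x\, f(x-h), $$

$$ (PM - MP)\, f(x) = \frac{(x+h)\,f(x) - x\,f(x-h)}{h} - x\,\frac{f(x)-f(x-h)}{h} = f(x), $$

so [P, M] = 1 holds identically, and P and M reduce to d/dx and multiplication by x in the continuum limit h → 0.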
संघ लोक सेवा आयोग (Union Public Service Commission)
UPSC REVISED SYLLABI FOR THE COMBINED EXAMINATION

The Union Public Service Commission, in consultation with the Government (Ministry of Mines, the Nodal Ministry), has decided to revise the Scheme, Pattern and Syllabi of the Combined Geo-Scientist and Geologist Examination. The salient features are as under:
(i) The nomenclature of this Examination has been changed to “Combined Geo-Scientist Examination” in place of “Combined Geo-Scientist and Geologist Examination”.
(ii) There will be a three-tier examination pattern, i.e. (i) Stage-I: Preliminary Examination, (ii) Stage-II: Main Examination, (iii) Stage-III: Personality Test.
(iii) The Preliminary Examination will screen the candidates for taking the Main Examination (Stage-II).
(iv) The Preliminary Examination will be of objective type, having two Papers. Marks secured in this Examination will be counted for deciding the final merit.
(v) The Preliminary Examination will be a Computer Based Examination.
(vi) The Main Examination will have three Papers for each Stream and all Papers will be of descriptive type. Marks secured in this Examination will be counted for deciding the final merit.
(vii) The existing General English Paper has been discontinued.
(viii) The Revised Scheme, Pattern and Syllabi of the Examination will be made effective from the 2020 Examination to give sufficient preparation time to the aspirants.
2. The details of this revised Scheme, Pattern and Syllabi are attached.

Plan of Examination
1. The Examination shall be conducted according to the following plan:—
(i) Stage-I: Combined Geo-Scientist (Preliminary) Examination (Objective Type Papers) for the selection of candidates for the Stage-II: Combined Geo-Scientist (Main) Examination;
(ii) Stage-II: Combined Geo-Scientist (Main) Examination (Descriptive Type Papers); and
(iii) Stage-III: Personality Test.
2. The detailed scheme and syllabi of the Combined Geo-Scientist Examination are as under:

A. Stage-I : Combined Geo-Scientist (Preliminary) Examination [Objective-type]
The Examination shall comprise two papers.

Stream-I : Geologist & Jr. Hydrogeologist
  Subject                          Duration   Maximum Marks
  Paper-I : General Studies        2 Hours    100 Marks
  Paper-II : Geology/Hydrogeology  2 Hours    300 Marks
  Total                                       400 Marks

Stream-II : Geophysicist
  Subject                          Duration   Maximum Marks
  Paper-I : General Studies        2 Hours    100 Marks
  Paper-II : Geophysics            2 Hours    300 Marks
  Total                                       400 Marks

Stream-III : Chemist
  Subject                          Duration   Maximum Marks
  Paper-I : General Studies        2 Hours    100 Marks
  Paper-II : Chemistry             2 Hours    300 Marks
  Total                                       400 Marks

B. Stage-II : Combined Geo-Scientist (Main) Examination [Descriptive-type]
The Examination shall comprise three papers in each stream.

Stream-I : Geologist
  Subject                          Duration   Maximum Marks
  Paper-I : Geology                3 Hours    200 Marks
  Paper-II : Geology               3 Hours    200 Marks
  Paper-III : Geology              3 Hours    200 Marks
  Total                                       600 Marks

Stream-II : Geophysicist
  Subject                          Duration   Maximum Marks
  Paper-I : Geophysics             3 Hours    200 Marks
  Paper-II : Geophysics            3 Hours    200 Marks
  Paper-III : Geophysics           3 Hours    200 Marks
  Total                                       600 Marks

Stream-III : Chemist
  Subject                          Duration   Maximum Marks
  Paper-I : Chemistry              3 Hours    200 Marks
  Paper-II : Chemistry             3 Hours    200 Marks
  Paper-III : Chemistry            3 Hours    200 Marks
  Total                                       600 Marks

Stream-IV : Jr. Hydrogeologist
  Subject                          Duration   Maximum Marks
  Paper-I : Geology                3 Hours    200 Marks
  Paper-II : Geology               3 Hours    200 Marks
  Paper-III : Hydrogeology         3 Hours    200 Marks
  Total                                       600 Marks
C. Stage-III : Personality Test – 200 Marks

Syllabus of Combined Geo-Scientist (Preliminary) Examination

Stage-I (Objective Type) Paper-I : General Studies (Common for all streams)
- Current events of national and international importance.
- History of India and Indian National Movement.
- General Science.

Stage-I (Objective Type) Paper-II : Geology/Hydrogeology
1. Physical Geology
Principle of uniformitarianism; origin, differentiation and internal structure of the Earth; origin of atmosphere; earthquakes and volcanoes; continental drift, sea-floor spreading, isostasy, orogeny and plate tectonics; geological action of rivers, wind, glaciers, waves; erosional and depositional landforms; weathering processes and products.
2. Structural Geology
Stress, strain and rheological properties of rocks; planar and linear structures; classification of folds and faults; Mohr’s circle and criteria for failure of rocks; ductile and brittle shear in rocks; study of toposheets, V-rules and outcrop patterns; stereographic projections of structural elements.
3. Mineralogy
Elements of symmetry, notations and indices; Bravais lattices; chemical classification of minerals; isomorphism, polymorphism, solid solution and exsolution; silicate structures; physical and optical properties of common rock forming minerals - olivine, garnet, pyroxene, amphibole, mica, feldspar and quartz.
4. Igneous Petrology
Magma types and their evolution; IUGS classification of igneous rocks; forms, structures and textures of igneous rocks; applications of binary and ternary phase diagrams in petrogenesis; magmatic differentiation and assimilation; petrogenesis of granites, basalts, komatiites and alkaline rocks (carbonatite, kimberlite, lamprophyre and nepheline syenite).
5. Metamorphic Petrology
Limits, types and controls of metamorphism; metamorphic structures - slate, schist and gneiss; metamorphic textures - pre-, syn- and post-tectonic porphyroblasts; concept of metamorphic zone, isograd and facies; geothermal gradients, facies series and plate tectonics.
6. Sedimentology
Origin of sediments; sedimentary textures, grain-size scale; primary sedimentary structures; classification of sandstone and carbonate rocks; siliciclastic depositional environments and sedimentary facies; diagenesis of carbonate sediments.
7. Paleontology
Fossils and processes of fossilization; concept of species and binomial nomenclature; morphology and classification of invertebrates (Trilobites, Brachiopods, Lamellibranchs, Gastropods and Cephalopods); evolution in Equidae and Hominidae; microfossils - Foraminifera, Ostracoda; Gondwana flora.
8. Stratigraphy
Law of superposition; stratigraphic nomenclature - lithostratigraphy, biostratigraphy and chronostratigraphy; Archaean cratonic nucleii of Peninsular India (Dharwar, Singhbhum, and Aravalli cratons); Proterozoic mobile belts (Central Indian Tectonic Zone, Aravalli-Delhi and Eastern Ghats); Purana sedimentary basins (Cuddapah and Vindhyan); Phanerozoic stratigraphy of India - Spiti, Kashmir, Damodar valley, Kutch, Trichinopoly, Siwaliks and Indo-Gangetic alluvium.
9. Economic Geology
Properties of mineral deposits - form, mineral assemblage, texture, rock-ore association and relationship; magmatic, sedimentary, metamorphic, hydrothermal, supergene and weathering-related processes of ore formation; processes of formation of coal and petroleum; distribution and geological characteristics of major mineral and hydrocarbon deposits of India.
10.
Hydrogeology Groundwater occurrence and aquifer characteristics, porosity, permeability, hydraulic conductivity, transmissivity; Darcy’s Law in homogenous and heterogenous media; Bernoulli equation, Reynold’s number; composition of groundwater ; application of H and O isotopes in groundwater studies; artificial recharge of groundwater. Stage-I (Objective Type) Paper-II : Geophysics 1. Solid Earth Geophysics: Introduction to Geophysics and its branches. Solar system: origin, formation and characteristics of planets, Earth: shape and rotation. Gravity and magnetic fields of earth. Geomagnetism, elements of earth’s magnetism, Rock and mineral magnetism, Elastic waves, types and their propagation characteristics, internal structure of earth, variation of physical properties in the interior of earth. Plate tectonics, Earthquakes and their causes, focal depth, epicenter, Intensity and Magnitude scales, Energy of earthquakes, Seismicity. 2. Mathematical Methods in Geophysics: Elements of vector analysis, Vector algebra, Properties of scalars, vectors and tensors, Gradient, Divergence and Curl, Gauss’s divergence theorem, Stoke’s theorem. Matrices, Eigen values and Eigen vectors and their applications in geophysics. Newton’s Law of gravitation, Gravity potential and gravity fields due to bodies of different geometric shapes. Basic Forces of Nature and their strength: Gravitational, Electromagnetic, Strong and Weak forces. Conservation Laws in Physics: Energy, Linear and angular momentum. Rigid body motion and moment of inertia. Basics of special theory of relativity and Lorentz transformation. Fundamental concepts of inverse theory, Definition of inversion and application to Geophysics. Forward and Inverse problems. Probability theory, Random variables, binomial, Poisson and normal distributions. Linear algebra, Linear ordinary differential equations of first and second order. Partial differential equations (Laplace, wave and heat equations in two and three dimensions). Elements of numerical techniques: root of functions, interpolation, and extrapolation, integration by trapezoid and Simpson’s rule, solution of first order differential equation using Runge-Kutta method, Introduction to finite difference and finite elements methods. 3. Electromagnetism: Electrostatic and magneto-static fields, Coulomb’s law, Electrical permittivity and dielectric constant, Lorentz force and their applications. Ampere’s law, Biot and Savart’s law, Gauss’s Theorem, Poisson’s equation. Laplace’s equation: solution of Laplace’s equation in Cartesian coordinates, use of Laplace’s equation in the solutions of geophysical and electrostatic problems. Displacement current, Faraday’s law of electromagnetic induction. Maxwell’s equations. Boundary conditions. Wave equation, plane electromagnetic waves in free space, dielectric and conducting media, electromagnetic vector and scalar potentials. 4. Geophysical Prospecting: Elements of geophysical methods: Principles, data reduction and applications of gravity, magnetic, electrical, electromagnetic and well logging methods. Fundamentals of seismic methods: Fermat’s Principle, Snell’s Law, Energy portioning, Reflection and transmission coefficients, Reflection and Refraction from layered media. Signals and systems, sampling theorem, aliasing effect, Fourier series and periodic waveforms, Fourier transform and its application, Laplace transforms, Convolution, Auto and cross correlations, Power spectrum, Delta function, unit step function. 5. 
Remote Sensing and Thermodynamics: Fundamentals of remote sensing, electromagnetic spectrum, energyfrequency-wavelength relationship, Stefan-Boltzmann Law, Wien’s Law, electromagnetic energy and its interactions in the atmosphere and with terrain features. Planck’s Radiation Law. Laws of thermodynamics and thermodynamic potential. 6. Nuclear Physics and Radiometry: Basic nuclear properties: size, shape, charge distribution, spin and parity; Binding energy, semi-empirical mass formula; Fission and fusion. Principles of radioactivity, Alpha, beta and gamma decays, Photoelectric and Compton Effect, Pair Production, radioactivity decay law, radioactivity of rocks and minerals, Radiation Detectors: Ionization chamber, G-M counter, Scintillation counter and Gamma ray spectrometer. Matter Waves and wave particle duality, Electron spin, Spectrum of Hydrogen, helium and alkali atoms. Stage-I (Objective Type) Paper-II : Chemistry 1. Chemical periodicity: Schrödinger equation for the H-atom. Radial distribution curves for 1s, 2s, 2p, 3s, 3p, 3d orbitals. Electronic configurations of multi-electron atoms. Periodic table, group trends and periodic trends in physical properties. Classification of elements on the basis of electronic configuration. Modern IUPAC Periodic table. General characteristics of s, p, d and f block elements. Effective nuclear charges, screening effects, atomic radii, ionic radii, covalent radii. Ionization enthalpy, electron gain enthalpy and electronegativity. Group trends and periodic trends in these properties in respect of s-, p- and d-block elements. General trends of variation of electronic configuration, elemental forms, metallic nature, magnetic properties, catenation and catalytic properties, oxidation states, aqueous and redox chemistry in common oxidation states, properties and reactions of important compounds such as hydrides, halides, oxides, oxy-acids, complex chemistry in respect of s-block and p-block elements. 2. Chemical bonding and structure: Ionic bonding: Size effects, radius ratio rules and their limitations. Packing of ions in crystals, lattice energy, Born-Landé equation and its applications, Born-Haber cycle and its applications. Solvation energy, polarizing power and polarizability, ionic potential, Fajan’s rules. Defects in solids. Covalent bonding: Valence Bond Theory, Molecular Orbital Theory, hybridization. Concept of resonance, resonance energy, resonance structures. Coordinate bonding: Werner theory of coordination compounds, double salts and complex salts. Ambidentate and polydentate ligands, chelate complexes. IUPAC nomenclature of coordination compounds. Coordination numbers, Geometrical isomerism. Stereoisomerism in square planar and octahedral complexes. 3. Acids and bases: Chemical and ionic equilibrium. Strengths of acids and bases. Ionization of weak acids and bases in aqueous solutions, application of Ostwald’s dilution law, ionization constants, ionic product of water, pH-scale, effect of temperature on pH, buffer solutions and their pH values, buffer action & buffer capacity; different types of buffers and Henderson’s equation. 4. Theoretical basis of quantitative inorganic analysis: Volumetric Analysis: Equivalent weights, different types of solutions, normal and molar solutions. Primary and secondary standard substances. General principles of different types of titrations: i) acid-base, ii) redox, iii) complexometric, iv) Precipitation. Types of indicators – i) acid-base, ii) redox iii) metal-ion indicators. 5. 
Kinetic theory and the gaseous state: Kinetic theory of gases, average kinetic energy of translation, Boltzmann constant and absolute scale of temperature. Maxwell-Boltzmann distribution of speeds. Calculations of average, root mean square and most probable velocities. Collision diameter; collision number and mean free path; frequency of binary collisions; wall collision and rate of effusion. 6. Chemical thermodynamics and chemical equilibrium: First law and its applications to chemical problems. Thermodynamic functions. Total differentials and state functions. Free expansion, Joule Thomson coefficient and inversion temperature. Hess’ law. Applications of Second law of thermodynamics. Gibbs function (G) and Helmholtz function (A), Gibbs-Helmholtz equation, criteria for thermodynamic equilibrium and spontaneity of chemical processes. 7. Solutions of non-electrolytes: Colligative properties of solutions, Raoult’s Law, relative lowering of vapour pressure, osmosis and osmotic pressure; elevation of boiling point and depression of freezing point of solvents. Solubility of gases in liquids and solid solutions. 8. Electrochemistry: Cell constant, specific conductance and molar conductance. Kohlrausch’s law of independent migration of ions, ion conductance and ionic mobility. Equivalent and molar conductance at infinite dilution. Debye-Hückel theory. Application of conductance measurements. Conductometric titrations. Determination of transport number by moving boundary method. 9. Basic organic chemistry: Delocalized chemical bond, resonance, conjugation, hyperconjugation, hybridisation, orbital pictures of bonding sp3, sp2, sp: C-C, C-N and C-O system), bond polarization and bond polarizability. Reactive intermediates: General methods of formation, relative stability and reactivity of carbocations, carbanions and free radicals. 10. Stereochemistry: Configuration and chirality (simple treatment of elements of symmetry), optical isomerism of compounds containing two to three stereogenic centres, R,S nomenclature, geometrical isomerism in compounds containing two C=C double bonds (E,Z naming), and simple cyclic systems, Newman projection (ethane and substituted ethane). 11. Types of organic reactions: Aliphatic substitution reactions: SN1, SN2 mechanisms, stereochemistry, relative reactivity in aliphatic substitutions. Effect of substrate structure, attacking nucleophile, leaving group and reaction medium and competitive reactions. Elimination reactions: E1, E2, mechanisms, stereochemistry, relative reactivity in aliphatic eliminations. Effect of substrate structure, attacking base, leaving group, reaction medium and competitive reactions, orientation of the double bond, Saytzeff and Hoffman rules. Addition reactions: Electrophilic, nucleophilic and radical addition reactions at carbon-carbon double bonds. Electrophilic and nucleophilic aromatic substitution: Electrophilic (halogenation, sulphonation, nitration, Friedal-Crafts alkylation and acylation), nucleophilic (simple SNAr, SN1 and aryne reactions). 12. Molecular Rearrangements: Acid induced rearrangement and Wagner-Meerwein rearrangements. Neighbouring group participation. Syllabus of Combined Geo-Scientist (Main) Examination Stage-II (Descriptive Type) Geology : Paper-I Section A. 
Physical geology and remote sensing Evolution of Earth; Earth’s internal structure; earthquakes and volcanoes; principles of geodesy, isostasy; weathering- processes and products; geomorphic landforms formed by action of rivers, wind, glaciers, waves and groundwater; features of ocean floor; continental shelf, slope and rise; concepts of landscape evolution; major geomorphic features of India- coastal, peninsular and extrapeninsular. Electromagnetic spectrum; electromagnetic bands in remote sensing; spectral signatures of soil, rock, water and vegetation; thermal, near infra-red and microwave remote sensing; digital image processing; LANDSAT, IRS and SPOTcharacteristics and use; aerial photos- types, scale, parallax, relief displacement; elements of image interpretation. Section B. Structural geology Principles of geological mapping; kinematic and dynamic analysis of deformation; stress-strain relationships for elastic, plastic and viscous materials; measurement of strain in deformed rocks; structural analysis of fold, cleavage, boudin, lineation, joint, and fault; stereographic projection of linear and planar structures; superposed deformation; deformation at microscaledynamic and static recrystallisation, controls of strain rate and temperature on development of microfabrics; brittle and ductile shear zones; time relationship between crystallisation and deformation, calculation of paleostress. Section C. Sedimentology Classification of sedimentary rocks; sedimentary textures- grain size, roundness, sphericity, shape and fabric; quantitative grain size analysis; sediment transport and deposition- fluid and sediment gravity flows, laminar and turbulent flows, Reynold’s number, Froude number, grain entrainment, Hjulstrom diagram, bed load and suspension load transport; primary sedimentary structures; penecontemporaneous deformation structure; biogenic structures; principles and application of paleocurrent analysis; composition and significance of different types of sandstone, limestone, banded iron formation, mudstone, conglomerate; carbonate diagenesis and dolomitisation; sedimentary environments and facies- facies models for fluvial, glacial, deltaic, siliciclastic shallow and deep marine environments; carbonate platforms- types and facies models; sedimentation in major tectonic settings; principles of sequence stratigraphy- concepts and factors controlling base level changes, parasequence, clinoform, systems tract, unconformity and sequence boundary. Section D. Paleontology Fossil record and geological time scale; modes of preservation of fossils and concept of taphonomy; body- and ichno-fossils, species concept, organic evolution, Ediacara Fauna; morphology and time range of Graptolites, Trilobites, Brachiopods, Lamellibranchs, Gastropods, Cephalopods, Echinoids and Corals; evolutionary trends in Trilobites, Lamellibranchs, Gastropods and Cephalopods; micropaleontology- methods of preparation of microfossils, morphology of microfossil groups (Foraminifera, Ostracoda), fossil spores, pollen and dinoflagellates; Gondwana plant fossils and their significance; vertebrate life through ages, evolution in Proboscidea, Equidae and Hominidae; applications of paleontological data in stratigraphy, paleoecology and paleoclimatology; mass extinctions. Section E. 
Stratigraphy Principles of stratigraphy- code of stratigraphic nomenclature of India; lithostratigraphy, biostratigraphy, chronostratigraphy and magnetostratigraphy; principles of stratigraphic correlation; characteristics of Archean granitegreenstone belts; Indian stratigraphy- geological evolution of Archean nucleii (Dharwar, Bastar, Singhbhum, Aravalli and Bundelkhand); Proterozoic mobile belts- Eastern Ghats Mobile Belt, Southern Granulite Terrain, Central Indian Tectonic Zone, Aravalli-Delhi Belt, North Singhbhum Mobile Belt; Proterozoic sedimentary basins (Cuddapah and Vindhyan); Phanerozoic stratigraphyPaleozoic (Spiti, Kashmir and Kumaon), Mesozoic (Spiti, Kutch, Narmada Valley and Trichinopoly), Gondwana Supergroup, Cenozoic (Assam, Bengal basins, Garhwal-Shimla Himalayas); Siwaliks; boundary problems in Indian stratigraphy. Stage-II (Descriptive Type) Geology : Paper-II Section A. Mineralogy Symmetry, motif, Miller indices; concept of unit cell and Bravais lattices; 32 crystal classes; types of bonding, Pauling’s rules and coordination polyhedra; crystal imperfections- defects, twinning and zoning; polymorphism, pseudomorphism, isomorphism and solid solution; physical properties of minerals; polarising microscope and accessory plate; optical properties of minerals- double refraction, polarisation, pleochroism, sign of elongation, interference figure and optic sign; structure, composition, physical and optical properties of major rock-forming minerals- olivine, garnet, aluminosilicates, pyroxene, amphibole, mica, feldspar, clay, silica and spinel group. Section B. Geochemistry and isotope geology Chemical composition and characteristics of atmosphere, lithosphere, hydrosphere; geochemical cycles; meteorites- types and composition; Goldschmidt’s classification of elements; fractionation of elements in minerals/rocks; Nernst’s partition coefficient (compatible and incompatible elements), Nernst-Berthelot partition coefficient and bulk partition coefficient; Fick’s laws of diffusion and activity composition relation (Roult’s and Henry’s law); application of trace elements in petrogenesis; principles of equilibrium and Rayleigh fractionation; REE patterns, Eh and pH diagrams and mineral stability. Half-life and decay equation; dating of minerals and rocks with potassiumargon , rubidium-strontium, uranium-lead and samarium-neodymium isotopes; petrogenetic implications of samarium-neodymium and rubidium-strontium systems; stable isotope geochemistry of carbon, oxygen and sulphur and their applications in geology; monazite chemical dating. Section C. Igneous petrology Viscosity, temperature and pressure relationships in magmas; IUGS classification of plutonic and volcanic rocks; nucleation and growth of minerals in magmatic rocks, development of igneous textures; magmatic evolution (differentiation, assimilation, mixing and mingling); types of mantle melting (batch, fractional and dynamic); binary (albite-anorthite, forsterite-silica and diopside-anorthite) and ternary (diopside-forsterite-silica, diopside forsteriteanorthite and nepheline-kalsilite-silica) phase diagrams and relevance to magmatic crystallization; petrogenesis of granites, basalts, ophiolite suite, komatiites, syenites, boninites, anorthosites and layered complexes, and alkaline rocks (carbonatite, kimberlite, lamproite, lamprophyre); mantle metasomatism, hotspot magmatism and large igneous provinces of India. Section D. 
Metamorphic petrology Limits and physico-chemical controls (pressure, temperature, fluids and bulk rock composition) of metamorphism; concept of zones, facies, isograds and facies series, geothermal gradients and tectonics of orogenic belts; structures, micro-structures and textures of regional and contact metamorphic rocks; representation of metamorphic assemblages (ACF, AKF and AFM diagrams); equilibrium concept in thermodynamics; laws of thermodynamics, enthalpy, entropy, Gibb’s free energy, chemical potential, fugacity and activity; tracing the chemical reactions in P-T space, phase rule and mineralogical phase rule in multi-component system; Claussius-Clapeyron equation and slopes of metamorphic reactions; heat flow, diffusion and mass transfer; Fourier’s law of heat conduction; geothermobarometry; mass and energy change during fluidrock interactions; charnockite problem, formation of skarns, progressive and retrogressive metamorphism of pelitic, calcareous and basic rocks; P-T-t path and tectonic setting. Section E. Geodynamics Phase transitions and seismic discontinuities in the Earth; seismic waves and relation between Vp, Vs and density; seismic and petrological Moho; rheology of rocks and fluids (Newtonian and non-Newtonian liquids); rock magnetism and its origin; polarity reversals, polar wandering and supercontinent cycles; continental drift, sea floor spreading; gravity and magnetic anomalies of ocean floors and their significance; mantle plumes and their origin; plate tectonicstypes of plate boundaries and their inter-relationship; heat flow and heat production of the crust. Stage-II (Descriptive Type) Geology : Paper-III Section A. Economic geology Ore minerals and industrial minerals; physical and optical properties of ore minerals; ore textures and paragenesis; characteristics of mineral depositsspatial and temporal distribution, rock-ore association; syngenetic and epigenetic deposits, forms of ore bodies, stratiform and strata-bound deposits; ore forming processes- source and migration of ore constituents and ore fluid, mechanism of ore deposition; magmatic and pegmatitic deposits (chromite, Timagnetite, diamond, Cu-Ni sulphide, PGE, REE, muscovite, rare metals); hydrothermal deposits (porphyry Cu-Mo, greisen Sn-W, skarn, VMS and SEDEX type sulphide deposits, orogenic gold); sedimentary deposits (Fe, Mn, phosphorite, placer); supergene deposits (Cu, Al, Ni and Fe); metamorphic and metamorphosed deposits (Mn, graphite); fluid inclusions in ore mineral assemblage- physical and chemical properties, microthermometry; stable isotope (S, C, O, H) in ore genesis- geothermometry, source of ore constituents; global tectonics and mineralisation. Section B. Indian mineral deposits and mineral economics Distribution of mineral deposits in Indian shield; geological characteristics of important industrial mineral and ore deposits in India- chromite, diamond, muscovite, Cu-Pb-Zn, Sn-W, Au, Fe-Mn, bauxite; minerals used in refractory, fertilizer, ceramic, cement, glass, paint industries; minerals used as abrasive, filler; building stones. Strategic, critical and essential minerals; India’s status in mineral production; co-products and by-products; consumption, substitution and conservation of minerals; National Mineral Policy; Mineral Concession Rules; marine mineral resources and laws of the sea. Section C. 
Mineral exploration Stages of exploration; scope, objectives and methods of prospecting, regional exploration and detailed exploration; geological, geochemical and geobotanical methods; litho-, bio-, soil geochemical surveys, mobility and dispersion of elements, geochemical anomalies; ore controls and guides; pitting, trenching, drilling; sampling, assaying, ore reserve estimation; categorization of ore reserves; geophysical methods- ground and airborne surveys; gravity, magnetic, electrical and seismic methods of mineral exploration. Section D. Fuel geology and Engineering geology Coal and its properties; proximate and ultimate analysis; different varieties and ranks of coal; concept of coal maturity, peat, lignite, bituminous and anthracite coal; origin of coal, coalification process; lithotypes, microlithotypes and maceral groups of coal; mineral and organic matter in coal; lignite and coal deposits of India; origin, migration and entrapment of natural hydrocarbons; characteristics of source and reservoir rocks; structural, stratigraphic and mixed traps; geological, geochemical and geophysical methods of hydrocarbon exploration; petroliferous basins of India; geological characteristics and genesis of major types of U deposits and their distribution in India. Engineering properties of rocks; geological investigations in construction of dams, reservoirs, tunnels, bridges, highways and coastal protection structures; geologic considerations of construction materials. Section E. Environmental geology and Natural hazards Stefan-Boltzmann equation and planetary temperature; cause and effects of global climate change; Earth’s radiation budget; greenhouse gases and effect; examples of positive and negative feedback mechanisms; biogeochemical cycle of carbon; geological investigations of nuclear waste disposal sites; marginal marine environments- estuaries, mangroves and lagoons; ozone hole depletion, ocean acidification, coral bleaching, Milankovitch cycle, sea level rise, eutrophication and acid rain; environmental impacts of urbanization, mining and hydropower projects; water pollution, water logging and soil erosion; Himalayan glaciers; causes and consequences of earthquakes, volcanoes, tsunami, floods, landslides, coastal erosion, droughts and desertification; application of remote sensing and geographic information systems (GIS) in environmental management. Stage-II (Descriptive Type) Section A. Occurrence and distribution of groundwater Origin of water on Earth; global water cycle and budget; residence time concept, geologic formations as aquifers; confined and unconfined aquifers; groundwater table mapping and piezometric nests; porosity, void ratio, effective porosity and representative porosity range; primary and secondary porosities; groundwater zonation; specific retention, specific yield; groundwater basins; springs. Section B. Groundwater movement and well hydraulics Groundwater flow concepts; Darcy’s Law in isotropic and anisotropic media and validity; water flow rates, direction and water volume in aquifers; permeability and hydraulic conductivity and ranges in representative rocks; Bernoulli equation; determination of hydraulic conductivity in field and laboratory; concept of groundwater flow through dispersion and diffusion; transmissivity and aquifer thickness. Section C. 
Water wells and groundwater levels Unidirectional and radial flow to a well (steady and unsteady); well flow near aquifer boundaries; methods for constructing shallow wells, drilling wells, well completion; testing wells, pumping test, slug tests for confined and unconfined aquifers; fluctuations in groundwater levels; stream flow and groundwater flows; groundwater level fluctuations; land subsidence; impact of global climate change on groundwater. Section D. Groundwater exploration Surface investigation of groundwater- geologic, remote sensing, electrical resistivity, seismic, gravity and magnetic methods; sub-surface investigation of groundwater- test drilling, resistivity logging, spontaneous potential logging, radiation logging. Section E. Groundwater quality and management Groundwater composition, units of expression, mass-balance calculations; rockwater interaction (chemical equilibrium, free energy, redox reactions and cation/anion exchanges), graphic representation of chemical data; groundwater hardness, microorganisms in groundwater; water quality standards; sea-water intrusion; groundwater issues due to urbanization; solid and liquid waste disposal and plume migration models; application of isotopes (H, C, O) in groundwater; concepts of artificial recharge methods; managing groundwater resources; groundwater basin investigations and management practices. Stage-II (Descriptive Type) Geophysics : Paper-I A1. Solid Earth Geophysics: Introduction to Geophysics and its branches. Solar system: origin, characteristics of planets, Earth: rotation and figure, Geoid, Spheroid and topography. Plate tectonics and Geodynamic processes, Thermal history and heat flow, Temperature variation in the earth, convection currents. Gravity field of earth and Isostasy. Geomagnetism, elements of earth’s magnetism: Internal and External fields and their causes, Paleomagnetism, Polar wandering paths, Continental drift, Seafloor spreading and its geophysical evidences. Elastic Waves, Body Waves and internal structure of earth, variation of physical properties in the interior of earth, Adam-Williamson’s Equation. A2. Earthquake Seismology: Seismology, earthquakes, focal depth, epicenter, great Indian earthquakes, Intensity and Magnitude scales, Energy of earthquakes, foreshocks, aftershocks, Elastic rebound theory, Types and Nature of faulting, Fault plane solutions, Seismicity and Seismotectonics of India, Frequency-Magnitude relation (bvalues). Bulk and rigidity modulus, Lame’s Parameter, Seismic waves: types and their propagation characteristics, absorption, attenuation and dispersion. Seismic ray theory for spherically and horizontally stratified earth, basic principles of Seismic Tomography and receiver function analysis, Velocity structure, Vp/Vs studies, Seismic network and arrays, telemetry systems, Principle of electromagnetic seismograph, displacement meters, velocity meters, accelerometers, Broadband Seismometer, WWSSN stations, seismic arrays for detection of nuclear explosions. Earthquake prediction; dilatancy theory, short-, medium- and long- term predictions, Seismic microzonations, Applications for engineering problems. A3. 
Mathematical methods in Geophysics: Elements of vector analysis, Gradient, Divergence and Curl, Gauss’s divergence theorem, Stoke’s theorem, Gravitational field, Newton’s Law of gravitation, Gravitation potential and fields due to bodies of different geometric shapes, Coulomb’s law, Electrical permittivity and dielectric constant, Origin of Magnetic field, Ampere’s law, Biot and Savart’s law, Geomagnetic fields, Magnetic fields due to different type of structures, Solution of Laplace equation in Cartesian, Cylindrical and Spherical Coordinates, Image theory, Electrical fields due to charge, point source, continuous charge distribution and double layers, equipotential and line of force. Current and potential in the earth, basic concept and equations of electromagnetic induction, Maxwell’s Equation, near and far fields, Attenuation of EM waves, EM field of a loops of wire on half space and multi-layered media. A4. Geophysical Inversion: Fundamental concepts of inverse theory, Definition and its application to Geophysics. Probability, Inversion with discrete and continuous models. Forward problems versus Inverse problems, direct and model based inversions, Formulation of inverse problems, classification of inverse problems, least square solutions and minimum norm solution, concept of norms, Jacobian matrix, Condition number, Stability, non-uniqueness and resolution of inverse problems, concept of ‘a priori’ information, constrained linear least squares inversion, review of matrix theory. Models and data spaces, data resolution matrix, model resolution matrix, Eigen values and Eigen vectors, singular value decomposition (SVD), Gauss Newton method, steepest descent (gradient) method, Marquardt-Levenberg method. Probabilistic approach of inverse problems, maximum likelihood and stochastic inverse methods, Random search inversion (Monte-Carlo) Backus-Gilbert method, Bayesian Theorem and Inversion. Global optimization techniques: genetic algorithm and simulated annealing methods. B1. Mathematical Methods of Physics: Dimensional analysis; Units and measurement; Vector algebra and vector calculus; Linear algebra, Matrices: Eigenvalues and eigenvectors; Linear ordinary differential equations of first and second order; Special functions (Hermite, Bessel, Laguerre and Legendre); Fourier series, Fourier and Laplace transforms; Elementary probability theory, Random variables, Binomial, Poisson and normal distributions; Green’s function; Partial differential equations (Laplace, wave and heat equations in two and three dimensions); Elements of numerical techniques: root of functions, interpolation, and extrapolation, integration by trapezoid and Simpson’s rule, solution of first order differential equation using Runge-Kutta method; Tensors; Complex variables and analysis; Analytic functions; Taylor & Laurent series; poles, residues and evaluation of integrals; Beta and Gamma functions. Operators and their properties; Leastsquares fitting. B2. 
Electrodynamics: Electrostatics: Gauss’ Law and its applications; Laplace and Poisson equations, Boundary value problems; Magnetostatics: Biot-Savart law, Ampere’s theorem; Ampere’s circuital law; Magnetic vector potential; Faraday’s law of electromagnetic induction; Electromagnetic vector and scalar potentials; Uniqueness of electromagnetic potentials and concept of gauge: Lorentz and Coulomb gauges; Lorentz force; Charged particles in uniform and non-uniform electric and magnetic fields; Poynting theorem; Electromagnetic fields from Lienard-Wiechert potential of a moving charge; Bremsstrahlung radiation; Cerenkov radiation; Radiation due to oscillatory electric dipole; Condition for plasma existence; Occurrence of plasma; Magnetohydrodynamics; Plasma waves; Transformation of electromagnetic potentials; Lorentz condition; Invariance or covariance of Maxwell field equations in terms of 4 vectors; Electromagnetic field tensor; Lorentz transformation of electric and magnetic fields. B3. Electromagnetic Theory: Maxwell’s equations: its differential and integral forms, physical significance; Displacement current; Boundary conditions; Wave equation, Plane electromagnetic waves in: free space, non-conducting isotropic medium, conducting medium; Scalar and vector potentials; Reflection; refraction of electromagnetic waves; Fresnel’s Law; interference; coherence; diffraction and polarization; Lorentz invariance of Maxwell’s equations; Transmission lines and waveguides. B4. Introductory Atmospheric and Space Physics: The neutral atmosphere; Atmospheric nomenclature; Height profile of atmosphere; Hydrostatic equation; Geopotential height; Expansion and contraction; Fundamental forces in the atmosphere; Apparent forces; Atmospheric composition; Solar radiation interaction with the neutral atmosphere; Climate change; Electromagnetic radiation and propagation of Waves: EM Radiation; Effects of environment; Antennas: basic considerations, types. Propagation of waves: ground wave, sky wave, and space wave propagation; troposcatter communication and extra terrestrial communication; The Ionosphere; Morphology of ionosphere: the D, E and F-regions; Chemistry of the ionosphere Ionospheric parameters E and F region anomalies and irregularities in the ionosphere; Global Positioning Systems (GPS): overview of GPS system, augmentation services GPS system segment; GPS signal characteristics; GPS errors; multi path effects; GPS performance; Satellite navigation system and applications. Stage-II (Descriptive Type) Geophysics : Paper-II A1. Potential Field (Gravity and Magnetic) Methods: Geophysical potential fields, Inverse square law, Principles of Gravity and Magnetic methods, Global gravity anomalies, Newtonian and logarithmic potential, Laplace’s equations for potential field. Green’s Function, Concept of gravity anomaly, Rock densities, factors controlling rock densities, determination of density, Earth’s main magnetic field, origin, diurnal and secular variations of the field, Geomagnetic elements, intensity of magnetization and induction, magnetic potential and its relation to field, units of measurement, interrelationship between different components of magnetic fields, Poisson’s relation, Magnetic susceptibility, factors controlling susceptibility. Magnetic Mineralogy: Hysteresis, rock magnetism, natural, and remnant magnetization, demagnetization effects. 
Principles of Gravity and Magnetic instruments, Plan of conducting gravity and magnetic surveys, Gravity and Magnetic data reduction, Gravity bases, International Gravity formula, IGRF corrections. Concept of regional and residual anomalies and various methods of their separation, Edge Enhancement Techniques (Derivatives, Continuation, Analytical Signal, Reduced to Pole and Euler Deconvolution), ambiguity in potential field interpretation, Factors affecting magnetic anomalies, Application of gravity and magnetics in geodynamic, mineral exploration and environmental studies. Qualitative interpretation, Interpretation of gravity and magnetic anomalies due to bodies of different geometric shapes, and modeling.

A2. Electrical and Electromagnetic methods: Electrical properties of rocks and minerals, concepts and assumptions of horizontally stratified earth, anisotropy and its effects on electrical fields, geoelectric and geological sections, D.C. Resistivity method. Concept of natural electric field, various electrode configurations, Profiling and Sounding (VES). Types of Sounding curves, Equivalence and Suppression, Concept of Electrical Resistivity Tomography (ERT). SP Method: Origin of SP, application of SP surveys. Induced Polarization (IP) Method: Origin of IP, Membrane and Electrode polarization, time and frequency domains of measurement, chargeability, percent frequency effect and metal factor, Application of IP surveys for mineral exploration. Electromagnetic methods, Passive and Active source methods, Diffusion equation, wave equation and damped wave equation used in EM method, boundary conditions, skin depth, depth of investigation and depth of penetration, amplitude and phase relations, real and imaginary components, elliptical polarization, Principles of EM prospecting, various EM methods: Dip angle, Turam, moving source-receiver methods - horizontal loop (Slingram), AFMAG, and VLF. Principles of Time Domain EM: INPUT method. EM Profiling and sounding, Interpretation of EM anomalies. Principle of EM scale modeling. Magnetotelluric methods: Origin and characteristics of MT fields, Instrumentation, Transverse Electric and Transverse Magnetic Modes, Static Shift. Dimensionality and Directionality analysis. Field Layout and interpretation of MT data and its applications. Principles of Ground Penetrating Radar (GPR).

A3. Seismic Prospecting: Basic principles of seismic methods, Various factors affecting seismic velocities in rocks, Reflection, refraction and Energy partitioning at an interface, Geometrical spreading, Reflection and refraction of wave phenomena in layered and dipping media. Seismic absorption and anisotropy, Multi-channel seismic (CDP) data acquisition (2D and 3D), sources of energy, Geophones, geometry of arrays, different spread geometry, Instrumentation, digital recording. Different types of multiples, Travel time curves, corrections, Interpretation of data, bright spot, low velocity layer, Data processing, static and dynamic (NMO and DMO) corrections, shot-receiver gather, foldage, multiplexing and demultiplexing. Dix’s equation, Velocities: Interval, Average and RMS, Seismic resolution and Fresnel Zone, Velocity analysis and Migration techniques, Seismic Interpretation, Time and Depth Section, Fundamentals of VSP method, High Resolution Seismic Surveys (HRSS).

A4.
Borehole Geophysics: Objectives of well logging, concepts of borehole geophysics, borehole conditions, properties of reservoir rock formations, formation parameters and their relationships-formation factor, porosity, permeability, formation water resistivity, water saturation, irreducible water saturation, hydrocarbon saturation, residual hydrocarbon saturation; Arhcie’s and Humble’s equations; principles, instrumentations, operational procedures and interpretations of various geophysical logs: SP, resistivity and micro resistivity, gamma ray, neutron, sonic, temperature, caliper and directional logs. Production logging, overlay and cross-plots of well-log data, determination of formation lithology, porosity, permeability and oil-water saturation, sub-surface correlation and mapping, delineation of fractures; application of well-logging in hydrocarbon, groundwater, coal, metallic and non-metallic mineral exploration. B1. Classical Mechanics Inertial and non-inertial frames, Newton’s laws; Pseudo forces; Central force motion; Two-body collisions, Scattering in laboratory and centre-of-mass frames; Rigid body dynamics, Moment of inertia, Variational principle, Lagrangian and Hamiltonian formalisms and equations of motion; Poisson brackets and canonical transformations; Symmetry, Invariance and conservation laws, Cyclic coordinates; Periodic motion, Small oscillations and normal modes; Special theory of relativity, Lorentz transformations, Relativistic kinematics and mass-energy equivalence. B2. Thermodynamics and Statistical Physics Laws of thermodynamics and their significance; Thermodynamic potentials, Maxwell relations; Chemical potential, Phase equilibria; Phase space, Micro- and macro- states; Micro canonical, canonical and grand-canonical ensembles and partition functions; Free Energy and connection with thermodynamic quantities; First and second order phase transitions; Maxwell-Boltzmann distribution, Quantum statistics, Ideal Fermi and Bose gases; Principle of detailed balance; Blackbody radiation and Planck’s distribution law; Bose-Einstein condensation; Random walk and Brownian motion; Diffusion equation. B3. Atomic and Molecular Physics and Characterization of materials Quantum states of an electron in an atom; Electron spin; Stern-Gerlach experiment; Spectrum of Hydrogen, Helium and alkali atoms; Relativistic corrections for energy levels of hydrogen; Hyperfine structure and isotopic shift; Width of spectral lines; LS and JJ coupling; Zeeman, Paschen Back and Stark effects; Rotational, vibrational, electronic, and Raman spectra of diatomic molecules; Frank-Condon principle; Thermal and optical properties of materials, Study of microstructure using SEM, Study of crystal structure using TEM, Resonance methods: Spin and applied magnetic field, Larmor precession, relaxation times – spin-spin relaxation, Spin-lattice relaxation, Electron spin resonance, g factor, Nuclear Magnetic resonance, line width, Motional narrowing, Hyperfine splitting; Nuclear Gamma Resonance: Principles of Mössbauer Spectroscopy, Line width, Resonance absorption, Isomer Shift, Quadrupole splitting. B4. 
Nuclear and Particle Physics Basic nuclear properties: size, shape, charge distribution, spin and parity; Binding energy, Packing fraction, Semi-empirical mass formula; Liquid drop model; Fission and fusion, Nuclear reactor; Line of stability, Characteristics of the nuclear forces, Nucleon-nucleon potential; Charge-independence and charge-symmetry of nuclear forces; Isospin; Deuteron problem; Evidence of shell structure, Single-particle shell model and, its validity and limitations; Elementary ideas of alpha, beta and gamma decays and their selection rules; Nuclear reactions, reaction mechanisms, compound nuclei and direct reactions; Classification of fundamental forces; Elementary particles (quarks, baryons, mesons, leptons); Spin and parity assignments, strangeness; Gell MannNishijima formula; C, P and T invariance and applications of symmetry arguments to particle reactions, Parity non-conservation in weak interaction; Relativistic kinematics Stage-II (Descriptive Type) Geophysics : Paper-III A1. Radiometric and Airborne Geophysics: Principles of radioactivity, radioactivity decay processes, units, radioactivity of rocks and minerals, Instruments, Ionization chamber, G-M counter, Scintillation counter, Gamma ray spectrometer, Radiometric prospecting for mineral exploration (Direct/Indirect applications), beach placers, titanium, zirconium and rare-earths, radon studies in seismology and environmental applications. Airborne geophysical surveys (gravity, magnetic, electromagnetic and radiometric), planning of surveys, flight path recovery methods. Applications in geological mapping, identification of structural features and altered zones. A2. Marine Geophysics: Salinity, temperature and density of sea water. Introduction to Sea-floor features: Physiography, divisions of sea floor, continental shelves, slopes, and abyssal plains, growth and decline of ocean basins, turbidity currents, occurrence of mineral deposits and hydrocarbons in offshore. Geophysical surveys and instrumentation: Gravity, Magnetic and electromagnetic surveys, Sonobuoy surveys, Instrumentation used in ship borne surveys, towing cable and fish, data collection and survey procedures, corrections and interpretation of data. Oceanic magnetic anomalies, Vine-Mathews hypothesis, geomagnetic time scale and dating sea floor, Oceanic heat flow, ocean ridges, basins, marginal basins, rift valleys. Seismic surveys, energy sources, Pinger, Boomer, Sparker, Air gun, Hydrophones and steamer cabling. Data reduction and interpretation. Ocean Bottom Seismic surveys. Bathymetry, echo sounding, bathymetric charts, sea bed mapping. Navigation and Position fixing methods. A3. Geophysical Signal Processing: Time Series, Types of signals, sampling theorem, aliasing effect, Fourier series of periodic waveforms, Fourier transform and its properties, Discrete Fourier transform and FFT, Hilbert Transform, Convolution and Deconvolution, Auto and cross correlations, Power spectrum, Delta function, unit step function. Time domain windows, Z transform and properties, Inverse Z transform. Poles and zeroes. Principles of digital filters, types of filters: recursive, non recursive, time invariant, Chebyshev, Butterworth, moving average, amplitude and phase response of filters, low pass, band pass and high pass filters. Processing of Random signals. Improvement of signal to noise ratio, source and geophone arrays as spatial filters. Earth as low pass filter. A4. 
Remote Sensing and Geohydrology: Fundamental concepts of remote sensing, electromagnetic radiation spectrum, Interaction of electromagnetic energy and its interactions in atmosphere and surface of the earth, elements of photographic systems, reflectance and emittance, false color composites, remote sensing platforms, flight planning, geosynchronous and sun synchronous orbits, sensors, resolution, parallax and vertical exaggeration, relief displacement, mosaic, aerial photo interpretation and geological application. Fundamentals of photogrammetry, satellite remote sensing, multi-spectral scanners, thermal scanners, microwave remote sensing, fundamental of image processing and interpretation for geological applications. Types of water bearing formations, porosity, permeability, storage coefficient, specific storage, specific retention, specific yield, Different types of aquifers, vertical distribution of ground water, General flow equation; steady and unsteady flow of ground water in unconfined and confined aquifers. B1. Solid State Physics and Basic Electronics Crystalline and amorphous structure of matter; Different crystal systems, Space groups; Methods of determination of crystal structure; X-ray diffraction, Scanning and transmission electron microscopes; Band theory of solids, conductors, insulators and semiconductors; Thermal properties of solids, Specific heat: Einstein’s and Debye theory; Magnetism: dia, para and ferro; Elements of superconductivity; Meissner effect, Josephson junctions and applications; Elementary ideas about high temperature superconductivity. Semiconductor devices and circuits: Intrinsic and Extrinsic semiconductors; Devices and structures (p-n junctions, diodes, transistors, FET, JFET and MOSFET, homo and hetero junction transistors, thermistors), Device characteristics, Frequency dependence and applications. Opto-electronic devices (solar cells, photo detectors, LEDs) Operational amplifiers and their applications. B2. Laser systems Spontaneous and stimulated emission of radiation. Coherence, Light amplification and relation between Einstein A and B coefficients. Rate equations for three and four level systems. Lasers: Ruby, Nd-YAG, CO2, Dye, Excimer, Semiconductor. Laser cavity modes, Line shape function and full width at half maximum (FWHM) for natural broadening, collision broadening, Doppler broadening; Saturation behavior of broadened transitions, Longitudinal and transverse modes. Mode selection, ABCD matrices and cavity stability criteria for confocal resonators. Quality factor, Expression for intensity for modes oscillating at random and mode-locked in phase. Methods of Q-switching and mode locking. Optical fiber waveguides, Fiber characteristics. B3. Digital electronics, Radar systems, Satellite communications Digital techniques and applications: Boolean identities, de Morgan’s theorems, Logic gates and truth tables; Simple logic circuits: registers, counters, comparators and similar circuits). A/D and D/A converters. Microprocessor: basics and architecture; Microcontroller basics. Combination and sequential logic circuits, Functional diagram, Timing diagram of read and write cycle, Data transfer techniques: serial and parallel. Fundamentals of digital computers. Radar systems, Signal and data processing, Surveillance radar, Tracking radar, Radar antenna parameters. Fundamentals of satellite systems, Communication and Orbiting satellites, Satellite frequency bands, Satellite orbit and inclinations. Earth station technology. B4. 
Quantum Mechanics Wave-particle duality; Wave functions in coordinate and momentum representations; Commutators and Heisenberg’s uncertainty principle; Schrodinger’s wave equation (time-dependent and time-independent); Eigenvalue problems: particle in a box, harmonic oscillator, tunneling through a 1-D barrier; Motion in a central potential; Orbital angular momentum; Addition of angular momentum; Hydrogen atom; Matrix representation; Dirac’s bra and ket notations; Time-independent perturbation theory and applications; Variational method; WKB approximation; Time dependent perturbation theory and Fermi’s Golden Rule; Selection rules; Semi-classical theory of radiation; Elementary theory of scattering, Phase shifts, Partial waves, Born approximation; Identical particles, Pauli’s exclusion principle, Spin-statistics connection; Relativistic quantum mechanics: Klein Gordon and Dirac equations. Stage-II (Descriptive Type) Chemistry : Paper-I (Inorganic Chemistry) 1. Inorganic solids: Defects, non-stoichiometric compounds and solid solutions, atom and ion diffusion, solid electrolytes. Synthesis of materials, monoxides of 3d-metals, higher oxides, complex oxides (corundrum, ReO3, spinel, pervoskites), framework structures (phosphates, aluminophosphates, silicates, zeolites), nitrides and fluorides, chalcogenides, intercalation chemistry, semiconductors, molecular materials. 2. Chemistry of coordination compounds: Isomerism, reactivity and stability: Determination of configuration of cis- and trans- isomers by chemical methods. Labile and inert complexes, substitution reactions on square planar complexes, trans effect. Stability constants of coordination compounds and their importance in inorganic analysis. Structure and bonding: Elementary Crystal Field Theory: splitting of dn configurations in octahedral, square planar and tetrahedral fields, crystal field stabilization energy, pairing energy. Jahn-Teller distortion. Metal-ligand bonding, sigma and pi bonding in octahedral complexes and their effects on the oxidation states of transition metals. Orbital and spin magnetic moments, spin only moments and their correlation with effective magnetic moments, d-d transitions; LS coupling, spectroscopic ground states, selection rules for electronic spectral transitions; spectrochemical series of ligands, charge transfer spectra. 3. Acid base titrations: Titration curves for strong acid-strong base, weak acid-strong base and weak base-strong acid titrations, polyprotic acids, poly-equivalent bases, determining the equivalence point: theory of acid-base indicators, pH change range of indicator, selection of proper indicator. Principles used in estimation of mixtures of NaHCO3 and Na2CO3 (by acidimetry). 4. Gravimetric Analysis: General principles: Solubility, solubility product and common ion effect, effect of temperature on the solubility; Salt hydrolysis, hydrolysis constant, degree of hydrolysis. Stoichiometry, calculation of results from gravimetric data. Properties of precipitates. Nucleation and crystal growth, factors influencing completion of precipitation. Co-precipitation and post-precipitation, purification and washing of precipitates. Precipitation from homogeneous solution. A few common gravimetric estimations: chloride as silver chloride, sulphate as barium sulphate, aluminium as oxinate and nickel as dimethyl glyoximate. 5. Redox Titrations: Standard redox potentials, Nernst equation. 
Influence of complex formation, precipitation and change of pH on redox potentials, Normal Hydrogen Electrode (NHE). Feasibility of a redox titration, redox potential at the equivalence point, redox indicators. Redox potentials and their applications. Principles behind Iodometry, permanganometry, dichrometry, difference between iodometry and iodimetry. Principles of estimation of iron, copper, manganese, chromium by redox titration. 6. Complexometric titrations: Complex formation reactions, stability of complexes, stepwise formation constants, chelating agents. EDTA: acidic properties, complexes with metal ions, equilibrium calculations involving EDTA, conditional formation constants, derivation of EDTA titration curves, effect of other complexing agents, factors affecting the shape of titration curves: indicators for EDTA titrations, titration methods employing EDTA: direct, back and displacement titrations, indirect determinations, titration of mixtures, selectivity, masking and demasking agents. Typical applications of EDTA titrations: hardness of water, magnesium and aluminium in antacids, magnesium, manganese and zinc in a mixture, titrations involving unidentate ligands: titration of chloride with Hg2+ and cyanide with Ag+. 7. Organometallic compounds: 18-electron rule and its applications to carbonyls and nature of bonding involved therein. Simple examples of metal-metal bonded compounds and metal clusters. Wilkinson’s catalyst. 8. Nuclear chemistry: Radioactive decay- General characteristics, decay kinetics, parent-daughter decay growth relationships, determination of half-lives. Nuclear stability. Decay theories. Unit of radioactivity. Preparation of artificial radionuclides by bombardment, radiochemical separation techniques. Experimental techniques in the assay of radioisotopes, Geiger-Muller counters. Solid state detectors. 9. Chemistry of d- and f-block elements: d-block elements: General comparison of 3d, 4d and 5d elements in terms of electronic configuration, elemental forms, metallic nature, atomization energy, oxidation states, redox properties, coordination chemistry, spectral and magnetic properties. f-block elements: Electronic configuration, ionization enthalpies, oxidation states, variation in atomic and ionic (3+) radii, magnetic and spectral properties of lanthanides, separation of lanthanides (by ion-exchange method). Stage-II (Descriptive Type) Chemistry : Paper-II (Physical Chemistry) 1. Kinetic theory and the gaseous state: Real gases, Deviation of gases from ideal behaviour; compressibility factor; van der Waals equation of state and its characteristic features. Existence of critical state. Critical constants in terms of van der Waals constants. Law of corresponding states and significance of second virial coefficient. Boyle temperature. 2. Solids: Nature of solid state. Band theory of solids: Qualitative idea of band theory, conducting, semiconducting and insulating properties. Law of constancy of angles, concept of unit cell, different crystal systems, Bravais lattices, law of rational indices, Miller indices, symmetry elements in crystals. X-ray diffraction, Bragg’s law. 3. Chemical thermodynamics and chemical equilibrium: Chemical potential in terms of Gibbs energy and other thermodynamic state functions and its variation with temperature and pressure. Gibbs-Duhem equation; fugacity of gases and fugacity coefficient. Thermodynamic conditions for equilibrium, degree of advancement. vant Hoff’s reaction isotherm. 
Equilibrium constant and standard Gibbs energy change. Definitions of KP, KC and Kx; van't Hoff's reaction isobar and isochore. Activity and activity coefficients of electrolytes / ions in solution. Debye-Hückel limiting law.

4. Chemical kinetics and catalysis: Second order reactions. Determination of order of reactions. Parallel and consecutive reactions. Temperature dependence of reaction rate, energy of activation. Collision Theory and Transition State Theory of reaction rates. Enthalpy of activation, entropy of activation, effect of dielectric constant and ionic strength on reaction rate, kinetic isotope effect. Physisorption and chemisorption, adsorption isotherms, Freundlich and Langmuir adsorption isotherms, BET equation, surface area determination; colloids, electrical double layer and colloid stability, electrokinetic phenomena. Elementary ideas about soaps and detergents, micelles, emulsions.

5. Electrochemistry: Types of electrochemical cells, cell reactions, emf and Nernst equation, ΔG, ΔH and ΔS of cell reactions. Cell diagrams and IUPAC conventions. Standard cells. Half-cells / electrodes, types of reversible electrodes. Standard electrode potential and principles of its determination. Concentration cells. Determination of ΔG°, K°, Ksp and pH. Basic principles of pH-metric and potentiometric titrations, determination of equivalence point and pKa values.

6. Quantum chemistry: Eigenfunctions and eigenvalues. Uncertainty relation, Expectation value. Hermitian operators. Schrödinger time-independent equation: nature of the equation, acceptability conditions imposed on the wave functions and probability interpretation of the wave function. Schrödinger equation for a particle in a one-dimensional box and its solution. Comparison with free particle eigenfunctions and eigenvalues. Particle in a 3-D box and the concept of degeneracy.

7. Basic principles and applications of spectroscopy: Electromagnetic radiation, interaction with atoms and molecules and quantization of different forms of energies. Units of frequency, wavelength and wavenumber. Condition of resonance and energy of absorption for various types of spectra; origin of atomic spectra, spectrum of the hydrogen atom. Rotational spectroscopy of diatomic molecules: Rigid rotor model, selection rules, spectrum, characteristic features of spectral lines. Determination of bond length, effect of isotopic substitution. Vibrational spectroscopy of diatomic molecules: Simple Harmonic Oscillator model, selection rules and vibration spectra. Molecular vibrations, factors influencing vibrational frequencies. Overtones, anharmonicity, normal mode analysis of polyatomic molecules. Raman Effect: Characteristic features and conditions of Raman activity with suitable illustrations. Rotational and vibrational Raman spectra.

8. Photochemistry: Franck-Condon principle and vibrational structure of electronic spectra. Bond dissociation and principle of determination of dissociation energy. Decay of excited states by radiative and non-radiative paths. Fluorescence and phosphorescence, Jablonski diagram. Laws of photochemistry: Grotthuss-Draper law, Stark-Einstein law of photochemical equivalence; quantum yield and its measurement for a photochemical process, actinometry. Photostationary state. Photosensitized reactions. Kinetics of HI decomposition, H2-Br2 reaction, dimerisation of anthracene.

Stage-II (Descriptive Type) Chemistry : Paper-III (Analytical and Organic)

PART-A (Analytical Chemistry)

A1.
Errors in quantitative analysis: Accuracy and precision, sensitivity, specific standard deviation in analysis, classification of errors and their minimization, significant figures, criteria for rejection of data, Q-test, t-test, and F-test, control chart, sampling methods, sampling errors, standard reference materials, statistical data treatment.

A2. Separation Methods: Chromatographic analysis: Basic principles of chromatography (partition, adsorption and ion exchange), column chromatography, plate concept, plate height (HETP), normal phase and reversed phase concept, thin layer chromatography, frontal analysis, principles of High Performance Liquid Chromatography (HPLC) and Gas Liquid Chromatography (GLC), and ion-exchange chromatography. Solvent extraction: Classification, principle and efficiency of the technique, mechanism of extraction, extraction by solvation and chelation, qualitative and quantitative aspects of solvent extraction, extraction of metal ions from aqueous solutions.

A3. Spectroscopic methods of analysis: Lambert-Beer's Law and its limitations. UV-Visible Spectroscopy: Basic principles of the UV-Vis spectrophotometer, Instrumentation consisting of source, monochromator, grating and detector, spectrophotometric determinations (estimation of metal ions from aqueous solutions, determination of composition of metal complexes using Job's method of continuous variation and the mole ratio method). Infra-red Spectrometry: Basic principles of instrumentation (choice of source, monochromator and detector) for single and double beam instruments, sampling techniques. Flame atomic absorption and emission spectrometry: Basic principles of instrumentation (choice of source, monochromator, detector, choice of flame and burner design), techniques of atomization and sample introduction, methods of background correction, sources of chemical interferences and methods of removal, techniques for the quantitative estimation of trace level metal ions. Basic principles and theory of AAS. Three different modes of AAS – Flame-AAS, VG-AAS, and GF-AAS. Single beam and double beam AAS. Function of the Hollow Cathode Lamp (HCL) and Electrodeless Discharge Lamp (EDL). Different types of detectors used in AAS. Qualitative and quantitative analysis.

A4. Thermal methods of analysis: Theory of thermogravimetry (TG), basic principle of instrumentation, techniques for quantitative analysis of Ca and Mg compounds.

A5. X-ray methods of Analysis: Introduction, theory of X-ray generation, X-ray spectroscopy, X-ray diffraction and X-ray fluorescence methods, instrumentation and applications. Qualitative and quantitative measurements. Powder diffraction method.

A6. Inductively coupled plasma spectroscopy: Theory and principles, plasma generation, utility of the peristaltic pump, sampler–skimmer systems, ion lens, quadrupole mass analyzer, dynode / solid state detector, different types of interferences – spectroscopic and non-spectroscopic interferences, isobaric and molecular interferences, applications.

A7. Analysis of geological materials: Analysis of minerals and ores – estimation of (i) CaCO3, MgCO3 in dolomite (ii) Fe2O3, Al2O3, and TiO2 in bauxite (iii) MnO and MnO2 in pyrolusite. Analysis of metals and alloys: (i) Cu and Zn in brass (ii) Cu, Zn, Fe, Mn, Al and Ni in bronze (iii) Cr, Mn, Ni, and P in steel (iv) Pb, Sb, Sn in 'type metal'. Introduction to petroleum: constituents and petroleum fractionation.
Analysis of petroleum products: specific gravity, viscosity, Doctor test, aniline point, colour determination, cloud point, pour point. Determination of water, neutralization value (acid and base numbers), ash content, Determination of lead in petroleum. Types of coal and coke, composition, preparation of sample for proximate and ultimate analysis, calorific value by bomb calorimetry.

PART B (Organic chemistry)

B1. Unstable, uncharged intermediates: Structure and reactivity of carbenes and nitrenes and their rearrangements (Reimer-Tiemann, Hofmann, Curtius, Lossen and Schmidt).

B2. Addition reactions: Addition to C-C multiple bonds: Mechanism of addition involving electrophiles, nucleophiles and free radicals (polymerization reactions of alkenes and substituted alkenes), Ziegler-Natta catalyst for polymerization, polyurethane, and conducting polymers; addition to conjugated systems (Diels-Alder reaction), orientation and reactivity (on simple cis- and trans- alkenes). Addition to carbon-heteroatom multiple bonds: Addition to the C=O double bond, structure and reactivity, hydration, addition of ROH, RSH, CN-, bisulphite, amine derivatives, hydride ions.

B3. Reactions at the carbonyl group: Cannizzaro, Aldol, Perkin, Claisen ester, benzoin, benzil-benzilic acid rearrangement, Mannich, Dieckmann, Michael, Stobbe, Darzens, Wittig, Doebner, Knoevenagel, Reformatsky reactions.

B4. Oxidation and Reduction: Reduction of C=C, Meerwein-Ponndorf reaction, Wolff-Kishner and Birch reduction. Oxidation of C=C, hydration, hydroxylation, hydroboration, ozonolysis, epoxidation, Sharpless epoxidation.

B5. Electrocyclic Reactions: Molecular orbital symmetry, frontier orbitals of ethylene, 1,3-butadiene, 1,3,5-hexatriene, allyl system, FMO approach, pericyclic reactions, Woodward-Hoffmann correlation diagram method and perturbation molecular orbital (PMO) approach for the explanation of pericyclic reactions under thermal and photochemical conditions. Simple cases of Norrish type-I and type-II reactions. Conrotatory and disrotatory motions of (4n) and (4n+2) polyenes with emphasis on [2+2] and [4+2] cycloadditions, sigmatropic rearrangements – shift of H and carbon moieties, Claisen, Cope, Sommelet-Hauser rearrangement.

B6. Spectroscopic methods of analysis: Infrared spectroscopy: Characteristic frequencies of organic molecules and interpretation of spectra. Modes of molecular vibrations, characteristic stretching frequencies of O-H, N-H, C-H, C-D, C=C, C=N, C=O functions; factors affecting stretching frequencies. Ultraviolet spectroscopy: Chromophores, auxochromes. Electronic transitions (σ−σ*, n-σ*, π-π* and n-π*), relative positions of λmax considering conjugative effect, steric effect, solvent effect, red shift (bathochromic shift), blue shift (hypsochromic shift), hyperchromic effect, hypochromic effect (typical examples). Woodward rules. Applications of UV spectroscopy to conjugated dienes, trienes, unsaturated carbonyl compounds and aromatic compounds. Nuclear Magnetic Resonance Spectrometry: (Proton and Carbon-13 NMR) Nuclear spin, NMR active nuclei, principle of proton magnetic resonance, equivalent and non-equivalent protons. Measurement of spectra, the chemical shift, shielding / deshielding of protons, upfield and downfield shifts, intensity of NMR signals and integration, factors affecting the chemical shifts; spin-spin coupling to 13C; 1H-1H first order coupling; some simple 1H-1H splitting patterns; the magnitude of 1H-1H coupling constants, diamagnetic anisotropy.
Mass spectrometry: Basic Principles, the mass spectrometer, isotope abundances; the molecular ion, metastable ions. McLafferty rearrangement.
Light-front holography - A new approach to relativistic hadron dynamics and nonperturbative QCD Universidad de Costa Rica, San José, Costa Rica    Stanley J. Brodsky SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94309, USA This research was supported by the Department of Energy contract DE–AC02–76SF00515. SLAC–PUB–15094. The holographic mapping of gravity in AdS space to QCD, quantized at fixed light-front time, provides a precise relation between the bound-state amplitudes in the fifth dimension of AdS space and the boost-invariant light-front wavefunctions describing the internal structure of hadrons in physical space-time. In particular, the elastic and transition form factors of the pion and the nucleons are well described in this framework. The light-front AdS/QCD holographic approach thus gives a frame-independent first approximation of the color-confining dynamics, spectroscopy, and excitation spectra of relativistic light-quark bound states in QCD. More generally, we show that the valence Fock-state wavefunctions of the eigensolutions of the light-front QCD Hamiltonian satisfy a single-variable relativistic equation of motion, analogous to the nonrelativistic radial Schrödinger equation, with an effective confining potential which systematically incorporates the effects of higher quark and gluon Fock states. The proposed method to compute the effective interaction thus resembles the two-particle-irreducible functional techniques used in quantum field theory. 1 Introduction Forty years after the discovery of QCD, the description of hadrons in terms of their fundamental quark and gluon constituents appearing in the QCD Lagrangian and the nature of color-confinement still remain among the most challenging problems of strong interaction dynamics. Euclidean lattice calculations provide an important numerical simulation of nonperturbative QCD. However, the excitation spectrum of hadrons represents an important challenge to lattice QCD due to the enormous computational complexity beyond ground-state configurations and the unavoidable presence of multi-hadron thresholds. In contrast, the incorporation of the AdS/CFT correspondence between gravity in AdS space and conformal field theories in physical space-time [1] has led to an analytic semiclassical approximation for strongly-coupled quantum field theories as well as providing important new physical insight into the wavefunctions and nonperturbative dynamics of relativistic light-hadron bound states [2]. Light-front (LF) holographic methods were originally introduced [3] by mapping the electromagnetic form factors in AdS space [4] to the corresponding expression at fixed LF time in physical space-time [5]. It was also shown that one obtains an identical mapping for the matrix elements of the energy-momentum tensor [6], by perturbing the AdS metric around its static solution [7]. In the “bottom-up” approach to the gauge/gravity duality [8, 9], fields in the bulk geometry are introduced to match the chiral symmetries of QCD. In contrast, in LF holography a direct connection with the internal constituent structure of hadrons is established using LF quantization  [2, 3, 6, 10]. The identification of AdS space with partonic physics in physical space-time is specific to the light front: the transition amplitudes in AdS are expressed as a wavefunction overlap [4] which maps precisely to the convolution of frame-independent LF wavefunctions (LFWFs) [5]. 
In contrast, the AdS convolution formula cannot be mapped to current matrix elements at ordinary fixed time since one must include connected currents from the vacuum which are not given by eigensolutions of the instant-time Hamiltonian. There are no such vacuum contributions in the LF for current matrix elements – in agreement with the AdS formulae. Furthermore, the instant-time wavefunctions must be boosted from the hadron's rest frame – an intractable dynamical problem. Unlike ordinary instant-time quantization, the Hamiltonian equation of motion in the LF is frame-independent and has a structure similar to eigenmode equations in AdS space. This makes the direct connection of QCD to AdS/CFT methods possible. In fact, one can also study the AdS/CFT duality and its modifications starting from the LF Hamiltonian equation of motion for a relativistic bound-state system in physical space-time [2]. To a first semiclassical approximation, where quantum loops and quark masses are not included, LF holography leads to a LF Hamiltonian equation which describes the bound-state dynamics of light hadrons in terms of an invariant impact kinematical variable which measures the separation of the partons within the hadron at equal LF time. Remarkably, the unmodified AdS equations correspond to the kinetic energy terms of the partons inside a hadron, whereas the interaction terms in the QCD Lagrangian build confinement and correspond to the truncation of AdS space in an effective dual gravity approximation [2]. Thus, all the complexities of strong-interaction dynamics are hidden in an effective confining potential, which acts in the valence sector of the theory, reducing the many-particle problem in QCD to an effective one-body problem. The derivation of the effective interaction directly from QCD then becomes the central issue.

2 The Light-front Schrödinger equation: a semiclassical approximation to QCD

The hadronic four-momentum generator in the front form [11] splits into kinematical and dynamical parts: the longitudinal and transverse generators do not depend on the interaction (they are kinematical generators which leave the LF plane invariant), while the dynamical generator contains the interactions. The latter is the LF time evolution operator, and it is constructed canonically from the QCD Lagrangian [12]. The hadronic mass states are determined by the Lorentz-invariant Hamiltonian equation for the relativistic bound state. The hadronic state is an expansion in multiparticle Fock states, where the components are a column vector of states, and the basis vectors are the n-parton eigenstates of the free LF Hamiltonian. For certain applications it is useful to reduce the multiparticle eigenvalue problem (1) to a single equation [13], instead of diagonalizing the Hamiltonian. The central problem then becomes the derivation of the effective interaction, which acts only on the valence sector of the theory and has, by definition, the same eigenvalue spectrum as the initial Hamiltonian problem. For carrying out this program one must systematically express the higher Fock components as functionals of the lower ones. The method has the advantage that the Fock space is not truncated and the symmetries of the Lagrangian are preserved [13]. In our recent work we have shown how light front holographic methods lead to a remarkably simple equation of motion for mesons at fixed light-front time.
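The displayed equations referred to in this paragraph did not survive extraction. As a hedged restatement in the standard notation of light-front quantization (the symbols below follow common usage in the literature and are not necessarily the authors' exact rendering), the eigenvalue problem and the Fock expansion read:

% Hedged restatement of the missing displayed equations (standard LF notation).
% H_LF = P^+ P^- - P_perp^2 is the light-front Hamiltonian; M is the hadron mass;
% x_i, k_perp_i, lambda_i are the constituents' momentum fractions, relative
% transverse momenta and helicities.
\begin{align}
  H_{LF}\,|\psi(P)\rangle &= M^{2}\,|\psi(P)\rangle, \qquad
  H_{LF} \equiv P^{+}P^{-} - \mathbf{P}_{\perp}^{2}, \\
  |\psi(P)\rangle &= \sum_{n} \psi_{n}(x_{i},\mathbf{k}_{\perp i},\lambda_{i})\,
  \big|n;\, x_{i}P^{+},\, x_{i}\mathbf{P}_{\perp}+\mathbf{k}_{\perp i},\, \lambda_{i}\big\rangle .
\end{align}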
To this end, we write the LFWF in terms of the invariant impact-space variable for a two-parton state thus factoring the angular dependence and the longitudinal, , and transverse mode . In the limit of zero quark masses the longitudinal mode decouples and the LF eigenvalue equation is thus a light-front wave equation for a relativistic single-variable LF Schrödinger equation (LFSE). The effective interaction is instantaneous in LF time and acts on the lowest state of the LF Hamiltonian. This equation describes the spectrum of mesons as a function of , the number of nodes in , the total angular momentum and the internal orbital angular momentum of the constituents  111The Casimir corresponds to the group of rotations in the transverse LF plane.. It is the relativistic frame-independent front-form analog of the non-relativistic radial Schrödinger equation for muonium and other hydrogenic atoms in presence of an instantaneous Coulomb potential. 3 Effective confinement interaction from the gauge/gravity correspondence A remarkable correspondence between the equations of motion in AdS and the Hamiltonian equation for relativistic bound-states was found in [2]. In fact, to a first semiclassical approximation, LF QCD is formally equivalent to the equations of motion on a fixed gravitational background [2] asymptotic to AdS, where confinement properties are encoded in a dilaton profile . A spin- field in AdS is represented by a rank tensor field , which is totally symmetric in all its indices. In presence of a dilaton background field the action is 222The study of higher integer and half-integer spin wave equations in AdS is based on our collaboration with Hans Guenter Dosch. See also the discussion in Ref. [14]. where , and is the covariant derivative which includes parallel transport. The coordinates of AdS are the Minkowski coordinates and the holographic variable labeled . The d + 1 dimensional mass is not a physical observable and is a priory an arbitrary parameter. The dilaton background field in (4) introduces an energy scale in the five-dimensional AdS action, thus breaking its conformal invariance. It vanishes in the conformal ultraviolet limit . A physical hadron has plane-wave solutions and polarization indices along the 3 + 1 physical coordinates , with four-momentum and invariant hadronic mass . All other components vanish identically. One can then construct an effective action in terms of the spin modes with only physical degrees of freedom. In this case the system of coupled differential equations which follow from (4) reduce to a homogeneous equation in terms of the physical field upon rescaling the AdS mass Upon the substitution and in (5), we find for the LFSE (3) with effective potential [15] provided that the fifth dimensional mass is related to the internal orbital angular momentum and the total angular momentum according to . The critical value corresponds to the lowest possible stable solution, the ground state of the LF Hamiltonian. For the five dimensional mass is related to the orbital momentum of the hadronic bound state by and thus . The quantum mechanical stability condition is thus equivalent to the Breitenlohner-Freedman stability bound in AdS [16]. The correspondence between the LF and AdS equations thus determines the effective confining interaction in terms of the infrared behavior of AdS space and gives the holographic variable a kinematical interpretation. 
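Since the corresponding displayed equations are missing above, here is a hedged restatement in the notation commonly used in the light-front holography literature (the symbols zeta, J, L and the dilaton varphi are our labels for the quantities described in the text, not the authors' lost rendering):

% Hedged restatement: LF Schrodinger equation, dilaton-induced potential,
% and the relation between the AdS_5 mass and the LF quantum numbers.
\begin{align}
  % single-variable LF wave equation for the transverse mode phi_{n,L}(zeta):
  \left( -\frac{d^{2}}{d\zeta^{2}} - \frac{1-4L^{2}}{4\zeta^{2}} + U(\zeta) \right)
  \phi_{n,L}(\zeta) &= M^{2}\,\phi_{n,L}(\zeta), \\
  % effective potential generated by a dilaton profile varphi(z) for spin J:
  U(\zeta) &= \tfrac{1}{2}\,\varphi''(\zeta) + \tfrac{1}{4}\,\varphi'(\zeta)^{2}
            + \frac{2J-3}{2\zeta}\,\varphi'(\zeta), \\
  % AdS mass vs. internal orbital and total angular momentum:
  (\mu R)^{2} &= -(2-J)^{2} + L^{2}.
\end{align}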
The identification of the orbital angular momentum is also a key element of our description of the internal structure of hadrons using holographic principles. A particularly interesting example is a dilaton profile of either sign, since it leads to linear Regge trajectories [17] and avoids the ambiguities in the choice of boundary conditions at the infrared wall. For the confining solution the effective potential is and Eq. (3) has eigenvalues , with a string Regge form . A discussion of the light meson and baryon spectrum, as well as the elastic and transition form factors of the light hadrons using LF holographic methods, is given in Ref. [18]. 4 Effective confinement interaction from higher Fock states in light-front QCD As we have discussed in Sec. 2, one can systematically eliminate the higher Fock states in terms of an effective interaction in order to obtain an equation for the valence Fock state [13]. The potential depends on the eigenvalue via the LF energy denominators of the intermediate states which connect different LF Fock states. Here . The dependence of on is analogous to the retardation effect in QED interactions, such as the hyperfine splitting in muonium, which involves the exchange of a propagating photon. Accordingly, the eigenvalues must be determined self-consistently. The dependence of the effective potential thus reflects the contributions from higher Fock states in the LFSE (3), since is also the kernel for the scattering amplitude at It has only “proper” contributions; i.e., it has no intermediate state. The potential can be constructed systematically using LF time-ordered perturbation theory. Thus the QCD theory has identical form as the AdS theory, but with the quantum field-theoretic corrections due to the higher Fock states giving a general form for the potential. This provides a novel way to solve nonperturbative QCD. This LFSE for QCD becomes increasingly accurate as one includes contributions from very high particle number Fock states. There is only one dynamical variable . The AdS/QCD harmonic oscillator potential could emerge when one includes contributions from the exchange of two connected gluons; i.e., “H” diagrams [19]. We notice that becomes complex for an excited state since a denominator can vanish; this gives a complex eigenvalue and the decay width. The above discussion assumes massless quarks. More generally we must include mass terms in the kinetic energy term and allow the potential to have dependence on the LF momentum fraction . The quark masses also appear in due to the presence in the LF denominators as well as the chirality-violating interactions connecting the valence Fock state to the higher Fock states. In this case, however, the equation of motion cannot be reduced to a single variable. The LFSE approach also can be applied to atomic bound states in QED and nuclei. In principle one could compute the spectrum and dynamics of atoms, such as the Lamb shift and hyperfine splitting of hydrogenic atoms to high precision by a systematic treatment of the potential. Unlike the ordinary instant form, the resulting LFWFs are independent of the total momentum and can thus describe “flying atoms” without the need for dynamical boosts, such as the “true muonium” bound states which can be produced by Bethe-Heitler pair production below threshold [20]. A related approach for determining the valence light-front wavefunction and studying the effects of higher Fock states without truncation has been given in Ref. [21]. 
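The explicit soft-wall forms referred to in the preceding paragraph were likewise lost in extraction. For a quadratic dilaton profile, varphi = kappa^2 z^2, the standard results quoted in the light-front holography literature (a hedged restatement, not the authors' own typesetting) are:

% Soft-wall (quadratic dilaton) specialization of the potential and spectrum.
\begin{align}
  U(\zeta) &= \kappa^{4}\zeta^{2} + 2\kappa^{2}\,(J-1), \\
  M^{2}_{n,J,L} &= 4\kappa^{2}\left( n + \frac{J+L}{2} \right),
\end{align}
% i.e. linear Regge trajectories in both the radial quantum number n and the
% orbital angular momentum L, with slope 4*kappa^2.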
5 Conclusions

Despite some limitations, AdS/QCD, the LF holographic approach to the gauge/gravity duality, has given significant physical insight into the strongly-coupled nature and internal structure of hadrons. In particular, the AdS/QCD soft-wall model provides an elegant analytic framework for describing nonperturbative hadron dynamics and the systematics of the excitation spectrum of hadrons, including their empirical multiplicities and degeneracies. It also provides powerful new analytical tools for computing hadronic transition amplitudes, incorporating conformal scaling behavior at short distances and the transition from the hard-scattering perturbative domain, where quarks and gluons are the relevant degrees of freedom, to the long-range confining hadronic region. We have also discussed the possibility of computing the effective confining potential in light-front QCD for a single-variable LF Schrödinger equation by systematically incorporating the effects of higher Fock states, thus providing the basis for a profound connection between physical QCD, quantized on the light front, and the physics of hadronic modes in a higher-dimensional AdS space.
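As a small numerical illustration of the soft-wall spectrum summarized above (an editorial sketch, not part of the paper): the mass formula M^2 = 4 kappa^2 (n + (J+L)/2), with an assumed confinement scale kappa of roughly 0.54 GeV (a value of the order commonly quoted in this literature; treat it, the function name and the chosen quantum numbers as illustrative assumptions), reproduces a rho-like ground state near 0.77 GeV and a linear Regge trajectory.

import math

# Sketch of the soft-wall light-front holographic mass formula
#   M^2 = 4 * kappa^2 * (n + (J + L) / 2)
# kappa below is an assumed illustrative confinement scale, not a fitted value.
KAPPA_GEV = 0.54

def meson_mass_gev(n, J, L, kappa=KAPPA_GEV):
    """Predicted meson mass in GeV for radial quantum number n,
    total angular momentum J and internal orbital angular momentum L."""
    m_squared = 4.0 * kappa**2 * (n + (J + L) / 2.0)
    return math.sqrt(m_squared)

if __name__ == "__main__":
    # Leading rho-like trajectory (n = 0, J = L + 1) as an illustration.
    for L in range(4):
        print(f"n=0, L={L}, J={L+1}: M ~ {meson_mass_gev(0, L + 1, L):.2f} GeV")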
Experimental Support of Shrodinger Equation

1. Oct 16, 2014 #1
The reason I'm posting this is because I'm trying to understand how to interpret the wave function of a particle. I'm trying to decide which interpretations of quantum mechanics are just speculation, and which ones are consistent with experimentation. I know that quantum objects such as electrons exhibit wave-like properties, but how do we know these observed properties are consistent with the Shrondinger equation? Is there any particular experiment which shows that the probability distribution of a quantum particle is consistent with the Shrodinger equation? Or any other types of experiments which support the Shrodinger equation? For example, can you construct an approximately infinite well and observe the appropriate energies and expectation values predicted by the Shrodinger equation? Etc etc.

3. Oct 16, 2014 #2
Science Advisor
Evidence for the non-relativistic Schroedinger equation comes from spectra of atoms and molecules, and properties of solids like their ability to conduct electricity.
http://en.wikipedia.org/wiki/Hydrogen_spectral_series (atomic spectra)
http://ocw.mit.edu/courses/chemistry/5-61-physical-chemistry-fall-2007/lecture-notes/lecture35.pdf (builds on the simple harmonic potential)
http://en.wikipedia.org/wiki/BCS_theory (electrical conductivity)
In relativistic situations, the Schroedinger equation must be generalized to quantum field theory. For example, a detail of the hydrogen spectrum that is not predicted by the non-relativistic Schroedinger equation is the Lamb shift. Quantum field theory gets this right.

4. Oct 16, 2014 #3
Simon Bridge
Science Advisor
Homework Helper
... in addition: all interpretations are speculation and all are consistent with experimentation. There's a good thread about this somewhere, but I keep losing it.

5. Oct 17, 2014 #4
Science Advisor
Gold Member
Also, a more "direct" demonstration would be quantum well type structures that are used in e.g. AlGaAs-GaAs laser diodes (i.e. the type of diode you have in home electronics); these structures are designed by varying the amount of aluminium, which raises/lowers the potential barriers for the electrons. This way you can create an arbitrarily shaped potential landscape, and you get the properties of that landscape (e.g. the wavelength of the generated light) by solving the Schroedinger equation. In some cases you can actually solve the equations analytically and get very close to what is seen experimentally. There are many, many other examples. Nearly all modern electronics uses components where the designers had to solve the SE at one point or another to model their behavior. Hence there are literally billions of components out there that wouldn't work unless the SE was correct. The point I am trying to make is that the Schroedinger equation is not an "exotic" equation despite what you might think after reading pop-sci books; it is a fundamental equation in science and engineering.

6. Oct 17, 2014 #5
Why do people say that "a particle's wave function 'collapses' when a measurement is made"? If you have a simple system with a bound particle, the bound particle will have some wave function before the measurement is made (which tells us where we might find the particle). Then, once we've made the measurement, why would the particle's wave function be any different? It's still under the same potential.
Maybe it will gain or lose some energy from the act of measuring, but disregarding that, if you were to solve the Shrodinger equation again immediately after the measurement, the wave function should turn out to be the exact same, right? Even at the moment of measurement, wouldn't the solution to the Shrodinger equation still yield the same wave function? Is it because when you know the position of a particle this gives you a new "initial condition" to solve the Shrodinger equation for? The problem with this is that we don't actually measure the wave function, we just measure the position. So how could we ever get an "initial wave function" via measurement?

7. Oct 17, 2014 #6
This happens because the process of making a measurement involves obtaining a single result for a given value. If a particle's position wavefunction exists over a continuum (that is, when the particle does not have a single definite position), then measuring the particle's position requires interfering with it in such a way that it must assume a definite position. From the values of the wavefunction, you can calculate the probability that the position it assumes will be in a certain range. In the classic double-slit experiment, the measurement involves setting up a system where you can measure which slit an electron passed through (or just building a system where you could determine which slit an electron passed through - whether you look at the data yourself doesn't make a difference). In order to determine the position of an electron, one might track the strength of its electric field at an observation point next to the slits - in this case, the charged particle(s) you use to measure the field strength exert a pull on the electron, disrupting its motion. Since the magnitude of the effect used to make the observation is relatively large, the wavefunction distribution becomes insignificant, and particle-like behavior becomes dominant.

8. Oct 17, 2014 #7
Staff: Mentor
Whether it happens is interpretation dependent. They probably simply haven't thought the issue through and/or don't know a lot about the various interpretations out there. They simply regurgitate what beginner texts say. It's the same with the wave-particle duality. Ballentine's text carefully explains exactly what's going on, but not all texts take that level of care - and even his text isn't perfect, saying that Copenhagen requires collapse, which is unreasonable physically. It's only unreasonable if you consider the wave-function is real - most versions of Copenhagen have it simply as an aid to calculation. The issue is subtle though: But note, and people sometimes forget it with regard to that important theorem: 'The argument depends on few assumptions. One is that a system has a real physical state - not necessarily completely described by quantum theory, but objective and independent of the observer. This assumption only needs to hold for systems that are isolated, and not entangled with other systems. Nonetheless, this assumption, or some part of it, would be denied by instrumentalist approaches to quantum theory, wherein the quantum state is merely a calculational tool for making predictions concerning macroscopic measurement outcomes.'

9. Oct 17, 2014 #8
Simon Bridge
Science Advisor
Homework Helper
If I roll a couple of dice and hide the result - you would describe the possible outcomes by a specific probability distribution.
If you then measure the result - by looking at it - you would describe the possible outcomes by quite a different probability distribution. If I ask you how tall a person is who you've never seen, you would reply in terms of the mean and standard deviation of human heights ... but once you've seen that person, the probability distribution is quite different, and after taking a measurement with a ruler, different again - and also different if you use more or less accurate devices. These are easy to understand. Before and after a measurement the possible outcomes are different - in the above case because your state of knowledge of the system has changed. In the QM case, the wave-function still reflects the possible outcomes of a measurement. The act of measuring something affects the possible outcomes quite directly by interacting with the system. The slit situation is a good example of this: Shine a beam of light at a small hole in a screen: the photons can have a wide range of possible positions in the plane just before the hole, but just after the hole the possibilities are very curtailed simply because the screen has intercepted all the light that did not go through the hole. The screen acts to measure the position of the photons in that plane. When you learn QM, the word "measurement" gets used without talking about the exact method used to make the measurement. As you advance you will learn about the classic experiments and how the measurements were made, and so how those measurements affect the outcomes turns out to be consistent with what you are learning now.

10. Oct 18, 2014 #9
My understanding is that as soon as your system of a particle in a potential interacts with something else (such as interacting with your position measurement apparatus), you can no longer use the Schrodinger equation of this system as if it is still an isolated one. If you do, you get erroneous results. The full correct treatment is to analyze the wavefunction of the bigger system now, one which includes the measurement apparatus. In this bigger picture, you will have a wavefunction which is a superposition of many possible states where the measurement apparatus detected the particle in every possible location.

11. Oct 18, 2014 #10
Doug Huffman
Gold Member
I've been watching Leonard Susskind's lectures, and am impressed with his steady delivery. I was also impressed with his brief rant about the volume of QM nonsense crap in the popular literature. The rant may have even been delivered during the Schrödinger equation lecture.

12. Oct 18, 2014 #11
2017 Award
Staff: Mentor
The name is Schrödinger - or Schroedinger if writing "ö" is too complicated with your keyboard. Not Shrondinger, Shrodinger, Schrodinger or whatever else came up here.
That depends on your measurement. If you can include the measurement process in the system and use the equation for the combined evolution of the system, then everything is fine. And if you do not stop doing that at arbitrary points, you get the many-worlds interpretation of quantum mechanics.

13. Oct 18, 2014 #12
Staff: Mentor
That's true. If you observe one part of an entangled system it looks exactly the same as a mixed state (called an improper mixed state) and you get apparent collapse. Many books explain this, but at the beginner level Susskind certainly does. That's the view of decoherence.
The modern version of collapse is the so-called problem of outcomes - which is: why do we get any outcomes at all, or, more technically, how does the improper mixed state become a proper one?

14. Oct 18, 2014 #13
Staff: Mentor
I think most people who have studied QM as the real deal from good physics books get very upset at populist crap such as What The Bleep Do We Know Anyway. I know I certainly do.
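The opening post above asks whether one can build an approximately infinite well and check the predicted energies, and post #4 points to semiconductor quantum wells as real-world versions of exactly that. As a minimal numerical sketch (not from the thread): the textbook infinite-square-well levels E_n = n^2 pi^2 hbar^2 / (2 m L^2), evaluated here with an assumed free-electron mass and an assumed 10 nm well width (a real AlGaAs/GaAs well would use an effective mass and finite barriers; the function name is ours).

import math

# Textbook infinite-square-well energy levels:
#   E_n = n^2 * pi^2 * hbar^2 / (2 * m * L^2)
# Assumptions: free-electron mass and a 10 nm width, for illustration only.
HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J per eV

def infinite_well_energy_ev(n, width_m):
    """Energy of level n (n = 1, 2, ...) in eV for a well of the given width."""
    return (n**2 * math.pi**2 * HBAR**2) / (2.0 * M_E * width_m**2) / EV

if __name__ == "__main__":
    width = 10e-9  # 10 nm
    for n in range(1, 5):
        print(f"n={n}: E = {infinite_well_energy_ev(n, width)*1e3:.2f} meV")

The n^2 spacing of these levels is what sets the transition energies (and hence, for instance, emission wavelengths) that quantum-well devices are engineered around, which is the kind of agreement with experiment the thread describes.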
Copenhagen interpretation
From Wikipedia, the free encyclopedia

The Copenhagen interpretation is an expression of the meaning of quantum mechanics that was largely devised in the years 1925 to 1927 by Niels Bohr and Werner Heisenberg. It remains one of the most commonly taught interpretations of quantum mechanics.[1]

According to the Copenhagen interpretation, physical systems generally do not have definite properties prior to being measured, and quantum mechanics can only predict the probabilities that measurements will produce certain results. The act of measurement affects the system, causing the set of probabilities to reduce to only one of the possible values immediately after the measurement. This feature is known as wave function collapse.

There have been many objections to the Copenhagen interpretation over the years. These include: discontinuous jumps when there is an observation, the probabilistic element introduced upon observation, the subjectiveness of requiring an observer, the difficulty of defining a measuring device, and the necessity of invoking classical physics to describe the "laboratory" in which the results are measured. Alternatives to the Copenhagen interpretation include the many-worlds interpretation, the De Broglie–Bohm (pilot-wave) interpretation, and quantum decoherence theories.

Max Planck, Albert Einstein, and Niels Bohr postulated the occurrence of energy in discrete quantities (quanta) in order to explain phenomena such as the spectrum of black-body radiation, the photoelectric effect, and the stability and spectrum of atoms. These phenomena had eluded explanation by classical physics and even appeared to be in contradiction with it. Although elementary particles show predictable properties in many experiments, they become thoroughly unpredictable in others, such as attempts to identify individual particle trajectories through a simple physical apparatus.

Classical physics draws a distinction between particles and waves. It also relies on continuity and determinism in natural phenomena. In the early twentieth century, newly discovered atomic and subatomic phenomena seemed to defy those conceptions. In 1925–1926, quantum mechanics was invented as a mathematical formalism that accurately describes the experiments, yet appears to reject those classical conceptions. Instead, it posits that probability, and discontinuity, are fundamental in the physical world. Classical physics also relies on causality. The standing of causality for quantum mechanics is disputed.

Quantum mechanics cannot easily be reconciled with everyday language and observation, and has often seemed counter-intuitive to physicists, including its inventors.[2] The Copenhagen interpretation intends to indicate the proper ways of thinking and speaking about the physical meaning of the mathematical formulations of quantum mechanics and the corresponding experimental results. It offers due respect to discontinuity, probability, and a conception of wave–particle dualism. In some respects, it denies standing to causality.

Origin of the term

Werner Heisenberg had been an assistant to Niels Bohr at his institute in Copenhagen during part of the 1920s, when they helped originate quantum mechanical theory. In 1929, Heisenberg gave a series of invited lectures at the University of Chicago explaining the new field of quantum mechanics.
The lectures then served as the basis for his textbook, The Physical Principles of the Quantum Theory, published in 1930.[3] In the book's preface, Heisenberg wrote: On the whole the book contains nothing that is not to be found in previous publications, particularly in the investigations of Bohr. The purpose of the book seems to me to be fulfilled if it contributes somewhat to the diffusion of that 'Kopenhagener Geist der Quantentheorie' [i.e., Copenhagen spirit of quantum theory] if I may so express myself, which has directed the entire development of modern atomic physics. The term 'Copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s. However, no such text exists, apart from some informal popular lectures by Bohr and Heisenberg, which contradict each other on several important issues[citation needed]. It appears that the particular term, with its more definite sense, was coined by Heisenberg in the 1950s,[4] while criticizing alternate "interpretations" (e.g., David Bohm's[5]) that had been developed.[6] Lectures with the titles 'The Copenhagen Interpretation of Quantum Theory' and 'Criticisms and Counterproposals to the Copenhagen Interpretation', that Heisenberg delivered in 1955, are reprinted in the collection Physics and Philosophy.[7] Before the book was released for sale, Heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, that he considered to be "nonsense".[8] Current status of the term[edit] According to an opponent of the Copenhagen interpretation, John G. Cramer, "Despite an extensive literature which refers to, discusses, and criticizes the Copenhagen interpretation of quantum mechanics, nowhere does there seem to be any concise statement which defines the full Copenhagen interpretation."[9] There is no uniquely definitive statement of the Copenhagen interpretation. It consists of the views developed by a number of scientists and philosophers during the second quarter of the 20th Century. Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics. Bohr once distanced himself from what he considered to be Heisenberg's more subjective interpretation.[10] Different commentators and researchers have associated various ideas with it. Asher Peres remarked that very different, sometimes opposite, views are presented as "the Copenhagen interpretation" by different authors.[11] Some basic principles generally accepted as part of the interpretation include: 1. A wave function represents the state of the system. It encapsulates everything that can be known about that system before an observation; there are no additional "hidden parameters".[12] The wavefunction evolves smoothly in time while isolated from other systems. 2. The properties of the system are subject to a principle of incompatibility. Certain properties cannot be jointly defined for the same system at the same time. The incompatibility is expressed quantitatively by Heisenberg's uncertainty principle. For example, if a particle at a particular instant has a definite location, it is meaningless to speak of its momentum at that instant. 3. During an observation, the system must interact with a laboratory device. 
When that device makes a measurement, the wave function of the systems is said to collapse, or irreversibly reduce to an eigenstate of the observable that is registered.[13] 4. The results provided by measuring devices are essentially classical, and should be described in ordinary language. This was particularly emphasized by Bohr, and was accepted by Heisenberg.[14] 5. The description given by the wave function is probabilistic. This principle is called the Born rule, after Max Born. 6. The wave function expresses a necessary and fundamental wave–particle duality. This should be reflected in ordinary language accounts of experiments. An experiment can show particle-like properties, or wave-like properties, according to the complementarity principle of Niels Bohr.[15] 7. The inner workings of atomic and subatomic processes are necessarily and essentially inaccessible to direct observation, because the act of observing them would greatly affect them. 8. When quantum numbers are large, they refer to properties which closely match those of the classical description. This is the correspondence principle of Bohr and Heisenberg. Metaphysics of the wave function[edit] The Copenhagen interpretation denies that the wave function provides a directly apprehensible image of an ordinary material body or a discernible component of some such,[16][17] or anything more than a theoretical concept. In metaphysical terms, the Copenhagen interpretation views quantum mechanics as providing knowledge of phenomena, but not as pointing to 'really existing objects', which it regarded as residues of ordinary intuition. This makes it an epistemic theory. This may be contrasted with Einstein's view, that physics should look for 'really existing objects', making itself an ontic theory.[18] The metaphysical question is sometimes asked: "Could quantum mechanics be extended by adding so-called "hidden variables" to the mathematical formalism, to convert it from an epistemic to an ontic theory?" The Copenhagen interpretation answers this with a strong 'No'.[19] It is sometimes alleged, for example by J.S. Bell, that Einstein opposed the Copenhagen interpretation because he believed that the answer to that question of "hidden variables" was "yes". That allegation has achieved mythical potency, but is mistaken. Countering that myth, Max Jammer writes "Einstein never proposed a hidden variable theory."[20] Einstein explored the possibility of a hidden variable theory, and wrote a paper describing his exploration, but withdrew it from publication because he felt it was faulty.[21][22] Because it asserts that a wave function becomes 'real' only when the system is observed, the term "subjective" is sometimes proposed for the Copenhagen interpretation. This term is rejected by many Copenhagenists[23] because the process of observation is mechanical and does not depend on the individuality of the observer. Some authors[who?] have proposed that Bohr was influenced by positivism (or even pragmatism). On the other hand, Bohr and Heisenberg were not in complete agreement, and they held different views at different times. Heisenberg in particular was prompted to move towards realism.[24] Carl Friedrich von Weizsäcker, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist." 
He suggested instead that the Copenhagen interpretation follows the principle "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."[9] Born rule[edit] Max Born speaks of his probability interpretation as a "statistical interpretation" of the wave function,[25][26] and the Born rule is essential to the Copenhagen interpretation.[27] Writers do not all follow the same terminology. The phrase "statistical interpretation", referring to the "ensemble interpretation", often indicates an interpretation of the Born rule somewhat different from the Copenhagen interpretation.[28][29] For the Copenhagen interpretation, it is axiomatic that the wave function exhausts all that can ever be known in advance about a particular occurrence of the system. The "statistical" or "ensemble" interpretation, on the other hand, is explicitly agnostic about whether the information in the wave function is exhaustive of what might be known in advance. It sees itself as more 'minimal' than the Copenhagen interpretation in its claims. It only goes as far as saying that on every occasion of observation, some actual value of some property is found, and that such values are found probabilistically, as detected by many occasions of observation of the same system. The many occurrences of the system are said to constitute an 'ensemble', and they jointly reveal the probability through these occasions of observation. Though they all have the same wave function, the elements of the ensemble might not be identical to one another in all respects, according to the 'agnostic' interpretations. They may, for all we know, beyond current knowledge and beyond the wave function, have individual distinguishing properties. For present-day science, the experimental significance of these various forms of Born's rule is the same, since they make the same predictions about the probability distribution of outcomes of observations, and the unobserved or unactualized potential properties are not accessible to experiment. Nature of collapse[edit] Those who hold to the Copenhagen interpretation are willing to say that a wave function involves the various probabilities that a given event will proceed to certain different outcomes. But when the apparatus registers one of those outcomes, no probabilities or superposition of the others linger.[30] According to Howard, wave function collapse is not mentioned in the writings of Bohr.[4] Some argue that the concept of the collapse of a "real" wave function was introduced by Heisenberg and later developed by John von Neumann in 1932.[31] However, Heisenberg spoke of the wavefunction as representing available knowledge of a system, and did not use the term "collapse" per se, but instead termed it "reduction" of the wavefunction to a new state representing the change in available knowledge which occurs once a particular phenomenon is registered by the apparatus (often called "measurement").[32] In 1952 David Bohm developed decoherence, an explanatory mechanism for the appearance of wave function collapse. Bohm applied decoherence to Louis DeBroglie's pilot wave theory, producing Bohmian mechanics,[33][34] the first successful hidden variables interpretation of quantum mechanics. 
Collapse was avoided by Hugh Everett in 1957 in his relative state interpretation.[35] Decoherence was largely[36] ignored until the 1980s.[37][38] Non-separability of the wave function[edit] The domain of the wave function is configuration space, an abstract object quite different from ordinary physical space–time. At a single "point" of configuration space, the wave function collects probabilistic information about several distinct particles, that respectively have physically space-like separation. So the wave function is said to supply a non-separable representation. This reflects a feature of the quantum world that was recognized by Einstein as early[citation needed] as 1905. In 1927, Bohr drew attention to a consequence of non-separability. The evolution of the system, as determined by the Schrödinger equation, does not display particle trajectories through space–time. It is possible to extract trajectory information from such evolution, but not simultaneously to extract energy–momentum information. This incompatibility is expressed in the Heisenberg uncertainty principle. The two kinds of information have to be extracted on different occasions, because of the non-separability of the wave function representation. In Bohr's thinking, space–time visualizability meant trajectory information. Again, in Bohr's thinking, 'causality' referred to energy–momentum transfer; in his view, lack of energy–momentum knowledge meant lack of 'causality' knowledge. Therefore Bohr thought that knowledge respectively of 'causality' and of space–time visualizability were incompatible but complementary.[4] Wave–particle dilemma[edit] The term Copenhagen interpretation is not well defined when one asks about the wave–particle dilemma, because Bohr and Heisenberg had different or perhaps disagreeing views on it. According to Camilleri, Bohr thought that the distinction between a wave view and a particle view was defined by a distinction between experimental setups, while, differing, Heisenberg thought that it was defined by the possibility of viewing the mathematical formulas as referring to waves or particles. Bohr thought that a particular experimental setup would display either a wave picture or a particle picture, but not both. Heisenberg thought that every mathematical formulation was capable of both wave and particle interpretations.[39][40] Alfred Landé was for a long time considered orthodox. He did, however, take the Heisenberg viewpoint, in so far as he thought that the wave function was always mathematically open to both interpretations. Eventually this led to his being considered unorthodox, partly because he did not accept Bohr's one-or-the-other view, preferring Heisenberg's always-both view. Another part of the reason for branding Landé unorthodox was that he recited, as did Heisenberg, the 1923 work[41] of old-quantum-theorist William Duane, which anticipated a quantum mechanical theorem that had not been recognized by Born. That theorem seems to make the always-both view, like the one adopted by Heisenberg, rather cogent. One might say "It's there in the mathematics", but that is not a physical statement that would have convinced Bohr. Perhaps the main reason for attacking Landé is that his work demystified the phenomenon of diffraction of particles of matter, such as buckyballs.[42] Acceptance among physicists[edit] Throughout much of the twentieth century the Copenhagen interpretation had overwhelming acceptance among physicists. 
Although astrophysicist and science writer John Gribbin described it as having fallen from primacy after the 1980s,[43] according to a very informal poll (some people voted for multiple interpretations) conducted at a quantum mechanics conference in 1997,[44] the Copenhagen interpretation remained the most widely accepted specific interpretation of quantum mechanics among physicists. In more recent polls conducted at various quantum mechanics conferences, varying results have been found.[45][46][47] In a 2017 article, physicist and Nobel laureate Steven Weinberg states that the Copenhagen interpretation "is now widely felt to be unacceptable."[48] The nature of the Copenhagen interpretation is exposed by considering a number of experiments and paradoxes. 1. Schrödinger's cat This thought experiment highlights the implications that accepting uncertainty at the microscopic level has on macroscopic objects. A cat is put in a sealed box, with its life or death made dependent on the state of a subatomic particle. Thus a description of the cat during the course of the experiment—having been entangled with the state of a subatomic particle—becomes a "blur" of "living and dead cat." But this can't be accurate because it implies the cat is actually both dead and alive until the box is opened to check on it. But the cat, if it survives, will only remember being alive. Schrödinger resists "so naively accepting as valid a 'blurred model' for representing reality."[49] How can the cat be both alive and dead? The Copenhagen interpretation: The wave function reflects our knowledge of the system. The wave function means that, once the cat is observed, there is a 50% chance it will be dead, and 50% chance it will be alive. 2. Wigner's friend Wigner puts his friend in with the cat. The external observer believes the system is in the state . His friend, however, is convinced that the cat is alive, i.e. for him, the cat is in the state . How can Wigner and his friend see different wave functions? The Copenhagen interpretation: The answer depends on the positioning of Heisenberg cut, which can be placed arbitrarily. If Wigner's friend is positioned on the same side of the cut as the external observer, his measurements collapse the wave function for both observers. If he is positioned on the cat's side, his interaction with the cat is not considered a measurement. 3. Double-slit diffraction Light passes through double slits and onto a screen resulting in a diffraction pattern. Is light a particle or a wave? The Copenhagen interpretation: Light is neither. A particular experiment can demonstrate particle (photon) or wave properties, but not both at the same time (Bohr's complementarity principle). The same experiment can in theory be performed with any physical system: electrons, protons, atoms, molecules, viruses, bacteria, cats, humans, elephants, planets, etc. In practice it has been performed for light, electrons, buckminsterfullerene,[50][51] and some atoms. Due to the smallness of Planck's constant it is practically impossible to realize experiments that directly reveal the wave nature of any system bigger than a few atoms but, in general, quantum mechanics considers all matter as possessing both particle and wave behaviors. The greater systems (like viruses, bacteria, cats, etc.) are considered as "classical" ones but only as an approximation, not exact. 4. EPR (Einstein–Podolsky–Rosen) paradox Entangled "particles" are emitted in a single event. 
Conservation laws ensure that the measured spin of one particle must be the opposite of the measured spin of the other, so that if the spin of one particle is measured, the spin of the other particle is now instantaneously known. Because this outcome cannot be separated from quantum randomness, no information can be sent in this manner and there is no violation of either special relativity or the Copenhagen interpretation. The Copenhagen interpretation: Assuming wave functions are not real, wave-function collapse is interpreted subjectively. The moment one observer measures the spin of one particle, he knows the spin of the other. However, another observer cannot benefit until the results of that measurement have been relayed to him, at less than or equal to the speed of light. Copenhagenists claim that interpretations of quantum mechanics where the wave function is regarded as real have problems with EPR-type effects, since they imply that the laws of physics allow for influences to propagate at speeds greater than the speed of light. However, proponents of many worlds[52] and the transactional interpretation[53][54] (TI) maintain that Copenhagen interpretation is fatally non-local. The claim that EPR effects violate the principle that information cannot travel faster than the speed of light have been countered by noting that they cannot be used for signaling because neither observer can control, or predetermine, what he observes, and therefore cannot manipulate what the other observer measures. The completeness of quantum mechanics (thesis 1) was attacked by the Einstein–Podolsky–Rosen thought experiment which was intended to show that quantum mechanics could not be a complete theory. Experimental tests of Bell's inequality using particles have supported the quantum mechanical prediction of entanglement. The Copenhagen interpretation gives special status to measurement processes without clearly defining them or explaining their peculiar effects. In his article entitled "Criticism and Counterproposals to the Copenhagen Interpretation of Quantum Theory," countering the view of Alexandrov that (in Heisenberg's paraphrase) "the wave function in configuration space characterizes the objective state of the electron." Heisenberg says, Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory.[55] Many physicists and philosophers[who?] have objected to the Copenhagen interpretation, both on the grounds that it is non-deterministic and that it includes an undefined measurement process that converts probability functions into non-probabilistic measurements. Einstein's comments "I, at any rate, am convinced that He (God) does not throw dice."[56] and "Do you really think the moon isn't there if you aren't looking at it?"[57] exemplify this. Bohr, in response, said, "Einstein, don't tell God what to do."[58] Steven Weinberg in "Einstein's Mistakes", Physics Today, November 2005, page 31, said: All this familiar story is true, but it leaves out an irony. 
Bohr's version of quantum mechanics was deeply flawed, but not for the reason Einstein thought. The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe. But these rules are expressed in terms of a wave function (or, more precisely, a state vector) that evolves in a perfectly deterministic way. So where do the probabilistic rules of the Copenhagen interpretation come from? Considerable progress has been made in recent years toward the resolution of the problem, which I cannot go into here. It is enough to say that neither Bohr nor Einstein had focused on the real problem with quantum mechanics. The Copenhagen rules clearly work, so they have to be accepted. But this leaves the task of explaining them by applying the deterministic equation for the evolution of the wave function, the Schrödinger equation, to observers and their apparatus. The problem of thinking in terms of classical measurements of a quantum system becomes particularly acute in the field of quantum cosmology, where the quantum system is the universe.[59] E. T. Jaynes,[60] from a Bayesian point of view, argued that probability is a measure of a state of information about the physical world. Quantum mechanics under the Copenhagen interpretation interpreted probability as a physical phenomenon, which is what Jaynes called a mind projection fallacy. Common criticisms of the Copenhagen interpretation often lead to the problem of continuum of random occurrences: whether in time (as subsequent measurements, which under certain interpretations of the measurement problem may happen continuously) or even in space. A recent experiment showed that a particle may leave a trace about the path which it used when travelling as a wave – and that this trace exhibits equality of both paths.[61] If such result is raised to the rank of a wave-only non-transactional worldview and proved better – i.e. that a particle is in fact a continuum of points capable of acting independently but under a common wavefunction – it would rather support theories such as Bohm's one (with its guiding towards the centre of orbital and spreading of physical properties over it) than interpretations which presuppose full randomness, because with the latter it will be problematic to demonstrate universally and in all practical cases how can a particle remain coherent in time, in spite of non-zero probabilities of its individual points going into regions distant from the centre of mass (through a continuum of different random determinations).[62] An alternative possibility would be to assume that there is a finite number of instants/points within a given time or area, but theories which try to quantize the space or time itself seem to be fatally incompatible with the special relativity. The view that particle diffraction logically guarantees the need for a wave interpretation has been questioned. 
A recent experiment has carried out the two-slit protocol with helium atoms.[63] The basic physics of quantal momentum transfer considered here was originally pointed out in 1923, by William Duane, before quantum mechanics was invented.[41] It was later recognized by Heisenberg[64] and by Pauling.[65] It was championed against orthodox ridicule by Alfred Landé.[66] It has also recently been considered by Van Vliet.[67][68] If the diffracting slits are considered as classical objects, theoretically ideally seamless, then a wave interpretation seems necessary, but if the diffracting slits are considered physically, as quantal objects exhibiting collective quantal motions, then the particle-only and wave-only interpretations seem perhaps equally valid. The Ensemble interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right". Although the Copenhagen interpretation is often confused with the idea that consciousness causes collapse, it defines an "observer" merely as that which collapses the wave function.[55] Quantum information theories are more recent, and have attracted growing support.[69][70] Under realism and determinism, if the wave function is regarded as ontologically real, and collapse is entirely rejected, a many worlds theory results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. Under realism and determinism (as well as non-localism), a hidden variable theory exists, e.g., the de Broglie–Bohm interpretation, which treats the wavefunction as real, position and momentum as definite and resulting from the expected values, and physical properties as spread in space. For an atemporal indeterministic interpretation that “makes no attempt to give a ‘local’ account on the level of determinate particles”,[71] the conjugate wavefunction, ("advanced" or time-reversed) of the relativistic version of the wavefunction, and the so-called "retarded" or time-forward version[72] are both regarded as real and the transactional interpretation results.[71] Many physicists[who?] have subscribed to the instrumentalist interpretation of quantum mechanics, a position often equated with eschewing all interpretation. It is summarized by the sentence "Shut up and calculate!". While this slogan is sometimes attributed to Paul Dirac[73] or Richard Feynman, it seems to be due to David Mermin.[74] See also[edit] Notes and references[edit] 1. ^ Wimmel, Hermann (1992). Quantum Physics & Observed Reality: A Critical Interpretation of Quantum Mechanics. World Scientific. p. 2. ISBN 978-981-02-1010-6.  2. ^ Werner Heisenberg, Physics and Philosophy (1958): "I remember discussions with Bohr which went through many hours till very late at night and ended almost in despair; and when at the end of the discussion I went alone for a walk in the neighbouring park I repeated to myself again and again the question: Can nature possibly be so absurd as it seemed to us in these atomic experiments?" 3. ^ J. Mehra and H. Rechenberg, The historical development of quantum theory, Springer-Verlag, 2001, p. 271. 4. ^ a b c Howard, Don (2004). "Who invented the Copenhagen Interpretation? A study in mythology" (PDF). Philosophy of Science. 71 (5): 669–682. doi:10.1086/425941. JSTOR 10.1086/425941.  5. ^ Bohm, David (1952). "A Suggested Interpretation of the Quantum Theory in Terms of 'Hidden' Variables. I & II". Physical Review. 
85 (2): 166–193. Bibcode:1952PhRv...85..166B. doi:10.1103/PhysRev.85.166.  6. ^ H. Kragh, Quantum generations: A History of Physics in the Twentieth Century, Princeton University Press, 1999, p. 210. ("the term 'Copenhagen interpretation' was not used in the 1930s but first entered the physicist’s vocabulary in 1955 when Heisenberg used it in criticizing certain unorthodox interpretations of quantum mechanics.") 7. ^ Werner Heisenberg, Physics and Philosophy, Harper, 1958 8. ^ Olival Freire Jr., "Science and exile: David Bohm, the hot times of the Cold War, and his struggle for a new interpretation of quantum mechanics", Historical Studies on the Physical and Biological Sciences, Volume 36, Number 1, 2005, pp. 31–35. ("I avow that the term ‘Copenhagen interpretation’ is not happy since it could suggest that there are other interpretations, like Bohm assumes. We agree, of course, that the other interpretations are nonsense, and I believe that this is clear in my book, and in previous papers. Anyway, I cannot now, unfortunately, change the book since the printing began enough time ago.") 9. ^ a b Cramer, John G. (1986). "The Transactional Interpretation of Quantum Mechanics". Reviews of Modern Physics. 58 (3): 649. Bibcode:1986RvMP...58..647C. doi:10.1103/revmodphys.58.647. Archived from the original on 2012-11-08.  10. ^ Stanford Encyclopedia of Philosophy 11. ^ "There seems to be at least as many different Copenhagen interpretations as people who use that term, probably there are more. For example, in two classic articles on the foundations of quantum mechanics, Ballentine (1970) and Stapp (1972) give diametrically opposite definitions of 'Copenhagen.'", Asher Peres (2002). "Popper's experiment and the Copenhagen interpretation". Stud. History Philos. Modern Physics. 33: 23. arXiv:quant-ph/9910078Freely accessible.  12. ^ "... for the ″hidden parameters″ of Bohm's interpretation are of such a kind that they can never occur in the description of real processes, if the quantum theory remains unchanged." Heisenberg, W. (1955). The development of the quantum theory, pp. 12–29 in Niels Bohr and the Development of Physics, ed. W. Pauli with the assistance of L. Rosenfeld and V. Weisskopf, Pergamon, London, at p. 18. 13. ^ "It is well known that the 'reduction of the wave packets' always appears in the Copenhagen interpretation when the transition is completed from the possible to the actual. The probability function, which covered a wide range of possibilities, is suddenly reduced to a much narrower range by the fact that the experiment has led to a definite result, that actually a certain event has happened. In the formalism this reduction requires that the so-called interference of probabilities, which is the most characteristic phenomena [sic] of quantum theory, is destroyed by the partly undefinable and irreversible interactions of the system with the measuring apparatus and the rest of the world." Heisenberg, W. (1959/1971). Criticism and counterproposals to the Copenhagen interpretation of quantum theory, Chapter 8, pp. 114–128, in Physics and Philosophy: the Revolution in Modern Science, third impression 1971, George Allen & Unwin, London, at p. 125. 14. ^ "Every description of phenomena, of experiments and their results, rests upon language as the only means of communication. The words of this language represent the concepts of ordinary life, which in the scientific language of physics may be refined to the concepts of classical physics. 
These concepts are the only tools for an unambiguous communication about events, about the setting up of experiments and about their results." Heisenberg, W. (1959/1971). Criticism and counterproposals to the Copenhagen interpretation of quantum theory, Chapter 8, pp. 114–128, in Physics and Philosophy: the Revolution in Modern Science, third impression 1971, George Allen & Unwin, London, at p. 127. 15. ^ "... there is no reason to consider these matter waves as less real than particles." Heisenberg, W. (1959/1971). Criticism and counterproposals to the Copenhagen interpretation of quantum theory, Chapter 8, pp. 114–128, in Physics and Philosophy: the Revolution in Modern Science, third impression 1971, George Allen & Unwin, London, at p. 118. 16. ^ Bohr, N. (1928). 'The quantum postulate and the recent development of atomic theory', Nature, 121: 580–590, doi:10.1038/121580a0, p. 586: "there can be no question of an immediate connexion with our ordinary conceptions". 17. ^ Heisenberg, W. (1959/1971). 'Language and reality in modern physics', Chapter 10, pp. 145–160, in Physics and Philosophy: the Revolution in Modern Science, George Allen & Unwin, London, ISBN 0-04-530016 X, p. 153: "our common concepts cannot be applied to the structure of the atoms." 18. ^ Jammer, M. (1982). 'Einstein and quantum physics', pp. 59–76 in Albert Einstein: Historical and Cultural Perspectives; the Centennial Symposium in Jerusalem, edited by G. Holton, Y. Elkana, Princeton University Press, Princeton NJ, ISBN 0-691-08299-5. On pp. 73–74, Jammer quotes a 1952 letter from Einstein to Besso: "The present quantum theory is unable to provide the description of a real state of physical facts, but only of an (incomplete) knowledge of such. Moreover, the very concept of a real factual state is debarred by the orthodox theoreticians. The situation arrived at corresponds almost exactly to that of the good old Bishop Berkeley." 19. ^ Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Z. Phys. 43: 172–198. Translation as 'The actual content of quantum theoretical kinematics and mechanics' here: "Since the statistical nature of quantum theory is so closely [linked] to the uncertainty in all observations or perceptions, one could be tempted to conclude that behind the observed, statistical world a "real" world is hidden, in which the law of causality is applicable. We want to state explicitly that we believe such speculations to be both fruitless and pointless. The only task of physics is to describe the relation between observations." 21. ^ Belousek, D.W. (1996). "Einstein's 1927 unpublished hidden-variable theory: its background, context and significance". Stud. Hist. Phil. Mod. Phys. 21 (4): 431–461.  22. ^ Holland, P (2005). "What's wrong with Einstein's 1927 hidden-variable interpretation of quantum mechanics?". Foundations of Physics. 35 (2): 177–196. arXiv:quant-ph/0401017Freely accessible. Bibcode:2005FoPh...35..177H. doi:10.1007/s10701-004-1940-7.  23. ^ "Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature." Heisenberg, W. (1959/1971). Criticism and counterproposals to the Copenhagen interpretation of quantum theory, Chapter 8, pp. 114–128, in Physics and Philosophy: the Revolution in Modern Science, third impression 1971, George Allen & Unwin, London, at p. 121. 24. 
^ "Historically, Heisenberg wanted to base quantum theory solely on observable quantities such as the intensity of spectral lines, getting rid of all intuitive (anschauliche) concepts such as particle trajectories in space–time. This attitude changed drastically with his paper in which he introduced the uncertainty relations – there he put forward the point of view that it is the theory which decides what can be observed. His move from positivism to operationalism can be clearly understood as a reaction on the advent of Schrödinger’s wave mechanics which, in particular due to its intuitiveness, became soon very popular among physicists. In fact, the word anschaulich (intuitive) is contained in the title of Heisenberg’s paper.", from Claus Kiefer (2002). "On the interpretation of quantum theory – from Copenhagen to the present day". arXiv:quant-ph/0210152Freely accessible.  25. ^ Born, M. (1955). "Statistical interpretation of quantum mechanics". Science. 122 (3172): 675–679. Bibcode:1955Sci...122..675B. doi:10.1126/science.122.3172.675. PMID 17798674.  26. ^ "... the statistical interpretation, which I have first suggested and which has been formulated in the most general way by von Neumann, ..." Born, M. (1953). The interpretation of quantum mechanics, Br. J. Philos. Sci., 4(14): 95–106. 27. ^ Bohr, N. (1928). 'The quantum postulate and the recent development of atomic theory', Nature, 121: 580–590, doi:10.1038/121580a0, p. 586: "In this connexion [Born] succeeded in obtaining a statistical interpretation of the wave functions, allowing a calculation of the probability of the individual transition processes required by the quantum postulate.". 28. ^ Ballentine, L.E. (1970). "The statistical interpretation of quantum mechanics". Rev. Mod. Phys. 42 (4): 358–381. Bibcode:1970RvMP...42..358B. doi:10.1103/revmodphys.42.358.  29. ^ Born, M. (1949). Einstein's statistical theories, in Albert Einstein: Philosopher Scientist, ed. P.A. Schilpp, Open Court, La Salle IL, volume 1, pp. 161–177. 31. ^ "the "collapse" or "reduction" of the wave function. This was introduced by Heisenberg in his uncertainty paper [3] and later postulated by von Neumann as a dynamical process independent of the Schrodinger equation", Claus Kiefer (2002). "On the interpretation of quantum theory – from Copenhagen to the present day". arXiv:quant-ph/0210152Freely accessible.  32. ^ W. Heisenberg "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik," Zeitschrift für Physik, Volume 43, 172–198 (1927), as translated by John Wheeler and Wojciech Zurek, in Quantum Theory and Measurement (1983), p. 74. ("[The] determination of the position selects a definite "q" from the totality of possibilities and limits the options for all subsequent measurements. ... [T]he results of later measurements can only be calculated when one again ascribes to the electron a "smaller" wavepacket of extension λ (wavelength of the light used in the observation). Thus, every position determination reduces the wavepacket back to its original extension λ.") 33. ^ David Bohm, A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables", I, Physical Review, (1952), 85, pp 166–179 34. ^ David Bohm, A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables", II, Physical Review, (1952), 85, pp 180–193 35. ^ Hugh Everett, Relative State Formulation of Quantum Mechanics, Reviews of Modern Physics vol 29, (1957) pp 454–462, based on unitary time evolution without discontinuities. 36. ^ H. 
Dieter Zeh, On the Interpretation of Measurement in Quantum Theory, Foundations of Physics, vol. 1, pp. 69–76, (1970). 37. ^ Wojciech H. Zurek, Pointer Basis of Quantum Apparatus: Into what Mixture does the Wave Packet Collapse?, Physical Review D, 24, pp. 1516–1525 (1981) 38. ^ Wojciech H. Zurek, Environment-Induced Superselection Rules, Physical Review D, 26, pp.1862–1880, (1982) 39. ^ Camilleri, K (2006). "Heisenberg and the wave–particle duality". Stud. Hist. Phil. Mod. Phys. 37: 298–315.  40. ^ Camilleri, K. (2009). Heisenberg and the Interpretation of Quantum Mechanics: the Physicist as Philosopher, Cambridge University Press, Cambridge UK, ISBN 978-0-521-88484-6. 41. ^ a b Duane, W. (1923). The transfer in quanta of radiation momentum to matter, Proc. Natl. Acad. Sci. 9(5): 158–164. 42. ^ Jammer, M. (1974). The Philosophy of Quantum Mechanics: the Interpretations of QM in Historical Perspective, Wiley, ISBN 0-471-43958-4, pp. 453–455. 43. ^ Gribbin, J. Q for Quantum 44. ^ Max Tegmark (1998). "The Interpretation of Quantum Mechanics: Many Worlds or Many Words?". Fortsch. Phys. 46 (6–8): 855–862. arXiv:quant-ph/9709032Freely accessible. Bibcode:1998ForPh..46..855T. doi:10.1002/(SICI)1521-3978(199811)46:6/8<855::AID-PROP855>3.0.CO;2-Q.  45. ^ M. Schlosshauer; J. Kofler; A. Zeilinger (2013). "A Snapshot of Foundational Attitudes Toward Quantum Mechanics". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 44 (3): 222–230. arXiv:1301.1069Freely accessible. Bibcode:2013SHPMP..44..222S. doi:10.1016/j.shpsb.2013.04.004.  46. ^ C. Sommer, "Another Survey of Foundational Attitudes Towards Quantum Mechanics", arXiv:1303.2719. 47. ^ T. Norsen, S. Nelson, "Yet Another Snapshot of Foundational Attitudes Toward Quantum Mechanics", arXiv:1306.4646. 48. ^ Steven Weinberg (19 January 2017). "The Trouble with Quantum Mechanics". New York Review of Books. Retrieved 8 January 2017.  49. ^ Erwin Schrödinger, in an article in the Proceedings of the American Philosophical Society, 124, 323–38. 50. ^ Nairz, Olaf; Brezger, Björn; Arndt, Markus; Zeilinger, Anton (2001). "Diffraction of Complex Molecules by Structures Made of Light". Physical Review Letters. 87 (16): 160401. arXiv:quant-ph/0110012Freely accessible. Bibcode:2001PhRvL..87p0401N. doi:10.1103/PhysRevLett.87.160401. PMID 11690188.  51. ^ Brezger, Björn; Hackermüller, Lucia; Uttenthaler, Stefan; Petschinka, Julia; Arndt, Markus; Zeilinger, Anton (2002). "Matter-Wave Interferometer for Large Molecules". Physical Review Letters. 88 (10): 100404. arXiv:quant-ph/0202158Freely accessible. Bibcode:2002PhRvL..88j0404B. doi:10.1103/PhysRevLett.88.100404. PMID 11909334.  52. ^ Michael price on nonlocality in Many Worlds 53. ^ Relativity and Causality in the Transactional Interpretation Archived 2008-12-02 at the Wayback Machine. 54. ^ Collapse and Nonlocality in the Transactional Interpretation 55. ^ a b Werner Heisenberg, Physics and Philosophy, Harper, 1958, p. 137. 56. ^ "God does not throw dice" quote 57. ^ A. Pais, Einstein and the quantum theory, Reviews of Modern Physics 51, 863–914 (1979), p. 907. 58. ^ Bohr recollected his reply to Einstein at the 1927 Solvay Congress in his essay "Discussion with Einstein on Epistemological Problems in Atomic Physics", in Albert Einstein, Philosopher–Scientist, ed. Paul Arthur Shilpp, Harper, 1949, p. 211: " spite of all divergencies of approach and opinion, a most humorous spirit animated the discussions. 
On his side, Einstein mockingly asked us whether we could really believe that the providential authorities took recourse to dice-playing ("ob der liebe Gott würfelt"), to which I replied by pointing at the great caution, already called for by ancient thinkers, in ascribing attributes to Providence in everyday language." Werner Heisenberg, who also attended the congress, recalled the exchange in Encounters with Einstein, Princeton University Press, 1983, p. 117,: "But he [Einstein] still stood by his watchword, which he clothed in the words: 'God does not play at dice.' To which Bohr could only answer: 'But still, it cannot be for us to tell God, how he is to run the world.'" 59. ^ 'Since the Universe naturally contains all of its observers, the problem arises to come up with an interpretation of quantum theory that contains no classical realms on the fundamental level.', Claus Kiefer (2002). "On the interpretation of quantum theory – from Copenhagen to the present day". arXiv:quant-ph/0210152Freely accessible.  60. ^ Jaynes, E. T. (1989). "Clearing up Mysteries – The Original Goal" (PDF). Maximum Entropy and Bayesian Methods: 7.  61. ^ L. Ph. H. Schmidt; et al. (5 September 2013). "Momentum Transfer to a Free Floating Double Slit: Realization of a Thought Experiment from the Einstein-Bohr Debates". Physical Review Letters. 111 (103201). Bibcode:2013PhRvL.111j3201S. doi:10.1103/PhysRevLett.111.103201.  62. ^ More correctly, when the law of large numbers is applied to solve this problem (so that the opposite change must also occur), a deterministic ensemble interpretation follows from the same law. 63. ^ L. Ph. H. Schmidt; et al. (5 September 2013). "Momentum Transfer to a Free Floating Double Slit: Realization of a Thought Experiment from the Einstein-Bohr Debates". Physical Review Letters. 111 (103201). Bibcode:2013PhRvL.111j3201S. doi:10.1103/PhysRevLett.111.103201. . See also the article on Bohr–Einstein debates. Likely there are even more such apparent interactions in various areas of the photon, for example when reflecting from the whole shutter. 64. ^ Heisenberg, W. (1930). The Physical Principles of the Quantum Theory, translated by C. Eckart and F.C. Hoyt, University of Chicago Press, Chicago, pp. 77–78. 65. ^ Pauling, L.C., Wilson, E.B. (1935). Introduction to Quantum Mechanics: with Applications to Chemistry, McGraw-Hill, New York, pp. 34–36. 66. ^ Landé, A. (1951). Quantum Mechanics, Sir Isaac Pitman and Sons, London, pp. 19–22. 67. ^ Van Vliet, K. (1967). "Linear momentum quantization in periodic structures". Physica. 35: 97–106. Bibcode:1967Phy....35...97V. doi:10.1016/0031-8914(67)90138-3.  68. ^ Van Vliet, K. (2010). "Linear momentum quantization in periodic structures ii". Physica A. 389 (8): 1585–1593. Bibcode:2010PhyA..389.1585V. doi:10.1016/j.physa.2009.12.026.  69. ^ Kate Becker (2013-01-25). "Quantum physics has been rankling scientists for decades". Boulder Daily Camera. Retrieved 2013-01-25.  70. ^ Schlosshauer, Maximilian; Kofler, Johannes; Zeilinger, Anton (2013-01-06). "A Snapshot of Foundational Attitudes Toward Quantum Mechanics". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 44 (3): 222–230. arXiv:1301.1069Freely accessible. Bibcode:2013SHPMP..44..222S. doi:10.1016/j.shpsb.2013.04.004.  71. ^ a b The Quantum Liar Experiment, RE Kastner, Studies in History and Philosophy of Modern Physics, Vol. 41, Iss. 2, May 2010. 72. ^ The non-relativistic Schrödinger equation does not admit advanced solutions. 73. 
^ 74. ^ N. David Mermin (2004). "Could Feynman Have Said This?". Physics Today. 57 (5): 10–11. Bibcode:2004PhT....57e..10M. doi:10.1063/1.1768652.
Further reading
• G. Weihs et al., Phys. Rev. Lett. 81 (1998) 5039
• M. Rowe et al., Nature 409 (2001) 791.
• J.A. Wheeler & W.H. Zurek (eds), Quantum Theory and Measurement, Princeton University Press 1983
• A. Petersen, Quantum Physics and the Philosophical Tradition, MIT Press 1968
• H. Margeneau, The Nature of Physical Reality, McGraw-Hill 1950
• M. Chown, Forever Quantum, New Scientist No. 2595 (2007) 37.
• T. Schürmann, A Single Particle Uncertainty Relation, Acta Physica Polonica B39 (2008) 587.
lördag 29 juni 2013 The Linear Scalar MultiD Schrödinger Equation as Pseudo-Science If we are still going to put up with these damn quantum jumps, I am sorry that I ever had anything to do with quantum theory. (Erwin Schrödinger) The pillars of modern physics are quantum mechanics and relativity theory, which both however are generally acknowledged to be fundamentally mysterious and incomprehensible to even the sharpest minds and thus gives modern physics a shaky foundation. The mystery is so deep that it has been twisted into a virtue with the hype of string theory representing maximal mystery. The basic trouble with quantum mechanics is its multi-dimensional wave function solution depending on 3N space coordinates for an atom with N electrons,  as solution to the linear scalar multi-dimensional Schrödinger equation, which cannot be given a real physical meaning because reality has only 3 space coordinates. The way out to save the linear scalar multidimensional Schrödinger equation, which was viewed to be a gift by God and as such was untouchable, was to give the multidimensional wave function an  interpretation as the probability of the N-particle configuration given by the 3N coordinates. Quantum mechanics based on the linear scalar Schrödinger equation was thus rescued at the cost of making the microscopic atomistic world into a game of roulette asking for microscopics of microscopics as contradictory reduction in absurdum. But God does not write down the equations describing the physics of His Creation, only human minds and if insistence on a linear scalar (multidimensional) Schrödinger wave equation leads to contradiction, the only rational scientific attitude would be to search for an alternative, most naturally as a system of non-linear wave equations in 3 space dimensions, which can be given a deterministic physical meaning. There are many possibilities and one of them is explored as Many-Minds Quantum Mechanics in the spirit of Hartree. It is well known that macroscopic mechanics including planetary mechanics is not linear, and there is no reason to expect that atomistic physics is linear and allows superposition. There is no rational reason to view the linear scalar multiD Schrödinger equation as the basis of atomistic physics (other than as a gift by God which cannot be questioned), and physics without rational reason is unreasonable and thus may represent pseudo-science. The linear scalar multiD Schrödinger equation with an incredibly rich space of solutions beyond reason, requires drastic restrictions to represent anything like real physics.  Seemingly out of the blue, physicists  have come to agree that God can play only with fully symmetric (bosons) or antisymmetric (fermions) wave functions with the Pauli Exclusion Principle as a further restriction. But nobody has been able to come with any rational reasons for the restrictions to symmetry, antisymmetry and exclusion. According to Leibniz Principle of Sufficient Reason, this makes these restrictions into ad hoc pseudo-science.   måndag 17 juni 2013 Welcome Back Reality: Many-Minds Quantum Mechanics The new book Farewell to Reality by Jim Baggott gets a positive reception on Not Even Wrong (and accordingly a negative by Lubos). The main message of the book is that modern physics (SUSY, GUTS, Superstring/M-theory, the multiverse) is no longer connected to reality in the sense that experimental support is no longer possible and therefore is not considered to even be needed. 
But science without connection to reality is pseudo-science, and so how can it be that physics classically considered to be the model of all sciences, in modern times seems to have evolved into pseudo-science? Let's take a look back and see if we can find an answer: My view is that the departure from reality started in the 1920s with the introduction of the multi-dimensional wave function as solution to a linear scalar Schrödinger equation, with 3N space dimensions for an atom with N electrons. Such a wave function does not describe real physics, since reality has only 3 space dimensions and the only way out insisting on the truth of the linear Schrödinger equation as given by God,  was to give the wave function a statistical interpretation. But that meant a non-physical and non-real interpretation, since there is no reason to believe that real physics can operate like an insurance company filled with experts doing statistics, in Einstein's words expressed as "God does not play dice".  The statistical interpretation was so disgusting to Schrödinger that he gave up further exploration of the quantum mechanics he had invented.  Schrödinger believed that the wave function had a physical meaning as a description of the electron distribution around a positive kernel of an atom. A non-linear variant of the Schrödinger equation in the form of a system of N equations in 3 space dimensions for an N-electron atom was early on suggested by Hartree as a method to compute approximate solutions of the multi-dimensional Schrödinger, an equation which cannot be solved, and the corresponding wave function can be given a physical meaning as required by Schrödinger. I have explored this idea a little bit in the form of Many-Minds Quantum Mechanics (MMQM) as an analog of Many-Minds Relativity. MMQM seems to deliver a ground state of Helium corresponding to the observed minimal energy E = - 2.904,  with the 2 electrons of Helium distributed basically as two half-spherical shells (blue and green patches) filling a full shell around the kernel (red) as illustrated in the left picture. This configuration is to be compared with the spherically symmetric distributions of Parahelium 1s1s (hydrogenic orbital) in the middle with E = -2.75 and Ortohelium 1s2s with even bigger energy to the right: Classical quantum mechanics based on a multi-dimenisonal wave function satisfying the linear Schrödinger equation (QM) presents Parahelium as the ground state of Helium with the two electrons sharing a common spherically symmetric orbit in accordance with the Pauli Exclusion principle (PEP).  But the energy E = -2.75 of Parahelium is greater than the observed E = -2.904 and so Parahelium cannot be the ground state. QM with PEP thus does not describe even Helium correctly, a fact which is hidden in text books, while the non-spherical distribution of MMQM appears to give the correct energy. MMQM does not require any PEP and suggests a different explanation of the electronic shell structure of an atom with the numbers of 2, 8, 8, 18, 18... of electrons in each shell arising as 2 x n x n, with n=1,  2, 2, 3, 3, and the factor 2 reflecting the structure of the innermost shell as that of Helium, and n x n the two-dimensional aspect of a shell. The Farewell to Reality from modern physics was thus initiated with the introduction of the multi-dimensional wave function of the linear Schrödinger equation of QM in the 1920s, and the distance to Reality has only increased since then.  
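As an aside on the Helium numbers quoted above: the value E = -2.75 for the hydrogenic 1s1s (Parahelium-type) configuration is the standard first-order perturbation estimate in Hartree atomic units, a textbook result recalled here for context rather than taken from the post itself:

$E_{1s1s} \approx 2\left(-\frac{Z^{2}}{2}\right) + \frac{5}{8}Z = -4 + 1.25 = -2.75 \qquad (Z = 2),$

to be compared with the measured Helium ground-state energy of roughly $-2.90$ Hartree, the figure the post cites as $-2.904$.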
Once the connection to Reality is given up there is no limit to how far you can go with your favorite theory. QM is cut in stone as the linear multidimensional Schrödinger equation with wave function solution being either symmetric or antisymmetric and satisfying PEP, but QM in this form lacks real physical interpretation. The exploration of non-linear Schrödinger equations in 3 space dimensions with obvious possibilities of physical interpretation has been pursued only as a way to compute approximate solutions to the multi-dimensional linear Schrödinger equation, but may merit attention also as true models of physical reality.

Sunday 16 June 2013

Gomorron Sverige Case at Academic Rights Watch

Academic Rights Watch has now taken up my case "Gomorron Sverige", which will shortly be heard by the Administrative Court of Appeal in Stockholm (Kammarrätten i Stockholm). The larger question is whether or not KTH has violated the Fundamental Law on Freedom of Expression (Yttrandefrihetsgrundlagen).

Essence of Dynamics 1

Computed turbulent flow around an airplane represents Case 3. below.

The dynamics of a physical system can typically be described as an initial value problem of finding a vector function U(t) depending on time t such that
• dU/dt + A(U) = F for t > 0 with U(0) = G,
where A(U) is a given vector function of U, F(t) is a given forcing and G is a given initial value at t = 0. In the basic case A(U) = A*U is linear with A = A(t) a matrix depending on time, which is also the linearized form of the system describing growth/decay of perturbations characterizing stable/unstable dynamics. An essential aspect of the dynamics is the perturbation dynamics described by the linearized system, which is determined by the eigenvalues of its linearization matrix A, assuming for simplicity that A is diagonalizable and independent of time:
1. Positive eigenvalues: Stable in forward time; unstable in backward time.
2. Negative eigenvalues: Unstable in forward time; stable in backward time.
3. Both positive and negative eigenvalues: Both unstable and stable in both forward and backward time.
4. Imaginary eigenvalues: Wave solutions marginally stable in both forward and backward time.
5. Complex eigenvalues: Combinations of 1. - 4.

Here Case 1. represents a dissipative system with exponential decay of perturbations in forward time making long time prediction possible, but backward time reconstruction difficult because of exponential growth of perturbations. This is the dynamics of a diffusion process, e.g. the spreading of a contaminant by diffusion or heat conduction.

Case 2. is the reverse with forward prediction difficult but backward reconstruction possible. This is the dynamics of a Big Bang explosion.

Case 3. represents turbulent flow with both exponential growth and decay giving rise to complex dynamics without explosion, with mean-value but not point value predictability in forward time. The picture above shows the turbulent flow around an airplane with mean-value quantities like drag and lift being predictable (in forward time). This case represents the basic unsolved problem of classical mechanics which is now being uncovered by computational methods including revelation of the secret of flight (hidden in the above picture).

Case 4 represents wave propagation with possibilities of both forward prediction and backward reconstruction, with the harmonic oscillator as basic case.
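A minimal numerical sketch of this eigenvalue classification (my own illustration, not from the post; it assumes the post's sign convention dU/dt + A U = 0, so positive eigenvalues of A mean decay forward in time):

import numpy as np

def classify(A, tol=1e-12):
    # Classify perturbation growth for dU/dt = -A U via the eigenvalues of A.
    re = np.linalg.eigvals(A).real
    if np.all(re > tol):
        return "Case 1: stable forward in time, unstable backward (diffusion)"
    if np.all(re < -tol):
        return "Case 2: unstable forward in time, stable backward (explosion)"
    if np.all(np.abs(re) <= tol):
        return "Case 4: marginally stable waves (imaginary eigenvalues)"
    return "Case 3: mixed growth and decay (turbulence-like)"

examples = {
    "diffusion":  np.diag([1.0, 2.0]),                 # positive eigenvalues
    "explosion":  np.diag([-1.0, -2.0]),               # negative eigenvalues
    "mixed":      np.diag([1.0, -1.0]),                # both signs
    "oscillator": np.array([[0.0, -1.0], [1.0, 0.0]]), # eigenvalues +/- i
}
for name, A in examples.items():
    print(f"{name:10s} -> {classify(A)}")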
There is a further limit case with A non-diagonalizable with an incomplete set of eigenvectors for a multiple zero eigenvalue, with possibly algebraic growth of perturbations, a case arising in transition to turbulence in parallel flow.  onsdag 12 juni 2013 The Dog and the Tail: Global Temperature vs CO2, continuation. This is a continuation of the previous post. Consider the following special case with T(t) = T_0 for t < 1970, T(t) increasing linearly for 1970 < t < 1998 to the value T_1 with T(t) = T_1 for t > 1998. The corresponding solution C(t) of the equation dC/dt increases linearly for t < 1970, quadratically for 1970 < t < 1998  and again linearly for t > 1998 as sketched by the solid lines in the following graph: We see that after 1998 the temperature stays constant while the CO2 increases linearly. The solid lines could picture reality. On the other hand, if you want to create a fiction of CO2 alarmism, you would argue as follows: Look at the solid lines before 1998 representing recorded reality and simply make an extrapolation until 2020 of the simultaneous increase of both T and C during the period 1970 - 1998, to get the dotted red line as a predicted alarming global warming in 2020 resulting from a continued increase of CO2. The extrapolation would then correspond to using a connection between T and C of the form T ~ C with T determined by C, instead of the as in the above model dC/dt = T with C determined by T. This shows the entirely different global warming scenarios obtained using the model T ~ C with T determined by C, and the model dC/dt = T with C determined by T. tisdag 11 juni 2013 The Dog and the Tail: Global Temperature vs CO2 Prof. Murry Salby's presentation in Hamburg in April is a showcase of effective scientific communication based on mathematics. Salby gives strong evidence based on observation that the offset of concentration C(t) of atmospheric CO2 as a function of time t is determined by the offset of global temperature T(t) by an equation of the form • dC/dt  = T   for all t > 0, C(0) = 0, after suitable scaling of C(t). In other words, C(t) is the integral of T(t), so that if T(t) = cos(t) then C(t) = sin(t) with a time lag of a quarter of a period.   The fact that in the equation dC/dt = T the concentration C(t) is determined by T(t), comes out as an aspect of stability (or wellposedness): Integration is a stable or well posed mathematical operation in the sense that small variations in the integrand T(t) gives small variations in the integral C(t).  On the other hand, differentiation is a an unstable or ill posed mathematical operation: small variations dC(t) in C(t) can give rise to large variations in dC(t)/dt as a result of division by a small dt. This means that viewing T(t) in the relation dC/dt = T to be determined by C(t) corresponds to an unstable mathematical operation.  To make a connection from cause to effect in physics, requires stability and thus in the observed relation dC/dt = T, it is C(t) which is determined by T(t) as the cause and not the other way around. Another way of expressing this fact is to say that C(t) lags T(t) with a quarter of a period, so that variations in the cause T(t) precedes the effect as variations C(t).  
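A minimal numerical sketch of the dC/dt = T scenario described above (my own illustration; units and scaling are arbitrary): integrating the piecewise temperature history shows C(t) still rising, and lagging behind T(t), after the temperature has flattened.

import numpy as np

def T(t):
    # Temperature offset: 0 before 1970, linear rise 1970-1998, constant 1 after 1998.
    return np.clip((t - 1970.0) / 28.0, 0.0, 1.0)

years = np.arange(1950.0, 2021.0)
C = np.cumsum(T(years)) * 1.0   # forward-Euler integral of dC/dt = T, with dt = 1 year

for y in (1970.0, 1998.0, 2020.0):
    i = int(np.where(years == y)[0][0])
    print(f"{int(y)}: T = {T(y):.2f}, C = {C[i]:.1f}")

In this toy run the concentration C keeps increasing after 1998 even though the temperature T is constant, which is the point of the sketch above.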
This is the observation from ice core proxies showing that temperature changes before CO2 and thus temperature is the dog and CO2 the tail with the dog wagging the tail, and not the other way around as the basic postulate of CO2 alarmism: måndag 10 juni 2013 Need of Education in Mathematics - IT Images des Math issued by CNRS reports on a proposal by L'Academie des Sciences to strengthen school education in Information Science and Technology (IT), and expresses the concern that while  the proposal identifies the strong impact of IT in physics, chemistry, biology, economy and social sciences, the connection between IT and mathematics is less visible. The reason L'Academie des Sciences forgets the fundamental connection between mathematics and IT, is that school mathematics is focussed on a tradition of analytical mathematics, where the IT-revolution of computational mathematics is not visible. This connects to my proposal of a reform of school mathematics into a new school subject named Mathematics - IT combining analytical and computational mathematics with the world of apps as the world of applications of mathematics using IT. Without such a reform school mathematics will follow the fate of classical latin and greek once at the center of the curriculum but now gone. This is not understood by mathematicians paralyzed by the world of apps based on computational mathematics. The strength of the (aristochratic) tradition of analytical mathematics is preventing a marriage with (newly rich) computational mathematics, which would serve as the adequate school mathematics of the IT age. As often, a strength can be turned into a fatal weakness when conditions change but strong tradition resists reform. lördag 1 juni 2013 Milestone: Direct Fem-Simulation of Airflow around Complete Airplane The first direct computational simulation of the flow of air around a complete airplane (DLR F11 high-lift configuration) has been performed by the CTLab group at KTH led by Johan Hoffman in the form of Direct Fem-Simulation (DFS). The simulation gives support to the new theory of flight developed by Hoffman and myself now under review by Journal of Mathematical Fluid Mechanics after initial rejection by AIAA. The milestone will be presented at 2nd AIAA CFD High Lift Prediction Workshop, San Diego, June 22-23 2013. DFS is performed by computational solution using an adaptive  residual-stabilized finite element method for the Navier-Stokes equations with a slip boundary condition modeling the small skin friction of air flow. DFS opens for the first time the possibility of constructing a realistic flight simulator allowing flight training under extreme dynamics, beyond the vision of AIAA limited by classical flight theory. For more details browse my upcoming talk at ADMOS 2013.
Entropy 2014, 16(2), 699-725; doi:10.3390/e16020699 Thermodynamics as Control Theory David Wallace Balliol College, Oxford OX1 3BJ, UK; E-Mail: [email protected] Received: 23 October 2013; in revised form: 18 November 2013 / Accepted: 17 December 2013 / Published: 24 January 2014 : I explore the reduction of thermodynamics to statistical mechanics by treating the former as a control theory: A theory of which transitions between states can be induced on a system (assumed to obey some known underlying dynamics) by means of operations from a fixed list. I recover the results of standard thermodynamics in this framework on the assumption that the available operations do not include measurements which affect subsequent choices of operations. I then relax this assumption and use the framework to consider the vexed questions of Maxwell’s demon and Landauer’s principle. Throughout, I assume rather than prove the basic irreversibility features of statistical mechanics, taking care to distinguish them from the conceptually distinct assumptions of thermodynamics proper. thermodynamics; landauer; control theory 1. Introduction Thermodynamics is misnamed. The name implies that it stands alongside the panoply of other “X-dynamics” theories in physics: Classical dynamics, quantum dynamics, electrodynamics, hydrodynamics, chromodynamics and so forth [1]. But what makes these theories dynamical is that they tell us how systems of a certain kind—classical or quantum systems in the abstract, or charged matter and fields, or fluids, or quarks and gluons, or whatever—evolve if left to themselves. The paradigm of a dynamical theory is a state space, giving us the possible states of the system in question at an instant, and a dynamical equation, giving us a trajectory (or, perhaps, a family of trajectories indexed by probabilities) through each state that tells us how that state will evolve under the dynamics. Thermodynamics basically delivers on the state space part of the recipe: Its state space is the space of systems at equilibrium. But it is not in the business of telling us how those equilibrium states evolve if left to themselves, except in the trivial sense that they do not evolve at all: That is what equilibrium means, after all. When the states of thermodynamical systems change, it is because we do things to them: We put them in thermal contact with other systems, we insert or remove partitions, we squeeze or stretch or shake or stir them. And the laws of thermodynamics are not dynamical laws like Newton’s: They concern what we can and cannot bring about through these various interventions. There is a general name for the study of how a system can be manipulated through external intervention: Control theory. Here again a system is characterised by its possible states, but instead of a dynamics being specified once and for all, a range of possible control actions is given. The name of the game is to investigate, for a given set of possible control actions, the extent to which the system can be controlled: That is, the extent to which it can be induced to transition from one specified state to another. The range of available transitions will be dependent on the forms of control available; the more liberal a notion of control, the more freedom we would expect to have to induce arbitrary transitions. This conception of thermodynamics is perfectly applicable to the theory understood phenomenologically: That is, without any consideration of its microphysical foundations. 
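To make the control-theoretic framing concrete, here is a toy sketch (entirely my own illustration, not drawn from the paper): given a set of states and a fixed list of allowed operations, the basic question is which target states can be reached from a given starting state by composing those operations.

from collections import deque

def reachable(start, operations):
    # Breadth-first search over everything obtainable by composing the allowed operations.
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for op in operations:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical toy system: a state is an (energy, volume) pair on a small grid,
# and the only allowed control actions are "expand" and "heat".
expand = lambda s: (s[0], min(s[1] + 1, 3))
heat = lambda s: (min(s[0] + 1, 3), s[1])

print(sorted(reachable((0, 0), [expand, heat])))
# With only these two operations, no state of lower energy or volume is ever reachable;
# restrictions of this kind are what the thermodynamic laws encode.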
However, my purpose in this paper is instead to use the control-theory paradigm to explicate the relation between thermodynamics and statistical mechanics. That is: I will begin by assuming the main results of non-equilibrium statistical mechanics and then consider what forms of control theory they can underpin. In doing so I hope to clarify both the control-theory perspective itself and the reduction of thermodynamics to statistical mechanics, as well as providing some new ways to get insight into some puzzles in the literature: Notably, those surrounding Maxwell’s Demon and Landauer’s Principle. In Sections 2 and 3, I review the core results of statistical mechanics (making no attempt to justify them). In Sections 4 and 5 I introduce the general idea of a control theory and describe two simple examples: Adiabatic manipulation of a system and the placing of systems in and out of thermal contact. In Sections 6–8, I apply these ideas to construct a general account of classical thermodynamics as a control theory, and demonstrate that a rather minimal form of thermodynamics possesses the full control strength of much more general theories; I also explicate the notion of a one-molecule gas from the control-theoretic (and statistical-mechanical) perspective). In the remainder of the paper, I extend the notion of control theory to include systems with feedback, and demonstrate in what senses this does and does not increase the scope of thermodynamics. I develop the quantum and classical versions of the theory in parallel, and fairly deliberately flit between quantum and classical examples. When I use classical examples, in each case (I believe) the discussion transfers straightforwardly to the quantum case unless noted otherwise. The same is probably true in the other direction; if not, no matter, given that classical mechanics is of (non-historical) interest in statistical physics only insofar as it offers a good approximation to quantum mechanics. 2. Statistical-Mechanical Preliminaries Statistical mechanics, as I will understand it in this paper, is a theory of dynamics in the conventional sense: It is in the business of specifying how a given system will evolve spontaneously. For the sake of definiteness, I lay out here exactly what I assume to be delivered by statistical mechanics. The systems are classical or quantum systems, characterised inter alia by a classical phase space or quantum-mechanical Hilbert space Hamiltonian H[VI] which may depend on one or more external parameters VI (in the paradigm case of a gas in a box, the parameter is volume). In the quantum case I assume the spectrum of the Hamiltonian to be discrete; in either case I assume that the possible values of the parameters comprise a connected subset of RN and that the Hamiltonian depends smoothly on them. The states are probability distributions over phase space, or mixed states in Hilbert space. (Here I adopt what is sometimes called a Gibbsian approach to statistical mechanics; in [2], I defend the claim that this is compatible with a view of statistical mechanics as entirely objective.) Even in the classical case the interpretation of these probabilities is controversial; sometimes they are treated as quantifying an agent’s state of knowledge, sometimes as being an objective feature of the system; my own view is that the latter is correct (and that the probabilities are a classical limit of quantum probabilities; cf [3]). 
In the quantum case the interpretation of the mixed states merges into the quantum measurement problem, an issue I explore further in [4]. For the most part, though, the results of this paper are independent of the interpretation of the states.

Given two systems, their composite is specified by the Cartesian product of the phase spaces (classical case) or by the tensor product of the Hilbert spaces (quantum case), and by the sum of the Hamiltonians (either case).

The Gibbs entropy is a real function of the state, defined in the classical case as

$S_G(\rho) = -\int dx\, \rho(x) \ln \rho(x)$

and in the quantum case as

$S_G(\rho) = -\mathrm{Tr}(\rho \ln \rho)$.

The dynamics are given by some flow on the space of states. In Hamiltonian dynamics, or unitary quantum mechanics, this would be the flow generated by Hamilton's equation or the Schrödinger equation from the Hamiltonian H[VI], under which the Gibbs entropy is a constant of the motion; in statistical mechanics, however, we assume only that the flow (a) is entropy-non-decreasing, and (b) conserves energy, in the sense that the probability given by the state to any given energy is invariant under the flow. For any given system there is some time, the equilibration timescale, after which the system has evolved to that state which maximises the Gibbs entropy subject to the conservation constraint above [5].

Now, to be sure, it is controversial at best how statistical mechanics delivers all this. In particular, we have good reason to suppose that isolated (classical or quantum) systems ought really to evolve by Hamiltonian or unitary dynamics, according to which the Gibbs entropy is constant and equilibrium is never achieved; more generally, the statistical-mechanical recipe I give here is explicitly time-reversal-noninvariant, whereas the underlying dynamics of the systems in question have a time-reversal symmetry. There are a variety of responses to offer to this problem, among them:

• Perhaps no system can be treated as isolated, and interaction with an external environment somehow makes the dynamics of any realistic system non-Hamiltonian.
• Perhaps the probability distribution (or mixed state) needs to be understood not as a property of the physical system but as somehow tracking our ignorance about the system's true state, and the increase in Gibbs entropy represents an increase in our level of ignorance.
• Perhaps the true dynamics is not, after all, Hamiltonian, but incorporates some time-asymmetric correction.

My own preferred solution to the problem (and the one that I believe most naturally incorporates the insights of the "Boltzmannian" approach to statistical mechanics) is that the state ρ should not be interpreted as the true probability distribution over microstates, but as a coarse-grained version of it, correctly predicting the probabilities relevant to any macroscopically manageable process but not correctly tracking the fine details of the microdynamics, and that the true signature of statistical mechanics is the possibility of defining (in appropriate regimes, under appropriate conditions, and for appropriate timescales) autonomous dynamics for this coarse-grained distribution that abstract away from the fine-grained details. The time asymmetry of the theory, on this view, arises from a time asymmetry in the assumptions that have to be made to justify that coarse-graining. But from the point of view of understanding the reduction of thermodynamics to statistical mechanics, all this is beside the point.
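To make the entropy-non-decreasing assumption concrete, here is a minimal toy sketch in a discrete setting (the twelve "cells" and the choice of map are purely illustrative): a doubly stochastic map, the discrete analogue of a probability-conserving, mixing flow, can only preserve or increase the Gibbs entropy of a distribution, and complete mixing drives it to the entropy maximum.

```python
import numpy as np

def gibbs_entropy(p):
    """Discrete stand-in for S_G: -sum_i p_i ln p_i."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(12))        # an arbitrary state over 12 coarse-grained cells

# A doubly stochastic matrix plays the role of an entropy-non-decreasing flow here;
# complete mixing is the extreme case that drives the state to the entropy maximum.
M = np.full((12, 12), 1.0 / 12)
p_equilibrated = M @ p

print(gibbs_entropy(p), "<=", gibbs_entropy(p_equilibrated))   # second value is ln 12
```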
The most important thing to realise about the statistical-mechanical results I give above is that manifestly they are correct: The entire edifice of statistical mechanics (a) rests upon them; and (b) is abundantly supported by empirical data. (See [6] for more on this point.) There is a foundational division of labour here: the question of how this machinery is justified given the underlying mechanics is profoundly important, but it can be distinguished from the question of how thermodynamics relates to statistical mechanics. Statistical mechanics is a thoroughly successful discipline in its own right, and not merely a foundational project to shore up thermodynamics. 3. Characterising Statistical-Mechanical Equilibrium The “state which maximises the Gibbs entropy” can be evaluated explicitly. If the initial state ρ has a definite energy U, it will evolve to the distribution with the largest Gibbs entropy for that energy, and it is easy to see that (up to normalisation) in the classical case this is the uniform distribution on the hypersurface H[VI](x) = U, and that in the quantum case it is the projection onto the eigensubspace of [VI] with energy U. Writing ρU to denote this state, it follows that in general the equilibrium state achieved by a general initial ρ will be that statistical mixture of ρU that gives the same probability to each energy as ρ did. In the classical case this is ρ d U Pr ( U ) ρ U Pr ( U ) = ρ δ ( H U ) ; in the quantum case, it is ρ i Pr ( U i ) ρ U where the sum is over the distinct eigenvalues Ui of the Hamiltonian, Pr(Ui) = Tr(ρΠi), and Πi projects onto the energy Ui subspace. I will refer to states of this form (quantum or classical) as generalised equilibrium states. We can define the density of states 𝒱(U) at energy U for a given Hamiltonian H in the classical case as follows: We take 𝒱(U)δU to be the phase-space volume of states with energies between U and U + δU. We can use the density of states to write the Gibbs entropy of a generalised equilibrium state explicitly as S G ( ρ ) = d U Pr ( U ) ln 𝒱 ( U ) + ( d U Pr ( U ) ln Pr ( U ) ) . In the quantum case it is instead S G ( ρ ) = i Pr ( U i ) ln ( Dim U i ) + ( i Pr ( U i ) ln Pr ( U i ) ) where Dim(Ui) is the dimension of the energy-Ui subspace. Normally, I will assume that the quantum systems we are studying have sufficiently close-spaced energy eigenstates and sufficiently well-behaved states that we can approximate this expression by the classical one (defining 𝒱δU as the total dimension of eigensubspaces with energies between U and U + δU, and Pr(U)δU as the probability that the system has one of the energies in the range (U, U + δU)). Now, suppose that the effective spread ΔU over energies of a generalised equilibrium state around its expected energy U0 is narrow enough that the Gibbs entropy can be accurately approximated simply as the logarithm of 𝒱(U0). States of this kind are called microcanonical equilibrium states, or microcanonical distributions (though the term is sometimes reserved for the ideal limit, where Pr(U) is a delta function at U0, so that ρ(x) = (1/𝒱(U0))δ(H(x) − U0)). A generalised equilibrium state can usefully be thought of as a statistical mixture of microcanonical distributions. If ρ is a microcanonical ensemble with respect to H[VI] for particular values of the parameters VI, in general it will not be even a generalised equilibrium state for different values of those parameters. 
However, if close-spaced eigenvalues of the Hamiltonian remain close-spaced even when the parameters are changed, ρ will equilibrate into the microcanonical distribution. In this case, I will say that the system is parameter-stable; I will assume parameter stability for most of the systems I discuss. A microcanonical distribution is completely characterised (up to details of the precise energy width δU and the spread over that width) by its energy U and the external parameters VI. On the assumption that 𝒱(U) is monotonically increasing with U for any values of the parameters (and, in the quantum case, that the system is large enough that we can approximate 𝒱(U) as continuous) we can invert this and regard U as a function of Gibbs entropy S and the parameters. This function is (one form of) the equation of state of the system: For the ideal monatomic gas with N mass-m particles, for instance, we can readily calculate that 𝒱 ( U , V ) V N ( 2 m U ) 3 N / 2 1 and hence (for N ≫ 1) S S 0 + N ln V + ( 3 N / 2 ) ln U , which can be inverted to get U in terms of V and S. The microcanonical temperature is then defined as T = ( U S ) V I (for the ideal monatomic gas, it is 2U/3N). At the risk of repetition, it is not (or should not be!) controversial that these probability distributions are empirically correct as regards predictions of measurements made on equilibrated systems, both in terms of statistical averages and of fluctuations around those averages. It is an important and urgent question why they are correct, but it is not our question. 4. Adiabatic Control Theory Given this understanding of statistical mechanics, we can proceed to the control theory of systems governed by it. We will develop several different control theories, but each will have the same general form, being specified by: • A controlled object, the physical system being controlled. • A set of control operations that can be performed on the controlled object. • A set of feedback measurements that can be made on the controlled object. • A set of control processes, which are sequences of control operations and feedback measurements, possibly subject to additional constraints and where the control operation performed at a given point may depend on the outcomes of feedback measurements made before that point. Our goal is to understand the range of transitions between states of the controlled object that can be induced. In this section and the next I develop two extremely basic control theories intended to serve as components for thermodynamics proper in Section 6. The first such theory, adiabatic control theory, is specified as follows: • The controlled object is a statistical-mechanical system which is parameter-stable and initially at microcanonical equilibrium. • The control operations consist of (a) smooth modifications to the external parameters of the controlled object over some finite interval of time; (b) leaving the controlled object alone for a time long compared to its equilibration timescale. • There are no feedback measurements: The control operations are applied without any feedback as to the results of previous operations. • The control processes are sequences of control operations ending with a leave-alone operation. Because of parameter stability, the end state is guaranteed to be not just at generalised equilibrium but at microcanonical equilibrium. The control processes therefore consist of moving the system’s state around in the space of microcanonical equilibrium states. 
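As a quick symbolic check of the equation of state quoted above for the ideal monatomic gas, the following sketch (sympy is used purely for the algebra) inverts $S = S_0 + N \ln V + (3N/2)\ln U$ and recovers the stated microcanonical temperature $T = 2U/3N$.

```python
import sympy as sp

N, U, V, S, S0 = sp.symbols('N U V S S0', positive=True)

# Equation of state quoted above: S = S0 + N ln V + (3N/2) ln U
S_expr = S0 + N*sp.log(V) + sp.Rational(3, 2)*N*sp.log(U)

U_of_SV = sp.solve(sp.Eq(S, S_expr), U)[0]      # invert to get U as a function of S and V
T = sp.diff(U_of_SV, S)                         # microcanonical temperature T = (dU/dS)_V

print(sp.simplify(T - 2*U_of_SV/(3*N)))         # -> 0, i.e. T = 2U/3N as stated in the text
```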
Since for any value of the parameters the controlled object’s evolution is entropy-nondecreasing, one result is immediate: The only possible transitions are between states x, y with SG(y) ≥ SG(x). The remaining question is: Which such transitions are possible? To answer this, consider the following special control processes: A process is quasi-static if any variations of the external parameters are carried out so slowly that the systems can be approximated to any desired degree of accuracy as being at or extremely close to equilibrium throughout the process. A crucial feature of quasi-static processes is that the increase in Gibbs entropy in such a process is extremely small, tending to zero as the length of the process tends to infinity. To see this [7], suppose for simplicity that there is only one external parameter whose value at time t is V (t). If the expected energy of the state at time t is U(t), there will be a unique microcanonical equilibrium state ρeq[U(t), V (t)] for each time determined by the values U(t) and V (t) of the expected energy and the parameter at that time. The full state ρ(t) at that time can be written as ρ ( t ) = ρ e q [ U ( t ) , V ( t ) ] + δ ρ ( t ) , where the requirement that the change is quasi-static imposes the requirement that δρ(t) is small. The system’s dynamics is determined by some equation of the form ρ ˙ ( t ) = L [ V ( t ) ] ρ ( t ) , with the linear operator L depending on the value of the parameter. By the definition of equilibrium L[V]ρeq[U, V] = 0, so that in fact ρ ˙ ( t ) = L [ V ( t ) ] δ ρ ( t ) ; it follows that if the quasi-static process takes overall time T and brings about change Δρ in the state, we have δ ρ ~ Δ ρ / T , i. e. , the typical magnitude of δρ(t) scales with 1/T for fixed overall change. Now the rate of change of Gibbs entropy in such a process is given by S ˙ ( t ) = δ S δ ρ | ρ ( t ) ρ ˙ ( t ) = δ S δ ρ | ρ ( t ) L [ V ( t ) ] δ ρ ( t ) which may be expanded around ρeq(t) ≡ ρeq[U(t), V (t)] to give S ˙ ( t ) = ( δ S δ ρ | ρ e q ( t ) + δ 2 S δ ρ 2 | ρ e q ( t ) δ ρ ( t ) + o ( δ ρ 2 ) ) L [ V ( t ) ] δ ρ ( t ) . But since ρeq maximises Gibbs entropy for given expected energy, and since the time evolution operator L[V ] conserves expected energy, δ S δ ρ | ρ e q ( t ) L [ V ( t ) ] δ ρ ( t ) = 0 , and so rate of entropy increase vanishes to first order in δρ. From (13) it follows that total entropy increase scales at most like 1/T and so can be made arbitrarily small [8]. (To see intuitively what is going on here, consider a very small change VV + δV made suddenly to a system initially at equilibrium. The sudden change leaves the state, and hence the Gibbs entropy, unchanged. The system then regresses to equilibrium on a trajectory of constant expected energy. But since the change is very small, and since the equilibrium state is an extremal state of entropy on the constant-expected-energy surface, to first order in δV the change in entropy in this part of the process is also zero.) To summarise: Quasi-static adiabatic processes are isentropic: They do not induce changes in system entropy. What about non-quasi-static adiabatic processes? Well, if at any point in the process the system is not at (or very close to) equilibrium, by the baseline assumptions of statistical mechanics it follows that its entropy will increase as it evolves. So an adiabatic control process is isentropic if quasi-static, entropy-increasing otherwise. 
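The scaling at the heart of this argument can be seen in a deliberately crude toy model (a single degree of freedom relaxing at a fixed rate towards an equilibrium value that is dragged from 0 to 1 over a total time T; this is only a caricature of the energy-conserving flow above): the lag behind equilibrium, the analogue of δρ, shrinks like 1/T.

```python
import numpy as np

def max_lag(T_total, tau=1.0, steps=200000):
    """Maximum lag behind a moving equilibrium for a ramp of total duration T_total."""
    dt = T_total / steps
    x, lag = 0.0, 0.0
    for n in range(steps):
        x_eq = (n * dt) / T_total        # equilibrium value ramped linearly from 0 to 1
        x += dt * (x_eq - x) / tau       # relaxation at fixed rate 1/tau
        lag = max(lag, abs(x - x_eq))
    return lag

for T_total in (10.0, 100.0, 1000.0):
    print(T_total, max_lag(T_total))     # lag ~ tau / T_total
```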
In at least some cases, the result that quasi-static adiabatic processes are isentropic does not rely on any explicit equilibration assumption. To be specific: If the Hamiltonian has the form H ^ [ V I ] = i U i ( Λ I ) | ψ i ( Λ I ) ψ i ( Λ I ) | then the adiabatic theorem of quantum mechanics [9] tells us that if the parameters are changed sufficiently slowly from λ I 0 to λ I 1 then (up to phase, and to an arbitrarily high degree of accuracy) the Hamiltonian dynamics will cause | ψ i ( λ I 0 ) to evolve to | ψ i ( λ I 1 ) ; hence, in this regime the dynamics takes microcanonical states to microcanonical states of the same energy. In any case, we now have a complete solution to the control problem. By quasi-static processes we can move the controlled object’s state around arbitrarily on a given constant-entropy hypersurface; by applying a non-quasi-static process we can move it from one such hypersurface to a higher-entropy hypersurface. So the condition that the final state’s entropy is not lower than the initial state’s is sufficient as well as necessary: Adiabatic control theory allows a transition between equilibrium states iff it is entropy-nondecreasing. A little terminology: The work done on the controlled object under a given adiabatic control process is just the change in its energy, and is thus the same for any two control processes that induce the same transition, and it has an obvious physical interpretation: The work done is the energy cost of inducing the transition by any physical implementation of the control theory. (In phenomenological treatments of thermodynamics it is usual to assume some independent understanding of “work done”, so that the observation that adiabatic transitions from x to y require the same amount of work however they are performed becomes contentful, and is one form of the First Law of Thermodynamics; from our perspective, though, it is just an application of conservation of energy.) Following the conventions of thermodynamics, we write d―W for a very small quantity of work done during some part of a quasi-static control process. We have đ W = d U | δ S = 0 = I ( U V I ) V J , S d V I I P I d V I where the derivative is taken with all values of VJ other than VI held constant and the last step implicitly defines the generalised pressures. (In the case where VI just is the volume, PIδV is the energy cost to compress the gas by an amount δV, and hence is just the ordinary pressure.) 5. Thermal Contact Theory Our second control theory, thermal contact theory, is again intended largely as a tool for the development of more interesting theories. To develop it, suppose that we have two systems initially dynamically isolated from one another, and that we introduce a weak interaction Hamiltonian between the two systems. Doing so, to a good approximation, will leave the internal dynamics of each system largely unchanged but will allow energy to be transferred between the systems. Given our statistical-mechanical assumptions, this will cause the two systems (which are now one system with two almost-but-not-quite-isolated parts) to proceed, on some timescale, to a joint equilibrium state. When two systems are coupled in this way, we say that they are in thermal contact. 
Given our assumption that the interaction Hamiltonian is small, we will assume that the equilibration timescales of each system separately are very short compared to the joint equilibration timescale, so that the interaction is always between systems which separately have states extremely close to the equilibrium state. The result of this joint equilibration can be calculated explicitly.

If two systems each confined to a narrow energy band are allowed to jointly equilibrate, the energies of one or other may end up spread across a wide range. For instance, if one system consists of a single atom initially with a definite energy E and it is brought in contact with a system of a great many such atoms, its post-equilibration energy distribution will be spread across a large number of states. However, for the most part we will assume that the microcanonical systems we consider are not induced to transition out of microcanonical equilibrium as a consequence of joint equilibration; systems with this property I call thermally stable.

There is a well-known result that characterises systems that equilibrate with thermally stable systems which is worth rehearsing here. Suppose two systems have density-of-states functions $\mathcal{V}_1$, $\mathcal{V}_2$ and are initially in microcanonical equilibrium with total energy U. The probability of the two systems having energies $U_1$, $U_2$ is then

$\Pr(U_1, U_2) \propto \mathcal{V}_1(U_1)\,\mathcal{V}_2(U_2)\,\delta(U_1 + U_2 - U)$

and so the probability of the first system having energy $U_1$ is

$\Pr(U_1) \propto \mathcal{V}_1(U_1)\,\mathcal{V}_2(U - U_1)$.

Assuming that the second system is thermally stable, we express the second term on the right hand side in terms of its Gibbs entropy and expand to first order around U (the assumption that the second system's energy distribution is narrow tells us that higher terms in the expansion will be negligible):

$\mathcal{V}_2(U - U_1) = \exp(S_2(U - U_1)) \simeq \exp\left\{ S_2(U) - \left(\frac{\partial S_2}{\partial U}\right)_{V_I} U_1 \right\}$.

Since the partial derivative here is just the inverse of the microcanonical temperature T of the second system, the conclusion is that

$\Pr(U_1) \propto \mathcal{V}_1(U_1)\, e^{-U_1/T}$,

which is recognisable as the canonical distribution at canonical temperature T. In any case, so long as we assume thermal stability then systems placed into thermal contact may be treated as remaining separately at equilibrium as they evolve towards a joint state of higher entropy.

We can now state thermal contact theory:

• The controlled object is a fixed, finite collection of mutually isolated thermally stable statistical mechanical systems.
• The available control operations are (i) placing two systems in thermal contact; (ii) breaking thermal contact between two systems; (iii) waiting for some period of time.
• There are no feedback measurements.
• The control processes are arbitrary sequences of control operations.

Given the previous discussion, thermal contact theory shares with adiabatic control theory the feature of inducing transitions between systems at equilibrium, and we can characterise the evolution of the systems during the control process entirely in terms of the energy flow between systems. The energy flow between two bodies in thermal contact is called heat. (A reminder: Strictly speaking, the actual amount of heat flow is a probabilistic quantity very sharply peaked around a certain value.) The quantitative rate of heat flow between two systems in thermal contact will of course depend inter alia on the precise details of the coupling Hamiltonian between the two systems. But in fact the direction of heat flow is independent of these details.
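A small numerical check of the expansion just performed, with an assumed toy density of states $\mathcal{V}_2(U) = U^\alpha$ for the large system (the choice of $\alpha$ and of the energies is purely illustrative): to first order in $U_1$, $\ln \mathcal{V}_2(U - U_1)$ is indeed $\ln \mathcal{V}_2(U) - U_1/T$ with $T = (\partial S_2/\partial U)^{-1} = U/\alpha$.

```python
import numpy as np

alpha, U_tot = 1000.0, 2000.0              # toy bath: V2(U) = U**alpha, total energy U_tot
U1 = np.linspace(0.1, 20.0, 400)           # energies of the small system (U1 << U_tot)

exact = alpha * np.log(U_tot - U1)         # ln V2(U - U1), computed exactly
T = U_tot / alpha                          # bath temperature, 1/T = dS2/dU at U ~ U_tot
linearised = alpha * np.log(U_tot) - U1 / T

# The correction is O(U1^2 / U_tot^2) and is tiny compared to the U1/T term,
# so Pr(U1) is proportional to V1(U1) exp(-U1/T) to an excellent approximation.
print(np.max(np.abs(exact - linearised)), np.max(U1 / T))
```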
To see this, note that the total entropy change (in either the microcanonical or canonical framework) when a small quantity of heat đQ flows from system A to system B is

$\delta S = \delta S_A + \delta S_B = \left\{ -\left(\frac{\partial S_A}{\partial U_A}\right)_{V_I} + \left(\frac{\partial S_B}{\partial U_B}\right)_{V_I} \right\}\, đQ$.

But since the thermodynamical temperature T is just the rate of change of energy with entropy while external parameters are held constant, this can be rewritten as

$\delta S = (1/T_B - 1/T_A)\, đQ$.

So heat will flow from A to B only if the inverse thermodynamical temperature of A is lower than that of B. In most cases (there are exotic counter-examples, notably in quantum systems with bounded energy) thermodynamical temperature is positive, so that this can be restated as: Heat will flow from A to B only if the thermodynamical temperature of A is greater than that of B. For simplicity I confine attention to this case.

If we define two systems as being in thermal equilibrium when placing them in thermal contact does not lead to any heat flow between them, then we have the following thermodynamical results:

1. Two systems each in thermal equilibrium with a third system are at thermal equilibrium with one another; hence, thermal equilibrium is an equivalence relation. (The Zeroth Law of Thermodynamics.)
2. There exist real-valued empirical temperature functions which assign to each equilibrium system X a temperature t(X) such that heat flows from X to Y when they are in thermal contact iff t(X) > t(Y).

(2) trivially implies (1); in phenomenological approaches to thermodynamics the converse is often asserted to be true, but of course various additional assumptions are required to make this inference. For our purposes, though, both are corollaries of statistical mechanics, and "empirical temperatures" are just monotonically increasing functions of thermodynamical temperature.

Returning to control theory, we can now see just what transitions can and cannot be achieved via thermal contact theory. Specifically, the only transitions that can be induced are the heating and cooling of systems, and a system can be heated only if there is another system available at a higher temperature. The exact range of transitions thus achievable will depend on the size of the systems (if I have bodies at temperatures 300 K and 400 K, I can induce some temperature increase in the first, but how much will depend on how quickly the second is cooled). A useful extreme case involves heat baths: Systems at equilibrium assumed to be so large that no amount of thermal contact with other systems will appreciably change their temperature (and which are also assumed to have no controllable parameters, not that this matters for thermal control theory). The control transitions available via thermal contact theory with heat baths are easy to state: Any system can be cooled if its temperature is higher than some available heat bath, or heated if it is cooler than some such bath.

6. Thermodynamics

We are now in a position to do some non-trivial thermodynamics. In fact, we can consider two different thermodynamic theories that can be thought of as two extremes. To be precise: Maximal no-feedback thermodynamics is specified like this:

• The controlled object is a fixed, finite collection of mutually isolated statistical mechanical systems, assumed to be both thermally and parameter stable.
• The control operations are (i) arbitrary entropy-non-decreasing transition maps on the combined states of the system; (ii) leaving the systems alone for a time longer than the equilibration timescale of each system.
• There are no feedback measurements. • The control processes are arbitrary sequences of control operations terminating in operation (ii) (that is, arbitrary sequences after which the systems are allowed to reach equilibrium). The only constraints on this control theory are that control operations do not actually decrease phase-space volume, and that the control operations to apply are chosen once-and-for-all and not changed on the basis of feedback. By contrast, here is minimal thermodynamics, obtained simply by conjoining thermal contact theory and adiabatic control theory: • The control operations are (i) moving two systems into or out of thermal contact; (ii) making smooth changes in the parameters determining the Hamiltonians of one or more system over some finite interval of time; (iii) leaving the systems alone for a time longer than the equilibration timescale of each system. • There are no feedback measurements. • The control processes are arbitrary sequences of control operations terminating in operation (iii) (that is, arbitrary sequences after which the systems are allowed to reach equilibrium). The control theory for maximal thermodynamics is straightforward. The theory induces transitions between equilibrium states; no such transition can decrease entropy; transitions are otherwise totally arbitrary. So we can induce a transition xy between two equilibrium states x, y iff S(x) ≤ S(y). It is a striking feature of thermodynamics that under weak assumptions minimal thermodynamics has exactly the same control theory, so that the apparently much greater strength of maximal no-feedback thermodynamics is illusory. To begin a demonstration, recall that in the previous sections we defined the heat flow into a system as the change in its energy due to thermal contact, and the work done on a system as the change in its energy due to modification of the parameters. By decomposing any control process into periods of arbitrarily short length—in each of which we can linearise the total energy change as the change that would have occurred due to parameter change while treating each system as isolated plus the change that would have occurred due to entropy-increasing evolution while holding the dynamics fixed—and summing the results, we can preserve these concepts in minimal thermodynamics. For any system, we then have Δ U = Q + W , where U is the expected energy, Q is the expected heat flow into the system, and W is the expected work done on the system. This result also holds for any collection of systems, up to and including the entire controlled object; in the latter case, Q is zero and W can again be interpreted as the energy cost of performing the control process. The reader will probably recognise this result as another form of the First Law of Thermodynamics. In this context, it is a fairly trivial result: Its content, insofar as it has any, is just that there is a useful decomposition of energy changes by their various causes. In phenomenological treatments of thermodynamics the First Law gets physical content via some independent understanding of what “work done” is (in the axiomatic treatment of [10], for instance, it is understood in terms of the potential energy of some background weight). But the real content of the First Law from that perspective is that there is a thermodynamical quantity called energy which is conserved. In our microphysical-based framework the conservation of (expected) energy is a baseline assumption and does not need to be so derived. 
The concept of a quasi-static transition also generalises from adiabatic control theory to minimal thermodynamics. If dU is the change in system energy during an extremely small step of such a control process, we have

$dU = \sum_I \left(\frac{\partial U}{\partial V_I}\right)_{V_J, S} dV_I + \left(\frac{\partial U}{\partial S}\right)_{V_I} dS$

and, given that quasi-static adiabatic processes are entropy-conserving, we can identify the first term as the expected work done on the system in this small step and the second as the expected heat flow into the system. Using our existing definitions we can rewrite this as

$dU = -\sum_I P_I\, dV_I + T\, dS$,

yet another form of the First Law, but it is important to recognise that from our perspective, the expression itself has no physical content and is just a result of partial differentiation. The content comes in the identification of the first term as work and the second as heat.

Putting our results so far together, we know that:

1. Any given system can be induced to make any entropy-nondecreasing transition between states.
2. Any given system's entropy may be reduced by allowing it to exchange heat with a system at a lower temperature, at the cost of increasing that system's entropy by a greater amount.
3. The total entropy of the controlled object may not decrease.

The only remaining question is then: Which transitions between collections of systems that do not decrease the total entropy can be induced by a combination of (1) and (2)? So far as I know there is no general answer to the question. However, we can answer it fully if we assume that one of the systems is what I will call a Carnot system: A system such that for any value of S, $(\partial U/\partial S)_{V_I}$ takes all positive values on the constant-S hypersurface. The operational content of this claim is that a Carnot system in any initial equilibrium state can be controlled so as to take on any temperature by an adiabatic quasi-static process.

The ideal gas is an example of a Carnot system: Informally, it is clear that its temperature can be arbitrarily increased or decreased by adiabatically changing its volume. More formally, from its equation of state (8) we have

$0 = \frac{N}{V}\, dV + \frac{3N}{2U}\, dU \Big|_{\delta S = 0}$,

so that the energy can be changed arbitrarily through adiabatic processes, and the temperature is proportional to the energy. Of course, no gas is ideal for all temperatures and in reality the most we can hope for is a system that behaves as a Carnot system across the relevant range of temperatures.

In any case, given a Carnot system we can transfer entropy between systems with arbitrarily little net entropy increase. For, given two systems at temperatures $T_A$, $T_B$ with $T_A > T_B$, we can (i) adiabatically change the temperature of the Carnot system to just below $T_A$; (ii) place it in thermal contact with the hotter system, so that heat flows into the Carnot system with arbitrarily little net entropy increase; (iii) adiabatically lower the Carnot system to a temperature just above $T_B$; (iv) place it in thermal contact with the colder system, so that (if we wait the right period of time) heat flows out of the Carnot system with again arbitrarily little net entropy increase. (In the thermodynamics literature this kind of process is called a Carnot cycle: Hence my name for Carnot systems.)

We then have a complete solution to the control problem for minimal thermodynamics: The possible transitions of the controlled object are exactly those which do not decrease the total entropy of all of the components. So "minimal" thermodynamics is, indeed, not actually that minimal.
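A two-line bookkeeping check of why the Carnot shuttle works (the numbers are illustrative only): moving heat directly across a temperature gap produces entropy, whereas routing it through a quasi-static Carnot system, which rescales the heats so that the entropy absorbed at $T_A$ equals the entropy released at $T_B$, produces (ideally) none.

```python
T_A, T_B, Q_A = 400.0, 300.0, 1.0        # illustrative temperatures and heat drawn from the hot body

# Direct thermal contact: heat Q_A flows straight from T_A to T_B.
dS_direct = -Q_A / T_A + Q_A / T_B       # > 0

# Via a quasi-static Carnot system: the heat delivered to the cold body is rescaled.
Q_B = Q_A * T_B / T_A
dS_carnot = -Q_A / T_A + Q_B / T_B       # = 0: the entropy transfer is (ideally) lossless

print(dS_direct, dS_carnot)
```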
The major loophole in all this—feedback—will be discussed from Section 9 onwards. Firstly, though, it will be useful to make a connection with the Second Law of Thermodynamics in its more phenomenological form. 7. The Second Law of Thermodynamics While “the Second Law of Thermodynamics” is often read simply as synonymous with “entropy cannot decrease”, in phenomenal thermodynamics it has more directly empirical statements, each of which translates straightforwardly into our framework. Here’s the first: The Second Law (Clausius statement): No sequence of control processes can induce heat flow Q from one system with an inverse temperature 1/TA, heat flow Q into a second system with a lower inverse temperature 1/TB, while leaving the states of all other systems unchanged. This is a generalisation of the basic result of thermal contact theory, and the argument is essentially the same: Any such process decreases the entropy of the first system by more than it increases the entropy of the second. Since the entropy of the remaining systems is unchanged (they start and end the process in the same equilibrium states), the process is overall entropy-decreasing and thus forbidden by the statistical-mechanical dynamics. If both temperatures are positive, the condition becomes the more familiar one that TB cannot be higher than TA. And the second: The Second Law (Kelvin statement): No sequence of control processes can induce heat flow Q from any one system with positive temperature while leaving the states of all other systems unchanged. By the conservation of energy, any such process must result in net work Q being generated; an alternative way to give the Kelvin version is therefore “no process can extract heat Q from one system and turn it into work while leaving the states of all other systems unchanged”. In any case, the Kelvin version is again an almost immediate consequence of the principle that Gibbs entropy is non-decreasing: Since temperature is the rate of change of energy with entropy at constant parameter value, heat flow from a positive-temperature system must decrease its entropy, which (since the other systems are left unchanged) is again forbidden by the statistical-mechanical dynamics. In both cases the “leaving the states of all other systems unchanged” clause is crucial. It is trivial to move heat from system A to system B with no net work cost if, for instance, system C, a box of gas, is allowed to expand in the process and generate enough work to pay for the work cost of the transition. Thermodynamics textbooks often use the phrase “operating in a cycle” to describe this constraint, and it will be useful to cast that notion more explicitly in our framework. Specifically, let’s define heat bath thermodynamics (without feedback) as follows: • The controlled object consists of (a) a collection of heat baths at various initial temperatures; (b) another finite collection of statistical-mechanical systems, the auxiliary object, containing at least one Carnot system, and whose initial states are unconstrained. • The control operations are (a) moving one or more systems in the auxiliary object into or out of thermal contact with other auxiliary-object systems and/or with one or more heat baths; (b) applying any desired smooth change to the parameters of the systems in the auxiliary object over some finite period of time; (c) inducing one or more systems in the auxiliary object to evolve in an arbitrary entropy-nondecreasing way. • There are no feedback measurements. 
• A control process is an arbitrary sequence of control operations. In this framework, a control process is cyclic if it leaves the state of the auxiliary object unchanged. The Clausius and Kelvin statements are then, respectively, that no cyclic process can have as its sole effect on the heat baths (a) that net heat Q flows from one bath to one with a higher temperature at no cost in work, and (b) that net heat Q from one bath is converted into work. And again, these are fairly immediate consequences of the fact that entropy is nondecreasing. But perhaps we don’t care about cyclic processes? What does it matter what the actual final state of the auxiliary system is, provided the process works? We can make this intuition more precise like this: A control process delivers a given outcome repeatably if (i) we can perform it arbitrarily often using the final state of each process as the initial state of the next, and (ii) the Hamiltonian of the auxiliary object is the same at the end of each process as at the beginning. The Clausius statement, for instance, is now that no process can repeatably cause any quantity Q of heat to flow from one heat bath to another of higher temperature at no cost in work and with no heat flow between other heat baths. This offers no real improvement, though. In the Clausius case, any such heat flow is entropy-decreasing on the heat baths: Specifically, if they have temperatures TA and TB with TA > TB, a transfer of heat Q between them leads to an entropy increase of Q/(TATB). So the entropy of the auxiliary object must increase by at least this much. By conservation of energy the auxiliary object’s expected energy must be constant in this process. But the entropy of the auxiliary object has a maximum for given expected energy [11] and so this can be carried out only finitely many times. A similar argument can readily be given for the Kelvin statement. I pause to note that we can turn these entirely negative constraints on heat and work into quantitative limits in a familiar way by using our existing control theory results. (Here I largely recapitulate textbook thermodynamics.) Given two heat baths having temperatures TA, TB with TA > TB, and a Carnot system initially at temperature TA, the Carnot cycle to transfer heat from the colder system to the hotter is: Adiabatically transition the Carnot system to the lower temperature TB. Place the Carnot system in thermal contact with the lower-temperature heat bath, and modify its parameters quasi-statically so as to cause heat to flow from the heat bath to the system. (That is, carry out modifications which if done adiabatically would decrease the system’s temperature.) Do so until heat QB has been transferred to the system. Adiabatically transition the Carnot system to temperature TA. Place the Carnot system in thermal contact with the higher-temperature heat bath, and return its parameters quasi-statically to their initial values. At the end of this process the Carnot system has the same temperature and parameter values as at the beginning and so will be in the same equilibrium state; the process is therefore cyclic, and the entropy and energy of the Carnot system will be unchanged. But the entropy of the system is changed only by the heat flow in steps 2 and 4. If the heat flow out of the system in step 4 is QA, then the entropy changes in those steps are respectively +QB/TB and −QA/TA, so that QA/QB = TA/TB. 
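As a concrete instance of the cycle just described (the numbers are illustrative), take $T_A = 400$, $T_B = 300$ and heat $Q_B = 3$ drawn from the colder bath in step 2; the sketch below checks the heat, work and entropy bookkeeping before the general formulas are stated.

```python
T_A, T_B, Q_B = 400.0, 300.0, 3.0

Q_A = Q_B * T_A / T_B          # heat delivered to the hotter bath in step 4: Q_A/Q_B = T_A/T_B
W = Q_A - Q_B                  # net work done on the Carnot system over the cycle

# Entropy bookkeeping: the Carnot system returns to its initial state, and the two
# baths change by -Q_B/T_B + Q_A/T_A = 0, so the whole cycle is (ideally) entropy-neutral.
print(Q_A, W, -Q_B / T_B + Q_A / T_A)
```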
By conservation of energy the net work done on the Carnot system in the cycle is $W = Q_A - Q_B$, and we have the familiar result that

$W = \frac{(T_A - T_B)}{T_B}\, Q_B$

for the amount of work required by a Carnot cycle-based heat pump to move a quantity of heat $Q_B$ from a lower- to a higher-temperature heat bath. Since the process consists entirely of quasi-static modifications of parameters (and the making and breaking of thermal contact), it can as readily be run in reverse, giving us the equally familiar formula for the efficiency of a heat engine: $1 - T_B/T_A$. And since (on pain of violating the Kelvin statement) all reversible heat engines have the same efficiency (and all irreversible ones a lower efficiency), this result is general and not restricted to Carnot cycles.

8. The One-Molecule Carnot System

The Carnot systems used in our analysis so far have been assumed to be parameter-stable, thermally stable systems that can be treated via the microcanonical ensemble (and thus, in effect, to be macroscopically large). But in fact, this is an overly restrictive conception of a Carnot system, and it will be useful to relax it. All we require of such a system is that for any temperature T it possesses states which will transfer heat to and from temperature-T heat baths with arbitrarily low entropy gain, and that it can be adiabatically and quasi-statically transitioned between any two such states.

As I noted in Section 5, it is a standard result in statistical mechanics that a system of any size in equilibrium with a heat bath of temperature T is described by the canonical distribution for that temperature, having probability density at energy U proportional to $e^{-U/T}$. There is no guarantee that adiabatic, quasi-static transitions preserve the form of the canonical ensemble, but any system where this is the case will satisfy the criteria required for Carnot systems. I call such systems canonical Carnot systems; from here on, Carnot systems will be allowed to be either canonical or microcanonical.

To get some insight into which systems are canonical Carnot systems, assume for simplicity that there is only one parameter V and that the Hamiltonian can be written in the form required by the adiabatic theorem:

$\hat{H}[V] = \sum_i U_i(V)\, |\psi_i(V)\rangle\langle\psi_i(V)|$.

Then if the system begins in canonical equilibrium, its initial state is

$\rho(V) = \frac{1}{Z} \sum_i e^{-\beta U_i(V)}\, |\psi_i(V)\rangle\langle\psi_i(V)|$.

By the adiabatic theorem, if V is altered sufficiently slowly to V′ while the system continues to evolve under Hamiltonian dynamics, it will evolve to

$\rho(V') = \frac{1}{Z} \sum_i e^{-\beta U_i(V)}\, |\psi_i(V')\rangle\langle\psi_i(V')|$.

This will itself be in canonical form if we can find β′ and Z′ such that

$\frac{e^{-\beta U_i(V)}}{Z} = \frac{e^{-\beta' U_i(V')}}{Z'}$,

for which a necessary and sufficient condition is that

$U_i(V') - U_j(V') = f(V, V')\,\big(U_i(V) - U_j(V)\big)$,

or equivalently that $U_i(V) = f(V) + g(i)h(V)$.

For an ideal gas, elementary quantum mechanics tells us that the energy of a given mode is inversely proportional to the volume of the box in which the gas is confined:

$U_i(V) = \frac{g(i)}{V}$.

(Quick proof sketch: increasing the size of the box by a factor K decreases the gradient by that factor, and hence decreases the kinetic energy density by a factor K²; energy is energy density × volume.) So an ideal gas is a canonical Carnot system. This result is independent of the number of particles in the gas and independent of any assumption that the gas spontaneously equilibrates.
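A quick numerical check of the claim that $U_i(V) = g(i)/V$ yields a canonical Carnot system (the mode constants $g(i)$ below are arbitrary toy values): after an adiabatic volume change the level populations are still of canonical form, at the new inverse temperature $\beta' = \beta V'/V$.

```python
import numpy as np

g = np.arange(1.0, 11.0)              # toy mode constants g(i)
beta, V, V_new = 1.0, 2.0, 1.0        # compress adiabatically from V = 2 to V' = 1

p = np.exp(-beta * g / V)
p /= p.sum()                           # canonical populations before the change

# The adiabatic theorem carries each population along unchanged; check that the same
# populations are canonical for the new energies g(i)/V' at beta' = beta V'/V.
beta_new = beta * V_new / V
p_new = np.exp(-beta_new * g / V_new)
p_new /= p_new.sum()

print(np.allclose(p, p_new))           # True: the temperature has risen to T' = T V / V'
```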
So in principle, even a gas with a single particle—the famous one-molecule gas introduced by [12]—is sufficient to function as a Carnot system. Any repeatable transfer of heat between heat baths via arbitrary entropy-non-decreasing operations on auxiliary systems can in principle be duplicated using only quasi-static operations on a one-molecule gas [13]. For the rest of the paper, I will consider how the account developed is modified when feedback is introduced. The one-molecule gas was introduced into thermodynamics for just this purpose, and will function as a useful illustration.

9. Feedback

What happens to the Gibbs entropy when a system with state ρ is measured? The classical case is easiest to analyse: Suppose phase space is decomposed into disjoint regions $\Gamma_i$ and that $\int_{\Gamma_i} \rho = p_i$. Then $p_i$ is the probability that a measurement of which phase-space region the system lies in will give result i. The state can be rewritten in the form

$\rho = \sum_i p_i \rho_i$, where $\rho_i(x) = \frac{1}{p_i}\rho(x)$ if $x \in \Gamma_i$ and is zero otherwise,

and by probabilistic conditionalisation, $\rho_i$ is the state of the system after the measurement if result i is obtained. The expected value of the Gibbs entropy after the measurement ("p-m") is then

$S_G^{p\text{-}m} = \sum_i p_i\, S_G(\rho_i)$.

But we have

$S_G(\rho) = -\int \Big(\sum_i p_i \rho_i\Big) \ln \Big(\sum_j p_j \rho_j\Big)$

which, since the $\rho_i$ are mutually disjoint, reduces to

$S_G(\rho) = -\sum_i \int p_i \rho_i \ln (p_i \rho_i) = -\sum_i p_i \ln p_i \int \rho_i - \sum_i p_i \int \rho_i \ln \rho_i$.

But the integral in the first term is just 1 (since the $\rho_i$ are normalised) and the integral in the second term is $-S_G(\rho_i)$. So we have

$S_G^{p\text{-}m} = S_G(\rho) + \sum_i p_i \ln p_i$.

That is, measurement may decrease entropy for two reasons. Firstly, pure chance may mean that the measurement happens to yield a post-measurement state with low Gibbs entropy. But even the average value of the post-measurement entropy decreases, and the level of the decrease is equal to the Shannon entropy $-\sum_i p_i \ln p_i$ of the probability distribution of measurement outcomes. A measurement process which has a sufficiently dramatic level of randomness could, in principle, lead to a very sharp decrease in average Gibbs entropy [14].

In the quantum case, the situation is slightly more complicated. We can represent the measurement by a collection of mutually orthogonal projectors $\hat{\Pi}_i$ summing to unity, and define measurement probabilities $p_i = \mathrm{Tr}(\hat{\Pi}_i \rho)$ and post-measurement states $\rho_i = \frac{1}{p_i} \hat{\Pi}_i \rho \hat{\Pi}_i$, but ρ is not necessarily equal to a weighted sum of these states. We can think of the measurement process, however, as consisting of two steps: A diagonalisation of ρ so that it does have this form (a non-selective measurement, or Lüders projection, in foundations-of-QM jargon) followed by a random selection of the state. Mathematically the first process increases Gibbs (i.e., von Neumann) entropy, and the second has the same form as the classical analysis, so that in the quantum case the equality above holds as an inequality (with ≥) rather than as a strict equality. (Of course, how this process of measurement is to be interpreted—and even if it can really be thought of as measuring anything—is a controversial question and depends on one's preferred solution to the quantum measurement problem.)

Insofar as "the Second Law of Thermodynamics" is taken just to mean "entropy never decreases", then, measurement is a straightforward counter-example, as has been widely recognised (see, for instance, [12,15], [16] (ch. 5), or [17]) [18].
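The classical identity above is easy to verify numerically in a discretised toy model (the sixteen "cells" and their grouping into outcomes are arbitrary choices): take a distribution over the cells, group the cells into measurement outcomes $\Gamma_i$, and compare the expected post-measurement entropy with $S_G(\rho) + \sum_i p_i \ln p_i$.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(1)
rho = rng.dirichlet(np.ones(16))                 # distribution over 16 phase-space cells
outcome = np.repeat(np.arange(4), 4)             # a 4-outcome measurement: cells grouped into Gamma_i

p_i = np.array([rho[outcome == i].sum() for i in range(4)])
S_pm = sum(p_i[i] * entropy(rho[outcome == i] / p_i[i]) for i in range(4))

# Expected post-measurement entropy equals S_G(rho) minus the Shannon entropy of the outcomes.
print(S_pm, entropy(rho) + np.sum(p_i * np.log(p_i)))
```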
From the control-theory perspective, though, the interesting content of thermodynamics is which transitions it allows and which it forbids, and the interesting question about feedback measurements is whether they permit transitions which feedback-free thermodynamics does not. Here the answer is again unambiguous: It does. To be precise: Define heat bath thermodynamics with feedback as follows: • Arbitrary feedback measurements may be made. • A control process is an arbitrary sequence of control operations. In this framework, the auxiliary object can straightforwardly be induced (with high probability) to transition from equilibrium state x to equilibrium state y with SG(y) < SG(x). Firstly, pick a measurement such that performing it transitions x to xi with probability pi, such that i p i ln p i S G ( x ) S G ( y . ) The expected value of the entropy of the post-measurement state will be much less than that of y; for an appropriate choice of measurement, with high probability the actually-obtained post-measurement state xi will satisfy SG(xi) < SG(y). Now perform an entropy-increasing transformation from xi to y. (For instance, perform a Hamiltonian transformation of xi to some equilibrium state, then use standard methods of equilibrium thermodynamics to change that state to y). As such, the scope of controlled transitions of the auxiliary object is total: It can be transitioned between any two states. As a corollary, the Clausius and Carnot versions of the Second Law do not apply to this control theory: energy can be arbitrarily transferred from one heat bath to another, or converted from a heat bath into work. In fact, the full power of the arbitrary transformations available on the auxiliary system is not needed to produce these radical results. Following Szilard’s classic method, let us assume that the auxiliary system is a one-molecule gas confined to a cylindrical container by a movable piston at each end, so that the Hamiltonian of the gas is parametrised by the position of the pistons. Now suppose that the position of the gas atom can be measured. If it is found to be closer to one piston than the other, the second piston can rapidly be moved at zero energy cost to the mid-point between the two. As a result, the volume of the gas has been halved without any change in its internal energy (and so its entropy has been decreased by ln 2; cf Equation (8).) If we now quasi-statically and adiabatically expand the gas to its original volume, its energy will decrease and so work will have been extracted from it. Now suppose we take a heat bath at temperature T and a one-atom gas at equilibrium also at temperature T. The above process allows us to reduce the energy of the box and extract some amount of work δW from it. Placing it back in thermal contact with the heat bath will return it to its initial state and so, by conservation of energy, extracts heat δQ = δW from the bath. This is a straightforward violation of the Kelvin version of the Second Law. If we use the extracted work to heat a heat bath which is hotter than the original bath, we generate a violation of the Clausius version also. To make this explicit, let’s define Szilard theory as follows: • The controlled object consists of (a) a collection of heat baths at various initial temperatures; (b) a one-atom gas as defined above. 
• The control operations are (a) moving the one-atom gas into or out of thermal contact with one or more heat baths; (b) applying any desired smooth change in the positions of the pistons confining the one-atom gas. • The only possible feedback measurement is a measurement of the position of the atom in the one-atom gas. • A control process is an arbitrary sequence of control operations. Then the control operations available in Szilard theory include arbitrary cyclic transfers of heat between heat baths and conversion of heat into work. The use of a one-atom gas in this algorithm is not essential. Suppose that we measure instead the particle density in each half of a many-atom gas at equilibrium Random fluctuations ensure that one side of the gas is at a slightly higher density than the other; compressing the gas slightly using the piston on the low-density side will reduce its volume at a slightly lower cost in work than would be possible on average without feedback; iterating such processes will again allow heat to be converted into work. (The actual numbers in play here are utterly negligible, of course—as for the one-atom gas—but we are interested here in in-principle possibility, not practicality [19]. The most famous example of measurement-based entropy decrease, of course, is Maxwell’s demon: A partition is placed between two boxes of gas initially at equilibrium at the same temperature. A flap, which can be opened or closed, is placed in the partition, and at short time intervals δt the boxes are measured to ascertain if, in the next period of time δt any particles will collide with the flap from (a) the left or (b) the right. If (a) holds but (b) does not, the flap is opened for the next δt seconds. Applying this alternation of feedback measurement and control operation for a sufficiently long time will reliably cause the density of the gas on the left to be much lower than on the right. Quasi-statically moving the partition to the left will then allow work to be extracted. The partition can then be removed, and reinserted in the middle; the temperature of the box will have been reduced. Placing the box in thermal contact with a heat bath will then extract heat from the bath equal to the work done; the Kelvin version of the Second Law is again violated. I will refrain from formally stating the “demonic control theory” into which these results could be embedded, but it is fairly clear that such a theory could be formulated. 10. Landauer’s Principle and the Physical Implementation of Control Processes Szilard control theory, and demonic control theory, allow thermodynamically forbidden transitions. Big deal, one might reasonably think: So does abracadabra control theory, where the allowed control operations include completely arbitrary shifts in a system’s state. We don’t care about abracadabra control theory because we have no reason to think that it is physically possible; we only have reason to care about entropy-decreasing control theories based on measurement if we have reason to think that they are physically possible. Of course, answering the general question of what is physically possible isn’t easy. Is it physically possible to build mile-long relativistic starships? The answer turns on rather detailed questions of material science and the like. But no general physical principle forbids it. 
Similarly, detailed problems of implementation might make it impossible to build a scalable quantum computer, but the theory of fault-tolerant quantum computation [20,21] gives us strong reasons to think that such computers are not ruled out in principle. On the other hand, we do have reason to think that faster-than-light starships, or computers that can compute Turing-non-computable functions are in principle ruled out. It is this “in-principle” question of implementability that is of interest here. To answer that question, consider again heat-bath control theory. The action takes place mostly with respect to the auxiliary object: The heat baths are not manipulated in any way beyond moving into or out of contact with that object. We can then imagine treating the auxiliary object, and the control machinery, as a single larger system: We set the system going, and then simply allow it to run. It churns away, from time to time establishing or breaking physical contact with a heat bath or perhaps drawing on or topping up an external energy reservoir, and in due course completes the control process it was required to implement. This imagined treatment of the system can be readily incorporated into our system: We can take the auxiliary object of heat-bath theory with feedback together with its controlling mechanisms, draw a box around both together, and treat the result as a single auxiliary object for a heat-bath theory without feedback. Put another way, if the feedback-based control processes we are considering are physically possible, we ought to be able to treat the machinery that makes the measurement as physical, and the machinery that decides what operation to perform based on a given feedback result as likewise physical, and treat all that physical apparatus as part of the larger auxiliary object. Let’s call the assumption that this is possible the automation constraint; to violate it is to assume that some aspects of computation or of measurement cannot be analysed as physical processes, an assumption I will reject here without further discussion. But we already know that heat bath theory without feedback does not permit any repeatable transfer of heat into work, or of a given quantity of heat from a cold body to a hotter body. Such transfers are possible, but only if the auxiliary object increases in Gibbs entropy. And given that the auxiliary object breaks into controlling sub-object and controlled sub-object and that ex hypothesi the control processes we are considering leave the controlled sub-object’s state unchanged, we can conclude that the Gibbs entropy of the controlling sub-object must have increased. This raises an interesting question. From the perspective of the controlling system, control theory with feedback looks like a reasonable idealisation, but from the external perspective, we know that something must go wrong with that idealisation. The resolution of this problem lies in the effects of the measurement process on the controlling system itself: The process of iterated measurement is radically indeterministic from the perspective of the controlling object, and it can have only a finite number of relevantly distinct states, so eventually it runs out of states to use. This point (though controversial; cf [22,23], and references therein) has been widely appreciated in the physics literature and can be studied from a variety of perspectives; in this rest of this section, I briefly describe the most commonly discussed one. 
Keep in mind in the sequel that we already know that somehow the controlling system’s strategy must fail (at least given the automation constraint): The task is not to show that it does but to understand how it does. The perspective we will discuss uses what might be called a computational model of feedback: It is most conveniently described within quantum mechanics. We assume that the controlling object consists, at least in part, of some collection of N systems - bits—each of whose Hilbert space is the direct sum of two memory subspaces 0 and 1 and each of which begins with its state somewhere in the 0 subspace. A measurement with two outcomes is then a dynamical transition which leaves the measured system alone and causes some so-far-unused bit to transition into the 1 subspace if one outcome is obtained and to remain in the 0 subspace if the other is obtained. That is, if is some unitary transformation of the bit’s Hilbert space that maps the 0 subspace into the 1 subspace, the measurement is represented by some unitary transformation V ^ = P ^ T ^ + ( 1 ^ P ^ ) 1 ^ on the joint system of controlled object and bit (with (, 1 − ) being the projectors defining the measurement. A feedback-based control processes based on the result of this measurement is then represented by a unitary transformation of the form U ^ = U ^ 0 P ^ 0 + U ^ 1 P ^ 1 where 0, 1 project onto the 0 and 1 subspaces and Û0 and Û1 are unitary operations on the controlled system. The combined process of followed by Û represents the process of measuring the controlled object and then performing Û0 on it if one result is obtained and Û1 if the other is. Measurements with 2N outcomes, and control operations based on the results of such measurements, can likewise be represented through the use of N bits. The classical case is essentially identical (but the formalism of quantum theory makes the description simpler in the quantum case). The problem with this process is that eventually, the system runs out of unused bits. (Note that the procedure described above only works if the bit is guaranteed to be in the 0 subspace initially. To operate repeatably, the system will then have to reset some bits to the initial state. But Landauer’s Principle states that such resetting carries an entropy cost. Since the principle is controversial (at least in the philosophy literature!) I will work through the details here from a control-theory perspective. Specifically, let’s define a computational process as follows: It consists of N bits (the memory) together with a finite system (the computer) and another system (the environment). A computation is a transition which is deterministic at the level of bits: that is, if the N bits begin, collectively, in subspaces that encode the binary form of some natural number n, after the transition they are found, collectively, in subspaces encoding f(n) for some fixed function f. (Reference [24] is a highly insightful discussion which inter alia considers the case of indeterministic computation.) The control processes are arbitrary unitary (quantum) or Hamiltonian (classical) evolutions on the combined system of memory, computer, and environment; the question of interest is what constraints on the transitions of computer and environment are required for given computational transitions to be implemented. 
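As a concrete (and entirely toy) instance of the construction above, the matrices below realise the measurement unitary $\hat V = \hat P \otimes \hat T + (\hat 1 - \hat P)\otimes \hat 1$ and the feedback unitary $\hat U = \hat U_0 \otimes \hat P_0 + \hat U_1 \otimes \hat P_1$ for a two-level controlled object and a single memory bit; the particular choices of $\hat P$, $\hat U_0$ and $\hat U_1$ are illustrative only.

```python
import numpy as np

P  = np.diag([1.0, 0.0])                    # projector defining the measurement on the object
Tb = np.array([[0.0, 1.0], [1.0, 0.0]])     # T: maps the bit's 0 subspace into its 1 subspace
I2 = np.eye(2)

V = np.kron(P, Tb) + np.kron(I2 - P, I2)    # record the measurement outcome in the bit

P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # projectors onto the bit's 0 and 1 subspaces
U0 = np.eye(2)                                      # feedback operation if the bit reads 0
U1 = np.array([[0.0, 1.0], [1.0, 0.0]])             # feedback operation if the bit reads 1
U = np.kron(U0, P0) + np.kron(U1, P1)

for M in (V, U, U @ V):
    print(np.allclose(M @ M.conj().T, np.eye(4)))   # each step, and their composition, is unitary
```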
For the sake of continuity with the literature I work in the classical framework (the quantum generalisation is straightforward); for simplicity I assume that the bits have equal phase-space volume V assigned to 0 and 1. If the function f is one-to-one, the solution to the problem is straightforward. The combined phase space of the memory can be partitioned into 2^N subspaces, each of equal volume and each labelled with the natural number they represent. There is then a phase-space-preserving map from n to f(n) for each n, and these maps can be combined into a single map from the memory to itself. One-to-one (‘reversible’) computations can then be carried out without any implications for the states of computer or environment. But now suppose that the function f takes values only between 1 and 2^M (M < N), so that any map implementing f must map the bits M+1, …, N into their zero subspaces independent of input. Any such map would map the uniform distribution over the memory (which has entropy N ln 2V) to one with support in a region of volume (2V)^M × V^(N−M) (and so with maximum entropy M ln 2V + (N−M) ln V). Since the map as a whole is by assumption entropy-preserving, it must increase the joint entropy of system plus environment by (N−M) ln 2. In the limiting case of reset, M = 0 (f(n) = 0 for all n), and so the computer and environment must jointly increase in entropy by at least N ln 2. This is Landauer’s principle: Each bit that is reset generates at least ln 2 entropy. If the computer is to carry out the reset operation repeatably, its own entropy cannot increase without limit. So a repeatable reset process dumps at least entropy ln 2 per bit into the environment. In the special case where the environment is a heat bath at temperature T, Landauer’s principle becomes the requirement that reset generates T ln 2 heat per bit. A more realistic feedback-based control theory, then, might incorporate Landauer’s Principle explicitly, as in the following (call it computation heat-bath thermodynamics):
• The controlled object consists of (a) a collection of heat baths at various initial temperatures; (b) another finite collection of statistical-mechanical systems, the auxiliary object, containing at least one Carnot system, and whose initial states are unconstrained; (c) a finite number N of 2-state systems (“bits”), the computational memory, each of which begins in some fixed (“zero”) initial state with probability 1.
• The control operations are (a) moving one or more systems in the auxiliary object into or out of thermal contact with other auxiliary-object systems and/or with one or more heat baths; (b) applying any desired smooth change to the parameters of the systems in the auxiliary object over some finite period of time; (c) inducing one or more systems in the auxiliary object to evolve in an arbitrary entropy-nondecreasing way; (d) erasing M bits of the memory—that is, restoring them to their zero states—and at the same time transferring heat MT ln 2 to some heat bath at temperature T; (e) applying any computation to the computational memory.
• Arbitrary feedback measurements may be made (including measurements of the memory bits) provided that: (a) they have finitely many results; (b) the result of the measurement is faithfully recorded in the state of some collection of bits which initially each have probability 1 of being in the 0 state.
• A control process is an arbitrary sequence of control operations.
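The entropy bookkeeping behind the erasure step (control operation (d) above) is just arithmetic, and can be checked in a few lines. The following sketch is my own illustration, not part of the original text; it assumes the uniform distribution over the memory and equal cell volume V for the 0 and 1 states of each bit, exactly as in the derivation above.

```python
import numpy as np

def erasure_entropy_cost(N, M, V=1.0):
    """Minimum entropy (in nats, k_B = 1) that must be exported to computer plus
    environment when an entropy-preserving map forces bits M+1..N of an N-bit
    memory into their zero cells, starting from the uniform distribution.
    Each bit is assigned phase-space volume V for '0' and V for '1'."""
    S_before = N * np.log(2 * V)                           # uniform over volume (2V)^N
    S_after_max = M * np.log(2 * V) + (N - M) * np.log(V)  # support has volume (2V)^M * V^(N-M)
    return S_before - S_after_max                          # = (N - M) ln 2

N = 8
for M in (8, 4, 0):
    print(M, erasure_entropy_cost(N, M))   # 0, 4 ln 2, 8 ln 2

# Full reset of N bits against a heat bath at temperature T dumps heat >= N*T*ln 2.
T = 1.0
print("minimum heat for full reset:", N * T * np.log(2))
```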
At first sight, measurement in this framework is in the long run entropy-increasing: A measurement with 2^M outcomes having probabilities p_1, …, p_{2^M} will reduce the entropy by ΔS = −∑_i p_i ln p_i, but the maximum value of this is M ln 2, which is the entropy increase required to erase the M bits required to record the result. But as Zurek [15] has pointed out, Shannon’s noiseless coding theorem allows us to compress those M bits to, on average, −∑_i p_i ln p_i bits, so that the overall process can be made entropy-neutral. This strategy of using Landauer’s principle to explain why Maxwell demons cannot repeatably violate the Second Law has a long history (see [25] and references therein). It has recently come under sharp criticism by John Earman and John Norton [22,26] as either trivial or question-begging: They argue that any such defences (‘exorcisms’) rely on arguments for Landauer’s Principle that are either Sound (that is, start off by assuming the Second Law) or Profound (that is, do not so start off). Exorcisms relying on Sound arguments are question-begging; those relying on Profound arguments leave us no good reason to accept Landauer’s principle in the first place. Responses to Earman and Norton (see, e.g., [27,28]) have generally embraced the first horn of the dilemma, accepting that Landauer’s Principle does assume the Second Law but arguing that use of it can still be pedagogically illuminating. (See [26,29] for responses to this move.) But I believe the dialectic here fails to distinguish between statistical mechanics and thermodynamics. The argument here for Landauer’s Principle does indeed assume that the underlying dynamics are entropy-non-decreasing, and from that perspective appeal to Landauer’s principle is merely of pedagogical value: It helps us to make sense of how feedback processes can be entropy-decreasing despite the fact that any black-box process, even if it involves internal measurement of subsystems, cannot repeatably turn heat into work. But (this is one central message of this paper) that dynamical assumption within statistical mechanics should not simply be identified with the phenomenological Second Law. In Earman and Norton’s terminology, the argument for Landauer’s Principle is Sound with respect to statistical mechanics, but Profound with respect to phenomenological thermodynamics.

11. Conclusion

The results of my exploration of control theory can be summarised as follows: (1) In the absence of feedback, physically possible control processes are limited to inducing transitions that do not lower Gibbs entropy. (2) That limit can be reached with access to very minimal control resources: Specifically, a single Carnot system and the ability to adiabatically control it and put it in thermal contact with other systems. (3) Introducing feedback allows arbitrary transitions. (4) If we try to model the feedback process as an internal dynamical process in a larger system, we find that feedback does not increase the power of the control process. (5) Points (3) and (4) can be reconciled by considering the physical changes to the controlling system during feedback processes. In particular, on a computational model of control and feedback, the entropy cost of resetting the memory used to record the result of measurement at least cancels out the entropy reduction induced by the measurement. I will end with a more general moral. As a rule, and partly for pedagogical reasons, foundational discussions of thermal physics tend to begin with thermodynamics and continue to statistical mechanics.
The task of recovering thermodynamics from successfully grounded statistical mechanics is generally not cleanly separated from the task of understanding statistical mechanics itself, and the distinctive requirements of thermodynamics blur into the general problem of understanding statistical-mechanical irreversibility. Conversely, foundational work on thermodynamics proper is often focussed on thermodynamics understood phenomenologically: A well-motivated and worthwhile pursuit, but not one that obviates the need to understand thermodynamics from a statistical-mechanical perspective. The advantage of the control-theory way of seeing thermodynamics is that it permits a clean separation between the foundational problems of statistical mechanics itself and the reduction problem of grounding thermodynamics in statistical mechanics. I hope to have demonstrated: (a) These really are distinct problems, so that an understanding of (e.g.) why systems spontaneously approach equilibrium does not in itself suffice to give an understanding of thermodynamics; but also (b) that such an understanding, via the interpretation of thermodynamics as the control theory of statistical mechanics, can indeed be obtained, and can shed light on a number of extant problems at the statistical-mechanics/thermodynamics boundary. In writing this paper I benefitted greatly from conversations with David Albert, Harvey Brown, Wayne Myrvold, John Norton, Jos Uffink, and in particular Owen Maroney. I also wish to acknowledge comments from an anonymous referee.

References and Notes

1. In fact, the etymology of thermodynamics, according to the Oxford English Dictionary, is just that it is the study of heat (thermo) and work (dynamics) and their interaction. (I am grateful to Jos Uffink for this observation.) 2. Wallace, D. The Non-Problem of Gibbs vs. Boltzmann Entropy. 2014. unpublished work. 3. Wallace, D. There are No Statistical-Mechanical Probabilities. 2014. unpublished work. 4. Wallace, D. Inferential vs dynamical conceptions of physics. 2013. arXiv:1306.4907v1. 5. Strictly speaking there is generally no finite time after which the system has exactly equilibrated, but for any given level of approximation there will be a timescale after which the system has equilibrated to within that level. 6. Wallace, D. What Statistical Mechanics Actually Does. 2014. unpublished work. 7. This is my own formulation of the argument, but I do not claim any particular originality for it; for an argument along similar lines, see [30] (pp. 541–548). 8. An anonymous reviewer (to whom I’m grateful for prompting me to clarify the preceding argument) queries whether the argument is in any case necessary (this is one thermodynamic fact the readership will take for granted). But notice that what is derived here is not really a thermodynamic fact but a statistical-mechanical one, providing the statistical-mechanical underpinning to one of the basic assumptions of phenomenological thermodynamics. 9. See, e.g., [31] (ch. XVII, sections 10–14) or [32] (pp. 193–196). 10. Lieb, E.H.; Yngvason, J. The physics and mathematics of the second law of thermodynamics. Phys. Rep. 1999, 310, 1–96, doi:10.1016/S0370-1573(98)00082-9. 11. The canonical distribution can be characterised as the distribution which maximises Gibbs entropy for given expected energy, so this maximum is just the entropy of that canonical distribution. 12. Szilard, L. Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen.
Feld, B.T., Szilard, G.W., Eds.; MIT Press: Cambridge, MA, USA; Volume 53. (On the Decrease of Entropy in a Thermodynamic System by the Intervention of Intelligent Beings.) 13. The name one-molecule is a little unfortunate: the “molecule” here is monatomic and lacks internal degrees of freedom. 14. An anonymous referee worries that something is wrong here: “I cannot convert heat to work merely by discovering where I left the car keys”. However, I can convert heat to work (more accurately: increase my capability for turning heat into work) merely by remembering which side of the partition I left my one-molecule gas; that I cannot do this with my car keys relies on mundane features of their non-isolated state and macroscopic scale. 15. Zurek, W.H. Algorithmic randomness and physical entropy. Phys. Rev. A 1989, 40, 4731–4751, doi:10.1103/PhysRevA.40.4731. 16. Albert, D.Z. Time and Chance; Harvard University Press: Cambridge, MA, USA, 2000. 17. Hemmo, M.; Shenker, O. The Road to Maxwell’s Demon: Conceptual Foundations of Statistical Mechanics; Cambridge University Press: Cambridge, UK, 2012. 18. Several of these accounts (notably [16,17]) use a Boltzmannian setup for statistical mechanics (I am grateful to an anonymous referee for pressing this point). I argue in [2] that nothing very deep hangs on the decision to do so; in this context, the main point to note is that Boltzmann entropy, unlike Gibbs entropy, is not reduced by measurement, but rather, measurement gives us the capacity to perform a subsequent entropy-reducing control operation which could not be carried out if our only information about the system was its macrostate and the standard probability distribution. This illustrates the point (which I develop further in [2]) that Boltzmann entropy in the absence of additional probabilistic assumptions is almost totally uninformative about a thermodynamic system’s behaviour. 19. For forceful defence of the idea that the practicalities are what prevents Second Law violation in these cases, see [33]. 20. Preskill, J. Fault-tolerant quantum computation. Introd. Quantum Comput. Inf. 1998, 213. 21. Shor, P.W. Fault-tolerant Quantum Computation. In Proceedings of the IEEE 37th Annual Symposium on Foundations of Computer Science, Los Alamitos, California; IEEE Computer Society Press, 1996; pp. 56–65. 22. Earman, J.; Norton, J. Exorcist XIV: The wrath of Maxwell’s demon. Part II. From Szilard to Landauer and beyond. Stud. Hist. Philos. Mod. Phys. 1999, 30, 1–40. 23. Maroney, O. Information Processing and Thermodynamic Entropy. In Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; 2009. 24. Maroney, O.J.E. The (absence of a) relationship between thermodynamic and logical reversibility. Stud. Hist. Philos. Mod. Phys. 2005, 36, 355–374, doi:10.1016/j.shpsb.2004.11.006. 25. Leff, H.; Rex, A.F. Maxwell’s Demon: Entropy, Information, Computing, 2nd ed.; Institute of Physics Publishing: London, UK, 2002. 26. Norton, J.D. Eaters of the lotus: Landauer’s principle and the return of Maxwell’s demon. Stud. Hist. Philos. Mod. Phys. 2005, 36, 375–411, doi:10.1016/j.shpsb.2004.12.002. 27. Bennett, C.H. Notes on Landauer’s principle, reversible computation, and Maxwell’s demon. Stud. Hist. Philos. Mod. Phys. 2003, 34, 501–510, doi:10.1016/S1355-2198(03)00039-X. 28. Ladyman, J.; Presnell, S.; Short, A. The use of the information-theoretic entropy in thermodynamics. Stud. Hist. Philos. Mod. Phys. 2008, 39, 315–324, doi:10.1016/j.shpsb.2007.11.004. 29. Norton, J. Waiting for Landauer. 2011.
Available online: http://philsci-archive.pitt.edu/8635/ (accessed on 1 December 2013). 30. Tolman, R.C. The Principles of Statistical Mechanics; Oxford University Press: New York, 1938. 31. Messiah, A. Quantum Mechanics, Volume II; North-Holland Publishing Company: Netherlands, 1962. 32. Weinberg, S. Quantum Mechanics; Cambridge University Press: Cambridge, UK, 2013. 33. Norton, J. The End of the Thermodynamics of Computation: A No-Go Result. In Proceedings of the Philosophy of Science Association 23rd Biennial Meeting Collected Papers, San Diego, CA, USA, 2012. Available online: http://philsci-archive.pitt.edu/9658/ (accessed on 1 December 2013).
Ergodic Theory and Dynamical Systems
Positive Lyapunov exponent and minimality for a class of one-dimensional quasi-periodic Schrödinger equations
K. Bjerklöv, Department of Mathematics, Royal Institute of Technology, 100 44 Stockholm, Sweden, and Department of Mathematics, University of Toronto, Toronto, ON, Canada M5S 3G3
We study the discrete quasi-periodic Schrödinger equation \[-(u_{n+1}+u_{n-1})+\lambda V(\theta+n\omega)u_n=Eu_n\] with a non-constant $C^1$ potential function $V:\mathbb{T}\to\mathbb{R}$. We prove that for sufficiently large $\lambda$ there is a set $\Omega\subset\mathbb{T}$ of frequencies $\omega$, whose measure tends to 1 as $\lambda\to\infty$, with the following property. For each $\omega\in\Omega$ there is a ‘large’ (in measure) set of energies E, all lying in the spectrum of the associated Schrödinger operator (and hence giving a lower estimate on the measure of the spectrum), such that the Lyapunov exponent is positive and, moreover, the projective dynamical system induced by the Schrödinger cocycle is minimal but not ergodic. (Received January 23 2004) (Revised October 30 2004)
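For readers who want to see the central quantity of the abstract concretely: the Lyapunov exponent of the Schrödinger cocycle can be estimated from products of the transfer matrices associated with the difference equation above. The sketch below is only an illustration (the cosine potential, the golden-mean frequency and the values of λ and E are my own choices, not taken from the paper).

```python
import numpy as np

def lyapunov_exponent(lam, omega, E, theta=0.0, n_steps=200_000,
                      V=lambda t: np.cos(2 * np.pi * t)):
    """Estimate the Lyapunov exponent of the cocycle defined by
       u_{n+1} = (lam * V(theta + n*omega) - E) * u_n - u_{n-1},
    i.e. the averaged log-growth of transfer-matrix products."""
    vec = np.array([1.0, 0.0])
    total = 0.0
    for n in range(n_steps):
        a = lam * V(theta + n * omega) - E
        A = np.array([[a, -1.0], [1.0, 0.0]])   # one-step transfer matrix
        vec = A @ vec
        norm = np.linalg.norm(vec)
        total += np.log(norm)
        vec /= norm                              # renormalise to avoid overflow
    return total / n_steps

lam, omega = 10.0, (np.sqrt(5.0) - 1.0) / 2.0    # large coupling, golden-mean frequency
for E in (0.0, 1.0, 2.0):
    print(E, lyapunov_exponent(lam, omega, E))   # positive, roughly of size log(lam)
```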
Teaching graduate analysis has inspired me to think about the completeness theorem for Fourier series and the more difficult Plancherel theorem for the Fourier transform on $\mathbb{R}$. There are several ways to prove that the Fourier basis is complete for $L^2(S^1)$. The approach that I find the most interesting, because it uses general tools with more general consequences, is to apply the spectral theorem to the Laplace operator on a circle. It is not difficult to show that the Laplace operator is a self-adjoint affiliated operator, i.e., the healthy type of unbounded operator for which the spectral theorem applies. It's easy to explicitly solve for the point eigenstates of the Laplace operator. Then you can use a Fredholm argument, or ultimately the Arzela-Ascoli theorem, to show that the Laplace operator is reciprocal to a compact operator, and therefore has no continuous spectrum. The argument is to integrate by parts. Suppose that $$\langle -\Delta \psi, \psi\rangle = \langle \vec{\nabla} \psi, \vec{\nabla} \psi \rangle \le E$$ for some energy $E$, whether or not $\psi$ is an eigenstate and even whether or not it has unit norm. Then $\psi$ is microscopically controlled and there is only a compact space of such $\psi$ except for adding a constant. The payoff of this abstract proof is the harmonic completeness theorem for the Laplace operator on any compact manifold $M$ with or without boundary. It also works when $\psi$ is a section of a vector bundle with a connection. My question is whether there is a nice generalization of this approach to obtain a structure theorem for the Laplace operator, or the Schrödinger equation, in non-compact cases. Suppose that $M$ is an infinite complete Riemannian manifold with some kind of controlled geometry. For instance, say that $M$ is quasiisometric to $\mathbb{R}^n$ and has pinched curvature. (Or say that $M$ is amenable and has pinched curvature.) Maybe we also have the Laplace operator plus some sort of controlled potential --- say a smooth, bounded potential with bounded derivatives. Then can you say that the spectrum of the Laplace or Schrödinger operator is completely described by controlled solutions to the PDE, which can be interpreted as "almost normalizable" states? There is one case of this that is important but too straightforward. If $M$ is the universal cover of a torus $T$, and if its optional potential is likewise periodic, then you can use "Bloch's theorem". In other words you can solve the problem for flat line bundles on $T$, where you always just have a point spectrum, and then lift this to a mixed continuous and point spectrum upstairs. So you can derive the existence of a fancy spectrum that is not really explicit, but the non-compactness is handled using an explicit method. I think that this method yields a cute proof of the Plancherel theorem for $\mathbb{R}$ (and $\mathbb{R}^n$ of course): Parseval's theorem as described above gives you Fourier completeness for both $S^1$ and $\mathbb{Z}$, and you can splice them together using the Bloch picture to get completeness for $\mathbb{R}$.
Only a simple remark. In the non-compact case, the paradigmatic example is the harmonic oscillator $$ -\Delta_{\mathbb R^d}+\frac{\vert x\vert^2}{4} $$ with spectrum $\frac{d}{2}+\mathbb N$.
The eigenvectors are the Hermite functions with an explicit expression from the so-called Maxwellian $\psi_0=(2\pi)^{-d/4}\exp\left(-\frac{\vert x\vert^2}{4}\right)$ and the creation operators $(\alpha!)^{-1/2}(\frac{x}{2}-\frac{d}{dx})^\alpha \psi_0$. In one dimension the operator $-\frac{d^2}{dx^2}+x^4$ (quartic oscillator) also has a compact resolvent, but nothing explicit is known about the eigenfunctions. –  Bazin May 2 '12 at 13:53 More subtle is the compactness of the resolvent of the 2D $$ -\Delta_{\mathbb R^2}+x^2y^2. $$ –  Bazin May 2 '12 at 13:54 I just saw this playing around on meta.... Are you asking a question beyond that spectrally almost every solution is polynomially bounded? –  Helge Aug 15 '12 at 18:53 @Helge - That's part of the story, but in the ordinary Plancherel theorem, not the hardest part to state or prove. You would also want some statement about the spectral measure (that is, the projection-valued measure produced by the spectral theorem) associated to the Laplace or Schrödinger operator. Again, if you have a Laplace operator on a closed manifold, there is an algorithm to diagonalize it completely. The completeness theorem is considered very important, and not just the fact that you can find eigenfunctions. –  Greg Kuperberg Aug 18 '12 at 3:16
2 Answers
Since this has not been mentioned, let me point to the Weyl-Stone-Titchmarsh-Kodaira theorem which gives the generalized Fourier transform and Plancherel formula of a selfadjoint Sturm-Liouville operator. The ODE section in Dunford-Schwartz II presents this. See also the nice original paper Kodaira (1949). The (one-dimensional) Schrödinger operator with periodic potential (Hill's operator) is also treated in Kodaira's paper. In several variables, scattering theory provides Plancherel theorems. For the Dirichlet Laplacian in the exterior of a compact obstacle, one can find a result of this kind in chapter 9 of M.E. Taylor's book PDE II. Formula (2.15) in that chapter is the Plancherel theorem of the Fourier transform $\Phi$ defined in (2.8). Stone's formula represents the (projection-valued) spectral measure of a selfadjoint operator as the limit of the resolvent at the real axis. It is a key ingredient in proofs of these results.
Too big to fit well as comment: There is a seeming-technicality which is important to not overlook, the question of whether a symmetric operator is "essentially self-adjoint" or not. As I discovered only embarrassingly belatedly, this "essential self-adjointness" has a very precise meaning, namely, that the given symmetric operator has a unique self-adjoint extension, which then is necessarily given by its (graph-) closure. In many natural situations, Laplacians and such are essentially self-adjoint. But with any boundary conditions, this tends not to be the case, exactly as in the simplest Sturm-Liouville problems on finite intervals, not even getting to the Weyl-Kodaira-Titchmarsh complications. Gerd Grubb's relatively recent book on "Distributions and operators" discusses such stuff. The broader notion of Friedrichs' canonical self-adjoint extension of a symmetric (edit! :) semi-bounded operator is very useful here. At the same time, for symmetric operators that are not essentially self-adjoint, the case of $\Delta$ on $[a,b]$ with varying boundary conditions (to ensure symmetric-ness) shows that there is a continuum of mutually incomparable self-adjoint extensions.
Thus, on $[0,2\pi]$, the Dirichlet boundary conditions give $\sin nx/2$ for integer $n$ as orthonormal basis, while the boundary conditions that values and first derivatives match at endpoints give the "usual" Fourier series, in effect on a circle, by connecting the endpoints. This most-trivial example already shows that the spectrum, even in the happy-simple discrete case, is different depending on boundary conditions.
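The boundary-condition dependence described in this answer is easy to exhibit numerically. The sketch below (my addition, not part of the answer) discretises $-d^2/dx^2$ on $[0,2\pi]$ with Dirichlet and with periodic boundary conditions and compares the lowest eigenvalues with $(n/2)^2$, coming from $\sin(nx/2)$, and with $n^2$, coming from $e^{inx}$.

```python
import numpy as np

def minus_laplacian(n, L=2 * np.pi, periodic=False):
    """Second-order finite-difference matrix for -d^2/dx^2 on [0, L]."""
    h = L / n if periodic else L / (n + 1)
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    if periodic:
        A[0, -1] = A[-1, 0] = -1.0        # identify x = 0 with x = L
    return A / h**2

n = 1000
dirichlet = np.sort(np.linalg.eigvalsh(minus_laplacian(n)))[:5]
periodic  = np.sort(np.linalg.eigvalsh(minus_laplacian(n, periodic=True)))[:5]

print(dirichlet)   # ~ 0.25, 1.00, 2.25, 4.00, 6.25  -> (n/2)^2, n = 1, 2, 3, ...
print(periodic)    # ~ 0, 1, 1, 4, 4                 -> n^2, doubly degenerate for n != 0
```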
Take the 2-minute tour × I was hoping that someone could give me the more fundamental reason that we take as the temporal part of a quantum wavefunction the function $e^{-i\omega t}$ and not $e^{+i\omega t}$? Clearly $e^{-i\omega t}$ solves the time dependent Schrödinger equation whereas $e^{+i\omega t}$ does not. However, the Schrödinger equation, when it was first developed, was merely a hypothesis. It was new physics and, as such, could not be derived from previous work. Hence, why did Schrödinger and his contemporaries choose $e^{-i\omega t}$ and, thus, why does an antiparticle with wavefunction temporal dependence $e^{+i\omega t}$ correspond to backwards time travel or negative energy? share|improve this question 1 Answer 1 up vote 6 down vote accepted In mathematics, there is a complete symmetry between $+i$ and $-i$. Both the imaginary unit and the minus imaginary unit obey $$ i^2 = (-i)^2 = -1 $$ The exchange of $i$ and $-i$ is known as the ${\mathbb Z}_2$ automorphism group of the complex numbers ${\mathbb C}$. When you introduce the complex numbers for the first time, it's a complete convention whether you call a square root of $(-1)$ as $+i$ or $-i$. However, in physics, we have to break the symmetry between $+i$ and $-i$ because we must know whether a wave in a particular situation is $\exp(i\omega t)$ or $\exp(-i\omega t)$, for example. In particular, $xp-px=i\hbar$ and not $-i\hbar$. Also, and the following choice of the sign is actually not independent from the previous one in the commutator, Schrödinger's equation was chosen to be $$ H |\psi \rangle = i\hbar\frac{d}{dt}|\psi\rangle $$ where $H$ is the Hamiltonian that may be replaced by $H=E$ when acting on an energy eigenstate $|\psi\rangle$. This equation is totally universal everywhere in quantum mechanics where a Hamiltonian is well-defined (it may be even quantum field theory or some descriptions of string theory). The equation above, with $H=E$, is solved by $$|\psi(t)\rangle = \exp(Et/i\hbar) |\psi(0)\rangle = \exp(-iEt/ \hbar) |\psi(0)\rangle = \exp(-i\omega t) |\psi(0)\rangle $$ All the forms are equivalent because $1/i = -i$ – this equation is equivalent to $i^2=-1$ – and because $E=\hbar\omega$ without a minus sign. So your sign is wrong; the sign you denounced is the right one and the sign you wanted is the incorrect one. Just to be sure, in quantum field theory, we work with various objects – quantum fields – that are expanded into terms that depend on time as $\exp(-i\omega t)$ while there must also be terms that depend on time via $\exp(+i\omega t)$. But these are terms in operators, not the time dependence of the wave function. One must be careful about the precise statements and objects. I haven't made any statement of the sort that only the expression $\exp(-i\omega t)$ and not $\exp(+i\omega t)$ appears in quantum theory papers and books. Of course, both of them may appear somewhere – in quantum field theory, both of them have to appear because there are both creation and annihilation operators, both particles and antiparticles. But when we are asking how an energy $E$ wave function (and I mean the ket vector) depends on time, it's always via $\exp(-iEt/\hbar)$. The bra vector has the opposite sign (plus) in the exponent. share|improve this answer And I was thinking it was simply a normalization and convergence issue... –  Dylan Sabulsky Dec 7 '12 at 5:32 Your Answer
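As a quick numerical sanity check of the sign convention (my addition, not part of the answer above): propagating an energy eigenstate with the Schrödinger equation $i\hbar\,d|\psi\rangle/dt = H|\psi\rangle$ and reading off its phase reproduces $e^{-iEt/\hbar}$ rather than $e^{+iEt/\hbar}$.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.diag([0.0, 1.5, 3.0])                 # toy Hamiltonian; use the E = 1.5 eigenstate
psi0 = np.array([0.0, 1.0, 0.0], dtype=complex)

t = 0.7
psi_t = expm(-1j * H * t / hbar) @ psi0      # solution of i*hbar d(psi)/dt = H psi
print(psi_t[1])                              # equals exp(-i E t / hbar) ...
print(np.exp(-1j * 1.5 * t / hbar))          # ... not exp(+i E t / hbar)
```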
Take the 2-minute tour × Recently there have been some interesting questions on standard QM and especially on uncertainty principle and I enjoyed reviewing these basic concepts. And I came to realize I have an interesting question of my own. I guess the answer should be known but I wasn't able to resolve the problem myself so I hope it's not entirely trivial. So, what do we know about the error of simultaneous measurement under time evolution? More precisely, is it always true that for $t \geq 0$ $$\left<x(t)^2\right>\left<p(t)^2\right> \geq \left<x(0)^2\right>\left<p(0)^2\right>$$ (here argument $(t)$ denotes expectation in evolved state $\psi(t)$, or equivalently for operator in Heisenberg picture). I tried to get general bounds from Schrodinger equation and decomposition into energy eigenstates, etc. but I don't see any way of proving this. I know this statement is true for a free Gaussian wave packet. In this case we obtain equality, in fact (because the packet stays Gaussian and because it minimizes HUP). I believe this is in fact the best we can get and for other distributions we would obtain strict inequality. So, to summarize the questions 1. Is the statement true? 2. If so, how does one prove it? And is there an intuitive way to see it is true? share|improve this question Why do you think it would apply? You can't really make a measurement that way (either you measure at $t=0$ or at $t=T$, but never both), so you basically have two different $\psi$ solutions. Both will obey the principle independently. Am I misunderstanding your question? –  Sklivvz Mar 19 '11 at 16:06 If your wavepacket, to begin with, saturates the uncertainty bound (i.e. is a coherent state) then this is trivially true - coherent states stay coherent under time-evolution. If your initial state is not a coherent state then the evolution is clearly more involved, but in that case you could expand your arbitrary initial state in the coherent state basis - so that this inequality (as established for coherent states) could still be used, component by component to show that it remains true for the arbitrary state. Or perhaps not. Chug and plug, baby, chug and plug. –  user346 Mar 19 '11 at 16:08 I don’t think the statement is true. Put the minimum uncertainty wave packet at t=0. What was the uncertainty before, at t<0? it was larger so it has been decreasing before t=0. More generally, you cannot derive time asymmetric statements from time symmetric laws. –  user566 Mar 19 '11 at 16:39 @Moshe: there are loopholes in your argument: there might be no minimum for a given system (just infimum) and if there is minimum, it might be preserved in evolution (as for free Gaussian). Still, nice idea and I'll try to use it to find a counterexample in some simple system. As for the second statement: right, so I am sure you'll tell me that we can't obtain second law too... just kiddin', I don't want to get into this discussion that made Boltzmann commit suicide :) –  Marek Mar 19 '11 at 16:47 @Marek, in any example you can solve the Schrodinger equation, you'll find that the quantity you are interested in grows away from t=0, both towards the past and towards the future, this is guaranteed by symmetry. As for the general statement, it is also true for the second law. You cannot derive time asymmetric conclusions from time symmetric laws without extra input, this is just basic logic, nothing to do with physics. The whole discussion is what is that extra input and where does it come in. 
–  user566 Mar 19 '11 at 16:57 5 Answers 5 up vote 35 down vote accepted The question asks about the time dependence of the function $$f(t) := \langle\psi(t)|(\Delta \hat{x})^2|\psi(t)\rangle \langle\psi(t)|(\Delta \hat{p})^2|\psi(t)\rangle,$$ $$\Delta \hat{x} := \hat{x} - \langle\psi(t)|\hat{x}|\psi(t)\rangle, \qquad \Delta \hat{p} := \hat{p} - \langle\psi(t)|\hat{p}|\psi(t)\rangle, \qquad \langle\psi(t)|\psi(t)\rangle=1.$$ We will here use the Schroedinger picture where operators are constant in time, while the kets and bras are evolving. Edit: Spurred by remarks of Moshe R. and Ted Bunn let us add that (under assumption (1) below) the Schroedinger equation itself is invariant under the time reversal operator $\hat{T}$, which is a conjugated linear operator, so that $$\hat{T} t = - t \hat{T}, \qquad \hat{T}\hat{x} = \hat{x}\hat{T}, \qquad \hat{T}\hat{p} = -\hat{p}\hat{T}, \qquad \hat{T}^2=1.$$ Here we are restricting ourselves to Hamiltonians $\hat{H}$ so that $$[\hat{T},\hat{H}]=0.\qquad (1)$$ Moreover, if $$|\psi(t)\rangle = \sum_n\psi_n(t) |n\rangle$$ is a solution to the Schroedinger equation in a certain basis $|n\rangle$, then $$\hat{T}|\psi(t)\rangle := \sum_n\psi^{*}_n(-t) |n\rangle$$ will also be a solution to the Schroedinger equation with a time reflected function $f(-t)$. Thus if $f(t)$ is non-constant in time, then we may assume (possibly after a time reversal operation) that there exist two times $t_1<t_2$ with $f(t_1)>f(t_2)$. This would contradict the statement in the original question. To finish the argument, we provide below an example of a non-constant function $f(t)$. Consider a simple harmonic oscillator Hamiltonian with the zero point energy $\frac{1}{2}\hbar\omega$ subtracted for later convenience. $$\hat{H}:=\frac{\hat{p}^2}{2m}+\frac{1}{2}m\omega^{2}\hat{x}^2 -\frac{1}{2}\hbar\omega=\hbar\omega\hat{N},$$ where $\hat{N}:=\hat{a}^{\dagger}\hat{a}$ is the number operator. Let us put the constants $m=\hbar=\omega=1$ to one for simplicity. Then the annihilation and creation operators are $$\hat{a}=\frac{1}{\sqrt{2}}(\hat{x} + i \hat{p}), \qquad \hat{a}^{\dagger}=\frac{1}{\sqrt{2}}(\hat{x} - i \hat{p}), \qquad [\hat{a},\hat{a}^{\dagger}]=1,$$ or conversely, $$\hat{x}=\frac{1}{\sqrt{2}}(\hat{a}^{\dagger}+\hat{a}), \qquad \hat{p}=\frac{i}{\sqrt{2}}(\hat{a}^{\dagger}-\hat{a}), \qquad [\hat{x},\hat{p}]=i,$$ $$\hat{x}^2=\hat{N}+\frac{1}{2}\left(1+\hat{a}^2+(\hat{a}^{\dagger})^2\right), \qquad \hat{p}^2=\hat{N}+\frac{1}{2}\left(1-\hat{a}^2-(\hat{a}^{\dagger})^2\right).$$ Consider Fock space $|n\rangle := \frac{1}{\sqrt{n!}}(\hat{a}^{\dagger})^n |0\rangle$ such that $\hat{a}|0\rangle = 0$. Consider initial state $$|\psi(0)\rangle := \frac{1}{\sqrt{2}}\left(|0\rangle+|2\rangle\right), \qquad \langle \psi(0)| = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|\right).$$ $$|\psi(t)\rangle = e^{-i\hat{H}t}|\psi(0)\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle+e^{-2it}|2\rangle\right),$$ $$\langle \psi(t)| = \langle\psi(0)|e^{i\hat{H}t} = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|e^{2it}\right),$$ $$\langle\psi(t)|\hat{x}|\psi(t)\rangle=0, \qquad \langle\psi(t)|\hat{p}|\psi(t)\rangle=0.$$ $$\langle\psi(t)|\hat{x}^2|\psi(t)\rangle=\frac{3}{2}+\frac{1}{\sqrt{2}}\cos(2t), \qquad \langle\psi(t)|\hat{p}^2|\psi(t)\rangle=\frac{3}{2}-\frac{1}{\sqrt{2}}\cos(2t),$$ because $\hat{a}^2|2\rangle=\sqrt{2}|0\rangle$. Therefore, $$f(t) = \frac{9}{4} - \frac{1}{2}\cos^2(2t),$$ which is non-constant in time, and we are done. 
Or alternatively, we can complete the counter-example without the use of above time reversal argument by simply performing an appropriate time translation $t\to t-t_0$. share|improve this answer I was thinking of trying to work out some harmonic oscillator example myself (because I have few further questions and it seems like simplest system where something nontrivial is happening) but you've beat me to it. Thanks! –  Marek Mar 20 '11 at 18:57 Although there is one thing that bugs me. I believe the calculation is essentially right, however we have $f(0) = 1/4$ which means it minimizes HUP (unless I am misunderstanding your conventions) and therefore $\psi(0)$ would have to be Gaussian -- a contradiction with your initial state. Is there a little mistake in calculation somewhere or do I have a flaw in my argument? –  Marek Mar 20 '11 at 19:02 Okay, I fixed it (I hope) :) –  Marek Mar 20 '11 at 19:20 Dear @Marek: I agree, there was powers of $2$ missing in three formulas. –  Qmechanic Mar 20 '11 at 19:32 One thing that's worth noting: you say that the Schrodinger equation is not invariant under time reversal. It's true that simply substituting $t\to -t$ is not invariant, but simultaneously changing $t\to -t$ and complex conjugating $\psi\to\psi^*$ does leave the equation invariant. That means that, for every solution $\psi(t)$, there is a corresponding solution $\psi^*(-t)$ that "looks like" the same state going backwards in time (and in particular has the same expectation values for all operators). That's what people mean when they say that the Schrodinger equation has time-reversal symmetry. –  Ted Bunn Mar 21 '11 at 13:02 The Schrodinger equation is time-symmetric. The answer is therefore No. From all of the comments, I feel like I must be oversimplifying or missing something, but I can't see what. share|improve this answer I'm with you, but it is probably useful for Marek to see for himself how this works in the simple example to be convinced of the general statement. –  user566 Mar 19 '11 at 17:19 Yes, this seems like a good argument to settle the original question. But it brings in further questions :) In particular, Moshe's solution (minimum growing towards both future and past) is a kind of bounce. But on both sides of that bounce I suppose the inequality would be satisfied. In other words, would the statement hold if we allowed these simple bouncy solutions and the time "t=0". Or to put it more clearly: I should've asked more general question of what does the uncertainty as a function of time look like... We now know it need not be monotone but perhaps it has other nice properties. –  Marek Mar 19 '11 at 18:07 I can't make heads or tails of this sentence: In other words, would the statement hold if we allowed these simple bouncy solutions and the time "t=0". I don't know if anything interesting in general can be said about the time evolution of $\Delta x\,\Delta p$, other than of course that it's bounded below. –  Ted Bunn Mar 19 '11 at 18:09 @Ted: ah, that was indeed not very clear. The best rephrasing is probably this: whether there exists time $t_0$ such that the inequality holds for all times $t \geq t_0$. But it is a different question. –  Marek Mar 19 '11 at 20:15 I thnk that @Marek and I are in complete agreement. Just to be explicit, let me answer @Carl's question about how we know $\Delta p$ is constant. Marek is right: For a free particle, $p^n$ commutes with the Hamiltonian, so all expectation values $\langle p^n\rangle$ are constant. 
So $\Delta p^2=\langle p^2\rangle-\langle p\rangle^2$ is constant. (Indeed, the entire probability distribution for $p$ is constant in time.) As a result, a Gaussian wave packet for a free particle does not remain minimum-uncertainty for all time. It spreads in real space while remaining the same in momentum space. –  Ted Bunn Mar 20 '11 at 14:05 No. Here's a simple example where it shrinks: You have a particle that has a 50% chance of being on the left going right, and a 50% chance of being on the right going left. This has a macroscopic error in both position and momentum. If you wait until it passes half way, it has a 100% chance of being in the middle. This has a microscopic error in position. There will also only be a microscopic change in momentum. (I'm not entirely sure of this as the possibilities hit each other, but if you just look right before that, or make them miss a little, it still works.) As such, the error in position decreased significantly, but the error in momentum stayed about the same. share|improve this answer Think in terms of Harmonic Functions and their Maximum Principle (or Mean Value Theorem). For simplicity (and, in fact, without loss of generality), let's just think in terms of a free particle, ie, $V(x,y,z) = 0$. When the Potential vanishes, the Schrödinger equation is nothing but a Laplace one (or Poisson equation, if you want to put a source term). And, in this case, you can apply the Mean Value Theorem (or the Maximum Principle) and get a result pertaining your question: in this situation you saturate the equality. Now, if you have a Potential, you can think in terms of a Laplace-Beltrami operator: all you need to do is 'absorb' the Potential in the Kinetic term via a Jacobi Metric: $\tilde{\mathrm{g}} = 2\, (E - V)\, \mathrm{g}$. (Note this is just a conformal transformation of the original metric in your problem.) And, once this is done, you can just turn the same crank we did above, ie, we reduced the problem to the same one as above. ;-) I hope this helps a bit. share|improve this answer I am sorry but I don't see how this is related to uncertainty and time evolution. Could you explain that? –  Marek Mar 19 '11 at 20:51 @Marek: the point was made explicit by Qmechanic, in his answer above. If you apply what i said in the Schrödinger picture, you get evolving states whose magnitude is always bound by the Mean Value Theorem. (If we were talking about bounded operators, this could be made rigorous with a bit of Functional Analysis.) –  Daniel Mar 20 '11 at 19:32 A physical way of seeing this is that the phase space volume of a system is preserved. Hamiltonian mechanics preserves the volume of a system on its energy surface H = E, which in quantum mechanics corresponds to the Schrodinger equation. The phase space volume on the energy surface of phase space is composed of units of volume $\hbar^{2n}$ for the momentum and position variables plus the $\hbar$ of the energy $i\hbar\partial\psi/\partial t~=~H\psi$. This is then preserved. Any growth in the uncertainty $\Delta p\Delta q~=~\hbar/2$ would then imply the growth in the phase space volume of the system. This would then mean there is some dissipative process, or the quantum dynamics is replaced by some master equation with a thermal or environmental loss of some form. For a pure unitary evolution however the phase space volume of the system, or equivalently the $Tr\rho$ and $Tr\rho^2$ are constant. 
This means the uncertainty relationship is a Fourier transform between complementary observables which preserve an area $\propto~\hbar$. -1, this is completely irrelevant to my question. I am interested just in pure states and for those phase volume is always zero and so trivially conserved. But this doesn't give any information on the behavior of uncertainty. –  Marek Mar 21 '11 at 13:20 The volume a system occupies in phase space defines entropy as $S~=~k\log(\Omega)$, with $\Omega$ the phase-space volume. The von Neumann entropy is $$ S~=~-k~Tr~\rho \log(\rho). $$ A maximally mixed state has each diagonal element of $\rho~=~1/n$, and the trace gives $-\sum(1/n)\log(1/n)~=~\log(n)$. A pure state then occupies a phase space region that is normalized to unit volume --- not zero. –  Lawrence B. Crowell Mar 21 '11 at 14:45
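As a postscript to the accepted answer above: its explicit formula for $\langle x^2\rangle\langle p^2\rangle$ in the state $(|0\rangle + e^{-2it}|2\rangle)/\sqrt{2}$ can be checked directly in a truncated Fock basis. The sketch below is my own check (same $m=\hbar=\omega=1$ conventions as the answer; the truncation dimension is an arbitrary small number that is large enough for the states involved).

```python
import numpy as np

dim = 12                                              # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)          # annihilation operator
x = (a.conj().T + a) / np.sqrt(2)
p = 1j * (a.conj().T - a) / np.sqrt(2)

def product_of_variances(t):
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1 / np.sqrt(2)
    psi[2] = np.exp(-2j * t) / np.sqrt(2)             # (|0> + e^{-2it} |2>) / sqrt(2)
    x2 = (psi.conj() @ x @ x @ psi).real
    p2 = (psi.conj() @ p @ p @ psi).real
    return x2 * p2                                    # <x> = <p> = 0 for this state

for t in np.linspace(0.0, np.pi, 5):
    print(t, product_of_variances(t), 9/4 - 0.5 * np.cos(2 * t) ** 2)   # columns agree
```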
Hartree–Fock method From Wikipedia, the free encyclopedia   (Redirected from Hartree-Fock method) Jump to: navigation, search In computational physics and chemistry, the Hartree–Fock (HF) method is a method of approximation for the determination of the wave function and the energy of a quantum many-body system in a stationary state. The Hartree–Fock method often assumes that the exact, N-body wave function of the system can be approximated by a single Slater determinant (in the case where the particles are fermions) or by a single permanent (in the case of bosons) of N spin-orbitals. By invoking the variational method, one can derive a set of N-coupled equations for the N spin orbitals. A solution of these equations yields the Hartree–Fock wave function and energy of the system. Especially in the older literature, the Hartree–Fock method is also called the self-consistent field method (SCF). In deriving what is now called the Hartree equation as an approximate solution of the Schrödinger equation, Hartree required the final field as computed from the charge distribution to be "self-consistent" with the assumed initial field. Thus, self-consistency was a requirement of the solution. The solutions to the non-linear Hartree–Fock equations also behave as if each particle is subjected to the mean field created by all other particles (see the Fock operator below) and hence the terminology continued. The equations are almost universally solved by means of an iterative method, although the fixed-point iteration algorithm does not always converge.[1] This solution scheme is not the only one possible and is not an essential feature of the Hartree–Fock method. The Hartree–Fock method finds its typical application in the solution of the Schrödinger equation for atoms, molecules, nanostructures[2] and solids but it has also found widespread use in nuclear physics. (See Hartree–Fock–Bogoliubov method for a discussion of its application in nuclear structure theory). In atomic structure theory, calculations may be for a spectrum with many excited energy levels and consequently the Hartree–Fock method for atoms assumes the wave function is a single configuration state function with well-defined quantum numbers and that the energy level is not necessarily the ground state. For both atoms and molecules, the Hartree–Fock solution is the central starting point for most methods that describe the many-electron system more accurately. The rest of this article will focus on applications in electronic structure theory suitable for molecules with the atom as a special case. The discussion here is only for the Restricted Hartree–Fock method, where the atom or molecule is a closed-shell system with all orbitals (atomic or molecular) doubly occupied. Open-shell systems, where some of the electrons are not paired, can be dealt with by one of two Hartree–Fock methods: Brief history[edit] The origin of the Hartree–Fock method dates back to the end of the 1920s, soon after the discovery of the Schrödinger equation in 1926. In 1927 D. R. Hartree introduced a procedure, which he called the self-consistent field method, to calculate approximate wave functions and energies for atoms and ions. Hartree was guided by some earlier, semi-empirical methods of the early 1920s (by E. Fues, R. B. Lindsay, and himself) set in the old quantum theory of Bohr. In the Bohr model of the atom, the energy of a state with principal quantum number n is given in atomic units as E = -1 / n^2. 
It was observed from atomic spectra that the energy levels of many-electron atoms are well described by applying a modified version of Bohr's formula. By introducing the quantum defect d as an empirical parameter, the energy levels of a generic atom were well approximated by the formula E = -1/(n+d)^2, in the sense that one could reproduce fairly well the observed transitions levels observed in the X-ray region (for example, see the empirical discussion and derivation in Moseley's law). The existence of a non-zero quantum defect was attributed to electron-electron repulsion, which clearly does not exist in the isolated hydrogen atom. This repulsion resulted in partial screening of the bare nuclear charge. These early researchers later introduced other potentials containing additional empirical parameters with the hope of better reproducing the experimental data. Hartree sought to do away with empirical parameters and solve the many-body time-independent Schrödinger equation from fundamental physical principles, i.e., ab initio. His first proposed method of solution became known as the Hartree method. However, many of Hartree's contemporaries did not understand the physical reasoning behind the Hartree method: it appeared to many people to contain empirical elements, and its connection to the solution of the many-body Schrödinger equation was unclear. However, in 1928 J. C. Slater and J. A. Gaunt independently showed that the Hartree method could be couched on a sounder theoretical basis by applying the variational principle to an ansatz (trial wave function) as a product of single-particle functions. In 1930 Slater and V. A. Fock independently pointed out that the Hartree method did not respect the principle of antisymmetry of the wave function. The Hartree method used the Pauli exclusion principle in its older formulation, forbidding the presence of two electrons in the same quantum state. However, this was shown to be fundamentally incomplete in its neglect of quantum statistics. It was then shown that a Slater determinant, a determinant of one-particle orbitals first used by Heisenberg and Dirac in 1926, trivially satisfies the antisymmetric property of the exact solution and hence is a suitable ansatz for applying the variational principle. The original Hartree method can then be viewed as an approximation to the Hartree–Fock method by neglecting exchange. Fock's original method relied heavily on group theory and was too abstract for contemporary physicists to understand and implement. In 1935 Hartree reformulated the method more suitably for the purposes of calculation. The Hartree–Fock method, despite its physically more accurate picture, was little used until the advent of electronic computers in the 1950s due to the much greater computational demands over the early Hartree method and empirical models. Initially, both the Hartree method and the Hartree–Fock method were applied exclusively to atoms, where the spherical symmetry of the system allowed one to greatly simplify the problem. These approximate methods were (and are) often used together with the central field approximation, to impose that electrons in the same shell have the same radial part, and to restrict the variational solution to be a spin eigenfunction. Even so, solution by hand of the Hartree–Fock equations for a medium sized atom were laborious; small molecules required computational resources far beyond what was available before 1950. 
Hartree–Fock algorithm[edit] The Hartree–Fock method is typically used to solve the time-independent Schrödinger equation for a multi-electron atom or molecule as described in the Born–Oppenheimer approximation. Since there are no known solutions for many-electron systems (there are solutions for one-electron systems such as hydrogenic atoms and the diatomic hydrogen cation), the problem is solved numerically. Due to the nonlinearities introduced by the Hartree–Fock approximation, the equations are solved using a nonlinear method such as iteration, which gives rise to the name "self-consistent field method." The Hartree–Fock method makes five major simplifications in order to deal with this task: • The Born–Oppenheimer approximation is inherently assumed. The full molecular wave function is actually a function of the coordinates of each of the nuclei, in addition to those of the electrons. • Typically, relativistic effects are completely neglected. The momentum operator is assumed to be completely non-relativistic. • The variational solution is assumed to be a linear combination of a finite number of basis functions, which are usually (but not always) chosen to be orthogonal. The finite basis set is assumed to be approximately complete. • Each energy eigenfunction is assumed to be describable by a single Slater determinant, an antisymmetrized product of one-electron wave functions (i.e., orbitals). • The mean field approximation is implied. Effects arising from deviations from this assumption, known as electron correlation, are completely neglected for the electrons of opposite spin, but are taken into account for electrons of parallel spin.[3][4] (Electron correlation should not be confused with electron exchange, which is fully accounted for in the Hartree–Fock method.)[4] Relaxation of the last two approximations give rise to many so-called post-Hartree–Fock methods. Greatly simplified algorithmic flowchart illustrating the Hartree–Fock method Variational optimization of orbitals[edit] The variational theorem states that for a time-independent Hamiltonian operator, any trial wave function will have an energy expectation value that is greater than or equal to the true ground state wave function corresponding to the given Hamiltonian. Because of this, the Hartree–Fock energy is an upper bound to the true ground state energy of a given molecule. In the context of the Hartree–Fock method, the best possible solution is at the Hartree–Fock limit; i.e., the limit of the Hartree–Fock energy as the basis set approaches completeness. (The other is the full-CI limit, where the last two approximations of the Hartree–Fock theory as described above are completely undone. It is only when both limits are attained that the exact solution, up to the Born–Oppenheimer approximation, is obtained.) The Hartree–Fock energy is the minimal energy for a single Slater determinant. The starting point for the Hartree–Fock method is a set of approximate one-electron wave functions known as spin-orbitals. For an atomic orbital calculation, these are typically the orbitals for a hydrogenic atom (an atom with only one electron, but the appropriate nuclear charge). For a molecular orbital or crystalline calculation, the initial approximate one-electron wave functions are typically a linear combination of atomic orbitals (LCAO). The orbitals above only account for the presence of other electrons in an average manner. 
In the Hartree–Fock method, the effect of other electrons is accounted for in a mean-field theory context. The orbitals are optimized by requiring them to minimize the energy of the respective Slater determinant. The resultant variational conditions on the orbitals lead to a new one-electron operator, the Fock operator. At the minimum, the occupied orbitals are eigensolutions to the Fock operator, up to a unitary transformation among themselves. The Fock operator is an effective one-electron Hamiltonian operator being the sum of two terms. The first is a sum of kinetic energy operators for each electron, the internuclear repulsion energy, and a sum of nuclear-electronic Coulombic attraction terms. The second is a sum of Coulombic repulsion terms between electrons in a mean-field theory description; a net repulsion energy for each electron in the system, which is calculated by treating all of the other electrons within the molecule as a smooth distribution of negative charge. This is the major simplification inherent in the Hartree–Fock method, and is equivalent to the fifth simplification in the above list. Since the Fock operator depends on the orbitals used to construct the corresponding Fock matrix, the eigenfunctions of the Fock operator are in turn new orbitals which can be used to construct a new Fock operator. In this way, the Hartree–Fock orbitals are optimized iteratively until the change in total electronic energy falls below a predefined threshold. In this way, a set of self-consistent one-electron orbitals is calculated. The Hartree–Fock electronic wave function is then the Slater determinant constructed out of these orbitals. Following the basic postulates of quantum mechanics, the Hartree–Fock wave function can then be used to compute any desired chemical or physical property within the framework of the Hartree–Fock method and the approximations employed.
Mathematical formulation
The Fock operator
Main article: Fock matrix
Because the electron-electron repulsion term of the electronic molecular Hamiltonian involves the coordinates of two different electrons, it is necessary to reformulate it in an approximate way. Under this approximation (outlined under Hartree–Fock algorithm), all of the terms of the exact Hamiltonian except the nuclear-nuclear repulsion term are re-expressed as the sum of one-electron operators outlined below, for closed-shell atoms or molecules (with two electrons in each spatial orbital).[5] The "(1)" following each operator symbol simply indicates that the operator is 1-electron in nature. Here
$$\hat F[\{\phi_j\}](1) = \hat H^{\text{core}}(1)+\sum_{j=1}^{N/2}[2\hat J_j(1)-\hat K_j(1)]$$
is the one-electron Fock operator generated by the orbitals $\phi_j$, and
$$\hat H^{\text{core}}(1)=-\frac{1}{2}\nabla^2_1 - \sum_{\alpha} \frac{Z_\alpha}{r_{1\alpha}}$$
is the one-electron core Hamiltonian. Also $\hat J_j(1)$ is the Coulomb operator, defining the electron-electron repulsion energy due to each of the two electrons in the jth orbital.[5] Finally $\hat K_j(1)$ is the exchange operator, defining the electron exchange energy due to the antisymmetry of the total N-electron wave function.[5] This (so-called) "exchange energy" operator $\hat K$ is simply an artifact of the Slater determinant. Finding the Hartree–Fock one-electron wave functions is now equivalent to solving the eigenfunction equation
$$\hat F(1)\,\phi_i(1)=\epsilon_i\, \phi_i(1),$$
where $\phi_i(1)$ are a set of one-electron wave functions, called the Hartree–Fock molecular orbitals.
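The self-consistent iteration described above can be written as a very small loop once the molecular integrals are available. The following sketch is a generic closed-shell (restricted) Hartree–Fock loop of my own, not taken from any particular program; the overlap matrix `S`, core Hamiltonian `h`, two-electron integral tensor `eri` (in chemists' notation) and the electron count are assumed to be supplied by some integral code, which is the part omitted here.

```python
import numpy as np
from scipy.linalg import eigh

def rhf_scf(S, h, eri, n_electrons, max_iter=100, tol=1e-8):
    """Minimal closed-shell SCF loop.
    S   : (n, n) overlap matrix
    h   : (n, n) core Hamiltonian (kinetic + nuclear attraction)
    eri : (n, n, n, n) two-electron integrals (pq|rs), chemists' notation
    Returns the electronic energy (nuclear repulsion not included) and MO coefficients."""
    n_occ = n_electrons // 2
    D = np.zeros_like(h)                       # zero density -> first Fock matrix is the core guess
    E_old = 0.0
    for _ in range(max_iter):
        J = np.einsum('pqrs,rs->pq', eri, D)   # Coulomb term
        K = np.einsum('prqs,rs->pq', eri, D)   # exchange term
        F = h + 2.0 * J - K                    # closed-shell Fock matrix
        eps, C = eigh(F, S)                    # Roothaan-Hall generalized eigenproblem
        C_occ = C[:, :n_occ]
        D = C_occ @ C_occ.T                    # density over the doubly occupied orbitals
        E = np.einsum('pq,pq->', D, h + F)     # electronic energy for the current orbitals
        if abs(E - E_old) < tol:
            break
        E_old = E
    return E, C
```

In practice the bare loop is almost always wrapped with convergence helpers such as the damping ("F-mixing") discussed in the Numerical stability section below.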
Linear combination of atomic orbitals[edit] Main articles: basis set (chemistry) and basis set Typically, in modern Hartree–Fock calculations, the one-electron wave functions are approximated by a linear combination of atomic orbitals. These atomic orbitals are called Slater-type orbitals. Furthermore, it is very common for the "atomic orbitals" in use to actually be composed of a linear combination of one or more Gaussian-type orbitals, rather than Slater-type orbitals, in the interests of saving large amounts of computation time. Various basis sets are used in practice, most of which are composed of Gaussian functions. In some applications, an orthogonalization method such as the Gram–Schmidt process is performed in order to produce a set of orthogonal basis functions. This can in principle save computational time when the computer is solving the Roothaan–Hall equations by converting the overlap matrix effectively to an identity matrix. However, in most modern computer programs for molecular Hartree–Fock calculations this procedure is not followed due to the high numerical cost of orthogonalization and the advent of more efficient, often sparse, algorithms for solving the generalized eigenvalue problem, of which the Roothaan–Hall equations are an example. Numerical stability[edit] Numerical stability can be a problem with this procedure and there are various ways of combating this instability. One of the most basic and generally applicable is called F-mixing or damping. With F-mixing, once a single electron wave function is calculated it is not used directly. Instead, some combination of that calculated wave function and the previous wave functions for that electron is used—the most common being a simple linear combination of the calculated and immediately preceding wave function. A clever dodge, employed by Hartree, for atomic calculations was to increase the nuclear charge, thus pulling all the electrons closer together. As the system stabilised, this was gradually reduced to the correct charge. In molecular calculations a similar approach is sometimes used by first calculating the wave function for a positive ion and then to use these orbitals as the starting point for the neutral molecule. Modern molecular Hartree–Fock computer programs use a variety of methods to ensure convergence of the Roothaan–Hall equations. Weaknesses, extensions, and alternatives[edit] Of the five simplifications outlined in the section "Hartree–Fock algorithm", the fifth is typically the most important. Neglecting electron correlation can lead to large deviations from experimental results. A number of approaches to this weakness, collectively called post-Hartree–Fock methods, have been devised to include electron correlation to the multi-electron wave function. One of these approaches, Møller–Plesset perturbation theory, treats correlation as a perturbation of the Fock operator. Others expand the true multi-electron wave function in terms of a linear combination of Slater determinants—such as multi-configurational self-consistent field, configuration interaction, quadratic configuration interaction, and complete active space SCF (CASSCF). Still others (such as variational quantum Monte Carlo) modify the Hartree–Fock wave function by multiplying it by a correlation function ("Jastrow" factor), a term which is explicitly a function of multiple electrons that cannot be decomposed into independent single-particle functions. 
An alternative to Hartree–Fock calculations used in some cases is density functional theory, which treats both exchange and correlation energies, albeit approximately. Indeed, it is common to use calculations that are a hybrid of the two methods—the popular B3LYP scheme is one such hybrid functional method. Another option is to use modern valence bond methods. Software packages[edit] For a list of software packages known to handle Hartree–Fock calculations, particularly for molecules and solids, see the list of quantum chemistry and solid state physics software. See also[edit] 1. ^ Froese Fischer, Charlotte (1987). "General Hartree-Fock program". Computer Physics Communication 43 (3): 355–365. Bibcode:1987CoPhC..43..355F. doi:10.1016/0010-4655(87)90053-1{{inconsistent citations}}  2. ^ Abdulsattar, Mudar A. (2012). "SiGe superlattice nanocrystal infrared and Raman spectra: A density functional theory study". J. Appl. Phys. 111 (4): 044306. Bibcode:2012JAP...111d4306A. doi:10.1063/1.3686610.  3. ^ Hinchliffe, Alan (2000). Modelling Molecular Structures (2nd ed.). Baffins Lane, Chichester, West Sussex PO19 1UD, England: John Wiley & Sons Ltd. p. 186. ISBN 0-471-48993-X.  4. ^ a b Szabo, A.; Ostlund, N. S. (1996). Modern Quantum Chemistry. Mineola, New York: Dover Publishing. ISBN 0-486-69186-1.  5. ^ a b c Levine, Ira N. (1991). Quantum Chemistry (4th ed.). Englewood Cliffs, New Jersey: Prentice Hall. p. 403. ISBN 0-205-12770-3. • Levine, Ira N. (1991). Quantum Chemistry (4th ed.). Englewood Cliffs, New Jersey: Prentice Hall. pp. 455–544. ISBN 0-205-12770-3.  • Cramer, Christopher J. (2002). Essentials of Computational Chemistry. Chichester: John Wiley & Sons, Ltd. pp. 153–189. ISBN 0-471-48552-7.  External links[edit]
Take the 2-minute tour × I have been wondering about the axiom of choice and how it relates to physics. In particular, I was wondering how many (if any) experimentally-verified physical theories require axiom of choice (or well-ordering) and if any theories actually require constructability. As a math student, I have always been told the axiom of choice is invoked because of the beautiful results that transpire from its assumption. Do any mainstream physical theories require AoC or constructability, and if so, how do they require AoC or constructability? share|improve this question I've never bothered tracing what depends on AC and what doesn't, but I suspect it runs deep enough to touch most of the math underlying physics. For instance, it's good to know that we're talking about something that exists when we use bases for infinite-dimensional vector spaces. –  Chris White Nov 10 '12 at 4:41 I think that Banach - Tarski theorem which depends crucially upon choice axiom may have some physical meaning - e.g. in terms of creation of more than one particles out of one when given with enough energy. However, the question of whether this is so or not belongs more to the domain of philosophy than physics. –  user10001 Nov 10 '12 at 12:20 @ChrisWhite: right, however physicist very often assume other things that actually don't exist for e.g. general infinite-dimensional vector spaces, neither with or without the axiom of choice. –  leftaroundabout Nov 10 '12 at 12:45 I suspect much of physics wouldn't need the full strength AC. Much can be done with countable AC. But, as @ChrisWhite says, measure theory would founder without full AC, although I suspect someone will come up with a measure theory without full AC one day. For me, though, the classic example that catches my eye here is not measure theory, but Tychonov's theorem - the product of compact sets is compact - is equivalent to AC but this is a very applied-maths-sounding theorem: it would be hard to say "throw that one out" - it's going to underpin many mathematical physics ideas. –  WetSavannaAnimal aka Rod Vance Nov 3 '13 at 23:06 4 Answers 4 up vote 15 down vote accepted No, nothing in physics depends on the validity of the axiom of choice because physics deals with the explanation of observable phenomena. Infinite collections of sets – and they're the issue of the axiom of choice – are obviously not observable (we only observe a finite number of objects), so experimental physics may say nothing about the validity of the axiom of choice. If it could say something, it would be very paradoxical because axiom of choice is about pure maths and moreover, maths may prove that both systems with AC or non-AC are equally consistent. Theoretical physics is no different because it deals with various well-defined, "constructible" objects such as spaces of real or complex functions or functionals. For a physicist, just like for an open-minded evidence-based mathematician, the axiom of choice is a matter of personal preferences and "beliefs". A physicist could say that any non-contractible object, like a particular selected "set of elements" postulated to exist by the axiom of choice, is "unphysical". In mathematics, the axiom of choice may simplify some proofs but if I were deciding, I would choose a stronger framework in which the axiom of choice is invalid. A particular advantage of this choice is that one can't prove the existence of unmeasurable sets in the Lebesgue theory of measure. 
Consequently, one may add a very convenient and elegant extra axiom that all subsets of real numbers are measurable – an advantage that physicists are more likely to appreciate because they use measures often, even if they don't speak about them. share|improve this answer You're really hating on the axiom of choice, and it's not clear why. If you want a new measure theory, you're perfectly free to come up with a new definition of "measure." No need to throw out a huge chunk of math to do it. And all the "open-minded" mathematicians you speak of died a long time ago. –  Chris White Nov 10 '12 at 20:17 I am not "hating it", I am mostly indifferent towards it and slightly prefer non-AC over AC. I hope it's not a heresy yet. ;-) No one needs to throw any papers in maths – I just said that the detailed technical parts of those papers that depend on the axiom of choice are irrelevant for physics and irrelevant for any branch of maths that resembles the methods in physics. And that there's no scientific evidence - and can't be any scientific evidence - in favor or against the axiom of choice. –  Luboš Motl Nov 10 '12 at 21:12 Whether someone died isn't decisive about statements of validity and consistency of assumptions and theories in maths or science. And the independence of the axiom of choice of the other axioms - i.e. the consistency of the other axioms with AC as well as non-AC (one of them) - was proved by Paul Cohen in the 1960s. Whoever doesn't understand that this means that AC and non(AC) are equally consistent with maths shouldn't call himself or herself a mathematician. Maybe he or she is an activist but not a rationally thinking person. –  Luboš Motl Nov 10 '12 at 21:15 The textbook formulation of functional analysis depends on the axiom of choice, eg via Hahn-Banach. This means that discarding the axoim of choice will break the textbook formulation of quantum mechanics as well. However, as we're dealing with (separable) Hilbert spaces, there exists countable bases and we should be able to replace the axiom of choice with a less 'paradox' alternative like the Solovay model and still get the right physics. The full Hahn-Banach theorem cannot be recovered, though, as it implies the existence of an unmeasurable set. share|improve this answer This is just wrong, Christoph. A textbook presentation of a math problem may decide to believe the axiom of choice but one may do all the things at least equally well in systems, like Solovay models, that assume the AC is false. Nothing in quantum physics would break down if one used non-AC in all textbooks. Your suggestion that one uses the AC with infinite bases in QM is wrong, too. All the structures that matter in QM, like the Hilbert space of L^2 integrable functions (well, some equivalence classes), are continuous and well-behaved, incompatible with the discrete AC-like selection. –  Luboš Motl Nov 10 '12 at 7:21 @LubošMotl: please re-read my answer - I do not disagree –  Christoph Nov 10 '12 at 7:29 @LubošMotl: clarified my answer a bit, but imo it was fine as it was... –  Christoph Nov 10 '12 at 7:43 All the functional analysis that physicists use it is restricted to cases where the Hahn-Banach theorem is only used with at worst countable dependent choice, and you really don't need it for physics, as Lubos Motl explains clearly. This is pro-choice FUD, the "full Hahn-Banach theorem" is going on about vector spaces of basis size aleph_continuum, and nonsense like that. 
–  Ron Maimon Nov 11 '12 at 4:54 Rigorous arguments in functional analisis are made much simpler by employing the axiom of choice. As we are free to model our physics in any set theory we like, and any set theory containing ZF contains a model of ZFC, we are entitled to use this simplification without fear of inconsistency. Discarding the axiom of choice would only make concepts and proofs more tedious, without giving any higher degree of assurance of the results. For example, the standard proof of the spectral theorem for self-adjoint operators depends on the axiom of choice, I believe, and much in mathematical physics depends on the spectral theorem. On the other hand, already on the level of theoretical physics, one often replaces scrupulously integral by finite sums, takes limits irrespective of their mathematical existence, and employs lots of other mathematically dubious trickery to get quickly at the results. So on this level of reasoning, nothing depends on subtilities that make a difference only when one begins to care about precise definitions and arguments in the presence of infinity. share|improve this answer The following paper may be of interest: Norbert Brunner, Karl Svozil, Matthias Baaz, "The Axiom of Choice in Quantum Theory". Mathematical Logic Quarterly, vol. 42 (1) pp. 319-340 (1996). The abstract is as follows: We construct peculiar Hilbert spaces from counterexamples to the axiom of choice. We identify the intrinsically effective Hamiltonians with those observables of quantum theory which may coexist with such spaces. Here a self adjoint operator is intrinsically effective if and only if the Schrödinger equation of its generated semigroup is soluble by means of eigenfunction series expansions. Also relevant is the fact that classical analysis doesn't require much more than dependent choice, which is consistent with "All sets of reals are Lebesgue measurable". However the combination of the two statements requires a stronger assumption as a theory (inaccessible cardinals). What does baffle me, however, with physicists that have strong objections to the Banach-Tarski paradox, that it makes much less sense that a set can be partitioned into strictly more [non-empty] parts than elements. And that is a consequence of having all sets Lebesgue measurable. So while you may sleep quietly knowing that you cannot partition an orange into five parts and combining the parts into two oranges (thus solving world hunger), you have an equally disturbing problem. You can cut out a line [read: the real numbers] into more parts than points. share|improve this answer Your Answer
Psychology Wiki Many-minds interpretation Revision as of 19:08, September 21, 2006 by Lifeartist (Talk | contribs) 34,191pages on this wiki The many-minds interpretation of quantum mechanics extends the many-worlds interpretation by proposing that the distinction between worlds should be made at the level of the mind of an individual observer. The concept was first introduced in 1970 by H. Dieter Zeh as a variant of the Hugh Everett interpretation in connection with quantum decoherence, and later (in 1981) explicitly called a Many-(or multi-)consciousness Interpretation. The name many-minds interpretation was first used by David Albert and B. Loewer in their 1988 work Interpreting the Many Worlds Interpretation. The central problems Edit One of the central problems in interpretation of quantum theory is the duality of time evolution of physical systems: 1. Unitary evolution by the Schrödinger equation, 2. Nondeterministic, nonunitary change during measurement of physical observables, at which time the system "selects" a single value in the range of possible values for the observable. This process is known as wavefunction collapse. Moreover, the process of observation occurs outside the system, which presents a problem on its own if one considers the universe itself to be a quantum system. This is known as the measurement problem. In the introduction to his paper, The Problem Of Conscious Observation In Quantum Mechanical Description (June, 2000) H. D. Zeh offered an empirical basis for connecting the processes involved in (2) with conscious observation: "John von Neumann seems to have first clearly pointed out the conceptual difficulties that arise when one attempts to formulate the physical process underlying subjective observation within quantum theory. He emphasized the latter’s incompatibility with a psycho-physical parallelism, the traditional way of reducing the act of observation to a physical process. Based on the assumption of a physical reality in space and time, one either assumes a coupling (causal relationship — one-way or bidirectional) of matter and mind, or disregards the whole problem by retreating to pure behaviorism. However, even this may remain problematic when one attempts to describe classical behavior in quantum mechanical terms. Neither position can be upheld without fundamental modifications in a consistent quantum mechanical description of the physical world." The Many-worlds Interpretation Edit Main article: Many-worlds interpretation Hugh Everett described a way out of this problem by suggesting that the universe is in fact indeterminate as a whole. That is, if you were to measure the spin of a particle and find it to be "up", in fact there are two "yous" after the measurement, one who measured the spin up, the other spin down. This relative state formulation, where all states (sets of measures) can only be measured relative to other such states, avoids a number of problems in quantum theory, including the original duality - no collapse takes place, the indeterminacy simply grows (or moves) to a larger system. Effectively by looking at the system in question, you take on its indeterminacy. 
Everett claims that the universe has a quantum state, which he called the universal wavefunction, that always evolves according to the Schrödinger equation or some relativistic equivalent; now the measurement problem suggests the universal wavefunction will be in a superposition corresponding to many different definite macroscopic realms (`macrorealms'); that one can recover the subjective appearance of a definite macrorealm by postulating that all the various definite macrorealms are actual---`we just happen to be in one rather than the others' in the sense that "we" are in all of them, but each are mutally inobservable. Continuous infinity of minds Edit The idea of many minds was suggested early on by Zeh in 1995. He argues that in a decohering no-collapse universe one can avoid the necessity of macrorealms by introducing a new psycho-physical parallelism, in which individual minds supervene on each non-interfering component in the physical state. Zeh indeed suggests that, given decoherence, this is the most natural interpretation of quantum mechanics. The main difference between `many minds' and `many worlds' interpretations then lies in the definition of the preferred quantity. The `many minds' interpretations suggests that to solve the measurement problem, there is no need to secure a definite macrorealm: the only thing that's required is appearance of such. A bit more precisely: the idea is that the preferred quantity is whatever physical quantity, defined on brains (or brains and parts of their environments), has definite-valued states (eigenstates) that underpin such appearances, i.e. underpin the states of belief in, or sensory experience of, the familiar macroscopic realm. In its original version (related to decoherence), there is no process of selection. The process of quantum decoherence explains in terms of the Schrödinger equation how certain components of the universal wave function become irreversibly dynamically independent of one another (separate worlds - even though there is but one quantum world that does NOT split). These components may (each) contain definite quantum states of observers, while the total quantum state may not. These observer states may then be assumed to correspond to definite states of awareness (minds), just as in a classical description of observation. States of different observers are consistently entangled with one another, thus warranting objective results of measurements. However Albert and Loewer suggest that the mental does not supervene on the physical, because individual minds have trans-temporal identity of their own. The mind selects one of these identities to be its non-random reality, while the universe itself is unaffected. The process for selection of a single state remains unexplained. This is particularly problematic because it is not clear how different observers would thus end up agreeing on measurements, which happens all the time here in the real world. There is assumed to be a sort of feedback between the mental process that leads to selection and the universal wavefunction, thereby effecting other mental states as a matter of course. In order to make the system work, the "mind" must be separate from the body, an old duality of philosophy to replace the new one of quantum mechanics. In general this interpretation has received little attention, largely for this last reason. Objections Edit Objections that apply to the many-worlds interpretation also apply to the many-mind interpretations. 
On the surface both of these theories expressly violate Occam's Razor; proponents counter that in fact these solutions minimize entities by simplifying the rules that would be required to describe the universe. Another serious objection is that workers in no collapse interpretations have produced no more than elementary models based on the definite existence of specific measuring devices. They have assumed, for example, that the Hilbert space of the universe splits naturally into a tensor product structure compatible with the measurement under consideration. They have also assumed, even when describing the behavior of macroscopic objects, that it is appropriate to employ models in which only a few dimensions of Hilbert space are used to describe all the relevant behavior. In his ‘What is it like to be Schrödinger's cat?’ (2000), Peter J. Lewis argues that the many minds interpretation of quantum mechanics has absurd implications for agents facing life-or-death decisions. In general, the many minds theory holds that a conscious being who observes the outcome of a random zero-sum experiment will evolve into two successors in different observer states, each of whom observes one of the possible outcomes. Moreover, the theory advises you to favor choices in such situations in proportion to the probability that they will bring good or bad results to your various successors. But in a life-or-death case like getting into the box with Schrödinger’s cat, you will only have one successor, since one of the outcomes will ensure your death. So it seems that the many minds interpretation advises you to get in the box with the cat, since it is certain that your only successor will emerge unharmed. Compare: Quantum suicide and Quantum immortality See alsoEdit External linksEdit Around Wikia's network Random Wiki
Take the 2-minute tour × In mathematical physics and other textbooks we find the Legendre polynomials are solutions of Legendre's differential equations. But I didn't understand where we encounter Legendre's differential equations (physical example). What is the basic physical concept behind the Legendre polynomials? How important are they in physics? Please explain simply and give a physical example. share|improve this question en.wikipedia.org/wiki/… –  John Rennie Jan 24 '13 at 13:48 These polynomials are not really physics, they are simply a useful mathematical tool that appear in the solutions to many physical problems with spherical symmetries. I think the question is fine because they do come up a lot. –  dmckee Jan 24 '13 at 14:55 There is a very definite physical notion behind Legendre polynomials: a rank-$l$ Legendre polynomial corresponds to a spin-$l$ representation of the orthogonal group $SO(3)$. These are the usual traceless, symmetric tensors of rank $l$ we use in field theory all the time. If in QM you scatter two spinless particles, you measure the angular distribution and find that it is described by $P_2(\cos \theta)$ (for example), then you can be sure that the particles are exchanging a spin-2 resonance. –  Vibert Jan 25 '13 at 0:01 @Vibert That is a physical notion attached to the polynomials, but the math exists independently of the physics. The distinction here is that physicists must learn the math but mathematicians can know the polynomials while being ignorant of the physics. –  dmckee Jan 25 '13 at 5:02 4 Answers 4 up vote 6 down vote accepted The Legendre polynomials occur whenever you solve a differential equation containing the Laplace operator in spherical coordinates with a separation ansatz (there is extensive literature on all of those keywords on the internet). Since the Laplace operator appears in many important equations (wave equation, Schrödinger equation, electrostatics, heat conductance), the Legendre polynomials are used all over physics. There is no (inarguable) physical concept behind the Legendre polynomials, they are just mathematical objects which form a complete basis between -1 and 1 (as do the Chebyshev polynomials). share|improve this answer I disagree with the last statement (see my comment above). Legendre polynomials correspond to $SO(3)$ (tensor) representations that are well-known, even to undergraduates. Just look up the partial wave expansion in a QM textbook. The Chebyshev polynomials play the same role, but in two dimensions. –  Vibert Jan 25 '13 at 0:03 If you do partial wave expansion, you do that because the Legendre polynomials are eigenfunctions of the $\vartheta$ part of the Laplace operator. The connection to the $SO(3)$ representation is interesting though. –  Rafael Reiter Jan 25 '13 at 0:06 Yes, that is exactly what I mean: Legendre polynomials are eigenfunctions of the Laplacian on $S^2$, and indeed they correspond to representations of $SO(3)$ - that's not an accident. You are free to think that this fact isn't important, but it's the 3D equivalent of classifying the representations of the Lorentz group - you do agree that it's meaningful to talk about scalars, spinors, currents, tensors etc. in particle physics, right? –  Vibert Jan 25 '13 at 0:12 I know too little about particle physics to give a qualified answer. But I think we disagree on the more fundamental question of how "physically" you interpret a mathematical object, which is more of a philosophical question. 
–  Rafael Reiter Jan 25 '13 at 0:19 Here's my 30 seconds hand waving argument for "Why is it that we always encounter new special functions $f_n$ with orthogonality relations??" $$\int f^*_n\cdot f_m=\delta_{mn}$$ Super broadly speaking, in physics we dealing with the dynamics of certain degrees of freedom. These often employ smooth symmetries, that is we're dealing with Lie groups, which are also manifold in themselfs. Take e.g. the Laplacian $\Delta=\nabla\cdot\nabla$ and the associated symmetries $R$ acting as $\nabla\to R\nabla$ in such a way that that $R\nabla\cdot R\nabla=\nabla\cdot\nabla$. Now in case one is dealing with a "rotation" in the broadest sense of the word, one often has a compact manifold, where we can savagely define things like integration on the group, and these symmetry groups also permit pretty unitary matrix representations. That is there are necessarily matrices $U$ with and well, the matrix coefficients $U_{kn}$ must be some complex functions. To put it short, special functions are representation theory magic. @zonk: Yes, it's the default theory. But of course, you only see the direct relation to special functions if you take the abstract Lie group theory and actually sit down and write down the matrices in some base. E.g. for the rotation group matrices $D$, you find $$ \begin{array}{lcl} D^j_{m'm}(\alpha,\beta,\gamma)&=& e^{-im'\alpha } [(j+m')!(j-m')!(j+m)!(j-m)!]^{1/2} \sum\limits_s \left[\frac{(-1)^{m'-m+s}}{(j+m-s)!s!(m'-m+s)!(j-m'-s)!} \right.\\ &&\left. \cdot \left(\cos\frac{\beta}{2}\right)^{2j+m-m'-2s}\left(\sin\frac{\beta}{2}\right)^{m'-m+2s} \right] e^{-i m\gamma} \end{array}.$$ Very sweet, right? Now here you have the Legendre Polynomials $P_\ell^m$ $$ D^{\ell}_{m 0}(\alpha,\beta,0) = \sqrt{\frac{4\pi}{2\ell+1}} Y_{\ell}^{m*} (\beta, \alpha ) = \sqrt{\frac{(\ell-m)!}{(\ell+m)!}} \, P_\ell^m ( \cos{\beta} ) \, e^{-i m \alpha } $$ so that $$ \int_0^{2\pi} d\alpha \int_0^\pi \sin \beta d\beta \int_0^{2\pi} d\gamma \,\, D^{j'}_{m'k'}(\alpha,\beta,\gamma)^\ast D^j_{mk}(\alpha,\beta,\gamma) = \frac{8\pi^2}{2j+1} \delta_{m'm}\delta_{k'k}\delta_{j'j}.$$ share|improve this answer That is a very interesting view/line of argument. Do you find this in standard Lie groups literature? –  Rafael Reiter Jan 24 '13 at 15:47 ...and how do the $f$'s relate to the $U$'s? –  Rafael Reiter Jan 24 '13 at 16:18 If you want to know why computational physicians like Legendre Polynomials, the answer is rather simple. As the other people has already pointed out, the Legendre Polynomials are orthogonal, they can be a very good basis for many applications. For example, if one tries to construct a function which fits the experiment or simulation data within the estimate error-bar and interpolates between the limited number of available data points, the Legendre Polynomials can be a very useful, so does the Chebyshev polynomials. The function constructed from the the Legendre Polynomials does not suffer the Runge's problem. share|improve this answer Rather than thinking about the abstract orthonormal basis of the Legendre polynomials $P_l(x)$, I find it easier to visualize these polynomials by looking at $P_l(\cos\theta)$. These are simply the Spherical Harmonics with azimuthal symmetry: $$ Y_l^{m=0} = n_l P_l(\cos\theta)$$ where $n_l$ is a normalization factor that only depends on $l$. In this beautiful image of the spherical harmonics on Wikipedia by Inigo.quilez, the $P_l(\cos\theta)$ correspond to the center column of the image ($m=0$). Note the symmetry about the $z$-axis. 
Plot of Spherical Harmonics by Inigo.quilez (Wikipedia)
These come up very often in physics, for example, while solving the Laplace equation ($\nabla^2\Phi = 0$) with azimuthally symmetric boundary conditions.
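As a small numerical complement to the answers above, the sketch below (Python, assuming SciPy is available) checks the orthogonality relation $\int_{-1}^{1} P_l(x) P_m(x)\,dx = \frac{2}{2l+1}\delta_{lm}$ and evaluates $P_2(\cos\theta)$ against its explicit form; the degrees and the angle used are arbitrary examples, not anything singled out in the answers.

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

# Orthogonality on [-1, 1]: integral of P_l * P_m equals 2/(2l+1) when l == m, else 0
for l in range(4):
    for m in range(4):
        val, _ = quad(lambda x: eval_legendre(l, x) * eval_legendre(m, x), -1.0, 1.0)
        expected = 2.0 / (2 * l + 1) if l == m else 0.0
        assert abs(val - expected) < 1e-10

# Evaluated at cos(theta), these are the azimuthally symmetric (m = 0) angular
# functions discussed above, e.g. P_2(cos theta) = (3 cos^2 theta - 1) / 2
theta = 0.7
print(eval_legendre(2, np.cos(theta)), (3 * np.cos(theta) ** 2 - 1) / 2)
```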
Take the 2-minute tour × Is the Born rule a fundamental postulate of quantum mechanics, or can it be inferred from unitary evolution? share|improve this question As the page about postulates you linked to correctly says, the Born-like rules to calculate probabilities from state vectors and operators are among the general postulates of quantum mechanics. It doesn't mean that they can't be derived from some other assumptions. However, the other assumptions clearly have to be connected with the notion of "probability" in one way or another, so they will be either a special or generalized formulation of the Born rule, anyway. Saying that the evolution is unitary doesn't say anything about probabilities - it can't "replace" the Born rule. –  Luboš Motl Nov 23 '12 at 17:43 @LubošMotl I felt that if experimental apparatus must obey the same laws as the system under observation, then Born rule must follow from unitary evolution in all situations. Please can you elaborate on this comment "However, the other assumptions clearly have to be connected with the notion of "probability" in one way or another, so they will be either a special or generalized formulation of the Born rule" –  Prathyush Nov 24 '12 at 4:28 I gave a derivation of the Born rule in the last answer to the question physics.stackexchange.com/q/19500 –  Stephen Blake Aug 6 '13 at 20:37 related or duplicate: physics.stackexchange.com/q/73329 –  Ben Crowell Aug 6 '13 at 22:27 5 Answers 5 up vote 2 down vote accepted The Born rule is a fundamental postulate of quantum mechanics and therefore it cannot be derived from other postulates --precisely your first link emphasizes this--. In particular the Born rule cannot be derived from unitary evolution because the rule is not unitary $$A \rightarrow B_1$$ $$A \rightarrow B_2$$ $$A \rightarrow B_3$$ $$A \rightarrow \cdots$$ The Born rule can be obtained from non-unitary evolutions. share|improve this answer This argument is actually not valid because it does not count in unknown states from the environment which could differ for different outcomes. –  A.O.Tell Nov 24 '12 at 13:35 That is not true. Adding the environment and its equation of evolution gives an isolated system whose exact evolution is non-unitary. –  juanrga Nov 26 '12 at 11:15 You are arguing that the same input state gives different output states, which is not unitary. That argument is false because you don't know that the input state is different for different outcomes, simply because you don't know the state of the unknown environment, by definition, that leads to the different outcomes. I'm not saying that your conclusion is wrong, but your argument certainly is. –  A.O.Tell Nov 26 '12 at 12:36 Either if you assume that the same initial environment state $A\otimes E$ or not $A\otimes E_1,A\otimes E_2,A\otimes E_3\dots$ the evolution of the composite isolated system continues being non-unitary. von Neuman understood this and introduced his non-unitary evolution postulate in orthodox QM. –  juanrga Nov 26 '12 at 20:46 That's not what you wrote in your answer however –  A.O.Tell Nov 26 '12 at 21:34 Strictly speaking, the Born rule cannot be derived from unitary evolution, furthermore, in some sense the Born rule and unitary evolution are mutually contradictory, as, in general, a definite outcome of measurement is impossible under unitary evolution - no measurement is ever final, as unitary evolution cannot produce irreversibility or turn a pure state into a mixture. 
However, in some cases, the Born rule can be derived from unitary evolution as an approximate result - see, e.g., the following outstanding work: http://arxiv.org/abs/1107.2138 (accepted for publication in Physics Reports). The authors show (based on a rigorously solvable model of measurements) that irreversibility of measurement process can emerge in the same way as irreversibility in statistical physics - the recurrence times become very long, infinite for all practical purposes, when the apparatus contains a very large number of particles. However, for a finite number of particles there are some violations of the Born rule (see, e.g., the above-mentioned work, p. 115). share|improve this answer Unfortunately the article is completely wrong. I know two of the authors and their works on perpetual machines and supposed violations of the second law of thermo. –  juanrga Nov 24 '12 at 11:28 Thank you, I will take a look at the article referred to see if there is any weight in their arguments. Probably they are wrong as juanrga says, as most papers in this field are. –  Prathyush Nov 24 '12 at 12:58 @juanrga: Maybe you're right, and the article is indeed completely wrong, but until you offer some specific arguments, why should I believe you, rather than the authors and the referees of their published articles? You mentioned their articles on other topics, but I am not sure this is relevant. –  akhmeteli Nov 24 '12 at 13:34 @Prathyush: You may wish to start with their article arxiv.org/abs/quant-ph/0702135 , which is much shorter (see references to their journal articles there). –  akhmeteli Nov 24 '12 at 13:53 @akhmeteli Thank you I will look into it, Indeed since I haven't gone deeply into the article, I must not comment on its factual accuracy. May I ask what you thought about the article? –  Prathyush Nov 24 '12 at 17:32 The idea of deriving the Born rule (and in fact the whole measurement postulate) from the usual unitary evolution of quantum systems is at the very heart of a realist interpretation of quantum theory. If the quantum state really describes a the true internal state of a system and measurement is just a certain kind of interaction, then there should be only one single law for the time evolution. Quantum theory however is fundamentally non-local and separating systems is conceptually hard, which makes observer and experiment impossible to describe separately. There should be a system containing both parts however and which follows a simple law of time evolution. Of course, the obvious candidate for such a law is unitary evolution, simply because that is what we observe for systems that we isolate as good as possible. It is usually argued that this route leads to the Everett interpretation of quantum theory, where observations are relative to the observer and realized by entangled states. There have been several attempts to derive the Born rule in this context, but all that seem valid require additional assumptions that are questionable (and may in fact be inconsistent with the realist approach or other fundamental assumptions). The reason why there cannot be a derivation that just uses ordinary unitary evolution and results in the Born rule is not even unitarity but the linearity of the theory. Say there is an evolution that takes out input to the measurement output, and we decide to measure a|A>+b|B> in the basis {|A>,|B>}. Then independently from the environment the Born rule predicts that |A> and |B> are invariant under measurement. 
A superposition (|A>+B>)/sqrt(2) should end up in either |A> or |B> depending on a possible environment state if the Born rule applies. The linearity of the theory requires that the outcome is a superposition of |A> and |B> however (the phase may change though). Everett's answer to this problem is that the superposition comes out, but with the outcomes entangled with the observer seeing either outcome. But this creates two observers that are unaware of their own amplitude. Because of the linearity their future evolution is independent from the branch amplitude, and it's therefore hard to argue that any aspects of their perceived reality would depend on the branch amplitude. Interestingly approaches to fix this issue, like the use of decision theory, advanced branch counting, etc, in some form introduce a nonlinear element to the theory. Be it a measure of branch amplitude, a cutoff amplitude or amplitude discretization, a stability rule (envariance or quantum darwinism). There are also approaches that don't hide the nonlinearity in additional assumptions that may collide with the linear evolution. Those are explicit nonlinear variations of the Schroedinger equation that can in fact produce an evolution that allows the Born rule to emerge. Of course, this is not something that most theorists embrace, simply because the linearity of quantum theory is such an attractive feature. But there's one more approach that I personally favor. The nonlinearity could be only subjective to an observer, caused by incomplete knowledge about the universe. An observer, i.e. a local mechanism realized within quantum theory, can only gather information by interacting with his environment. Certain information however is inaccessible dynamically, hidden outside the observer's light cone or just not available for direct interaction. Considering this, it can be shown that reconstructing the best possible state description an observer can come up with must follow a dynamic law that is not unitary all the time, but also contains sudden state jumps with random outcomes driven by incoming priorly unknown information from the environment. It can be shown that a photon from the environment with entirely unknown polarization can cause a subjective state jump that corresponds exactly to the Born rule. This is of course a bold claim. But please see http://arxiv.org/abs/1205.0293 for a proper derivation and discussion of the details. If you you would like to look at a more gently introduction to the idea you can also read the (less complete but more intuitive) blog I've set up for this: http://aquantumoftheory.wordpress.com share|improve this answer I don't know if environment is a necessary concept in the measurement problem, For example, will a photographic Plate work in perfect vacuum. Thought I don't have the opportunity of experiment with such a situation, I believe a photographic must work normally in vacuum where there is no environment or extraneous photons. –  Prathyush Nov 25 '12 at 17:02 Even in your perfect vacuum you always have an interacting environment. And of course the environment may not be needed for the resolution of the measurement problem, but it might possibly be necessary, and so you cannot simply exclude it. It is at least a plausible source for randomness due to our lack of information about its state. –  A.O.Tell Nov 25 '12 at 17:11 In some situations where you cannot remove it from the experimental setup you will have to include the environment in the theory. 
What do you mean even in perfect vacuum you have the interaction environment? The basic process in a photographic plate is a light sensitive chemical reaction right? So an environment wont play a role –  Prathyush Nov 25 '12 at 17:16 The environment always plays a role in quantum theory. You cannot remove the quantum fields from space, no matter how perfect your vacuum is. There will always be interaction on some level, and ignoring that is surely not helpful for understanding the properties of quantum systems. You seem to be thinking is more or less classical terms with your photographic plate example. –  A.O.Tell Nov 25 '12 at 17:21 Also, in order to see if your plate has been affected by light you have to look at it. So at the very latest then you will subject it to massive interaction with an unknown environment –  A.O.Tell Nov 25 '12 at 17:22 The use of the word "postulate" in the question may indicate an unexamined assumption that we must or should discuss this sort of thing using an imitation of the axiomatic approach to mathematics -- a style of physics that can be done well or badly and that dates back to the faux-Euclidean presentation of the Principia. If we make that choice, then in my opinion Luboš Motl's comment says all that needs to be said. (Gleason's theorem and quantum Bayesianism (Caves 2001) might also be worth looking at.) However, the pseudo-axiomatic approach has limitations. For one thing, it's almost always too unwieldy to be usable for more than toy theories. (One of the only exceptions I know of is Fleuriot 2001.) Also, although mathematicians are happy to work with undefined primitive terms (as in Hilbert's saying about tables, chairs, and beer mugs), in physics, terms like "force" or "measurement" can have preexisting informal or operational definitions, so treating them as primitive notions can in fact be a kind of intellectual sloppiness that's masked by the superficial appearance of mathematical rigor. So what can physical arguments say about the Born rule? The Born rule refers to measurements and probability, both of which may be impossible to define rigorously. But our notion of probability always involves normalization. This suggests that we should only expect the Born rule to apply in the context of nonrelativistic quantum mechanics, where there is no particle annihilation or creation. Sure enough, the Schrödinger equation, which is nonrelativistic, conserves probability as defined by the Born rule, but the Klein-Gordon equation, which is relativistic, doesn't. This also gives one justification for why the Born rule can't involve some other even power of the wavefunction -- probability wouldn't be conserved by the Schrödinger equation. Aaronson 2004 gives some other examples of things that go wrong if you try to change the Born rule by using an exponent other than 2. The OP asks whether the Born rule follows from unitarity. It doesn't, since unitarity holds for both the Schrödinger equation and the Klein-Gordon equation, but the Born rule is valid only for the former. Although photons are inherently relativistic, there are many situations, such as two-source interference, in which there is no photon creation or annihilation, and in such a situation we also expect to have normalized probabilities and to be able to use "particle talk" (Halvorson 2001). This is nice because for photons, unlike electrons, we have a classical field theory to compare with, so we can invoke the correspondence principle. 
For two-source interference, clearly the only way to recover the classical limit at large particle numbers is if the square of the "wavefunction" ($\mathbf{E}$ and $\mathbf{B}$ fields) is proportional to probability. (There is a huge literature on this topic of the photon "wavefunction". See Birula 2005 for a review. My only point here is to give a physical plausibility argument. Basically, the most naive version of this approach works fine if the wave is monochromatic and if your detector intercepts a part of the wave that's small enough to look like a plane wave.) Since the Born rule has to hold for the electromagnetic "wavefunction," and electromagnetic waves can interact with matter, it clearly has to hold for material particles as well, or else we wouldn't have a consistent notion of the probability that a photon "is" in a certain place and the probability that the photon would be detected in that place by a material detector. The Born rule says that probability doesn't depend on the phase of an electron's complex wavefunction $\Psi$. We could ask why the Born rule couldn't depend on some real-valued function such as $\operatorname{\arg} \Psi$ or $\mathfrak{Re} \Psi$. There is a good physical reason for this. There is an uncertainty relation between phase $\phi$ and particle number $n$ (Carruthers 1968). For fermions, the uncertainty in $n$ in a given state is always small, so the uncertainty in phase is very large. This means that the phase of the electron wavefunction can't be observable (Peierls 1979). I've seen the view expressed that the many-worlds interpretation (MWI) is unable to explain the Born rule, and that this is a problem for MWI. I disagree, since none of the arguments above depended in any way on the choice of an interpretation of quantum mechanics. In the Copenhagen interpretation (CI), the Born rule typically appears as a postulate, which refers to the undefined primitive notion of "measurement;" I don't consider this an explanation. We often visualize the MWI in terms of a bifurcation of the universe at the moment when a "measurement" takes place, but this discontinuity is really just a cartoon picture of the smooth process by which quantum-mechanical correlations spread out into the universe. In general, interpretations of quantum mechanics are explanations of the psychological experience of doing quantum-mechanical experiments. Since they're psychological explanations, not physical ones, we shouldn't expect them to explain a physical fact like the Born rule. 
Aaronson, "Is Quantum Mechanics An Island In Theoryspace?," http://arxiv.org/abs/quant-ph/0401062 Bialynicki-Birula, "Photon wave function", 2005, http://arxiv.org/abs/quant-ph/0508202 Carruthers and Nieto, "Phase and Angle Variables in Quantum Mechanics", Rev Mod Phys 40 (1968) 411; copy available at http://www.scribd.com/doc/147614679/Phase-and-Angle-Variables-in-Quantum-Mechanics (may be illegal, or may fall under fair use, depending on your interpretation of your country's laws) Caves, Fuchs, and Schack, "Quantum probabilities as Bayesian probabilities", 2001, http://arxiv.org/abs/quant-ph/0106133; see also Scientific American, June 2013 Fleuriot, A Combination of Geometry Theorem Proving and Nonstandard Analysis with Application to Newton's Principia, Springer, 2001 Halvorson and Clifton, "No place for particles in relativistic quantum theories?", 2001, http://philsci-archive.pitt.edu/195/ Peierls, Surprises in Theoretical Physics, section 1.3 share|improve this answer Since the Born rule has to hold for the electromagnetic "wavefunction," and electromagnetic waves can interact with matter, it clearly has to hold for material particles as well, or else we wouldn't have a consistent notion of the probability that a photon "is" in a certain place and the probability that the photon would be detected in that place by a material detector. Could you explain this in more detail? –  Sebastian Henckel Aug 8 '13 at 20:52 @SebastianHenckel: This is not completely thought out and may be wrong. But suppose that the rule for electrons is not the Born rule but a rule saying that probability is $\propto|\Psi|^p$, where $p\ne 2$. If you scatter an EM wave off of an electron, they interact through some wave equation such that the scattered part of $\Psi$ is proportional to the amplitude of the EM wave: amplitude is proportional to amplitude. But then the electron is acting like a detector, and $p\ne 2$ means that the probability of detection isn't proportional to the probability that the photon was there. –  Ben Crowell Aug 8 '13 at 21:08 I like this argument. The interaction between the photon and the electron however is quantum electrodynamics all the way through, and that's something I don't know much about. However, thanks for making a connection between electrons and waves I never thought about. The pure de Broglie argument always seemed very ad hoc, and this makes it somewhat more plausible. –  Sebastian Henckel Aug 8 '13 at 21:30 It took me a while to read this answer. you said "The OP asks whether the Born rule follows from unitarity. It doesn't, since unitarity holds for both the Schrödinger equation and the Klein-Gordon equation, but the Born rule is valid only for the former." Isn't Born rule applicable even in Relativistic Quantum mechanics(Any field theory in general), not in the sense of KG equation but the KG field. Also Would you comment on my recent answer on a related topic, physics.stackexchange.com/questions/76132/… –  Prathyush Sep 4 '13 at 8:54 @Prathyush: My relativistic field theory is pretty weak, so if you want a really coherent explanation of why the Born rule doesn't apply to the KG equation, you're probably better off posting that as a question and letting someone more competent answer. But basically I think the concept is that in relativistic QM, we have to give up on the idea of having eigenstates of position, so the whole Copenhagen-ish interpretation of a position measurement as projecting the wavefunction down to a delta function doesn't really work. 
–  Ben Crowell Sep 4 '13 at 15:45 It is independent, but it is not fundamental, as it applies only to highly idealized kinds of measurements. (Realistic measurements are governed by POVMs instead.) In fact, the role of Born's rule in quantum mechanics is marginal (after the standard introduction and the derivation of the notion of expectation). It is hardly ever used for the analysis of real problems, except to shed light on problems in the foundations of quantum mechanics.
One day I will learn about POVMs; it's been on my list of to-dos for a long time. –  Prathyush Nov 24 '12 at 19:23 POVMs can be regarded as Born-type measurements in a larger space, so you're back where you started. –  A.O.Tell Nov 24 '12 at 22:45 @A.O.Tell: On the formal level, yes. But in this larger space, one never does any measurements that would deserve that name. –  Arnold Neumaier Nov 26 '12 at 9:37 That statement would require an exact definition of what a measurement is and how it is applied to a subsystem. Also, it makes no practical difference. If you know how a Born-style measurement works you understand how a POVM works. –  A.O.Tell Nov 26 '12 at 12:39 @A.O.Tell: It is enough to know what is really measured. Measure the mass of the sun, the half-life of technetium, or the width of a spectral line in the Balmer series, and try to express it in terms of the Born rule! –  Arnold Neumaier Nov 26 '12 at 12:55
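As a purely illustrative aside to the thread above (a numerical restatement, not a derivation), the following Python snippet shows the Born rule applied to a two-level state and the fact that a unitary map preserves the total probability (the norm) while changing the individual outcome probabilities; the particular state and rotation are arbitrary examples.

```python
import numpy as np

# A normalized two-level state |psi> = a|0> + b|1>
psi = np.array([3.0, 4.0j]) / 5.0

# Born rule: probability of outcome i is |<i|psi>|^2 in the chosen basis
probs = np.abs(psi) ** 2
print(probs, probs.sum())          # [0.36 0.64], total 1.0

# A unitary map (here a real rotation mixing the two basis states)
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
psi_out = U @ psi
probs_out = np.abs(psi_out) ** 2
print(probs_out, probs_out.sum())  # individual probabilities change, total stays 1.0

# Norm preservation is exactly unitarity: U^dagger U = identity
assert np.allclose(U.conj().T @ U, np.eye(2))
```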
Registration starts October 8 at 7:45am in the Clough Undergraduate Learning Common (CULC) building. The conference will start on October 8 at 8:15am with an opening address by College of Sciences Dean Paul Goldbart in CULC room 152. The conference will end on October 11 at noon with a closing address by Prof. Evans Harrell in CULC room 152.

Plenary Session

All plenary lectures and short talks will take place in the Clough Undergraduate Learning Common (CULC) room 152.

8:30am Plenary Lecture
Michael Aizenman
Title: Emergent Pfaffian Relations in Quasi-Planar Models

9:30am Coffee Break

10:00am Short Talk/Poster

Nelson Javier Buitrago Aza
Title: Large Deviation Principles for Weakly Interacting Fermions
Abstract: We show that the Gärtner–Ellis scaled cumulant generating function of fluctuation measures associated to KMS states of weakly interacting fermions on the lattice can be written as the limit of a sequence of logarithms of Gaussian Grassmann–Berezin integrals. Moreover, the covariances of the Gaussian integrals have a uniform determinant bound. As a consequence, the Grassmann integral representation may be used to obtain convergent expansions of the generating function in terms of powers of its parameter. The derivation and analysis of these expansions are studied via Brydges–Kennedy tree expansions. The proof of uniformity of the determinant bound given here uses Hölder inequalities for Schatten norms as a key argument.

Søren Fournais
Title: The semi-classical limit of large fermionic systems
This is based on joint work with Mathieu Lewin and Jan Philip Solovej.

David Müller
Title: Lieb-Thirring and Cwikel-Lieb-Rozenblum inequalities for perturbed graphene with a Coulomb impurity
Abstract: We study the two-dimensional massless Coulomb-Dirac operator restricted to its positive spectral subspace and prove estimates on the negative eigenvalues created by electromagnetic perturbations.

Takuya Mine
Title: Spectral shift function for the magnetic Schroedinger operators
Abstract: The spectral shift function (SSF) for the Schroedinger operator is usually defined only when the scalar potential decays sufficiently fast. In the case of the magnetic Schroedinger operator in the Euclidean plane, the vector potential has long-range decay if the total magnetic flux is non-zero, and then the SSF cannot be defined in the ordinary sense. In this talk, we show that the SSF for the magnetic Schroedinger operator in the Euclidean plane can be defined in some weak sense, even if the total magnetic flux is non-zero. In particular, we give an explicit formula for the SSF for the Aharonov-Bohm magnetic field.

11:00am Plenary Lecture
Alessandro Giuliani
Title: Universality of transport coefficients in the Haldane-Hubbard model
Abstract: In this talk I will review some selected aspects of the theory of interacting electrons on the honeycomb lattice, with special emphasis on the mathematics of the Haldane-Hubbard model: this is a model for interacting electrons on the hexagonal lattice, in the presence of nearest and next-to-nearest neighbor hopping, as well as of a transverse dipolar magnetic field. I will discuss the key properties of its phase diagram, most notably the phase transition from a standard insulating phase to a Chern insulator, across a critical line, where the system exhibits semi-metallic behavior.
I will also review the universality of its transport coefficients, including the quantization of the transverse conductivity within the gapped phases, and that of the longitudinal conductivity on the critical line. The methods of proof combine constructive Renormalization Group methods with the use of Ward Identities and the Schwinger-Dyson equation. Based on joint works with Vieri Mastropietro, Marcello Porta, Ian Jauslin. Yoshiko OgataTitle: A class of asymmetric gapped Hamiltonians on quantum spin chains and its characterization. Abstract: Recently, the classification problem of gapped Hamiltonians attracts a lot of attentions. We consider this problem for a class of Hamiltonians on quantum spin chains. This class is characterized by five qualitative properties. On the other hand, the Hamiltonians are MPS(Matrix product state)-Hamiltonian with some structure. This structure enable us to classify them. Conference Photo Coffee Break Kimmy CushmanTitle: Lie Algebras in Quantum Field Theories Abstract: We discuss the progress made in the study of Lie Algebras through the lens of quantum field theory. We compare the physical and mathematical interpretations of Clifford Algebras. We plan to investigate the effects of choice of bases and new bases representations for this algebra on Dirac Spinor theories. Shingo KukitaTitle: non-Markovian dynamics from singular perturbation method Abstract: We derived a complete positive map representing non-Markovian dynamics for a finite dimensional system by using a singular perturbation method. A mixing property of the environment coupled with the target system plays an important role. In this presentation, we will explain our derivation of the complete positive map and compere its dynamics with that of other non-Markovian master equations. Josiah ParkTitle: Asymptotics for Steklov Eigenvalues on Non-Smooth Domains Abstract: We study eigenfunctions and eigenvalues of the Dirichlet-to-Neumann operator on boxes in Euclidean spaces. We consider bounds on the counting function for the Steklov spectrum on such domains. Diane PelejoTitle: Maximum Fidelity under Mixed Unitary or Unital Quantum Channels Abstract: Let $\rho_1$ and $\rho_2$ be fixed quantum states. We describe a simple algorithm to determine the maximum value for the fidelity $F(\rho_1,\Phi(\rho))$ between $\rho_1$ and an image $\Phi(\rho_2)$ of $\rho_2$ under any mixed unitary channel $\Phi$ or under any unital channel $\Phi$. Itaru Sasaki Title: Embedded Eigenvalues and Neumann-Wigner Potentials for Relativistic Schrodinger Operators Abstract: We construct Neumann-Wigner type potentials for the massive relativistic Schrodinger operator in one and three dimensions for which a strictly positive eigenvalue embedded in the continuous spectrum exists. We show that in the non-relativistic limit these potentials converge to the classical Neumann-Wigner potentials. Thus, the potentials constructed this talk can be considered as a relativistic generalization of the Neumann-Wigner potentials. Cem YuceTitle: Self-accelerating Parabolic Cylinder Waves in 1-D Abstract: We introduce a new self-accelerating wave packet solution of the Schrodinger equation in one dimension. We obtain an exact analytical parabolic cylinder wave for the inverted harmonic potential. We show that truncated parabolic cylinder waves exhibits their accelerating feature. 
Michael WeinsteinTitle: Honeycomb Schroedinger Operators in the Strong Binding Regime Abstract: We discuss the Schroedinger operator for a large class of periodic potentials with the symmetry of a hexagonal tiling of the plane. The potentials we consider are superpositions of localized potential wells, centered on the vertices of a regular honeycomb structure, corresponding to the single electron model of graphene and its artificial analogues. We consider the regime of strong binding, where the depth of the potential wells is large. Our main result is that for sufficiently deep potentials, the lowest two Floquet-Bloch dispersion surfaces, when appropriately rescaled, converge uniformly to those of the two-band tight-binding model, introduced by PR Wallace (1947) in his pioneering study of graphite. We then discuss corollaries, in the strong binding regime, on (a) the existence of spectral gaps for honeycomb potentials with PT symmetry-breaking perturbations, and (b) the existence of topologically protected edge states for honeycomb structures with "rational edges". This is joint work with CL Fefferman and JP Lee-Thorp. Svetlana JitomirskayaTitle: Quasiperiodic Schrodinger operators: sharp arithmetic spectral transitions and universal hierarchical structure of eigenfunction Abstract: We will review recent results on sharp arithmetic spectral transitions in some popular models: Harper's, extended Harper's, Maryland, as well as the general class of analytic potentials (papers joint with A. Avila, R. Han, H. Kruger, W. Liu, C. Marx, F. Yang, S. Zhang, and Q. Zhou) and then focus on a recently discovered universal hierarchical structure in the behavior of quasiperiodic eigenfunctions (joint work with W. Liu). The structure is governed by the continued fraction expansion of the frequency and explains some predictions in physics literature. Coffee Break Short Talk Matthew ChaTitle: The complete set of infinite volume ground states for Kitaev's abelian quantum double models Abstract: We study the set of infinite volume ground states of Kitaev's quantum double model on $\mathbb{Z}^2$ for an arbitrary finite abelian group $G$. In the finite volume, the ground state space is frustration-free and the low-lying excitations correspond to abelian anyons. The ribbon operators act on the ground state space to create pairs of single excitations at their endpoints. It is known that in the infinite volume these models have a unique frustration-free ground state. We show that the complete set of ground states decomposes into $|G|^2$ different charged sectors, corresponding to the different types of abelian anyons (or superselection sectors). In particular, all pure ground states are equivalent to the single excitation states. Our proof proceeds by showing that each ground state can be obtained as the weak$*$-limit of the finite volume ground states of the quantum double model with a suitable boundary term. The boundary terms allow for states which represent an excitation pair with one excitation in the bulk and one pinned to the boundary to be included in the ground state space. This is joint work with P. Naaijkens and B. Nachtergaele. Christoph FischbacherTitle: The proper dissipative extensions of a dual pair Abstract: Let A and (−B) be dissipative operators on a Hilbert space H and let (A,B) form a dual pair, i.e. A⊂B*, resp. B⊂A*. We present a method of determining the proper dissipative extensions A' of this dual pair, i.e. A⊂A'⊂B* provided that D(A)∩D(B) is dense in H. 
Applications to symmetric operators, symmetric operators perturbed by a relatively bounded dissipative operator, and more singular differential operators are discussed.

Eugene Dumitrescu
Title: Discrimination of correlated and entangling quantum channels with selective process tomography
Abstract: The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a novel protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show that selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.

Yese J. Felipe
Title: Quantum Music: Applying Quantum Theory to Music Theory and Composition
Abstract: Classical and popular music is written so that the melody, harmony, and rhythm are independent of the listener and of the instance when the piece is played, providing the same experience to all listeners. By applying concepts from quantum theory to music theory, a linear combination of melodies and harmonies can be composed, establishing a unique experience for different listeners. The application of some concepts of quantum theory and their effects on the outcome of a quantum musical composition will be discussed.

Maciej Zworski
Title: Microlocal methods in dynamical systems
Abstract: Microlocal analysis exploits mathematical manifestations of the classical/quantum (particle/wave) correspondence and has been a successful tool in spectral theory and partial differential equations. We can say that these last two fields lie on the quantum/wave side. Recently, microlocal methods have been applied to the study of classical dynamical problems, in particular of chaotic (Anosov) flows. I will illustrate this by proving that the order of vanishing of the dynamical zeta function at zero for negatively curved surfaces is given by the absolute value of the Euler characteristic (joint work with S Dyatlov).

Peter Kuchment
Title: Analytic properties of dispersion relations and spectra of periodic operators
Abstract: The talk will survey some known results and unresolved problems concerning analytic properties of dispersion relations and their role in various spectral theory problems for periodic operators of mathematical physics, such as spectral structure, embedded impurity eigenvalues, Green's function asymptotics, Liouville theorems, etc.

Coffee Break
Short Talk

Thomas Norman Dam
Title: The Spin-Boson model in the strong interaction limit
Abstract: The Spin-Boson model is a model from QFT which describes a two-level system coupled to a scalar field. In this talk, I will present new results about the strong interaction limit of the massive Spin-Boson model. As the interaction approaches infinity, one can, under very general assumptions, describe the asymptotics of the resolvent (in a suitable sense), the ground state energy, and the ground state eigenvector.
One application of these results is to show the existence of a non-degenerate excited state in the strong interaction limit and to prove that the energy of the excited state converges to the energy of the ground state. If time allows, I will also talk about the strategy for proving the above-mentioned results. This is joint work with Jacob Schach Møller.

Atsuhide Ishida
Title: Non-existence of the wave operators for the repulsive Hamiltonians
Abstract: We consider quantum systems described by the Schroedinger equation equipped with a so-called repulsive part. In this quantum system, the free dynamics of the particle has the characteristic property of dispersing at an exponential rate in time. I will report in this talk that we can find a counterexample of a slowly decaying interaction potential for which the wave operators do not exist, and we arrive at a conclusion about the borderline between the short-range and long-range cases.

Shanshan Li
Title: Continuous Time Quantum Walks in finite Dimensions
Abstract: We consider the quantum search problem with a continuous time quantum walk for networks of finite spectral dimension of the network Laplacian. For general networks of fractal (integer or non-integer) dimension, for which in general the fractal dimension is not equal to the spectral dimension, our result suggests that the spectral dimension is the scaling exponent that determines the computational complexity of the search. Our results are consistent with those of Childs and Goldstone [Phys. Rev. A 70 (2004), 022314] for lattices of integer dimension. For general fractals, we find that the Grover limit of quantum search can be obtained whenever the spectral dimension is larger than four. This complements the recent discussion of mean-field networks by Chakraborty et al. [Phys. Rev. Lett. 116 (2016), 100501] showing that for all those networks spatial search by quantum walk is optimal.

Fernando Brandao
Title: Quantum Approximate Markov Chains and the Locality of Entanglement Spectrum
Abstract: In this talk I will show that quantum many-body states satisfying an area law for entanglement have a local entanglement spectrum, i.e. the entanglement spectrum can be approximated by the spectrum of a local model acting on the boundary of the region. The result follows from a version of the Hammersley-Clifford Theorem (which states that classical Gibbs states are equivalent to Markov networks) for quantum approximate Markov chains. In particular I'll argue that those are in one-to-one correspondence with 1D quantum Gibbs states.

Special Sessions
All special session talks will take place in the Skiles building.

Saturday 10/8
Rooms: Graphs in Skiles 006; New topics in Skiles 202; Q.I. in Skiles 268; Random in Skiles 005; Many-body in Skiles 249

Pavel Exner
Title: Singular Schrödinger operators with interactions supported by sets of codimension one
Abstract: In this talk we discuss Schr\"odinger operators with singular `potentials' supported by subsets $\Gamma$ of the configuration space having codimension one. Some of them can be formally written as $-\Delta-\alpha \delta(x-\Gamma)$ with $\alpha>0$, where $\Gamma$ is a manifold in~$\mathbb{R}^d$, but we introduce also more singular interactions like $\delta'$, as well as the most general ones parametrized by a family of four functions on $\Gamma$. We discuss relations between the spectra of these operators and the geometry of $\Gamma$ using, in particular, inequalities between operators corresponding to different `potentials'.
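For orientation, the simplest instance of the singular interactions in Pavel Exner's abstract is the one-dimensional point interaction; the following standard textbook computation (not part of the talk itself) records its quadratic form and the resulting bound state.
\[
Q_\alpha[\psi] \;=\; \int_{\mathbb{R}} |\psi'(x)|^2\,dx \;-\; \alpha\,|\psi(0)|^2, \qquad \alpha>0,
\]
which is the form of the operator $-\frac{d^2}{dx^2}-\alpha\,\delta(x)$. Its negative spectrum consists of the single eigenvalue
\[
E_0 = -\frac{\alpha^2}{4}, \qquad \psi_0(x) = \sqrt{\tfrac{\alpha}{2}}\, e^{-\alpha|x|/2},
\]
while the essential spectrum is $[0,\infty)$. The codimension-one interactions $-\Delta-\alpha\delta(x-\Gamma)$ discussed in the talk generalize this construction to manifolds $\Gamma\subset\mathbb{R}^d$.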
Martin Fraas
Title: Perturbation Theory of Non-Demolition Measurements
Abstract: In a non-demolition measurement, an observable on a quantum system is measured through direct measurements on a sequence of probes subsequently interacting with the system. Recent interest in developing a theory of this process originates in the photon counting experiments of Haroche. Mathematically the problem is equivalent to the study of statistics of long products of completely positive maps, which all commute in the non-demolition case. I will describe a mathematical theory of non-demolition measurements for observables with arbitrary spectra, and a theory describing the process for small Hamiltonian perturbations of the non-demolition case. The talk is based on joint works with M. Ballesteros, N. Crawford, J. Fröhlich and B. Schubnel.

Mario Berta
Title: Multivariate Trace Inequalities
Abstract: We prove several trace inequalities that extend the Golden-Thompson and the Araki-Lieb-Thirring inequality to arbitrarily many matrices. In particular, we strengthen Lieb's triple matrix inequality. As an example application of our four-matrix extension of the Golden-Thompson inequality, we prove remainder terms for the monotonicity of the quantum relative entropy and strong subadditivity of the von Neumann entropy in terms of recoverability. We find the first explicit remainder terms that are tight in the commutative case. Our proofs rely on complex interpolation theory as well as asymptotic spectral pinching, providing a transparent approach to treat generic multivariate trace inequalities.

Houssam Abdul Rahman
Title: Entanglement and Transport in the disordered Quantum XY chain
Abstract: For a class of disordered quantum XY chains, we prove that the dynamical entanglement of a broad class of product states satisfies a constant bound. Corollaries include area laws for eigenstates and thermal states. These results correspond to the absence of information transport. We also present and discuss some new results about particle number transport and energy transport in the disordered XY chain. We will draw the relation between these results and the notion of many-body localization.

Christian Hainzl
Title: Spectral theoretic aspects of the BCS theory of superconductivity
Abstract: The critical temperature in the BCS theory of superconductivity, in the presence of external fields, is determined by a linear two-body operator. I present the corresponding operator and its properties in the case of bounded potentials as well as in the case of a constant external magnetic field.

Cesar de Oliveira
Title: Approximations of Neumann nonuniformly collapsing strips
Abstract: Consider the Neumann Laplacian in the region below the graph of $\varepsilon g(x)$ for smooth $g: [a,\infty) \to (0,\infty)$ with diverging $\lim_{x\to\infty}g(x)=\infty$. The effective operator as $\varepsilon \to 0$ is found to have Robin boundary conditions at $a$. We then recover such an effective operator through suitable uniformly collapsing regions as~$\varepsilon \to 0$; in this approach, we have (roughly) obtained norm resolvent convergence for~$g$ diverging less than exponentially, and strong resolvent convergence otherwise.

Emil Prodan
Title: A geometric identity for index theory
Abstract: The index theorem for the Hall conductivity in 2 dimensions given by Bellissard et al [J. Math. Phys. 1994] relies on a remarkable geometric identity discovered by Alain Connes just a few years before.
Relatively recently, this geometric identity was extended to higher dimensions, enabling index theorems for certain non-linear transport coefficients. This in turn confirmed the stability against strong disorder of various invariants for topological insulators. In this talk I will describe the geometrical principles behind these generalizations.

Debbie Leung
Title: Embezzlement of entanglement, conservation laws, and nonlocal games
Abstract: Consider two remote parties Alice and Bob, who share quantum correlations in the form of a pure entangled state. Without further interaction, the 'Schmidt coefficients' of the entangled state are invariant; in particular, the amount of entanglement is conserved. van Dam and Hayden found that reordering these coefficients (corresponding to allowed local operations) can effect an apparent violation of the conservation law nearly perfectly, a phenomenon called 'embezzlement'. We discuss how the same mathematics can explain coherent manipulation of spins in NMR and other approximate violations of conservation laws. We show how this phenomenon gives rise to a quantum generalization of nonlocal games that cannot be won with a finite amount of entanglement. (Joint work with Ben Toner, John Watrous and Jesse Wang.)

Dhriti Dolai
Title: Spectral Statistics of Random Schroedinger Operators with Non-Ergodic Random Potential
Abstract: It is known from an earlier result of Gordon-Jaksic-Molchanov-Simon [1] that the spectrum of random Schrodinger operators with unbounded (non-stationary) potentials is pure point. Recently we obtained the eigenvalue statistics for this model, and it turns out that the statistics is Poisson. This is an analogue of Minami's work on stationary potentials [2]. This is a joint work with Anish Mallick.
[1] Gordon, Y. A.; Jaksic, V.; Molchanov, S.; Simon, B.: Spectral properties of random Schrodinger operators with unbounded potentials, Comm. Math. Phys. 157(1), 23-50, 1993.
[2] Minami, Nariyuki: Local Fluctuation of the Spectrum of a Multidimensional Anderson Tight Binding Model, Commun. Math. Phys. 177(3), 709-725, 1996.
[3] Dolai, Dhriti; Mallick, Anish: Spectral Statistics of Random Schrodinger Operators with Unbounded Potentials, arXiv:1506.07132 [math.SP].
[4] Combes, Jean-Michel; Germinet, Francois; Klein, Abel: Generalized Eigenvalue-Counting Estimates for the Anderson Model, J. Stat. Physics 135(2), 201-216, 2009.

Marius Lemm
Title: Condensation of fermion pairs in a domain
Abstract: We consider a gas of fermions at zero temperature and low density, interacting via a microscopic two-body potential which admits a bound state. The particles are confined to a domain with Dirichlet (i.e. zero) boundary conditions. Starting from the microscopic BCS theory, we derive an effective macroscopic Gross-Pitaevskii (GP) theory describing the condensate of fermion pairs. The GP theory also has Dirichlet boundary conditions. Along the way, we prove that the GP energy, defined with Dirichlet boundary conditions on a bounded Lipschitz domain, is continuous under interior and exterior approximations of that domain. This is joint work with Rupert L. Frank and Barry Simon.

Claudio Cacciapuoti
Title: Existence of Ground State for the NLS on Star-like Graphs
Abstract: We consider a nonlinear Schrödinger equation (NLS) on a star-like graph (a graph composed of a compact core to which a finite number of half-lines are attached). At the vertices of the graph, interactions of delta-type can be present, and an overall external potential is admitted.
Our goal is to show that the NLS dynamics on a star-like graph admits a ground state of prescribed mass $m$ under mild and natural hypotheses. By ground state of mass $m$ we mean a minimizer of the NLS energy functional constrained to the manifold of mass ($L^2$-norm) equal to $m$. When it exists, the ground state is an orbitally stable standing wave for the NLS evolution. We prove that a ground state exists whenever the quadratic part of the energy admits a simple isolated eigenvalue at the bottom of the spectrum (the linear ground state) and $m$ is sufficiently small. This is a major generalization of a result previously obtained for a graph with a single vertex (a star graph) with a delta interaction in the vertex and without potential terms. The main tools of the proof are concentration-compactness and bifurcation techniques. This is joint work with Domenico Finco and Diego Noja.

Rainer Dick
Title: Dressing up for length gauge: Mathematical aspects of a debate in quantum optics
Abstract: A debate about the correct form of the interaction Hamiltonian in quantum optics has been going on since Lamb's investigation of optical line shapes in 1952. Surprisingly, the debate has never been settled, but rather intensified in recent years with the observation of phenomena on atomic time scales in attosecond spectroscopy. In short, the debate concerns the description of matter-photon interactions through vector potentials ("velocity gauge") or electric fields ("length gauge") in the Schrödinger equation. Observational evidence is inconclusive, since the observationally preferred interaction terms depend on the observed systems and parameters. Indeed, more experimental observations seem to favor the length gauge, which is surprising from a fundamental theory perspective. I will review the problem both from a theoretical and an experimental perspective, and then point out that the underlying transformation between velocity gauge and length gauge is actually an incomplete gauge transformation which should rather be addressed as a basic dressing operation for the Schrödinger field. This observation and a study of the coupled Schrödinger-Maxwell system will help us to understand why predictions in velocity gauge and length gauge differ, and why length gauge may be preferred in quantum optical systems.

Beth Ruskai
Title: Extreme Points of Unital Quantum Channels
Abstract: Several new classes of extreme points of unital and trace-preserving completely positive (CP) maps are analyzed. One class is not extreme in either the convex set of unital CP maps or the set of trace-preserving CP maps, and is factorizable. Another class is extreme for both the set of unital CP maps and the set of trace-preserving CP maps, except for certain critical parameters. For those parameters, the linear dependence of the matrices in the Choi product condition is associated with representations of the symmetric group.

Milivoje Lukic
Title: KdV equation with almost periodic initial data
Abstract: The KdV equation is known to be integrable for some classes of initial data, such as decaying, periodic, and finite-gap quasiperiodic. In this talk, we will describe recent progress for almost periodic initial data, centered around a conjecture of Percy Deift that the solution is almost periodic in time. We will discuss the proof of existence, uniqueness, and almost periodicity in time, in the regime of absolutely continuous and sufficiently 'thick' spectrum.
In particular, this result proves Deift's conjecture for small analytic quasiperiodic initial data with Diophantine frequency. The talk is based on joint work with Ilia Binder, David Damanik, and Michael Goldstein.

Marcello Porta
Title: Mean field evolution of fermionic systems
Abstract: In this talk I will discuss the dynamics of interacting fermionic systems in the mean field regime. Compared to the bosonic case, fermionic mean field scaling is naturally coupled with a semiclassical scaling, making the analysis more involved. As the number of particles grows, the quantum evolution of the system is expected to be effectively described by Hartree-Fock theory. The next degree of approximation is provided by a classical effective dynamics, corresponding to the Vlasov equation. I will consider initial data which are close to quasi-free states, at zero (pure states) or at positive temperature (mixed states), with an appropriate semiclassical structure. Under mild regularity assumptions on the interaction potential, I will show that the time evolution of such initial data stays close to a quasi-free state, with reduced one-particle density matrix given by the solution of the time-dependent Hartree-Fock equation. The result can be extended to Coulomb interactions, under the assumption that the solution of the time-dependent Hartree-Fock equation preserves the semiclassical structure of the initial data. If time permits, the convergence from the time-dependent Hartree-Fock equation to the Vlasov equation will also be discussed. The results hold for all semiclassical times, and give effective bounds on the rate of convergence towards the effective dynamics as the number of particles goes to infinity.

3:30pm Coffee Break (in CULC)

Zhiqin Lu
Title: Ground State of Quantum Layers
Abstract: I will give a survey of the existence of the ground state of quantum layers in this talk, and I will also present some new results and discuss the relation of this spectral problem with differential geometry. Some of the results are joint with Julie Rowlett and David Krejcirik.

Vit Jakubsky
Title: On dispersion of wave packets in Dirac materials
Abstract: We show that a wide class of quantum systems with translational invariance can host dispersionless, soliton-like wave packets. We focus on settings where the effective two-dimensional Hamiltonian acquires the form of a Dirac operator. The proposed framework for the construction of the dispersionless wave packets is illustrated on systems with topologically nontrivial effective mass. Our analytical predictions are accompanied by a numerical analysis, and possible experimental realizations are discussed.

Mark Wilde
Title: Universal Recoverability in Quantum Information
Abstract: The quantum relative entropy is well known to obey a monotonicity property (i.e., it does not increase under the action of a quantum channel). Here we present several refinements of this entropy inequality, some of which have a physical interpretation in terms of recovery from the action of the channel. The recovery channel given here is explicit and universal, depending only on the channel and one of the arguments to the relative entropy. Time permitting, we discuss several applications to the 2nd law of thermodynamics, uncertainty relations, and Gaussian quantum information.
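As a numerical companion to the monotonicity property recalled in Mark Wilde's abstract, the following sketch checks D(rho||sigma) >= D(Phi(rho)||Phi(sigma)) for a qubit depolarizing channel; the particular states and channel are hypothetical choices made only for illustration, not objects from the talk.

import numpy as np
from scipy.linalg import logm

def rel_entropy(rho, sigma):
    # quantum relative entropy D(rho || sigma) = Tr[rho (log rho - log sigma)]
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

def depolarize(rho, p):
    # qubit depolarizing channel: Phi(rho) = (1 - p) rho + p I/2
    return (1 - p) * rho + p * np.eye(2) / 2

# two full-rank example states (hypothetical inputs)
rho   = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)
sigma = np.array([[0.6, -0.1], [-0.1, 0.4]], dtype=complex)

p = 0.3
lhs = rel_entropy(rho, sigma)
rhs = rel_entropy(depolarize(rho, p), depolarize(sigma, p))
print("D(rho || sigma)           =", lhs)
print("D(Phi(rho) || Phi(sigma)) =", rhs)   # monotonicity: rhs <= lhs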
Tatyana Shcherbyna
Title: Local regime of 1d random band matrices
Abstract: Random band matrices (RBM) are natural intermediate models to study eigenvalue statistics and quantum propagation in disordered systems, since they interpolate between mean-field type Wigner matrices and random Schrodinger operators. In particular, RBM can be used to model the Anderson metal-insulator phase transition (crossover) even in 1d. In this talk we will discuss an application of the supersymmetric method (SUSY) to the analysis of the bulk local regime of some specific types of RBM. We present rigorous SUSY results about the crossover for 1d RBM on the level of characteristic polynomials, as well as some progress in the study of the density of states and the usual second correlation function.

Michele Correggi
Title: Local Density Approximation for the Almost-bosonic Anyon Gas
Abstract: We study a one-parameter one-body energy functional with a self-consistent magnetic field, which describes a quantum gas of almost-bosonic anyons in the average-field approximation. For the homogeneous gas we prove the existence of the thermodynamic limit of the energy at fixed effective statistics parameter and the independence of such a limit from the shape of the domain. This result is then used in a local density approximation to derive an effective Thomas-Fermi-like model for the trapped anyon gas in the limit of a large effective statistics parameter (i.e., "less-bosonic" anyons). Joint work with D. Lundholm and N. Rougerie.

Hiroaki Niikuni
Title: Schrödinger operators on a zigzag supergraphene-based carbon nanotube
Abstract: In this talk, we study the spectrum of a periodic Schrödinger operator on a zigzag super carbon nanotube, which is a generalization of the zigzag carbon nanotube. We prove that its absolutely continuous spectrum has the band structure. Moreover, we show that its eigenvalues with infinite multiplicities consist of the Dirichlet eigenvalues and points embedded in the spectral bands of some corresponding Hill operator. We also give the asymptotics for the spectral band edges.

Brian Swingle
Title: Tensor networks, entanglement, and geometry
Abstract: Tensor networks are entanglement-based tools which are useful for representing quantum many-body states, especially thermal states of local Hamiltonians. I will discuss some recent results constructing tensor networks for a wide variety of states of quantum matter. I will also briefly describe recent conjectures relating tensor networks and entanglement to the emergence of quantum gravity via the AdS/CFT correspondence. Based in part on 1607.05753, 1602.02805, and 1407.8203 with John McGreevy and Shenglong Xu.

Jeongwan Haah
Title: Local Approximate Quantum Error Correction
Abstract: We study the fundamental limits on reliably storing quantum information in lattices of qubits by deriving tradeoff bounds for approximate quantum error correcting codes. We introduce a notion of local approximate correctability and code distance, and give a number of equivalent formulations thereof, generalizing error correction criteria in exact settings. Our tradeoff bounds relate the spatial dimension of the lattice, the number of physical qubits, the number of encoded qubits, the code distance, the accuracy parameter that quantifies how well erasure can be recovered, and the locality parameter that specifies the length scale at which the recovery operates. Connections to topological order will be discussed. Joint work with S. Flammia, M. Kastoryano, and I. Kim.
Joe Chen
Title: Spectral decimation and its application to spectral analysis on infinite fractal lattices
Abstract: The method of spectral decimation originated with Rammal and Toulouse in the 80s, and has since been developed to tackle spectral problems on self-similar fractals by Bellissard, Fukushima, Shima, Malozemov, Teplyaev, etc. In this talk we present two concrete spectral problems on infinite fractal lattices which are inspired by the study of quasi-periodic and random Schrodinger operators. In both problems, we use spectral decimation in an essential way, and reduce the problem to the analysis of a certain 1-dimensional complex dynamical system. We hope that these models can help elucidate the mechanisms behind the spectral properties of more complicated Schrodinger operators. 1) On the integer half-line ($\mathbb{Z}_+$) endowed with a fractal self-similar Laplacian parametrized by a single parameter $p\in (0,1)$, we prove that the Laplacian spectrum is purely singularly continuous whenever $p\neq \frac{1}{2}$. (If $p=\frac{1}{2}$ one recovers the usual Laplacian on $\mathbb{Z}_+$, whose spectrum is absolutely continuous.) To our knowledge this may be the simplest toy model exhibiting purely singularly continuous spectrum. 2) On the infinite Sierpinski gasket lattice (SGL), we establish an exponential decay of the resolvent associated with the Laplace or Schrodinger operator, based on spectral decimation and a heat kernel upper estimate. This leads to a proof of Anderson localization on SGL by the methods of Simon-Wolff and Aizenman-Molchanov. This is based on joint works with S. Molchanov (UNC-Charlotte) and A. Teplyaev (UConn).

Rupert Frank
Title: Derivation of an effective evolution equation for a strongly coupled polaron
Abstract: Fröhlich's polaron Hamiltonian describes an electron coupled to the quantized phonon field of an ionic crystal. We show that in the strong coupling limit the dynamics of the polaron is approximated by an effective non-linear partial differential equation due to Landau and Pekar, in which the phonon field is treated as a classical field. The talk is based on joint works with B. Schlein and with Z. Gang.

Petr Siegl
Title: Non-self-adjoint graphs
Abstract: On finite metric graphs, we consider Laplace operators subject to various classes of non-self-adjoint boundary conditions imposed at graph vertices. We investigate spectral properties, the existence of a Riesz basis of projectors, and similarity transforms to self-adjoint Laplacians. Among other things, we describe a simple way to relate the similarity transforms between Laplacians on certain graphs to elementary similarity transforms between the matrices defining the boundary conditions. The talk is based on: [1] A. Hussein, D. Krejcirik and P. Siegl: Non-self-adjoint graphs, Transactions of the AMS, 367, (2015) 2921-2957.

Volkher Scholz
Title: Matrix product approximations to multipoint functions in two-dimensional conformal field theory
Abstract: Matrix product states (MPS) illustrate the suitability of tensor networks for the description of interacting many-body systems: ground states of gapped 1-D systems are approximable by MPS, as shown by Hastings [J. Stat. Mech. Theor. Exp., P08024 (2007)]. In contrast, whether MPS and more general tensor networks can accurately reproduce correlations in critical quantum systems, respectively quantum field theories, has not been established rigorously.
Ample evidence exists: entropic considerations provide restrictions on the form of suitable Ansatz states, and numerical studies show that certain tensor networks can indeed approximate the associated correlation functions. Here we provide a complete positive answer to this question in the case of MPS and 2D conformal field theory: we give quantitative estimates for the approximation error when approximating correlation functions by MPS. Our work is constructive and yields an explicit MPS, thus providing both suitable initial values as well as a rigorous justification of variational methods.

Peter Pickl
Title: Derivation of the Maxwell-Schrödinger Equations from the Pauli-Fierz Hamiltonian
Abstract: We consider the spinless Pauli-Fierz Hamiltonian which describes a quantum system of non-relativistic identical particles coupled to the quantized electromagnetic field. We study the time evolution in a mean-field limit where the number N of charged particles gets large while the coupling to the radiation field is rescaled by 1/√N. At time zero we assume that almost all charged particles are in the same one-body state (a Bose-Einstein condensate) and we assume also the photons to be close to a coherent state. We show that at later times and in the limit N -> ∞ the charged particles as well as the photons exhibit condensation, with the time evolution approximately described by the Maxwell-Schrödinger system, which models the coupling of a non-relativistic particle to the classical electromagnetic field.

Boris Gutkin
Title: Quantum chaos in many-particle systems
Abstract: Upon quantisation, systems with classically chaotic dynamics exhibit universal spectral and transport properties effectively described by Random Matrix Theory. Semiclassically this remarkable phenomenon can be attributed to the existence of pairs of classical orbits with small action differences. So far, however, the scope of the theory has, by and large, been restricted to single-particle systems. I will discuss an extension of this program to chaotic systems with a large number of particles. The crucial step is introducing a two-dimensional symbolic dynamics which allows an effective representation of periodic orbits in many-particle chaotic systems with local interactions. By using it we show that for a large number of particles the dominant correlation mechanism among periodic orbits essentially differs from that of the single-particle theory. Its implications for spectral properties of many-particle quantum systems will be discussed as well.

Nicolas Rougerie
Title: Rigidity of the Laughlin liquid
Abstract: The Laughlin state is a well-educated ansatz for the ground state of 2D particles subjected to large magnetic fields and strong interactions. It is important to understand the rigidity of its response to perturbations. Indeed, this is a crucial ingredient in the Fractional Quantum Hall Effect, where the Laughlin state is the cornerstone of our current theoretical understanding. In this talk we shall consider general N-particle wave functions that have the form of a product of the Laughlin state and an analytic function of the N variables. This is the most general form of a wave function that can arise through a perturbation of the Laughlin state by external potentials or impurities, while staying in the lowest Landau level and maintaining the strong correlations of the original state. We show that the perturbation can only shift or lower the 1-particle density but nowhere increase it above a maximum value.
Regardless of the analytic prefactor, the density satisfies the same bound as the Laughlin function itself in the limit of large particle number. Consequences of this incompressibility bound for the response of the Laughlin state to external fields will be discussed. Joint work with Elliott H. Lieb and Jakob Yngvason.

Sunday 10/9
Rooms: Graphs in Skiles 006; New topics in Skiles 202; Q.I. in Skiles 268; Random in Skiles 005; Many-body in Skiles 249

James Kennedy
Title: Eigenvalue estimates for quantum graphs
Abstract: A classical problem in the analysis of (partial) differential operators such as the Laplacian on domains or manifolds is to understand how their eigenvalues depend on the underlying geometry of the object on which they are defined. This dependence can take various forms, such as asymptotics or trace formulae, but we will be interested in bounds on the (low) eigenvalues of the operator. A basic example of this is the Faber--Krahn inequality, which states that the first eigenvalue of the Dirichlet Laplacian is smallest among all domains with given volume when the domain is a ball. Interest in problems of this nature on metric graphs, which in the prototype case simply concerns estimating the eigenvalues of the Laplacian with Kirchhoff conditions at the vertices, seems only to have developed in the last couple of years (with a few notable exceptions, such as works of Nicaise and Friedlander). This is also at odds with the relatively well-developed parallel body of literature on the eigenvalues of discrete and normalised Laplacians. This talk will be a first attempt to provide a natural framework for such eigenvalue estimates in the easiest case of the spectral gap of the Kirchhoff Laplacian: which geometric and algebraic quantities of a graph, such as total length, diameter, number of edges or vertices, connectivity, Betti number etc., enable one to control the eigenvalue(s), and how? Which bounds are possible? We shall attempt to demonstrate that on the one hand such questions can be surprisingly subtle, but on the other, one can come a long way armed with little more than elementary variational principles, a workhorse of PDE theory which becomes very powerful on graphs, but seems to have been largely overlooked by much of the graph theory community until recently. This talk is based on joint, ongoing work with Gregory Berkolaiko, Pavel Kurasov, Gabriela Malenova and Delio Mugnolo.

Anushya Chandran
Title: Heating in periodically driven Floquet systems
Abstract: Periodically driven quantum systems (Floquet systems) do not have a conserved energy. Thus, statistical mechanical lore holds that if they thermalize, it must be to infinite temperature. I will first show this holds in undriven systems that satisfy the eigenstate thermalization hypothesis. I will then present two counter-examples to infinite temperature heating. The first is the bosonic O(N) model at infinite N, in which the steady states are paramagnetic and have non-trivial correlations. The second is the Clifford circuit model, which can fail to heat depending on the choice of circuit elements. The resulting steady states can then be localized or delocalized but not ergodic. Such models shed light on the nature of interacting Floquet localization.

Isaac Kim
Title: Markovian marginals
Abstract: We introduce the notion of so-called Markovian marginals, which is a natural framework for constructing solutions to the quantum marginal problem.
We show that a set of reduced density matrices on overlapping supports necessarily has a global state that is compatible with all the given reduced density matrices, provided that they satisfy certain (nonlinear) local constraints.

Vojkan Jaksic
Title: Adiabatic theorems and Landauer's principle in quantum statistical mechanics
Abstract: The Landauer principle asserts that the energy cost of erasure of one bit of information by the action of a thermal reservoir in equilibrium at temperature T is never less than k_B T log 2. We discuss Landauer's principle for quantum statistical models describing a finite level quantum system S coupled to an infinitely extended thermal reservoir R, and link the saturation of Landauer's bound to adiabatic theorems in quantum statistical mechanics (for states and relative entropy). Furthermore, by extending the adiabatic theorem to Renyi's relative entropy, we extend the Landauer principle to the level of the Full Counting Statistics (FCS) of energy transfer between S and R. This allows us to elucidate the nature of Landauer's principle FCS fluctuations. This talk is based on joint works with Tristan Benoist, Martin Fraas, and Claude-Alain Pillet.

Ian Jauslin
Title: Ground state construction of bilayer graphene
Abstract: We consider a model of weakly-interacting electrons in bilayer graphene. Bilayer graphene is a 2-dimensional crystal consisting of two layers of carbon atoms in a hexagonal lattice. Our main result is an expression of the free energy and two-point Schwinger function as convergent power series in the interaction strength. In this talk, I discuss the properties of the non-interacting model, and exhibit three energy regimes in which the energy bands are qualitatively different. I then sketch how this decomposition may be used to carry out the renormalization group analysis used to prove our main result. This is joint work with Alessandro Giuliani.

Ram Band
Title: Quantum graphs which optimize the spectral gap
Abstract: A finite discrete graph is turned into a quantum (metric) graph once a finite length is assigned to each edge and the one-dimensional Laplacian is taken to be the operator. We study the dependence of the spectral gap (the first positive Laplacian eigenvalue) on the choice of edge lengths. In particular, starting from a certain discrete graph, we seek the quantum graph for which an optimal (either maximal or minimal) spectral gap is obtained. We fully solve the minimization problem for all graphs. We develop tools for investigating the maximization problem and solve it for some families of graphs. The talk is based on a joint work with Guillaume Levy.

Pieter Naaijkens
Title: Operator algebras and data hiding in topologically ordered systems
Abstract: The total quantum dimension is an invariant of topological phases, related to the anyonic excitations a topologically ordered state supports. In this talk I will discuss the total quantum dimension in the thermodynamic limit of topologically ordered quantum spin systems. In particular, I will discuss how the anyons can be used to hide data in the state. While not a practical way of data hiding, it sheds new light on the total quantum dimension: in particular, I will outline how deep results from operator algebras (and subfactors in particular) can be used to quantify how much information can be hidden, and how this is related to the quantum dimension. Joint work with Leander Fiedler and Tobias Osborne.
Ke Li
Title: Discriminating quantum states: the multiple Chernoff distance
Abstract: Suppose we are given n copies of one of the quantum states {rho_1, ..., rho_r}, with an arbitrary prior distribution that is independent of n. The multiple hypothesis Chernoff bound problem concerns the minimal average error probability P_e in detecting the true state. It is known that P_e ~ exp(-En) decays exponentially to zero. However, this error exponent E is generally unknown, except for the case r=2. In this talk, I will give a solution to the long-standing open problem of identifying the above error exponent, by proving Nussbaum and Szkola's conjecture that E = min_{i \neq j} C(rho_i, rho_j). The right-hand side of this equality is called the multiple quantum Chernoff distance, and C(rho_i, rho_j) := max_{0 <= s <= 1} {-log Tr rho_i^s rho_j^(1-s)} has been previously identified as the optimal error exponent for testing two hypotheses, rho_i versus rho_j. (A small numerical sketch of this quantity for a pair of qubit states appears at the end of this block.) The main ingredient of our proof is a new upper bound for the average error probability, for testing an ensemble of finite-dimensional, but otherwise general, quantum states. This upper bound, up to a states-dependent factor, matches the multiple-state generalization of Nussbaum and Szkola's lower bound. Specialized to the case r=2, we give an alternative proof of the achievability of the binary-hypothesis Chernoff distance, which was originally proved by Audenaert et al.

Abel Klein
Title: Eigensystem multiscale analysis for Anderson localization in energy intervals I
Abstract: We perform an eigensystem multiscale analysis for proving localization (pure point spectrum with exponentially decaying eigenfunctions, dynamical localization) for the Anderson model in an energy interval. In particular, it yields localization for the Anderson model in a nonempty interval at the bottom of the spectrum. This eigensystem multiscale analysis in an energy interval treats all energies of the finite volume operator at the same time, establishing level spacing and localization of eigenfunctions with eigenvalues in the energy interval in a fixed box with high probability. In contrast to the usual strategy, we do not study finite volume Green's functions. Instead, we perform a multiscale analysis based on finite volume eigensystems (eigenvalues and eigenfunctions). In any given scale we only have decay for eigenfunctions with eigenvalues in the energy interval, and no information about the other eigenfunctions. For this reason, going to a larger scale requires new arguments that were not necessary in our previous eigensystem multiscale analysis for the Anderson model at high disorder, where in a given scale we have decay for all eigenfunctions.

Phan Thanh Nam
Title: Stability of 2D focusing many-boson systems
Abstract: We consider a 2D quantum system of N bosons, interacting via a pair potential of the form $N^{2\beta-1}w(N^\beta (x-y))$. In the focusing case $w<0$, the stability of the second kind of the system is not obvious. We will show that if the system is trapped by an external potential $|x|^s$ and $\beta<(s+1)/(s+2)$, then the leading order behavior of ground states in the large N limit is described by the corresponding cubic nonlinear Schr\"odinger energy functional. In particular, our result covers the dilute regime $\beta>1/2$, where the range of the interaction is much smaller than the average distance between particles. This is joint work with Mathieu Lewin and Nicolas Rougerie.
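As referenced in Ke Li's abstract above, the binary quantum Chernoff distance admits a direct numerical evaluation. The following sketch computes C(rho_1, rho_2) = max_{0 <= s <= 1} {-log Tr rho_1^s rho_2^(1-s)} for two full-rank qubit states by a simple grid search over s; the states are hypothetical examples chosen only for illustration.

import numpy as np

def mat_power(rho, s):
    # fractional power of a positive definite Hermitian matrix via eigendecomposition
    vals, vecs = np.linalg.eigh(rho)
    return vecs @ np.diag(vals**s) @ vecs.conj().T

def chernoff_distance(rho1, rho2, grid=1001):
    # C(rho1, rho2) = max_{0 <= s <= 1} -log Tr[rho1^s rho2^(1-s)]
    ss = np.linspace(0.0, 1.0, grid)
    vals = [-np.log(np.real(np.trace(mat_power(rho1, s) @ mat_power(rho2, 1 - s)))) for s in ss]
    return max(vals)

# two full-rank example qubit states (hypothetical inputs)
rho1 = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)
rho2 = np.array([[0.4, -0.2], [-0.2, 0.6]], dtype=complex)

print("C(rho1, rho2) =", chernoff_distance(rho1, rho2))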
Boris Gutkin
Title: Spectral statistics of nearly unidirectional quantum graphs
Abstract: Quantum Hamiltonian systems with unidirectional classical dynamics possess a number of intriguing spectral properties. In particular, their energy levels are quasi-degenerate and have anomalous spectral statistics. We look at unidirectional quantum graphs as a toy model for this phenomenon. Their spectrum is doubly degenerate, with the same statistics as in the Gaussian Unitary Ensemble of random matrices. However, adding a backscattering at one of the graph's bonds lifts the degeneracies. Based on a random matrix model, we derive an analytic expression for the anomalous nearest-neighbor distribution between energy levels. As we show, the result agrees excellently with the actual statistics in most of the cases. Yet, it exhibits quite substantial deviations for classes of graphs with strong localization of eigenfunctions. The talk is based on the joint work (arXiv:1503.01342) with M. Akila.

Subir Sachdev
Title: The Sachdev-Ye-Kitaev models of non-Fermi liquids and black holes
Abstract: The SYK models are simple Hamiltonians of fermions with random all-to-all interactions. Their ground states largely self-average over disorder, and have a gapless excitation spectrum with no quasiparticle structure. They provide a model of non-Fermi liquids, and also, remarkably, of black holes in two-dimensional anti-de Sitter space.

Graeme Smith
Title: Uniformly additive entropic formulas
Abstract: Information theory establishes the fundamental limits on data transmission, storage, and processing. Quantum information theory unites information theoretic ideas with an accurate quantum-mechanical description of reality to give a more accurate and complete theory with new and more powerful possibilities for information processing. The goal of both classical and quantum information theory is to quantify the optimal rates of interconversion of different resources. These rates are usually characterized in terms of entropies. However, nonadditivity of many entropic formulas often makes finding answers to information theoretic questions intractable. In a few auspicious cases, such as the classical capacity of a classical channel, the capacity region of a multiple access channel and the entanglement-assisted capacity of a quantum channel, additivity allows a full characterization of optimal rates. Here we present a new mathematical property of entropic formulas, uniform additivity, that is both easily evaluated and rich enough to capture all known quantum additive formulas. We give a complete characterization of uniformly additive functions using the linear programming approach to entropy inequalities. In addition to all known quantum formulas, we find a new and intriguing additive quantity: the completely coherent information. We also uncover a remarkable coincidence: the classical and quantum uniformly additive functions are identical; the tractable answers in classical and quantum information theory are formally equivalent.

Alexander Elgart
Title: Eigensystem multiscale analysis for Anderson localization in energy intervals II

Shannon Starr
Title: Robust Bounds for Emptiness Formation Probability for Dimers
Abstract: Emptiness formation probability is a measurable quantity associated to a ground state or equilibrium state of a quantum spin system. It was originally promoted by V Korepin. For the XXZ chain, a relation with the 6-vertex model discovered by Lieb allows for robust bounds using the reflection positivity technique.
For dimers, emptiness formation probability for a lattice rotated by 45 degrees is more natural. The basic technique applies, but there are extra mathematical issues, including discovering a quantum spin system associated to the lattice model. This is joint work with Scott Williams, a student at UAB.

3:30pm Coffee Break (in CULC)

Evans Harrell
Title: Pointwise control of eigenfunctions on quantum graphs
Abstract: Pointwise bounds on eigenfunctions are useful for establishing localization of quantum states, and they have implications for the distribution of eigenvalues and for physical properties such as conductivity. In the low-energy regime, localization is associated with exponential decrease through potential barriers. We adapt the Agmon method to control this tunneling effect for quantum graphs with Sobolev and pointwise estimates. It turns out that, as a generic matter, the rate of decay is controlled by an Agmon metric related to the classical Liouville-Green approximation for the line, but more rapid decay is typical, arising from the geometry of the graph. In the high-energy regime one expects states to oscillate but to be dominated by a 'landscape function' in terms of the potential and features of the graph. We discuss the construction of useful landscape functions for quantum graphs.

Marco Merkli
Title: Evolution of a two-level system strongly coupled to a thermal bath
Abstract: We consider a quantum process where electric charge, or excitation energy, is exchanged between two agents in the presence of a thermal environment. In some chemical processes in biology (photosynthesis), the agent-reservoir interaction energy is large, at least of the same size as the agents' energy difference. We present a rigorous analysis of the effective dynamics of the agents in this coupling regime, valid for all times. In particular, we derive a generalization of the Marcus formula from quantum chemistry, predicting the reaction rate. Our generalization shows that by coupling one agent more strongly to the environment than the other one, a significant speedup of the process can be achieved. Our analytic method is based on a resonance expansion of the reduced agent dynamics, cast in the framework of the strongly coupled spin-boson system.

Stefan Boettcher
Title: The Renormalization Group Solution of Quantum Walks on Complex Networks
Abstract: Replacing the stochastic evolution operator in the master equation of the classical random walk with a unitary operator leads to a spectrum of new phenomena. Such a quantum walk has gained considerable interest in quantum information sciences, as it is the "engine" that drives Grover's quantum search to gain a quadratic speed-up over classical randomized algorithms. The spreading dynamics on regular lattices already leads to numerous fascinating features, such as localization and violation of Polya's theorem; however, the motion is universally ballistic in all dimensions and reveals little insight about the intricate nature of the quantum dynamics. We use the renormalization group to produce non-trivial, exact results for the asymptotic scaling of the probability density function for quantum walks on various complex networks (Sierpinski, Migdal-Kadanoff, Hanoi). These elucidate the subtle interplay of quantum effects and internal ("coin") degrees of freedom with the geometry of the network and the spectral properties of the evolution operator by which one can control the behavior.
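The ballistic spreading on regular lattices mentioned in Stefan Boettcher's abstract can be seen already in the simplest discrete-time coined quantum walk on the line. The following sketch (a standard Hadamard walk, not the renormalization group treatment of the talk) tracks the standard deviation of the position distribution, which grows linearly in the number of steps, in contrast to the diffusive sqrt(t) growth of the classical random walk.

import numpy as np

def hadamard_walk(steps):
    # position lattice large enough that the walker never reaches the boundary
    size = 2 * steps + 1
    origin = steps
    # psi[x, c]: amplitude at position x with coin state c in {0, 1}
    psi = np.zeros((size, 2), dtype=complex)
    psi[origin, 0] = 1.0
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                      # Hadamard coin flip at every site
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]         # coin 0 moves one site to the left
        shifted[1:, 1] = psi[:-1, 1]         # coin 1 moves one site to the right
        psi = shifted
    prob = np.sum(np.abs(psi)**2, axis=1)
    x = np.arange(size) - origin
    mean = np.sum(x * prob)
    return np.sqrt(np.sum((x - mean)**2 * prob))

for t in (25, 50, 100, 200):
    print(t, hadamard_walk(t))   # standard deviation grows roughly linearly in t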
Jeffrey Schenker
Title: Localization in the disordered Holstein model
Abstract: The Holstein model (in the one-particle sector) describes a lattice particle interacting with independent harmonic oscillators at each site of the lattice. We consider this model with on-site disorder in the particle potential. This is proposed as a simple model in which it may be possible to test some ideas regarding multi/many-body localization. Provided the oscillator frequency is not too small and the hopping is weak, we are able to prove localization for the eigenfunctions, in particle position and in oscillator Fock space. Some open problems regarding the character of high-energy eigenstates will be discussed. (Joint work with Rajinder Mavi.)

Bruno Nachtergaele
Title: Stability of Frustration-Free Ground States of Quantum Lattice Systems
Abstract: We study frustration-free quantum lattice systems with a non-vanishing spectral gap above one or more (infinite-volume) ground states. The ground states are called stable if arbitrary perturbations of the Hamiltonian that are uniformly small throughout the lattice have only a perturbative effect. In the past several years such stability results have been obtained in increasing generality, aimed at applications to topological phases. We discuss the works by Bravyi-Hastings-Michalakis and Michalakis-Zwolak, and some recent extensions of these results to systems with spontaneous symmetry breaking, in joint work with Robert Sims and Amanda Young.

Gueorgui Raykov
Title: Local Eigenvalue Asymptotics of the Perturbed Krein Laplacian
Abstract: I will consider the Krein Laplacian on a regular bounded domain, perturbed by a real-valued multiplier V vanishing on the boundary. Assuming that V has a definite sign, I will discuss the asymptotics of the eigenvalue sequence which converges to the origin. In particular, I will show that the effective Hamiltonian that governs the main asymptotic term of this sequence is the harmonic Toeplitz operator with symbol V, unitarily equivalent to a pseudodifferential operator on the boundary. This is a joint work with Vincent Bruneau (Bordeaux, France). The partial support of the Chilean Science Foundation Fondecyt under Grant 1130591 is gratefully acknowledged.

Hal Tasaki
Title: What is thermal equilibrium and how do we get there?
Abstract: We discuss the foundation of equilibrium statistical mechanics in terms of isolated macroscopic quantum systems. We shall characterize thermal equilibrium based on the "typicality" picture and a large-deviation type consideration. We then present a simple (and hopefully realistic) condition, based on the notion of effective dimension, which guarantees that a nonequilibrium initial state evolves into thermal equilibrium.

Michael Walter
Title: Entanglement in Random Tensor Networks
Abstract: Motivated by recent research in quantum information and gravity, we study tensor networks with large bond dimension, obtained by contracting random stabilizer states. We find that their bipartite and multipartite entanglement properties are dictated by the geometry of the network, and explain how this relates to non-standard entropy inequalities. We further consider 'holographic' bulk-boundary mappings defined by such tensor networks and discuss their properties as quantum subsystem codes. Techniques used include spin models for random tensor averages and a new formula for the third moment of a random stabilizer state.
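A toy calculation in the spirit of Michael Walter's abstract: the sketch below draws a Haar-random pure state (rather than a random stabilizer state or a full tensor network) on n qubits and computes the entanglement entropy of a subsystem, which for random states is close to its maximal value; the parameters are illustrative only.

import numpy as np

def random_state_entropy(n_qubits, n_sub, seed=0):
    # Haar-random pure state on n_qubits; entanglement entropy (in bits) of the first n_sub qubits
    rng = np.random.default_rng(seed)
    dim = 2**n_qubits
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    # Schmidt decomposition across the cut n_sub | n_qubits - n_sub
    mat = psi.reshape(2**n_sub, 2**(n_qubits - n_sub))
    schmidt = np.linalg.svd(mat, compute_uv=False)
    p = schmidt**2
    p = p[p > 1e-15]
    return -np.sum(p * np.log2(p))

for n_sub in range(1, 6):
    print(n_sub, random_state_entropy(10, n_sub))   # close to n_sub bits for small subsystems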
Per von Soosten
Title: Localization in the Hierarchical Anderson Model
Abstract: We will consider a hierarchical version of the classical Anderson model on the lattice and present results to the effect that the hierarchical model remains localized throughout its range of parameters. Our argument is based on renormalization ideas that transform the Hamiltonian into a regime of high disorder. This talk is based on joint work with Simone Warzel.

Jan Philip Solovej
Title: Zero modes for Dirac operators with magnetic links
Abstract: The occurrence of zero modes for Dirac operators with magnetic fields is the cause of the breakdown of stability of matter for charged systems. All known examples of magnetic fields leading to zero modes are geometrically very complex. In order to better understand this geometry, I will discuss singular magnetic fields supported on a finite number of possibly interlinking field lines (magnetic links). I will show that the occurrence of zero modes is intimately connected to the twisting and interlinking of the field lines. The result will rely on explicitly calculating appropriate spectral flows for the Dirac operators. This is joint work with Fabian Portmann and Jeremy Sok.

Kenichi Ito
Title: Branching form of the resolvent at threshold for discrete Laplacians
Abstract: We compute an explicit expression for the resolvent around the threshold zero for an ultra-hyperbolic operator of signature $(p,q)$, which includes the Laplacian as a special case. In particular, we classify the branching form of the resolvent: the resolvent has a square-root singularity if $(p,q)$ is odd-even or even-odd, a logarithm singularity if $(p,q)$ is even-even, and a dilogarithm singularity if $(p,q)$ is odd-odd. We apply the same computation scheme to the discrete Laplacian around thresholds embedded in the continuous spectrum as well as those at end points, and obtain similar results, presenting a practical procedure to expand the resolvent around these thresholds. This talk is based on a recent joint work with Arne Jensen (Aalborg University).

Paul Goldbart
Title: Universality in transitionless quantum driving
Abstract: A time-dependent quantum system, if prepared in some instantaneous eigenstate of its Hamiltonian, typically exhibits nonadiabaticity: it develops quantum amplitudes to be found in orthogonal instantaneous eigenstates. When the time dependence is slow, these amplitudes are small, as seen explicitly, e.g., in the Landau-Majorana-Zener model. Berry (2009) has shown how to construct Hamiltonian terms that stifle nonadiabaticity, regardless of the pace of the time dependence of the original Hamiltonian: this is transitionless quantum driving. We discuss the extension of transitionless quantum driving to systems possessing exact degeneracies amongst their instantaneous energy eigenvalues and, as a result, exhibiting the Wilczek-Zee (1984) nonabelian extension of Berry's connection (1984). We also discuss how a particular stifling term serves to protect adiabaticity for a surprisingly large family of systems. We conclude by mentioning some settings in which transitionless quantum driving should be realizable experimentally. This talk is based on work done with Rafael Hipolito.
F. Wilczek and A. Zee (1984) Appearance of gauge structure in simple dynamical systems, Physical Review Letters 52, 2111-2114.
M. V. Berry (2009) Transitionless quantum driving, Journal of Physics A: Mathematical and Theoretical 42, 365303 [9 pages].
M. V. Berry (1984) Quantal phase factors accompanying adiabatic changes, Proceedings of the Royal Society of London Series A 392, 45-57.

Monday 10/10
Rooms: Graphs in Skiles 006; New topics in Skiles 202; Q.I. in Skiles 268; Random/Many-body in Skiles 249

Françoise Truc
Title: Topological Resonances on Quantum Graphs
Abstract: In this paper, we try to put the results of Smilansky et al. on "topological resonances" on a mathematical basis. A key role in the asymptotics of resonances near the real axis for quantum graphs is played by the set of metrics for which there exist compactly supported eigenfunctions. We give several estimates of the dimension of this semi-algebraic set, in particular in terms of the girth of the graph. The case of trees is also discussed.

Takahiro Morimoto
Title: Classification theory of topological insulators with Clifford algebras and its application to interacting fermions
Abstract: The topological classification of noninteracting fermionic ground states is established as the tenfold way. Systems of non-interacting fermions are divided into ten symmetry classes. For each dimension, five out of ten symmetry classes contain nontrivial topological insulators (TIs) or superconductors (TSCs) characterized by Z or Z_2 topological numbers. Later, it was revealed that the noninteracting topological classification Z is unstable to interactions and reduces to Z_8 (Z_16) in the case of 1D (3D) time-reversal symmetric TSCs. In this talk, first, we review the classification theory of noninteracting topological insulators in terms of an extension problem for the associated Clifford algebras. This enables us to concisely derive the tenfold way classification and also to classify topological crystalline insulators [1]. Then we apply the Clifford algebra approach to the breakdown of the tenfold way in the presence of quartic fermion-fermion interactions, for any dimension of space [2]. Specifically, we study the effects of interactions on the boundary gapless modes of TIs in terms of boundary dynamical masses. Breakdown of the noninteracting topological classification occurs when the quantum non-linear sigma models for the boundary dynamical masses favor quantum disordered phases. For the tenfold way, we find that (i) Z_2 is always stable, (ii) Z in even dimensions is always stable, (iii) Z in odd dimensions is unstable and reduces to Z_N, which can be identified explicitly for any dimension and any defining symmetries. We also apply our method to the topological crystalline insulator (SnTe) and find the reduction of the noninteracting topological classification Z to Z_8. [1] T. Morimoto and A. Furusaki, Phys. Rev. B 88, 125129 (2013). [2] T. Morimoto, A. Furusaki, and C. Mudry, Phys. Rev. B 92, 125104 (2015).

Nilanjana Datta
Title: Contractivity properties of a quantum diffusion semigroup
Abstract: We consider a quantum generalization of the classical heat equation, and study contractivity properties of its associated semigroup. We prove a Nash inequality and a logarithmic Sobolev inequality for Gaussian states. The former leads to an ultracontractivity result. This in turn implies that the largest eigenvalue and the purity of any state, evolving under the action of the semigroup, decrease inverse polynomially in time, while its entropy increases logarithmically in time. This is joint work with Cambyse Rouze' and Yan Pautrat.
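To make the qualitative behavior described in Nilanjana Datta's abstract concrete in the simplest possible setting, the following toy sketch evolves a qubit under a depolarizing semigroup rho(t) = e^{-t} rho + (1 - e^{-t}) I/2 (a finite-dimensional stand-in, not the Gaussian quantum diffusion semigroup of the talk) and prints the decreasing purity and increasing von Neumann entropy.

import numpy as np

def evolve(rho, t):
    # toy depolarizing semigroup: rho(t) = e^{-t} rho + (1 - e^{-t}) I/2
    return np.exp(-t) * rho + (1 - np.exp(-t)) * np.eye(2) / 2

def purity(rho):
    return np.real(np.trace(rho @ rho))

def entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-15]
    return -np.sum(vals * np.log(vals))

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # pure initial state
for t in (0.0, 0.5, 1.0, 2.0, 4.0):
    rho_t = evolve(rho0, t)
    print(t, purity(rho_t), entropy(rho_t))   # purity decreases, entropy increases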
Rafael Ducatez
Title: Anderson localization for infinitely many interacting particles under Hartree-Fock theory
Abstract: We prove the occurrence of Anderson localisation for a system of infinitely many particles interacting with a short range potential, within the ground state Hartree-Fock approximation. We assume that the particles hop on a discrete lattice and that they are submitted to an external periodic potential which creates a gap in the non-interacting one particle Hamiltonian. We also assume that the interaction is weak enough to preserve a gap. We prove that the mean-field operator has exponentially localised eigenvectors, either on its whole spectrum or at the edges of its bands, depending on the strength of the disorder.

Jens Bolte
Title: Spectra of interacting particles on quantum graphs
Abstract: One reason for the success of one-particle quantum graph models is that their spectra are determined by secular equations involving finite-dimensional determinants. In general, one cannot expect this to extend to interacting many-particle models. In this talk I will introduce two-particle quantum graph models with interactions that allow one to express eigenfunctions in terms of a Bethe ansatz. From this a secular equation will be determined, and eigenvalues can be calculated numerically. The talk is based on joint work with George Garforth.

Carlos Sá de Melo
Title: Effects of spin-orbit coupling on the Berezinskii-Kosterlitz-Thouless transition
Abstract: We investigate the Berezinskii-Kosterlitz-Thouless (BKT) transition in a two-dimensional (2D) neutral Fermi system with spin-orbit coupling (SOC), as a function of the two-body binding energy and a perpendicular Zeeman field [1,2]. By including a generic form of the SOC, as a function of Rashba and Dresselhaus terms, we study the evolution between the equal Rashba-Dresselhaus (ERD) and the Rashba-only (RO) cases. We show that in the ERD case, at fixed non-zero Zeeman field, the BKT transition temperature T_BKT is increased by the effect of the SOC for all values of the binding energy. We also find a significant increase in the value of the Clogston limit compared to the case without SOC. Furthermore, we demonstrate that the superfluid density tensor becomes anisotropic (except in the RO case), leading to an anisotropic phase-fluctuation action that describes elliptic vortices and anti-vortices, which become circular in the RO limit. This deformation constitutes an important experimental signature for superfluidity in a 2D Fermi system with ERD SOC. Finally, we show that the anisotropic sound velocity exhibits anomalies at low temperatures in the vicinity of quantum phase transitions between topologically distinct uniform superfluid phases.
[1] Jeroen P. A. Devreese, Jacques Tempere, and Carlos A. R. Sá de Melo, Phys. Rev. Lett. 113, 165304 (2014).
[2] Jeroen P. A. Devreese, Jacques Tempere, and Carlos A. R. Sá de Melo, Physical Review A 92, 043618 (2015).

William Slofstra
Title: Tsirelson's problem and linear system games
Abstract: In quantum information, we frequently consider (for instance, whenever we talk about entanglement) a composite system consisting of two separated subsystems. A standard axiom of quantum mechanics states that a composite system can be modeled as the tensor product of the two subsystems. However, there is another less restrictive way to model a composite system, which is used in quantum field theory: we can require only that the algebras of observables for each subsystem commute within some larger subalgebra.
Tsirelson's question (which comes in several variants) asks whether the correlations arising from commuting-operator models can always be represented by tensor-product models. I will give examples of linear system non-local games which cannot be played perfectly with tensor-product strategies, but can be played perfectly with commuting-operator strategies, resolving (one version of) Tsirelson's question in the negative. From these examples, we can also derive other consequences for the theory of non-local games, such as the undecidability of determining whether a non-local game has a perfect commuting-operator strategy.

Francois Huveneers
Title: A random matrix approach to Many-Body Localization
Abstract: The localized phase in interacting systems is usually understood in a perturbative sense, as a robustness of Anderson localization when perturbing away from the non-interacting limit. In this talk, I will present a new approach, relying as much as possible on random matrix theory, which is generally used to describe ergodic systems (cf. ETH). The localized phase then emerges as an instability of the random matrix theory when adding disordered spins. This new viewpoint is especially useful to analyze the influence of ergodic spots on the localized phase: It yields a detailed description of the boundary region near the spot, and naturally leads to the discussion of the stability of the localized phase upon bringing it in contact with a piece of ergodic material. I will also describe how the theory can be tested, and I will show some (preliminary) numerical results. From a joint work with Wojciech De Roeck (arXiv:1608.01815).

Jon Harrison
Title: n-particle quantum statistics on graphs
Abstract: For particles in three or more dimensions the forms of quantum statistics of indistinguishable particles are either Bose-Einstein or Fermi-Dirac, corresponding to the two abelian representations of the first homology group of the configuration space. Restricting particles to the plane, the fundamental group of the configuration space is the braid group and a new form of particle statistics corresponding to its abelian representations appears, anyon statistics. Restricting the dimension of the space further to a quasi-one-dimensional quantum graph opens new forms of statistics determined by the connectivity of the graph. We develop a full characterization of abelian quantum statistics on graphs which leads to an alternative proof of the structure theorem for the first homology group of the n-particle configuration space. For 2-connected graphs the statistics are independent of the particle number. On 3-connected non-planar graphs particles are either bosons or fermions, while on 3-connected planar graphs they are anyons. Graphs with more general connectivity exhibit interesting mixtures of these behaviors, which we illustrate. For example, a graph can be constructed where particles behave as bosons, fermions and anyons depending on the region of the graph that they inhabit. An advantage of this direct approach to the analysis of the first homology group is that it makes the physical origin of these new forms of statistics clear. This is work with Jon Keating, Jonathan Robbins and Adam Sawicki at Bristol.

Maksym Serbyn
Title: Properties of many-body localized phase: entanglement spectrum
Abstract: Many-body localization allows quantum systems to escape thermalization via the emergence of an extensive number of conserved quantities.
I will demonstrate how the existence of these local conserved quantities is manifested in various properties of the many-body localized phase. I will demonstrate the power-law form of the entanglement spectrum in the MBL phase, which follows from the existence of local conserved quantities. I will discuss general implications of this result for variational studies of highly excited eigenstates in many-body localized systems, and show an implementation of a matrix-product state algorithm which allows us to access the eigenstates of large systems close to the delocalization transition. In addition, I will discuss statistics of matrix elements of local operators and use it to probe the delocalization transition.

John Imbrie
Title: Constructive Methods for Localization and Eigenvalue Statistics
Abstract: Convergent expansions for eigenvalues and eigenvectors lead to new insights in many-body and single-body quantum systems with disorder. I will review recent work elucidating the way randomness localizes eigenfunctions, smooths out eigenvalue distributions, and produces eigenvalue separation.

3:30pm Coffee Break (in CULC)

Tracy Weyand
Title: Zeta Functions of the Dirac Operator on Quantum Graphs
Abstract: The spectral zeta function generalizes the Riemann zeta function by replacing the sum over integers with a sum over a spectrum. Here we consider the spectrum of the Dirac operator acting on a metric graph. Since all eigenvalues are roots of a secular equation, we can calculate the spectral zeta function by applying the argument principle to a particular contour integral. This will be done first for a rose graph, and then for general graphs with self-adjoint vertex matching conditions. We will also discuss how this function can then be used to compute the spectral determinant.

Po-Yao Chang
Title: Entanglement negativity in many-body physics
Abstract: Entanglement measures are powerful techniques for extracting quantum information in a many-body state. However, most of the entanglement measures focus on a bipartite system in a pure state. To characterize quantum entanglement of a tripartite system in a mixed state, entanglement negativity is proposed. This talk will present the current developments of computing entanglement negativity and their applications. Three methods will be demonstrated: an overlap matrix approach for free-fermion systems [1], the conformal field theory approach for a local quantum quench [2], and a surgery method for Chern-Simons theories [3].
[1] P.-Y. Chang and X. Wen, Phys. Rev. B 93, 195140 (2016).
[2] X. Wen, P.-Y. Chang and S. Ryu, Phys. Rev. B 92, 075109 (2015).
[3] X. Wen, P.-Y. Chang and S. Ryu, arXiv:1606.04118.

Carlos Ortiz-Marrero
Title: Categories and Topological Quantum Computing
Abstract: Quantum computation is defined to be any computational model based upon the theoretical ability to manufacture, manipulate, and measure quantum states. (2+1)-dimensional topological phases of matter (TPM) promise a route to quantum computation where quantum information is topologically protected against decoherence. In this talk, we will explore the underlying mathematical theory that is driving the classification of these TPM. We will mainly concentrate on the algebraic/categorical structure behind such phases and explain where this structure fits in describing TPM. Finally, we will discuss some recent developments in the mathematical classification pertinent to TPM, namely the classification of (pre-)modular categories.
Alain Joye
Title: Representations of CCR describing infinite coherent states
Abstract: We investigate the infinite volume limit of quantized photon fields in multimode coherent states. We show that for states containing a continuum of coherent modes, it is natural to consider their phases to be random and identically distributed. The infinite volume states give rise to Hilbert space representations of the canonical commutation relations which are random as well and can be expressed with the help of Itô stochastic integrals. We analyze the dynamics of the infinite coherent state alone and that of open systems consisting of small quantum systems coupled to the infinite coherent state. Under the free field dynamics, the initial phase distribution is shown to be driven to the uniform distribution, and coherences in small quantum systems interacting with the infinite coherent state are shown to exhibit Gaussian time decay, instead of the exponential decay caused by infinite thermal states. Joint work with Marco Merkli.

Jiri Lipovsky
Title: How to find the effective size of a non-Weyl graph
Abstract: We study the asymptotics of the number of resolvent resonances in a quantum graph with attached half-lines. It has been proven that in some cases the constant in front of the leading term of the asymptotics (the effective size of the graph) is smaller than one expects from the Weyl law, since some resonances escape to infinity. We show how to find this effective size by the method of pseudo-orbit expansion. Furthermore, we prove two theorems on the effective size of a certain type of graph with standard (Kirchhoff) coupling.

Israel Klich
Title: Novel quantum phase transition from bounded to extensive entanglement entropy
Abstract: I will describe a continuous family of frustration-free Hamiltonians with exactly solvable ground states. We prove that the ground state of our model is non-degenerate and exhibits a novel quantum phase transition from bounded entanglement entropy to a massively entangled state with volume entropy scaling. The ground state may be interpreted as a deformation away from the uniform superposition of colored Motzkin paths, shown by Movassagh and Shor to have a large (square-root) but sub-extensive scaling of entanglement, into a state with an extensive entropy.

Vern Paulsen
Title: Perfect embezzlement of a Bell State
Abstract: Van Dam and Hayden showed that if Alice and Bob each have finite dimensional state spaces, then using local unitary operations and a shared entangled state on some bipartite resource space, with vanishingly small error, they can "appear" to produce an entangled state. Hence, the term "embezzlement". We prove that perfect embezzlement is impossible in this framework even when the shared resource space is allowed to be infinite dimensional. But if one allows the commuting operator model, then one can embezzle perfectly. We then relate this to recent work on the conjectures of Tsirelson and Connes. Finally, we show that this implies a perfect commuting strategy for a game of Regev and Vidick which has no perfect bipartite strategy.

Chris Laumann
Title: Many-body localization in mean-field quantum glasses
Abstract: The central assumption of statistical mechanics is that interactions between particles establish local equilibrium. Isolated quantum systems, however, need not equilibrate; for example, this happens when sufficient quenched disorder causes localization. Unfortunately there are few tractable models to study this phenomenon.
In this talk, I will briefly review the basic phenomenology of many-body localization and then introduce a family of mean-field spin glass models known to be tractable: the quantum p-spin models. I will argue that the quantum dynamics in these models exhibits a localized phase that cannot be detected in the canonical thermodynamic analysis. The properties of the phase and the mobility edge which separates it from the ergodic regime can be analytically estimated using several techniques. The localized eigenstates concentrate on clusters within Hilbert space which exhibit distinct magnetization patterns, as characterized by an eigenstate variant of the Edwards-Anderson order parameter. Based on joint work with: C. L. Baldwin, A. Pal, A. Scardicchio

Vladimir Rabinovich
Title: Essential spectrum of Schrödinger operators with non-periodic potentials on periodic graphs
Abstract: We consider Schrödinger operators $H$ with bounded uniformly continuous electric potentials on periodic graphs $\Gamma$, with the standard Kirchhoff-Neumann conditions at every vertex. Following [1-4] we define for $H$ a family of limit operators and we show that the essential spectrum of $H$ is the union of the spectra of all limit operators. We give applications of this result to calculations of the essential spectra of Schrödinger operators on periodic graphs with periodic electric potentials perturbed by terms slowly oscillating at infinity.
1: V. S. Rabinovich, S. Roch, B. Silbermann, Limit Operators and Their Applications in Operator Theory, in ser. Operator Theory: Advances and Applications, vol. 150, ISBN 3-7643-7081-5, Birkhäuser Verlag, 2004, 392 pp.
2: V. Rabinovich, Essential spectrum of perturbed pseudodifferential operators. Applications to the Schrödinger, Klein-Gordon, and Dirac operators, Russian Journal of Math. Physics, Vol. 12, No. 1, 2005, p. 62-80.
3: V. S. Rabinovich, S. Roch, The essential spectrum of Schrödinger operators on lattices, Journal of Physics A: Math. Theor. 39 (2006) 8377-8394.
4: V. S. Rabinovich, S. Roch, Essential spectra of difference operators on $\mathbb{Z}^n$-periodic graphs, J. of Physics A: Math. Theor., ISSN 1751-8113, 40 (2007) 10109-10128.

Shina Tan
Title: Exact relations for two-component Fermi gases with contact interactions
Abstract: Ultracold atomic gases created in experiments are so dilute that the average inter-atomic distance is much larger than the characteristic range of the atomic interaction, and so cold that the thermal de Broglie wavelength is much larger than that range. Normally they are weakly interacting. By tuning them near a Feshbach resonance, near which the two-body scattering length can be made arbitrarily large, however, one can easily make them strongly interacting. When the scattering length is much larger than the range, we can consider an idealized model in which the range of the interaction is taken to be zero. Within such a model, the scattering length becomes the only parameter for the atomic interactions, if the atoms are fermionic and there are no more than two spin states involved. In such a model, the momentum distribution of the atoms behaves as $C/k^4 + O(1/k^6)$ when the wave number $k$ goes to infinity. The coefficient $C$ is known as the contact. There are some exact relations relating the energy, pressure, and the two-body short-range correlation functions, etc. All of them involve the contact $C$. In particular, the energy of such a gas is a linear functional of the momentum distribution, for both the ground state and all excited states.
This is true even if the scattering length is comparable to or larger than the average interatomic distance, such that the gas is strongly interacting.

Anna Vershynina
Title: Quantum analogues of geometric inequalities for Information Theory
Abstract: Geometric inequalities, such as the entropy power inequality or the isoperimetric inequality, relate geometric quantities, such as volumes and surface areas. The entropy power inequality describes how the entropy power of a sum of random variables compares to the sum of the individual entropy powers. The isoperimetric inequality for entropies relates the entropy power and the Fisher information, and implies that Gaussians have minimal entropy power among random variables with a fixed Fisher information. Classically, these inequalities have useful applications for obtaining bounds on channel capacities, and deriving Log-Sobolev inequalities. In my talk I provide quantum analogues of certain well-known inequalities from classical information theory, with the most notable being the isoperimetric inequality for entropies. The latter inequality is useful for the study of convergence of certain semigroups to fixed points. In the talk I demonstrate how to apply the isoperimetric inequality for entropies to show exponentially fast convergence of the quantum Ornstein-Uhlenbeck (qOU) semigroup to a fixed point of the process. The inequality representing the fast convergence can be viewed as a quantum analogue of a classical Log-Sobolev inequality. As a separate result, necessary for the fast convergence of the qOU semigroup, I argue that Gaussian thermal states minimize output entropy for the attenuator semigroup among all states with a given mean photon number. (Based on a joint work with S. Huber and R. Koenig.)

Vieri Mastropietro
Title: Localization of Interacting Fermions in the Aubry-André Model
Abstract: We establish exponential decay of the zero temperature correlations of a fermionic system with a quasi-periodic Aubry-André potential and a many-body short range interaction, for weak hopping and interactions and almost everywhere in the frequency and phase. Such decay indicates localization of the ground state. The proof is based on rigorous Renormalization Group methods and it is inspired by techniques developed to deal with KAM Lindstedt series. New problems are posed by the simultaneous presence of loops and small divisors.

Nicholas Read
Title: Compactly-supported Wannier functions, algebraic K-theory, and tensor network states

Robert Seiringer
Title: Decay of correlations and absence of superfluidity in the disordered Tonks-Girardeau gas

Social Events

Public Lectures
Sunday, October 9, at 6pm, Rafael Benguria will deliver a public lecture in the Clough Undergraduate Learning Commons of Georgia Tech.

The banquet will take place on Monday, October 10 at 7pm, at Gordon Biersch in Midtown Atlanta. The price is $45 per person. Tickets for the banquet will be on sale Saturday, Sunday and Monday during coffee and lunch breaks. Please bring cash or a check. We do not have access to a secure credit card payment system at the conference venue.
Wednesday, August 29, 2012
Quantum Gravity and Taxes

The other day I got caught in a conversation about the Royal Institute of Technology and how it deals with value added taxes. After the third round of explanation, I still hadn't quite understood the Swedish tax regulations. This prompted my conversation partner to remark that Swedish taxes are more complicated than my research. The only thing I can say in my defense is that in a very real sense taxes are indeed more complicated than quantum gravity. True, the tax regulations you have to deal with to get through life are more a matter of available information than of understanding. Applying the right rule in the right place requires less knowledge than you need for, say, the singularity theorems in general relativity. In the end taxes are just basic arithmetic manipulations. But what's the basis of these rules? Where do they come from? Tax regulations, laws in general, and also social norms have evolved along with our civilizations. They're results of a long history of adaption and selection in a highly complex, partly chaotic, system. This result is based on vague concepts like "fairness", "higher powers", or "happiness", that depend on context and culture and change with time. If you think about it too much, the only reason our societies' laws and norms work is inertia. We just learn how our environment works and most of us most of the time play by the rules. We adapt and slowly change the rules along with our adaption. But ask where the rules come from or by what principles they evolve, and you'll have a hard time coming up with a good reason for anything. If you make it more than five why's down the line, I cheer for you. We don't have the faintest clue how to explain human civilization. Nobody knows how to derive the human rights from the initial conditions of the universe. People in general, and men in particular, with all their worries and desires, their hopes and dreams, do not make much sense to me, fundamentally. I have no clue why we're here or what we're here for, and in comparison to understanding Swedish taxes, quantizing gravity seems like a neatly well-defined and solvable problem.

Saturday, August 25, 2012
How to beat a cosmic speeding ticket

xkcd: The Search

After I had spent half a year doing little more than watching babies grow and writing a review article on the minimal length, I got terribly bored with myself. So I'm apparently one of the world experts on quantum field theories with a minimal length scale. That was not exactly among my childhood aspirations. As a child I had a (mercifully passing) obsession with science fiction. To this day contact to extraterrestrial intelligent beings is to me one of the most exciting prospects of technological progress. I think the plausible explanation why we have so far not made alien contact is that they use a communication method we have not yet discovered, and if there is any way to communicate faster than the speed of light, clearly that's what they would use. Thus, we should work on building a receiver for the faster-than-light signals! Except, well, that our present theories don't seem to allow for such signals to begin with. Every day is a winding road, and after many such days I found myself working on quantum gravity. So when the review was finally submitted, I thought it was time to come back to superluminal information exchange, which resulted in a paper that's now published. The basic idea isn't so difficult to explain.
The reason that it is generally believed nothing can travel faster than the speed of light is that Einstein's special relativity sets the speed of light as a limit for all matter that we know. The assumptions for that argument are few, the theory is extremely well in agreement with experiment, and the conclusion is difficult to avoid. Strictly speaking, special relativity does not forbid faster-than-light propagation. However, since in special relativity a signal moving forward in time faster than the speed of light for one observer might appear like a signal moving backwards in time for another observer, this can create causal paradoxa. There are three common ways to allow superluminal signaling, and each has its problems: First, there are wormholes in general relativity, but they generically also lead to causality problems. And how creation, manipulation, and sending signals through them would work is unclear. I've never been a fan of wormholes. Second, one can just break Lorentz-invariance and avoid special relativity altogether. In this case one introduces a preferred frame and observer independence is violated. This avoids causal paradoxa because there's now a distinguished direction "forward" in time. The difficulty here is that special relativity describes our observations extremely well and we have no evidence for Lorentz-invariance violation whatsoever. There is then some explaining to do as to why we have not noticed violations of Lorentz-invariance before. Many people are working on Lorentz invariance violation, and that by itself limits my enthusiasm. Third, there are deformations of special relativity which avoid an explicit breaking of Lorentz-invariance by changing the Lorentz-transformations. In this case, the speed of light becomes energy-dependent so that photons with high energy can, in principle, move arbitrarily fast. Since in this case everybody agrees that a photon moves forward in time, this does not create causal paradoxa, at least not just because of the superluminal propagation. I was quite excited about this possibility for a while, but after some years of back and forth I've convinced myself that deformed special relativity creates more problems than it solves. It suffers from various serious difficulties that prevent a recovery of the standard model and general relativity in the suitable limits, notoriously the problem of multi-particle states and non-locality (which we discussed here). So, none of these approaches is very promising and one is really very constrained in the possible options. The symmetry-group of Minkowski-space is the Lorentz-group plus translations. It has one free parameter and that's the speed of massless particles. It's a limiting speed. End of story. There really doesn't seem to be much wiggle room in that. Then it occurred to me that it is not actually difficult to allow several different speeds of light to be invariant, as long as one can never measure them at the same time. And that would be the case if one had particles propagating in a background that is a superposition of Minkowski-spaces with different speeds of light. Because in this case you would use for each speed of light the Lorentz-transformation that belongs to it. In other words, you blow up the Lorentz-group to a one-parameter family of groups that acts on a set of spaces with different speeds of light. You have to expect the probability for a particle to travel through an eigenspace that does not belong to the measured speed of light to be small, so that we haven't yet noticed.
To good precision, the background that we live in must be in an eigenstate, but it might have a small admixture of other speeds, faster and slower. Particles then have a small probability to travel faster than the speed of light through one of these spaces. If you measure a state that was in a superposition, you collapse the wavefunction to one eigenstate, or let us better say it decoheres. This decoherence introduces a preferred frame (the frame of the measurement) which is how causal paradoxa are avoided: there is a notion of forward that comes in through the measurement. In contrast to the case in which Lorentz invariance is violated though, this preferred frame does not appear on the level of the Lagrangian - it is not fundamentally present. And in contrast to deformations of special relativity, there is no issue here with locality because two observers never disagree on the paths of two photons with different speeds: Instead of there being two different photons, there's only one, but it's in a superposition. Once measured, all observers agree on the outcome. So there's no Box Problem. That having been said, I found it possible to formulate this idea in the language of quantum field theory. (It wasn't remotely as straightforward as this summary might make it appear.) In my paper, I then proposed a parameterization of the occupation probability of the different speed of light eigenspaces and the probability of particles to jump from one eigenstate to another upon interaction. So far so good. Next one would have to look at modifications of standard model cross-sections and see if there is any hope that this theoretical possibility is actually realized in nature. We still have a long way to go on the way to build the cell phone to talk to aliens. But at least we know now that it's not incompatible with special relativity.

Wednesday, August 22, 2012
How do science blogs change the face of science?

The blogosphere is coming of age, and I'm doing my annual contemplation of its influence on science. Science blogs of course have an educational mission, and many researchers use them to communicate the enthusiasm they have for their research, may that be by discussing their own work or that of colleagues. But blogs were also deemed useful to demonstrate that scientists are not all dusty academics, withdrawn professors or introverted nerds who sit all day in their office, shielded by piles of books and papers. Physics and engineering are fields where these stereotypes are quite common – or should I say "used to be quite common"? Recently I've been wondering whether the perception of science that the blogosphere has created is replacing the old nerdy stereotype with a new one. Because the scientists who blog are the ones who are most visible, yet not the ones who are actually very representative characters. This leads to the odd situation in which the avid reader of blogs, who otherwise doesn't have much contact with academia, is left with the idea that scientists are generally interested in communicating their research. They also like to publicly dissect their colleagues' work. And, judging from the photos they post, they seem to spend a huge amount of time travelling. Not to mention that, well, they all like to write. Don't you also think they all look a little like Brian Cox? I find this very ironic. Because the nerdy stereotype for all its inaccuracy still seems to fit better.
Many of my colleagues do spend 12 hours a day in their office scribbling away equations on paper or looking for a bug in their code. They'd rather die than publicly comment on anything. Their Facebook accounts are deserted. They think a hashtag is a drug, and the only photo on their iPhone shows that instant when the sunlight fell through the curtains just so that it made a perfect diffraction pattern on the wall. They're neither interested nor able to communicate their research to anybody except their close colleagues. And, needless to say, very few of them have even a remote resemblance to Brian Cox. So the funny situation is that my online friends and contacts think it's odd if one of my colleagues is not available on any social networking platform. Do they even exist for real? And my colleagues still think I'm odd taking part in all this blogging stuff and so on. I'm not sure at all these worlds are going to converge any time soon.

Sunday, August 19, 2012
Book review: "Why does the world exist?" by Jim Holt

Why Does the World Exist?: An Existential Detective Story
By Jim Holt
Liveright (July 16, 2012)

Yes, I do sometimes wonder why the world exists. I believe however it is not among the questions that I am well suited to find an answer to, and thus my enthusiasm is limited. While I am not disinterested in philosophy in principle, I get easily frustrated with people who use words as if they had any meaning that's not a human construct, words that are simply ill-defined unless the humans themselves and their language are explained too. I don't seem to agree with Max Tegmark on many points, but I agree that you can't build fundamental insights on words that are empty unless one already has these fundamental insights - or wants to take the anthropic path. In other words, if you want to understand nature, you have to do it with a self-referential language like mathematics, not with English. Thus my conviction that if anybody is to understand the nature of reality, it will be a mathematician or a theoretical physicist. For these reasons I'd never have bought Jim Holt's book. I was however offered a free copy by the editor. And, thinking that I should broaden my horizon when it comes to the origin of the universe and the existence or absence of final explanations, I read it. Holt's book is essentially a summary of thoughts on the question why there isn't nothing, covering the history of the question as well as the opinions of currently living thinkers. The narrative of the book is Holt's own quest for understanding that led him to visit and talk to several philosophers, physicists and other intellectuals, including Steven Weinberg, Alan Guth and David Deutsch. Many others are mentioned or cited, such as Stephen Hawking, Max Tegmark and Roger Penrose. The book is very well written, though Holt has a tendency to list exactly what he ate and drank when and where, which takes up more space than it deserves. There are more bottles of wine and more deaths on the pages of his book than I had expected, though that is balanced with a good sense of humor. Since Holt arranges his narrative along his travel rather than by topic, the book is sometimes repetitive when he reminds the reader of something (eg the "landscape") that was already introduced earlier. I am very impressed by Holt's interviews. He has clearly done a lot of his own thinking about the question. His explanations are open-minded and radiate well-meaning, but he is sharp and often critical.
In many cases what he says is much more insightful than what his interview partners have to offer. Holt's book is a good summary of just how bizarre the world is. The only person quoted in this book who made perfect sense to me is Woody Allen. On the very opposite end is a philosopher named Derek Parfit who hates the "scientizing" of philosophy, and some of his colleagues who believe in "panpsychism", undeterred by the total lack of scientific evidence. The reader of the book is also confronted with John Updike who belabors the miserable state of string theory "This whole string theory business… There's never any evidence, right? There are men spending their whole careers working on a theory of something that might not even exist", and Alex Vilenkin who has his own definition of "nothing," which, if you ask me, is a good way to answer the question. Towards the end of the book Jim Holt also puts forward his own solution to the problem of why there is something rather than nothing. Let me give you a flavor of that proof: "Reality cannot be perfectly full and perfectly empty at the same time. Nor can it be ethically the best and causally the most orderly at the same time (since the occasional miracle could make reality better). And it certainly can't be the ethically best and the most evil at the same time." Where to even begin? Every second word in this "proof" is undefined. How can one attempt to make an argument along these lines without explaining "ethically best" in terms that are not taken out of the universe whose existence is supposed to be explained? Not to mention that all along his travel, nobody seems to have told Holt that, shockingly, there isn't only one system of logic, but a whole selection of them. This book has been very educational for me indeed. Now I know the names of many ism's that I do not want to know more about. I hate the idea that I'd have missed this book if it hadn't been for the free copy in my mailbox. That having been said, to get anything out of this book you need to come with an interest in the question already. Do not expect the book to create this interest. But if you come with this interest, you'll almost surely enjoy reading it.

Wednesday, August 15, 2012
"Rapid streamlined peer-review" and its results

Contains 0% Quantum Gravity.

"Scientific Reports" is a new open access journal from the Nature Publishing Group, which advertises its "rapid peer review and publication of research... with the support of an external Editorial Board and a streamlined peer-review system." In this journal I recently found this article:

"Testing quantum mechanics in non-Minkowski space-time with high power lasers and 4th generation light sources"
B. J. B. Crowley et al
Scientific Reports 2, Article number: 491

Note the small volume number, all fresh and innocent. It's a quite interesting article that calculates the cross-section of photons scattering off electrons that are collectively accelerated by a high intensity laser. The possibility to maybe test Unruh radiation in a similar fashion has lately drawn some attention, see eg this paper. But this is explicitly not the setup that the authors of the present paper are after, as they write themselves in the text. What is remarkable about this paper is the amount of misleading and wrong statements about exactly what it is they are testing and what not. In the title it says they are testing "quantum mechanics in non-Minkowski space-time." What might that mean, I was wondering?
Initially I thought it's another test of space-time non-commutativity, which is why I read the paper in the first place. The first sentence of the abstract reads "A common misperception of quantum gravity is that it requires accessing energies up to the Planck scale of 10^19 GeV, which is unattainable for any conceivable particle collider." Two sentences later, the authors no longer speak of quantum gravity but "a semiclassical extension of quantum mechanics ... under the assumption of weak gravity." So what's non-Minkowski then? And where's quantum gravity? What they do in fact in the paper is that they calculate the effect of the acceleration on the electrons and argue that via the equivalence principle this should be equivalent to testing the influence of gravity. (At least locally, though there's not much elaboration on this point in the paper.) Now, strictly speaking we do of course never make any experiment in Minkowski space - after all we sit in a gravitational field. In the same sense we have countless tests of the semi-classical limit of Einstein's field equations. So I read and I am still wondering, what is it that they test? In the first paragraph then the reader learns that the Newton-Schrödinger equation (which we discussed here) is necessary "to obtain a consistent description of experimental findings" with a reference to Carlip's paper and a paper by Penrose on state reduction. Clearly a misunderstanding, or maybe they didn't actually read the papers they cite. They also don't actually use the Schrödinger-Newton equation however - as I said, there isn't actually a gravitational field in their setup. "We do not concern ourselves with the quantized nature of the gravitational field itself." Fine, no need to quantize what's not there. Then on page two the reader learns "Our goal is to design an experiment where it may be possible to test some aspects of general relativity..." Okay, so now they're testing neither quantum mechanics nor quantum gravity, nor the Schrödinger-Newton equation, nor semi-classical gravity, but general relativity? Though, since there's no curvature involved, it would be more like testing the equivalence principle, no? But let's move on. We come across the following sentence: "[T]he most prominent manifestation of quantum gravity is that black holes radiate energy at the universal temperature - the Hawking temperature." Leaving aside that one can debate how "prominent" an effect black hole evaporation is, it's also manifestly wrong. Black hole evaporation is an effect of quantum field theory in curved spacetime. It's not a quantum gravitational effect, and that's the exact reason why it's been dissected for decades. The authors then go on to talk about Unruh radiation and make an estimate showing that they are not testing this regime. Then follows the actual calculation, which, as I said, is in principle interesting. But at the end of the calculation we are then informed that this "provid[es], for the first time, a direct way to determine the validity of the models of quantum mechanics in curved space-time, and the specific details of the coupling between classical and quantized fields." Except that there isn't actually any curved space-time in this experiment, unless they mean the gravitational field of the Earth. And the coupling to this has been tested for example in this experiment (and in some follow-up experiments to this), which the authors don't seem to be aware of or at least don't cite.
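(An aside on the Unruh estimate mentioned a few sentences above, just for orientation. This is the textbook relation, not a number taken from the paper under discussion: the Unruh temperature registered by a detector with uniform proper acceleration $a$ is
$$T_U \;=\; \frac{\hbar a}{2\pi c k_B} \;\approx\; 4\times 10^{-21}\,\mathrm{K}\,\left(\frac{a}{1\,\mathrm{m/s^2}}\right),$$
so whether electrons driven by a high-intensity laser get anywhere near a measurable effect comes down entirely to how large a proper acceleration the laser can impart.)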
Again, at the very best I think they're proposing to test the equivalence principle. In the closing paragraph they then completely discard the important qualifier that the space-time is not actually curved and that it's in the best case an indirect test by claiming that, on the contrary, "[T]he scientific case described in this letter is very compelling and our estimates indicate that a direct test of the semiclassical theory of quantum mechanics in curved space-time will become possible." Emphasis mine. So, let's see what we have. We started with a test of quantum mechanics in non-Minkowski space, came across some irrelevant mentioning of quantum gravity, a misplaced referral to the Schrödinger-Newton equation, testing general relativity in the lab, further irrelevant and also wrong comments about quantum gravity, and ended with direct tests of quantum mechanics in curved space-time. All by looking at a bunch of electrons accelerated in a laser beam. Misleading doesn't even begin to capture it. I can't say I'm very convinced by the quality standard of this new journal.

Sunday, August 12, 2012
What is transformative research and why do we need it?

Since 2007, the US-American National Science Foundation (NSF) has an explicit call for "transformative research" in their funding criteria. Transformative research, according to the NSF, is the type of research that can "radically change our understanding of an important existing scientific or engineering concept or educational practice or leads to the creation of a new paradigm or field of science, engineering, or education." The European Research Council (ERC) calls it "frontier research" and explains that this frontier research is "at the forefront of creating new knowledge[. It] is an intrinsically risky endeavour that involves the pursuit of questions without regard for established disciplinary boundaries or national borders." The best way to understand this type of research is that it's of high risk with a potential high payoff. It's the type of blue-sky research that is very unlikely to be pursued in for-profit organizations because it might have no tangible outcome for decades. Since one doesn't actually know if some research has a high payoff before it's been done, one should better call it "Potentially Transformative Research." Why do we need it? If you think of science being an incremental slow push on the boundaries of knowledge, then transformative research is a jump across the border in the hope of landing on safe ground. Most likely, you'll jump and drown, or be eaten by dragons. But if you're lucky and, let's not forget about that, smart, you might discover a whole new field of science and noticeably redefine the boundaries of knowledge. The difficulty is of course to find out if the potential benefit justifies the risk. So there needs to be an assessment of both, and a weighting of them against each other. Most of science is not transformative. Science is, by function, conservative. It conserves the accumulated knowledge and defends it. We need some transformative research to overcome this conservatism, otherwise we'll get stuck. That's why the NSF and ERC acknowledge the necessity of high-risk, high-payoff research. But while it is clear that we need some of it, it's not a priori clear we need more of it than we already have. Not all research should aspire to be transformative. How do we know we're too conservative? The only way to reliably know is to take lots of data over a long time and try to understand where the optimal balance lies.
Unfortunately, the type of payoff that we're talking about might take decades to centuries to appear, so that is, at present, not very feasible. Lacking this, the only thing we can do is to find a good argument for how to move towards the optimal balance. One way you can do this is with measures for scientific success. I think this is the wrong approach. It's like setting prices in a market economy by calculating them from the product's properties and future plans. It's not a good way to aggregate information and there's no reason to trust that whoever comes up with the formula for the success measure knows what they're doing. The other way is to enable a natural optimization process, much like the free market prices goods. Just that in science the goal isn't to price goods but to distribute researchers over research projects. How many people should optimally work on which research so their skills are used efficiently and progress is as fast as possible? Most scientists have the aspiration to make good use of their skills and to contribute to progress, so the only thing we need to do is to let them follow their interests. Yes, that's right. I'm saying the best we can do is trust the experts to find out themselves where their skills are of best use. Of course one needs to provide a useful infrastructure for this to work. Note that this does not mean everybody necessarily works on the topic they're most interested in, because the more people work on a topic the smaller the chances become that there are significant discoveries for each of them to be made. The tragedy is of course that this is nothing like how science is organized today. Scientists are not free to choose on which problem to use their skills. Instead, they are subject to all sorts of pressures which prevent the optimal distribution of researchers over projects. The most obvious pressures are financial and time pressure. Short term contracts put a large incentive on short-term thinking. Another problem is the difficulty for researchers to change topics, which has the effect that there is a large (generational) time-lag in the population of research fields. Both of these problems cause a trend towards conservative rather than transformative research. Worse: They cause a trend towards conservative rather than transformative thinking and, by selection, a too small ratio of transformative to conservative researchers. This is why we have reason to believe the fraction of transformative research and researchers is presently smaller than optimal. How can we support potentially transformative research? The right way to solve this problem is to reduce external pressure on researchers and to ensure the system can self-optimize efficiently. But this is difficult to realize. If that is not possible, one can still try to promote transformative research by other means in the hope of coming closer to the optimal balance. How can one do this? The first thing that comes to mind is to write transformative research explicitly into the goals of the funding agencies, encourage researchers to propose such projects, and peers to review them favorably. This most likely will not work very well because it doesn't change anything about the too conservative communities. If you randomly sample a peer review group for a project, you're more likely to get conservative opinions just because they're more common. As a result, transformative research projects are unlikely to be reviewed favorably.
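To make that sampling point concrete, here is a small toy calculation of my own, a minimal sketch with a made-up number: assume a hypothetical 20% of reviewers are sympathetic to high-risk proposals, and that a proposal only gets through if a majority of a randomly drawn panel is sympathetic. It also anticipates the point about small panels made below: small panels fluctuate, large panels reproduce the conservative majority.

from math import comb

def prob_sympathetic_majority(n, p):
    """Probability that more than half of a randomly drawn n-person panel is sympathetic."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

p = 0.2  # hypothetical fraction of reviewers open to transformative proposals
for n in (3, 9, 15):
    print(n, round(prob_sympathetic_majority(n, p), 3))
# With p = 0.2 this gives roughly 0.104 (n = 3), 0.02 (n = 9) and 0.004 (n = 15):
# the smaller the panel, the larger the chance of a favorable fluctuation.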
It doesn't matter if you tell people that transformative research is desirable, because they still have to evaluate if the high risk justifies the potential high payoff. And assessment of tolerable risk is subjective. So what can be done? One thing that can be done is to take a very small sample of reviewers, because the smaller the sample the larger the chance of a statistical fluctuation. Unfortunately, this also increases the risk that nonsense will go through because the reviewers just weren't in the mood to actually read the proposal. The other thing you can do is to pre-select researchers so you have a subsample with a higher ratio of transformative to conservative researchers. This is essentially what FQXi is doing. And, in their research area, they're doing remarkably well actually. That is to say, if I look at the projects that they fund, I think most of it won't lead anywhere. And that's how it should be. On the downside, it's all short-term projects. The NSF is also trying to exploit preselection in a different form in their new EAGER and CREATIV funding mechanisms, which are not assessed by peers at all but exclusively by NSF staff. In this case the NSF staff is the preselected group. However, I am afraid that the group might be too small to be able to accurately assess the scientific risk. Time will tell. Putting a focus on transformative research is very difficult for institutions with a local presence. That's because when it comes to hiring colleagues who you have to get along with, people naturally tend to select those who fit in, both in type of research and in type of personality. This isn't necessarily a bad thing as it benefits collaborations, but it can promote homogeneity and lead to "more of the same" research. It takes a constant effort to avoid this trend. It also takes courage and a long-term vision to go for the high-risk, high-payoff research(er), and not many institutions can afford this courage. So here is again the financial pressure that hinders leaps of progress just because of lacking institutional funding. It doesn't help that during the last few weeks I had to read that my colleagues in basic research in Canada, the UK and also the USA are looking at severe budget cuts: "Of paramount concern for basic scientists [in Canada] is the elimination of the Can$25-million (US$24.6-million) RTI, administered by the Natural Sciences and Engineering Research Council of Canada (NSERC), which funds equipment purchases of Can$7,000–150,000. An accompanying Can$36-million Major Resources Support Program, which funds operations at dozens of experimental-research facilities, will also be axed." [Source: Nature] "Hanging over the effective decrease in support proposed by the House of Representatives last week is the 'sequester', a pre-programmed budget cut that research advocates say would starve US science-funding agencies." [Source: Nature] "[The] Engineering and Physical Sciences Research Council (EPSRC) [is] the government body that holds the biggest public purse for physics, mathematics and engineering research in the United Kingdom. Facing a growing cash squeeze and pressure from the government to demonstrate the economic benefits of research, in 2009 the council's chief executive, David Delpy, embarked on a series of controversial reforms… The changes incensed many physical scientists, who protested that the policy to blacklist grant applicants was draconian.
They complained that the EPSRC's decision to exert more control over the fields it funds risked sidelining peer review and would favour short-term, applied research over curiosity-driven, blue-skies work in a way that would be detrimental to British science." [Source: Nature] So now more than ever we should make sure that investments in basic research are used efficiently. And one of the most promising ways to do this is presently to enable more potentially transformative research.

Thursday, August 09, 2012
Thinking, Fast and Slow
By Daniel Kahneman
Farrar, Straus and Giroux (October 25, 2011)

The book is well written, reads smoothly, is well organized, and thoroughly referenced. As a bonus, the appendix contains reprints of Kahneman's two most influential papers that contain somewhat more details than the summary in the text. He narrates along the story of his own research projects and how they came into being, which I found a little tiresome after he elaborated on the third dramatic insight that he had about his own cognitive bias. Or maybe I'm just jealous because a Nobel Prize winning insight in theoretical physics isn't going to come by that way. In summary, it's a well-written and thoroughly useful book that is interesting for everybody with an interest in human decision-making and its shortcomings. I'd give this book four out of five stars.

Tuesday, August 07, 2012
Why does the baby cry? Fact sheet.

Gloria at 2 months, crying.

Two weeks after delivery, when the husband went back to work and my hemoglobin level had recovered enough to let me think about anything besides breathing, I seemed to be spending a lot of time on The One Question: Why does the baby cry? We had been drowned in baby books that all had something helpful to say. Or so I believe, not having read them. But what really is the evolutionary origin of all that crying to begin with? That's what I was wondering. Is there a reason to begin with? You don't need a degree to know that the baby cries if she's unhappy. After a few weeks I had developed a trouble-shooting procedure roughly like this: Does she have a visible reason to be unhappy? Does she stop crying if I pick her up? New diaper? Clothes comfortable? Too warm? Too cold? Is she bored? Is it possible to distract her? Hungry? When I had reached the end of my list I'd start singing. The singing almost always helped. After that, there's the stroller and white noise and earplugs. Yes, the baby cries when she's unhappy, no doubt about that. But both Lara and Gloria would sometimes cry for no apparent reason, or at least no reason that Stefan and I were able to figure out. The crying is distressing for the parents and costs the baby energy. So why, if it's such an inefficient communication channel, does the baby cry so much? If the baby is trying to tell us something, why haven't hundreds of thousands of years of evolution been sufficient to teach caregivers what it is that she wants? I came up with the following hypotheses:

A) She doesn't cry for any reason, it's just what babies do. I wasn't very convinced of this because it doesn't actually explain anything.

B) She cries so I don't misplace or forget about her. I wasn't very convinced of this either because after two months or so, my brain had classified the crying as normal background noise. Also, babies seem to cry so much it overshoots the target: It doesn't only remind the caregivers, it frustrates them.

C) It's a stress-test.
If the family can't cope well, it's of advantage for future reproductive success of the child if the family breaks up sooner rather than later.

D) It's an adaption delay. The baby is evolutionarily trained to expect something else than what it gets in modern western societies. If I'd just treat the baby like my ancestors did, she wouldn't cry so much.

So I went and looked what the scientific literature has to say. I found a good review by Joseph Soltis from the year 2004 which you can download here. The below is my summary of these 48 pages. First, let us clarify what we're talking about. The crying of human infants changes after about 3 months because the baby learns to make more complex sounds and also becomes more interactive. In the following we'll only consider the first three months that are most likely to be nature rather than nurture. Here are some facts about the first three months of baby's crying that seem to be established pretty well. All references can be found in Soltis' paper.

• Crying increases until about 6 weeks after birth, followed by a gradual decrease in crying until 3 or 4 months, after which it remains relatively stable. Crying is more frequent in the later afternoon and early evening hours. These crying patterns have been found in studies of very different cultures, from the Netherlands, from South African hunter-gatherers, from the UK, Manila, Denmark, and North America.

• Chimpanzees too have a peak in crying frequency at approximately 6 weeks of life, and a substantial decline in crying frequency by 12 weeks.

• The cries of healthy, non-stressed infants last on the average 0.5-1.5 seconds with a fundamental pitch in the range of 200-600 Hz. The melody is either falling or rising/falling (as opposed to rising, falling/rising or flat).

• Serious illness, both genetic and acquired, is often accompanied by abnormal crying. The most common cry characteristic indicating serious pathology is an unusually high pitched cry, in one case study above 2000 Hz, and in many other studies exceeding 1500 Hz. (That's higher than most sopranos can sing.) Examples are: bacterial meningitis 750-1000 Hz, Krabbe's disease up to 1120 Hz, hypoglycemia up to 1600 Hz. Other abnormal cry patterns that have been found in illness are biphonation (the simultaneous production of two fundamental frequencies), too low pitch, and deviations from the normal cry melodies.

• Various studies have been conducted to find out how well adults are able to tell the reason for a baby's cry by playing them previously recorded cries. These studies show mothers are a little bit better than random chance when given a predefined selection of choices (eg pain, anger, other, in one study), but by and large mothers as well as other adults are pretty bad at figuring out the reason for a baby's cry. Without being given categories, participants tend to attribute all cries to hunger.

• It has been reported in several papers that parents described a baby's crying as the most proximate cause triggering abuse and infanticide. It has also been shown that especially the high pitched baby cries produce a response of the autonomic nervous system, measurable for example by the heart rate or skin conductance (the response is higher than for smiling babies). It has also been shown that abusers exhibit higher autonomic responses to high-pitched cries than non-abusers.

• Excessive infant crying is the most common clinical complaint of mothers with infants under three months of age.
• Excessive infant crying that begins and ends without warning is called “colic.” It is often attributed to organic disorders, but if the baby has no other symptoms it is estimated that only 5-10% of “colic” cases go back to an organic disorder, the most common one being lactose intolerance. If the baby has other symptoms (flexed legs, spasm, bloating, diarrhea), the ratio of organic disorder goes up to 45%. The rest cries for unknown reasons. Colic usually improves by 4 months, or so they tell you. (Lara’s didn’t improve until she was 6 months. Gloria never had any.)

• Colic is correlated with postpartum depression, which is in turn robustly associated with reduced maternal care.

• Records and media reports kept by the National Center on Shaken Baby Syndrome implicate crying as the most common trigger.

• In a survey among US mothers, more infant crying was associated with lower levels of perceived infant health, more worry about baby’s health, and less positive emotion towards the infant.

• Some crying bouts are demonstrably unsoothable by typical caregiving responses in the first three months. Well, somebody has to do these studies.

• In studies of nurses judging infant pain, the audible cry was mostly redundant to facial activity in the judgment of pain.

Now let us look at the hypotheses researchers have put forward and how well they are supported by the facts. Again, let me mention that everybody agrees the baby cries when in distress; the question is whether that’s the entire reason.

1. Honest signal of need. The baby cries if and only if she needs or wants something, and she cries to alert the caregivers of that need. This hypothesis is not well supported by the facts. Baby’s cries are demonstrably inefficient at bringing the baby the care it allegedly needs, because caregivers don’t know what she wants and in many cases there doesn’t seem to be anything they can do about it. This is the scientific equivalent of my hypothesis D, which I found not so convincing.

2. Signal of vigor. This hypothesis says that the baby cries to show she’s healthy. The more the baby cries (in the “healthy” pitch and melody range), the stronger she is and the more the mother should care, because it’s a good investment of her attention to raise offspring that’s likely to reproduce successfully. Unfortunately, there’s no evidence linking a high amount of crying to good health of the child. In contrast, as mentioned above, parents perceive children as more sickly if they cry more, which is exactly the opposite of what the baby allegedly “wants” to signal. Also, lots of crying is apparently maladaptive according to the evidence listed above, because it can cause violence against the child. It’s also unclear why a not-so-vigorous child (one who isn’t seriously sick, just too weak to cry much) should alert the caregivers to his lack of vigor and thereby invite neglect. It doesn’t seem to make much sense. This is the scientific equivalent of my hypothesis B, which I didn’t find very convincing either.

3. Graded signal of distress. The baby cries if she’s in distress, and the more distress the more she cries. This hypothesis is, at least as far as pain is concerned, supported by evidence. Pretty much everybody seems to agree on that. As mentioned above however, while distress leads to crying, this leaves open the question why the baby is in distress to begin with and why it cries if caregivers can’t do anything about it. Thus, while this hypothesis is the least controversial one, it’s also the one with the smallest explanatory value.
4. Manipulation: The baby cries so mommy feeds her as often as possible. Breastfeeding stimulates the production of the hormone prolactin; prolactin inhibits estrogen production, which often (though not always) keeps the estrogen level below the threshold necessary for the menstrual cycle to set in. This is called lactational amenorrhea. In other words, the more the baby gets mommy to feed her, the smaller the probability that a younger sibling will compete for resources, thus improving the baby’s own well-being. The problem with this hypothesis is that it would predict the crying to increase when the mother’s body has recovered, some months after birth, and is in shape to carry another child. Instead however, at this time the babies cry less rather than more. (It also seems to say that having siblings is a disadvantage to one’s own reproductive success, which is quite a bold statement in my opinion.)

5. Thermoregulatory assistance. An infant’s thermoregulation is not very well developed, which is why you have to be so careful to wrap them warm when it’s cold and to keep them in the shade when it’s hot. According to this hypothesis the baby cries to make herself warm and also to alert the mother that it needs assistance with thermoregulation. It’s an interesting hypothesis that I hadn’t heard of before, and it doesn’t seem to have been much studied. I would expect however that in this case the amount of crying depends on the external temperature, and I haven’t come across any evidence for that.

6. Inadequacy of central arousal. The infant’s brain needs a certain level of arousal for proper development. Baby starts crying if not enough is going on, to upset herself and her parents. If there’s any factual evidence speaking for this I don’t know of it. It seems to be a very young hypothesis. I’m not sure how this is compatible with my observation that Lara, after excessive crying, would usually fall asleep, frequently in the middle of a cry, and that excitement (people, travel, noise) was a cause for crying too.

7. Underdeveloped circadian rhythm. The infant’s sleep-wake cycle is very different from an adult’s. Young babies basically don’t differentiate night from day. It’s only at around two to three months that they start sleeping through the night and develop a daily rhythm. According to this hypothesis it’s the underdeveloped circadian rhythm that causes the baby distress, probably because certain brain areas are not well synched with other daily variations. This makes a certain sense because it offers a possible explanation for the daily return of crying bouts in the late afternoon, and also for why they fade when the babies sleep through the night. This too is a very young hypothesis that is waiting for good evidence.

8. Behavioral state. The baby’s mind knows three states: sleep, awake, and crying. It’s a very minimalistic hypothesis, but I’m not sure it explains anything. This is the scientific equivalent of my hypothesis A: the baby just cries.

Apparently nobody ever considered my hypothesis C, that the baby cries to move herself into an optimally stable social environment, which would have developmental payoffs. It’s probably a very difficult case to make. The theoretical physicist in me is admittedly most attracted to one of the neat and tidy explanations in which the crying is a side effect of physical development.

So if your baby is crying and you don’t know why, don’t worry. Even scientists who have spent their whole career on this question don’t actually know why the baby cries.
Sunday, August 05, 2012

Erdös and amphetamines: check

Some weeks ago I wrote a review of Jonah Lehrer's book "Imagine," in which I complained about missing references. Now that it turns out Lehrer fabricated quotes and facts on various occasions (see eg here and here), I recalled that I meant to look up a reference on an interesting story he told: that the famous mathematician Paul Erdös kept up his productivity by taking benzedrine. Benzedrine belongs to the amphetamines, also known as speed. Lehrer did not quote any source for this story. So I did look it up, and it turns out it's true. In Paul Hoffman's biography of Erdös one finds:

Erdös first did mathematics at the age of three, but for the last twenty-five years of his life, since the death of his mother, he put in nineteen-hour days, keeping himself fortified with 10 to 20 milligrams of Benzedrine or Ritalin, strong espresso, and caffeine tablets. "A mathematician," Erdös was fond of saying, "is a machine for turning coffee into theorems." When friends urged him to slow down, he always had the same response: "There'll be plenty of time to rest in the grave."

(You can read chapter 1 from the book, which contains this paragraph, here). Benzedrine was available on prescription in the USA during this time. Erdös lived to the age of 83. During his lifetime, he wrote or co-authored 1,475 academic papers.

Lehrer also relates the following story in his book:

Ron Graham, a friend and fellow mathematician, once bet Erdos five hundred dollars that he couldn't abstain from amphetamines for thirty days. Erdos won the wager but complained that the progress of mathematicians had been set back by a month: "Before, when I looked at a piece of blank paper, my mind was filled with ideas," he complained. "Now all I see is a blank piece of paper."

(Omitted umlauts are Lehrer's, not mine.) Lehrer does not mention Erdös was originally prescribed benzedrine to treat depression after his mother's death. I'm not sure exactly what the origin of this story is. It is mentioned in a slightly different wording in this PDF by Joshua Hill:

Erdős's friends worried about his drug use, and in 1979 Graham bet Erdős $500 that he couldn't stop taking amphetamines for a month. Erdős accepted, and went cold turkey for a complete month. Erdős's comment at the end of the month was "You've showed me I'm not an addict. But I didn't get any work done. I'd get up in the morning and stare at a blank piece of paper. I'd have no ideas, just like an ordinary person. You've set mathematics back a month." He then immediately started taking amphetamines again.

Hill's article is not quoted by Lehrer, and there's no reference in Hill's article. It also seems to go back to Paul Hoffman's book (same chapter). (Note added: I revised the above paragraph, because I hadn't originally seen it in Hoffman's book.)

Partly related: Calculate your Erdős number here; mine is 4.

Friday, August 03, 2012

Lara and Gloria are presently very difficult. They have learned to climb the chairs and upwards from there; I constantly have to pick them off the furniture. Yesterday, I turned my back on them for a second, and when I looked again Lara was sitting on the table, happily pulling a string of Kleenex out of the box, while Gloria was moving away the chair Lara had used to climb up. During the last month, the girls have added a few more words to their vocabulary. The one that's most obvious to understand is "lallelalle," which is supposed to mean "empty", and is usually a message to me to refill the apple juice.
Gloria also has taken a liking to the word "Haar" (hair), and she's been saying "Goya" for a while, which I believe means "Gloria". Or maybe yogurt. They both can identify most body parts if you name them. Saying "feet" will make them grab their feet, "nose" will have them point at their nose, and so on. If Gloria wants to make a joke, she'll go and grab her sister's nose instead. Gloria also announces that she needs a new diaper by patting her behind, alas after the fact.

I meanwhile am stuck in proposal writing again. The organization for the conference in October and the program in November is going nicely, and I'm very much looking forward to both events. My recent paper was accepted for publication in Foundations of Physics, and I've wrapped up another project that had been in my drawer for a while. Besides this, I've spent some time reading up on the history of Nordita, which is quite interesting actually; maybe I'll have a post on this at some point. I finally said good bye to my BlackBerry and now have an iPhone, which works so amazingly smoothly I'm deeply impressed.

Below is a little video of the girls that I took the other day. YouTube is offering a fix for shaky videos, which is why you might see the borders moving around. I hope your summer is going nicely and that you have some time to relax!

Wednesday, August 01, 2012

Letter of recommendation 2.0

I am currently reading Daniel Kahneman’s book “Thinking, Fast and Slow,” which summarizes a truly amazing amount of studies. Among many other cognitive biases, Kahneman explains that it is difficult for people to accept that algorithms based on statistical data often produce better predictions than experts. This is difficult to accept even when one is shown evidence that the algorithm is better. He cites many examples of this, among them forecasting the future success of military personnel, the quality of wine, or the treatment of patients.

The reason, Kahneman explains, is that humans are not as efficient at screening and aggregating data as software. Humans are prone to miss details, especially if the data is noisy; they get tired or fall for various cognitive biases in their interpretation of data. Generally, the human brain does not effortlessly engage in Bayesian inference. In combination with it trying to save energy and effort, this leads to mistakes. Humans are especially bad at making summary judgements of complex information, Kahneman writes, while at the same time being overly confident about the accuracy of their judgement. One of his examples is: “Experienced radiologists who evaluate chest X-rays as “normal” or “abnormal” contradict themselves 20% of the time when they see the same picture on separate occasions.”

Interestingly however, Kahneman also cites evidence that expert intuition can be very valuable, provided the expert’s judgement is about a situation where learning from experience is possible. (Expert judgement is an illusion when a data series is entirely uncorrelated.) He thus suggests that judgements should be based on an analysis of statistical data from past performance, combined with expert intuition. We should overcome our disliking of statistical measures, he writes: “to maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments” (when prediction is difficult due to a large amount of relevant factors).
This made me question my own objections to using measures for scientific success, since scientific success is the type of prediction that is very difficult to make because luck plays a big role. Part of my disliking arguably stems from a general unease about leaving decisions about people’s future to a computer. While that is the case, and probably part of the reason I don’t like the idea, it’s not the actual problem I have belabored in my earlier blogposts. For me the main problem with using measures for scientific success is that I’d like to see evidence that they actually work and do not adversely affect research. I am worried particularly that a widely used measure for scientific success would literally redefine what we mean by success in the first place. A small mistake, implemented and streamlined globally, could in this way dramatically slow down progress.

But I am wondering now whether, based on what Kahneman writes, I have to conclude that in addition to asking for letters of recommendation (the “expert’s intuition”) it would be valuable to judge researchers’ past performance on a point scale. Consider that you’d be asked to fill out a questionnaire for each of your students and postdocs, ranking him or her from 0 to 5 for those characteristics typically named in letters: technical skills, independence, creativity, and so on, and also add your confidence in these judgements. You could update your scores if your opinion changes. What a hiring committee would do with these scores is a different question entirely.

The benefit of this would be the assembly of a database needed to discover predictors for future performance, if they exist. The difficulty is that the experts in question are rarely offering a neutral judgement; many have a personal interest in seeing their students succeed, so there needs to be some incentive for accuracy. The risk would be that such a predictor might become a self-fulfilling prophecy. At least until a reality check documents that actually, despite all the honors, prizes and awards, very little has happened in terms of actual progress.

Either way, now that I think about it, such a ranking would be temptingly useful for hiring committees to sort through large numbers of applicants quickly. I wouldn’t be surprised if somebody tries this sooner or later. Would you welcome it?
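To make the proposal concrete, here is a minimal sketch of what one entry in such a database might look like. Everything in it is illustrative only: the field names follow the characteristics mentioned above, the 0 to 5 scale is the one suggested in the text, and nothing refers to any existing system.

  -- Hypothetical record for one assessment in the proposed database.
  -- All names and scales are assumptions for illustration, not an existing format.
  data Assessment = Assessment
    { candidate    :: String
    , assessor     :: String
    , technical    :: Int     -- 0 to 5
    , independence :: Int     -- 0 to 5
    , creativity   :: Int     -- 0 to 5
    , confidence   :: Double  -- assessor's own confidence, between 0 and 1
    , lastUpdated  :: String  -- scores can be revised if opinions change
    } deriving Show

  example :: Assessment
  example = Assessment "A. Student" "Prof. B." 4 3 5 0.7 "2012-08-01"

A collection of such records, gathered over the years and compared against what the people in question actually went on to do, is the kind of data one would need before trusting any formula.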
The Mechanical Theory of Everything

Dr. Joseph M. Brown’s latest book, The Mechanical Theory of Everything, is available now from Basic Research Press and Amazon. This volume represents a lifetime of research into the problems and foundations of biology, physics, mathematics, and language. Click here for more information, or click here to order your copy. Also available on Amazon.

The Schrödinger Equation is a Newtonian Equation

When a photon impacts a free matter particle at rest, mass and momentum are imparted to the particle, and it accelerates to velocity $v$. The center of mass of the imparted mass is captured at such a radius that the angular momentum imparted to the system is equal to Planck’s constant $h$. The imparted mass and the impacted particle mass remain at a fixed distance from each other, and they begin rotating about their common center of mass as the system center of mass translates in a straight line. This motion is manifested as a matter particle undulating as it translates. The wavelength of this undulation is $h/(mv)$, where $m$ is the matter particle mass. The Schrödinger equation models the dynamics of the motion of the matter particle relative to a reference frame moving with the captured mass/matter particle system. Solution to the equation gives the velocity of the matter particle as a function of the location of the particle. We derive the Schrödinger equation by balancing the centrifugal forces against the centripetal forces. Thus, we show that the Schrödinger equation is a Newtonian equation. [Click here for the full article.]

The Neutrino: A Counter Example to the Second Law of Thermodynamics

by Joseph M. Brown

The kinetic particle model of the neutrino was first discovered in 1968-9 and published in Brown and Harmon [1]. All that was known at that time was that the neutrino had to be the result of a complete condensation of the ether gas which pervades the universe. Shortly after that time it was discovered that the Maxwell-Boltzmann parameters $v_r$ and $v_m$ arranged in the form $[(v_r-v_m)/v_m]^2$ had the value 1/137.1. Since $v_r$ and $v_m$ characterize the gas that makes up the ether and the magnitude of the parameters so arranged was close to the fine structure constant [2], the researchers were encouraged that the kinetic particle approach to physical theory must have merit. A little over ten years later it was discovered that if background particles were condensed, as required by the neutrino model, and aligned to all move in the same direction without changing their individual speeds, then if they were squeezed together so they all touched each other without changing their energy then the condensed assembly would translate at the speed $v_r-v_m$ (see [3] and [4]). Thus, it was known that the speed of light is $v_r-v_m$. Further, the condensation and acceleration process described above provided a means for extracting background particles, which were forming the condensed state. However, it was not known at that time (1982) how the background particles could come in from the background and result in a complete condensation. It was not known how this condensation could be possible until 2012 [5]. The following paragraphs outline the rigorous analysis of the neutrino. This is a proof that a stable inhomogeneous state of Newtonian particles can exist. This analysis shows that the second law of thermodynamics is not universally true. In this analysis the ether gas is made of brutino particles and is called the brutino gas.

Click here to read the full article
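The quoted value of 1/137.1 is easy to check numerically, under one assumption the excerpt does not spell out: that $v_r$ denotes the root-mean-square speed and $v_m$ the mean speed of a Maxwell-Boltzmann gas, in which case their ratio is $\sqrt{3\pi/8}$ independently of temperature and particle mass. A minimal sketch:

  -- Check of the quoted 1/137.1, assuming v_r = rms speed and v_m = mean speed
  -- of a Maxwell-Boltzmann gas (an assumption; the excerpt does not define them).
  ratio :: Double
  ratio = sqrt (3 * pi / 8)            -- v_r / v_m for such a gas

  invAlphaLike :: Double
  invAlphaLike = 1 / (ratio - 1) ^ 2   -- 1 / [(v_r - v_m)/v_m]^2

  main :: IO ()
  main = print invAlphaLike            -- prints approximately 137.1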
Friday, March 31, 2006

Quantum Probability

I took part in a brief discussion over at antimeta which reminded me that I ought to get back to a document I started writing on quantum mechanics for dummies. One of my pet peeves is that I believe there to be a little bit of a conspiracy to make quantum mechanics seem less accessible to people. Not a deliberate conspiracy - but people maintaining an aura of mystery about it that puts people off the subject. All of the fuzzy talk about quantum mechanics in the popular science press does nothing to help the situation. In particular, there is a core of quantum mechanics that I believe requires few prerequisites beyond elementary probability theory, vector spaces and complex numbers.

Anyway, I did some more digging on the web and found this course by Greg Kuperberg. The opening paragraphs almost take the words I wanted to say out of my mouth. In particular, despite the mystical mumbo-jumbo that is often written on the subject, the rules of quantum mechanics are "rigorous and clear" and "The precepts of quantum mechanics are neither a set of physical forces nor a geometrical model for physical objects. Rather, they are a variant, and ultimately a generalization, of classical probability theory." Most of all "...more mathematicians could and should learn quantum mechanics...". You don't even have to understand F=ma to get started with quantum mechanics and get to the point where you can really and truly get to grips, directly, with the so-called paradoxes of quantum mechanics such as the Bell Paradox.

The strange thing is that you won't find words like this in most of the quantum mechanics textbooks. They throw you into physical situations that require finding tricky solutions to the Schrödinger equation while completely failing to give any insight into the real subject matter of quantum mechanics. Most QM books I know are really introductions to solving partial differential equations. (Remark to physicists: I bet you didn't know you could get the simultaneous eigenvalues for the energy and angular momentum operators for the hydrogen atom by a beautifully simple method that doesn't require even looking at a differential equation...) The best thing about the newly appearing field of quantum computing is that it's slowly forcing people to think about quantum mechanics separately from the mechanics. So even though I haven't read that course myself yet, I'm recommending that everyone read it :-) And some time I might get back to the even more elementary introduction I hope to put together.

Thursday, March 30, 2006

A Neat Proof Technique

Last night I read this paper. There wasn't really anything in the paper I wanted; I was more interested in flexing my newly developing computer science muscles. Anyway, like with all computer science papers I've managed to finish, I felt like I understood the paper but had no idea what the punch line was. Still, it was worthwhile because part way through there was a neat mathematical proof technique used that I think is worthy of a mention. The paper is about the generalisations of 'fold' and 'unfold' that I played with recently.

The usual fold function acts on a list like [4,8,15,16,23,42]. Imagine it written as cons 4 (cons 8 (cons 15 (cons 16 (cons 23 (cons 42 []))))), where the cons function is a 'constructor' that constructs a list from a list element and another list, and [] is the empty list. The fold function takes three arguments, a binary function f, a constant g, and a list. It then replaces cons by f and [] by g.
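Since everything below leans on it, here is a minimal version of the fold being described, written out in Haskell (in the standard Prelude this function is called foldr; the name fold is kept to match the text):

  -- Replace every cons by f and the final [] by g.
  fold :: (a -> b -> b) -> b -> [a] -> b
  fold f g []     = g
  fold f g (x:xs) = f x (fold f g xs)

  -- Folding with cons and [] itself just rebuilds the list:
  --   fold (:) [] [4,8,15,16,23,42]  ==  [4,8,15,16,23,42]

With this definition the examples that follow can be run directly in an interactive session.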
For example, fold (+) 0 [4,8,15,16,23,42] replaces cons by (+) and [] by 0, and so is 108, the sum of the numbers. An obvious theorem is that fold cons [] x = x. fold also generalises to structures such as trees. In this case there is another constructor that builds a tree from a node and its children. I'll stick with lists as the proof technique is much the same.

The authors prove a bunch of theorems about the fold function. But now they want to prove something is true of all of the elements in a list (actually, a tree, but I use a list). Suppose the list is as above and you want to prove the property P holds for all elements in that list. Then you want to prove P(4) && P(8) && P(15) && P(16) && P(23) && P(42) = True. (I'm using Haskell's && to mean 'and'.) In other words you want to prove that fold (&&) True [P(4),P(8),P(15),P(16),P(23),P(42)] = True. The authors then proceed to show their theorem is true using the theorems about fold they just proved. Neat eh?

But maybe I haven't explained very well so I'll try to spell it out differently. The authors want to prove theorems about all elements of a certain datastructure. So what they do is take the logical proposition that expresses what they want and rewrite the proposition so that the proposition itself is in the same 'shape' as the datastructure. The statement of the truth of the proposition is an application of fold to the datastructure. So they can now use the theory of datastructures they develop to prove something about the structure of the logical proposition itself - namely that it's true. You need to read the actual paper (top left, page 5) to see the actual details of the proof technique.

In the particular case of lists the proof technique turns out to be standard induction. If you can prove P of the first element of the list, and you can prove "if P is true of an element it must be true of the next element", then P holds for every element of the list. Every time you generalise fold to a different datastructure there is a new induction principle that goes with it. I suppose what they have actually done is prove a metatheorem about certain types of logical proposition and then used that to prove specific propositions. Well, I hope that makes some kind of sense. I think it's quite a neat trick and that many of the paper's readers might miss the cool bit of self-reference that's going on here.

Monday, March 27, 2006

The General Theory of Self-Reproducing Programs

I'm guessing that anyone reading this is already familiar with the idea of programs that output themselves. If you're not there's a great list of such programs here. But what I was surprised to discover at the weekend was that there is in fact a bit of general theory about this. In particular, what do we need in a computer language to guarantee we can write a self-replicator? Consider the following in Haskell:

let p x = x ++ show x in putStrLn $ p"let p x = x ++ show x in putStrLn $ p"

Evaluate this expression in an interactive Haskell session and it prints itself out. But there's a nice little cheat that made this easy: the Haskell 'show' function conveniently wraps a string in quotation marks. So we simply have two copies of one piece of code: one without quotes followed by one in quotes. In C, on the other hand, there is a bit of a gotcha. You need to explicitly write code to print those extra quotation marks. And of course, just like in Haskell, this code needs to appear twice, once out of quotes and once in.
But the version in quotes needs the quotation marks to be 'escaped' using backslash so it's not actually the same as the first version. And that means we can't use exactly the same method as with Haskell. The standard workaround is not to represent the quotation marks directly in the strings, but instead to use the ASCII code for this character (34) and use C's convenient %c mechanism to print it, so the string can contain %c wherever a quotation mark is needed. Again we were lucky: C provides this great %c mechanism.

What do you need in a language to be sure you can write a self-replicator? It turns out there is a very general approach to writing self-replicators that's described in Vicious Circles. What follows is essentially from there except that I've simplified the proofs by reducing generality.

We'll use capital letters to represent programs. Typically these mean 'inert' strings of characters. I'll use square brackets to indicate the function that the program evaluates. So if P is a program to compute the mathematical function p, we write [P](x) = p(x). P is a program and [P] is a function. We'll consider both programs that take arguments, like the P I just mentioned, and also programs, R, that take no arguments, so [R] is simply the output or return value of the program R.

Now we come to an important operation. We've defined [P](x) to be the result of running P with input x. Now we define P(x) to be the program P modified so that it no longer takes an argument or input but instead substitutes the 'hard-coded' value of x instead. In other words [P(x)] = [P](x). P(x) is, of course, another program. There are also many ways of implementing P(x). We could simply evaluate [P](x) and write a program to simply print this out or return it. On the other hand, we could do the absolute minimum and write a new piece of code that simply calls P and supplies it with a hard-coded argument. Whatever we choose is irrelevant to the following discussion.

So here's the demand that we make of our programming language: that it's powerful enough for us to write a program that can compute P(x) from inputs P and x. This might not be a trivial program to write, but it's not conceptually hard either. It doesn't have gotchas like the quotation mark issue above. Typically we can compute P(x) by some kind of textual substitution on P.

With that assumption in mind, here's a theorem: any program P that takes one argument or input has a fixed point, X, in the sense that running P with input X gives the same result as just running X. Given an input X, P acts just like an interpreter for the programming language as it outputs the same thing as an interpreter would given input X.

So here's a proof: Define the function f(Q) = [P](Q(Q)). We've assumed that we can write a program that computes P(x) from P and x, so we know we can write a program to compute Q(Q) for any Q. We can then feed this as an input to [P]. So f is obviously computable by some program which we call Q0. So [Q0](Q) = [P](Q(Q)). Now the fun starts:

[P](Q0(Q0)) = [Q0](Q0)   (by definition of Q0)
            = [Q0(Q0)]   (by definition of P(x))

In other words Q0(Q0) is our fixed point. So now take P to compute the identity function. Then [Q0(Q0)] = [P](Q0(Q0)) = Q0(Q0). So Q0(Q0) outputs itself when run! What's more, this also tells us how to do other fun stuff like write a program to print itself out backwards. And it tells us how to do this in any reasonably powerful programming language.
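As a small illustration of that last remark, the same trick as in the Haskell one-liner above gives a program that prints its own source backwards; this is a sketch of the construction applied by hand, not an example taken from the book:

  -- Evaluate this expression in an interactive Haskell session and it prints
  -- its own text reversed: the quoted string is the code up to the opening
  -- quote, and 'show' re-attaches the quotation marks before 'reverse' runs.
  let p x = reverse (x ++ show x) in putStrLn $ p"let p x = reverse (x ++ show x) in putStrLn $ p"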
We don't need to worry about having to work around problems like 'escaping' quotation marks - we can always find a way to replicate the escape mechanism too. So does it work in practice? Well it does for Haskell - I derived the Haskell fragment above by applying this theorem directly, and then simplifying a bit. For C++, however, it might give you a piece of code that is longer than you want. In fact, you can go one step further and write a program that automatically generates a self-replicator. Check out Samuel Moelius's kpp. It is a preprocessor that converts an ordinary C++ program into one that can access its own source code by including the code to generate its own source within it.

Another example of an application of these methods is Futamura's theorem which states that there exists a program that can take as input an interpreter for a language and output a compiler. I personally think this is a little bogus.

Stanislaw Lem has Passed Away

He has an obituary in The Times. If you haven't read any of his science fiction you really should read some. For lovers of mathematical poetry out there, here is a translation of a passage from his Cyberiad:

Come, let us hasten to a higher plane,
Where dyads tread the fairy fields of Venn,
Their indices bedecked from one to n,
Comingled in an endless Markov chain!

Come, every frustrum longs to be a cone,
And every vector dreams of matrices.
Hark to the gentle gradient of the breeze
It whispers of a more ergodic zone.

In Riemann, Hilbert or in Banach space
Let superscripts and subscripts go their ways.
Our asymptotes no longer out of phase,
We shall encounter, counting, face to face.

I'll grant thee random access to my heart,
Thou'lt tell me all the constants of thy love;
And so we two shall all love's lemmas prove,
And in our bound partition never part.

For what did Cauchy know, or Christoffel,
Or Fourier, or any Boole or Euler,
Wielding their compasses, their pens and rulers,
Of thy supernal sinusoidal spell?

Cancel me not--for what then shall remain?
Abscissas, some mantissas, modules, modes,
A root or two, a torus and a node:
The inverse of my verse, a null domain.

Ellipse of bliss, converge, O lips divine!
The product of our scalars is defined!
Cyberiad draws nigh, and the skew mind
Cuts capers like a happy haversine.

I see the eigenvalue in thine eye,
I hear the tender tensor in thy sigh.
Bernoulli would have been content to die,
Had he known such a² cos(2φ)!

The Most Amazing and Mysterious Thing in All of Mathematics

I think that a good candidate is the table of the Homotopy Groups of Spheres. For those not familiar with algebraic topology, π_m(S^n) is the set of equivalence classes of continuous functions from the m-dimensional sphere to the n-dimensional sphere, where two functions are considered equivalent if they are homotopic. An easy way to visualise this is that two functions are homotopic if you can interpolate a continuous animation between them. (Can you guess what industry I work in?) This set also has a group structure which is straightforward to define but which I won't go into here (unless someone requests it). That's all there is to it. How can such simplicity generate such complexity? Monstrous Moonshine is pretty mysterious too - but it takes a lot of work to state it for a non-expert. So this wins by default. Like John Baez I also wonder about the curious appearance of 24 on row 3.

And an aside I discovered on Anarchaia: related to an earlier post.
Friday, March 24, 2006

Talking of category theory...I can't remember if I previously mentioned the bizarre functorial property of the number six I came across in Designs, Codes and their Links. This is the actual theorem: consider the category whose objects are the n element sets and whose arrows are the bijections between the sets. This category has a non-trivial functor to itself only for n=6. By smart use of Google Print you should be able to view the proof. It's the first five pages of Chapter 6. (Don't make the obvious mistake with Google Print and end up with only three pages of that chapter.)

Anyway, it's not too hard to give a bit of insight into what this means. Consider the set with n elements. You can build all kinds of combinatorial objects which have some underlying set. Permuting the original n-element set induces a permutation of the combinatorial object and hence its underlying set. If the underlying set also has n elements you usually just end up with the original permutation. For example consider n=3 and the combinatorial object that is the even permutations on the 3 element set. This also has three elements. But the induced permutations on this new set are equivalent to the original permutations (via a bijection from the 3 element set to the set of its even permutations). On the other hand, if n=6 then you can construct another 6 element combinatorial object where the induced action of S6 is quite different to the original one. In fact, it gives an outer automorphism of S6, another bizarre thing that only exists for n=6. To see the actual details of the construction look here.

I should also mention that Todd wrote a paper on this subject: The Odd Number 6, JA Todd, Math. Proc. Camb. Phil. Soc. 41 (1945) 66--68. It's also mentioned in the Wikipedia but that's only because yours truly wrote that bit. Anyway, I'm vaguely interested in how this connects to other exceptional objects in mathematics such as S(5,6,12), the Mathieu groups, the Golay codes, the Leech lattice, Modular Forms, String Theory, as well as Life, the Universe and Everything.

Thursday, March 23, 2006

Sets, Classes and Voodoo

After much anticipation I started reading Barwise and Moss's "Vicious Circles". Unfortunately by chapter 2 I've reached the point where the statements don't just seem incorrect, they don't even seem to be propositions. I raised my question on USENET but I may as well mention it here too.

Here's how I understand the concept of a class in ZF Set Theory: talk about classes is really just talk about predicates. We enrich the language of Set Theory with a bunch of new terms ('class', 'subclass') and overload other terms ('is an element of', 'is a subset of') to give a new language that reifies classes, but instead of adding new axioms to deal with classes we provide a translation back to ZF without these extra terms. For example if P and Q are classes then "x is in P" means "P(x)", and "P is contained in Q" means "for all x, P(x) implies Q(x)", or even "a subclass of a set is a set" which translates to the axiom of separation. (We could alternatively add new axioms, instead of the translation, and then we'd get NBG Set Theory.) Am I right so far? (By the way, nobody ever seems to say what I've just said explicitly. In particular, it seems to me that once you add the term 'class' you need to start proving metatheorems about classes to show what kind of deductions about them are valid, but nobody ever seems to do this.)
I understand that this is a sensible thing to do because of the overloading - in the enriched language sets and classes look similar and that allows us to do category theory, for example, in a much wider context, without having to define everything twice. (And also, maybe, because talk about sets is really a substitute for talk about classes...but that's a philosophical point for another day...) So what does "If a class is a member of a class then it is a set" mean in the context of ZF? A class is really just a predicate, so it doesn't make sense to me that there could be a predicate about predicates. So at this point the book is looking like Voodoo. Can anyone out there clarify this for me? (Hmmm...I wonder if the translation to ZF is an adjoint functor...ow...MUST STOP OBSESSING ABOUT ADJUNCTIONS...)

Wednesday, March 22, 2006

The Representation of Integers by Quadratic Forms

There's a nice article at Science News on the work of Bhargava on the representation of integers by quadratic forms. Rather than just restate what's written there (and in a few other blogs) let me quote the main theorem, which is really quite amazing: if Q is a (positive definite) quadratic form Q: Z^n -> Z and the image of Q contains {1, 2, 3, 5, 6, 7, 10, 13, 14, 15, 17, 19, 21, 22, 23, 26, 29, 30, 31, 34, 35, 37, 42, 58, 93, 110, 145, 203, 290}, then it contains every positive integer. Closely related is Conway's 15-theorem, which originally inspired Bhargava and which (I think) I first read about in the excellent book The Sensual (Quadratic) Form. Check out the section on topographs where Conway works his usual magic and makes some parts of the theory seem so clear and obvious that even your cat could start proving theorems about quadratic forms.

Tuesday, March 21, 2006

Category Theory Screws You Up!

Well it does. Since I started a burst of intense category theory reading a couple of weeks ago (not that intense as I have a full time job) I've been showing unpleasant symptoms. These include insomnia, lack of concentration and grumpiness. We're not just talking correlation here, I have causal mechanisms too: how can I sleep when an example of an adjunction might pop into my mind at any moment, how can I concentrate when my brain is already fully occupied in finding those examples, and of course I'm grumpy with all this effort to understand difficult theorems that always turn out to be trivial and content-free. Fortunately I find that drugs help with the insomnia, but there's no cure for the other symptoms. At least I haven't reached the stage where I sit down to dinner wondering whether or not my eating it is an operation with a left or right adjoint. (But I thought it, didn't I, so I must be pretty far gone.) And I'm not dreaming commutative diagrams yet. So here's my advice: if someone comes up to you in a shady bar or alleyway and offers you a monad, or an adjunction, or even an innocent little natural transformation, just say "no!".

Friday, March 17, 2006

What can I do with adjoints? And Lemma 28.

By 'do' I mean 'compute'. In order to teach myself about F-(co)algebras I wrote some literate Haskell that actually made use of them. The neat thing was that it gave me something for nothing. I wrote a generic unfold function (mostly not by thinking about it but by juggling all the functions that were available to me until I found one of the correct type) and I was amazed to find that it did something that I found useful, or at least interesting. Can I do the same for adjunctions?
Unfortunately, all of the examples I tried were basically uninteresting. They were quintessential category theory: you spend ages unpacking the definitions only to find that what you ended up with was trivial. Unlike the fold/unfold case I didn't find myself ever defining a new function that I could use for something. I did just find this link with some intuition that I'd never read before: 'To get some understanding of adjunctions, it's best to think of adjunction as a relationship between a "path" to an element and a "data structure" or "space" of elements.' Hmmm...none of the category theory books talks about that. I think I can see it: a forgetful functor rubs out the path and just leaves you with the destination. (Don't take my word for it, I'm just guessing at this point.)

Maybe I'm now on target to be able to understand Lambek & Scott, which I bought 20 years ago (I'm older than you thought), not really knowing what it was actually about. BTW Check out the review on Amazon: "I was looking for a book for my girlfriend this Christmas and stumbled upon this one..."

And I thought I'd mention Newton's Lemma 28. But after I asked a question about it on Usenet, some other people said far more interesting things than I could ever say.

Thursday, March 16, 2006

Answers and questions

You should now see why the CA rule arises.

Wednesday, March 15, 2006

Homotopies between proofs and between programs

Many years ago, while out drinking after an algebraic topology seminar, someone mentioned the idea of using algebraic topology to analyse computer programs. I never did get the details but I'm guessing it must have been something like the material presented at this conference. Anyway, the comment got me thinking. I spend much of my time refactoring code. This is making small modifications to software intended to restructure it to a better form without changing its behaviour (eg. I've just been dumped with a single C++ source file 0.25MB long, most of which is one function and is in considerable need of some tidying up!). If you think of a program as a path to get you from an input to an output then a change to that program, that keeps the behaviour the same, is a kind of homotopy between paths. But I couldn't really get anywhere with that idea.

In Baez's This Week's Finds this week he talks about homotopies between proofs. These are ways to convert one proof of a proposition into another proof of the same proposition. Cut elimination is such an operation but there are surely others. According to the Curry-Howard isomorphism, a computer program that maps type A to type B is essentially the same thing as a proof of B assuming A. So a homotopy between programs is the same thing as a homotopy between proofs. For example, Baez's example of converting between two proofs of P -> R corresponds exactly to converting the piece of Haskell code f . (g . h) to (f . g) . h using the associativity of . (. is Haskell for function composition.)

So, is there some kind of interesting topology to be extracted from this? Does the space of programs that map input x to output f(x) have interesting fundamental groups or homology groups? I guess a good place to start would be with two simple programs to perform the same operation that can't be transformed into each other by simple steps. And do I really need omega-categories to make sense of this? Anyway, besides Baez's stuff there's also theo's comments with a real world example of what looks like a homotopy between proofs.
Tuesday, March 14, 2006

Cellular automaton puzzle

Here are the cells of a 1D cellular automaton:

|a0|a1|a2|a3| ...

Instead of a finite number of states, the ai are integers. Here's the update rule: a0 <- a1 We start it in this state:

Monday, March 13, 2006

Coalgebras and Automata

fold is a really useful operator in a number of programming languages, including Haskell. It also has a dual partner, unfold. But more importantly, both can be generalised, using a little category theory, to F-algebras and F-coalgebras. Amazingly, unfold, in the right category (as I discovered after a weekend of frenzied category theory) gives a really nice way to convert descriptions of automata into runnable automata. Anyway, I wrote the whole thing up as a piece of literate Haskell. UPDATE: That came out formatted ugly so I've moved it to my web site where I'll eventually format it better. Here's the link.

Thursday, March 09, 2006

Smullyan's favourite game is called the Hypergame, invented by Zwicker. I hadn't heard of it until today when I read about it in Barwise and Moss's book Vicious Circles. Consider games where the players take turns and the game is guaranteed to terminate in a finite number of moves. These are called well-founded games. The Hypergame is quite simple. The first player chooses a well-founded game. The game now turns into that game and the second player starts by opening in that game with play continuing in that game. Eg. if the first player says "Chess (with whatever house rules are required to make it terminate)" then they start playing Chess with the second player moving first. Because of the stipulation of well-foundedness the Hypergame lasts precisely one move more than some well-founded game. Therefore the Hypergame is itself well-founded and always terminates. Anyway, Alice and Bob decide to play:

Alice: Let's play the Hypergame!
Bob: Cool! I'll go first. My move is to select the Hypergame.
Alice: OK, now we're playing the hypergame. So my first move is to select the Hypergame.
Bob: The Hypergame is cool, I pick that.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.
Alice: I pick the Hypergame.
Bob: I pick the Hypergame.

Wednesday, March 08, 2006

It's a square, square world!

I was browsing for books on elliptic functions on Ebay when I came across this link to a book by Oscar S Adams called "Elliptic Functions Applied To World Maps". This is pretty obscure, I thought to myself, so I googled Oscar. Sure enough, there really was an Oscar S Adams who worked on cartographic projections and his speciality was conformal (ie. angle preserving) projections including this one:

Anyway, there are lots of interesting conformal projections out there. Some of them are quite pretty so I thought I'd share. I like the use of the Schwarz-Christoffel transformation, better known (to me) from 2D fluid dynamics, to map the world into a variety of polygons. I was also surprised to see that Charles Sanders Peirce worked in this area. Just for the hell of it I might place a bid on the book. UPDATE: Come on! 'fess up if you're the person bidding against me!
Tuesday, March 07, 2006

Blind Games

I've always been fascinated by games that involve bluffing and I've always been more fascinated by 'blind' games. I define a 'blind' game to be one where there is some state in the game that a player can choose not to reveal. Instead the player claims what that state is. The other player can choose to accept or reject this state of affairs, but if it is rejected then the actual state is revealed and one or other player is 'punished' as appropriate. (Eg. blind chess isn't 'blind'.)

Consider this trivial game: you deal a pack of cards to N players. The players take turns and the winner is the first one to get rid of all their cards. A turn consists of discarding a set of cards that all have the same number or rank. This game isn't very interesting. But now apply the 'blind transform' to this game with the 'punishment' of picking up all of the discards. The players play cards face down and make some claim about what cards they are laying down. If someone challenges this claim you have a look and mete out the punishment. A trivial game has now been turned into a mildly entertaining game. It can be turned into an excellent game by means of the 'ethyl transform' - playing while drunk. But moving on...

One of my favourite games years ago, when I could talk people into playing it, was "Blind Le Truc". It's Le Truc but with all of the cards played face down but players claiming what the cards are. In this case the punishment is losing a trick. Even played 'sighted' this game is full of bluff. Playing it blind brings it to a whole new level. Players can play against each other for extended periods of time playing what is practically an imaginary game, going through all the motions of Le Truc without ever seeing a card. It's a lot of fun. Well, I think so anyway.

Even the most trivial of games can become interesting. Eg. each player is given an entire suit of a deck of cards. Each round a player places one card in the middle. Whoever places the highest card in the centre wins that round. Play continues until nobody has any cards left and the winner is whoever won the most rounds. After applying the blind transform, with round losing as punishment, this game is turned into something fun, at least for a few minutes. Poker is also a blind version of a trivial game.

How amenable are these games to analysis? That last game is pretty simple. Suppose two players each have only 3 cards. What is the optimal way to play? You can also apply the blind transform multiple times. It's more interesting if the nth application has a tougher punishment than the (n-1)th. In that case it corresponds to raising the stakes for being caught, or accusing someone of, 'cheating'.

Friday, March 03, 2006

When is one thing equal to some other thing?

That's the title of a rough draft of a paper by Barry Mazur (courtesy of Lambda the Ultimate). It's basically an introduction to the ideas of category theory but it has some nice comments along the way on what we mean by equivalence - often a tricky subject in category theory. And here's a coincidence. I'd just done a web search to find out what properties the number 691 has. (It pops up in the theory of modular forms.) And then I see a link to this paper (completely independently) which uses 691 as an example on the first page. Weird!

Thursday, March 02, 2006

An Actual Application of Fractal Dimension

According to this story, and others, a trove of 32 paintings, purported to be Jackson Pollocks, was discovered last year.
But many people had doubts about whether or not they were really by the master of paint pouring. So a physicist, Richard Taylor, used some software to compute the fractal dimension of Pollock's paintings and compare them to these new paintings. It turned out that the fractal dimension didn't fit in with the trend of Pollock's work and hence they look like fakes. Of course, me mentioning this fact isn't intended to be an endorsement of the reliability of these methods. I put as much trust in them as I'd put in using catastrophe theory to navigate my way through a prison riot.

Wednesday, March 01, 2006

A Cautionary Tale for Would-Be Generalisers

You may or may not know this story already as it's been floating around for a while. But I'm going to retell it anyway. Define sinc(x) = sin(x)/x and sinc(0) = 1, and let 'I' mean integration from 0 to infinity. Then:

I sinc(x) = pi/2
I sinc(x)sinc(x/3) = pi/2
I sinc(x)sinc(x/3)sinc(x/5) = pi/2
I sinc(x)sinc(x/3)sinc(x/5)sinc(x/7) = pi/2
I sinc(x)sinc(x/3)sinc(x/5)sinc(x/7)sinc(x/9) = pi/2

You see the pattern right? So the story is that some guy was evaluating these integrals for some reason or other, getting pi/2 all the time. They happily chugged on getting...

I sinc(x)sinc(x/3)sinc(x/5)sinc(x/7)sinc(x/9)sinc(x/11) = pi/2
I sinc(x)sinc(x/3)sinc(x/5)sinc(x/7)sinc(x/9)sinc(x/11)sinc(x/13) = pi/2

So when the computer algebra package this person was using said:

I sinc(x)sinc(x/3)sinc(x/5)sinc(x/7)sinc(x/9)sinc(x/11)sinc(x/13)sinc(x/15) = 467807924713440738696537864469/935615849440640907310521750000*pi

they knew they'd hit a bug. They complained to the vendor who agreed that, yes, definitely, there was something screwy in their integration routine. Except there wasn't. Weird eh? I've known this story for a while but I've only just stumbled on Borwein's paper that explains the phenomenon. They're called Borwein Integrals now.

Update: Corrected a small error. It's pi/2 for the first 7 terms, not just the first 6. Even more amazing!
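The punch line of Borwein's explanation can be checked with a few lines of arithmetic: the pattern of exact pi/2 values persists as long as 1/3 + 1/5 + ... + 1/(2n+1) stays at most 1, and the sum first exceeds 1 precisely when the 1/15 term joins in, which is where the integral falls short. A small sketch:

  -- Partial sums of 1/3 + 1/5 + ...; the integrals above equal pi/2 while
  -- this sum is at most 1, and 1/15 is the first term that pushes it past 1.
  partialSums :: [(Int, Double)]
  partialSums = [ (d, sum [ 1 / fromIntegral k | k <- [3,5..d] ]) | d <- [3,5..17] ]

  main :: IO ()
  main = mapM_ print partialSums
  -- ..., (13,0.9551...), (15,1.0218...), (17,1.0806...)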
Saturday, September 26, 2015

Volkswagen Emission Scandal vs German Political Correctness Leadership

Germany with Angela Merkel is actively seeking to take the leading role in a giant transformation of the world economy into a new green economy, with reductions of CO2 emissions to preindustrial levels as the prime goal. In this giant transformation German car industry has promoted the diesel engine as being more fuel efficient, with less CO2 emission than the gasoline engine, under strong support from German governmental political correctness. The Volkswagen emission scandal shows the hollowness and hypocrisy of this grand scale religion: To meet the strict demands of political correctness and moral leadership set by Germany, grand scale cheating is necessary and is accordingly delivered by Germany. The world is watching with amazement. And in China a new coal power plant is opened every day.

Thursday, September 24, 2015

Finite Element Quantum Mechanics 6: Basic Analysis vs Observation

Let us now inspect the basics of the atomic model considered in this sequence of posts. Consider then a neutral atom of kernel charge $Z$ with $N=Z$ electrons occupying non-overlapping domains in space. Assume that the electrons are partitioned into a sequence of shells $S_m$ of increasing radius $r_m$ with corresponding widths $d_m$, each shell being filled by $2m^2$ electrons, for $m=1,...,M,$ with $M$ the number of shells. We consider a hypothetical atom with all shells fully filled with $2, 8, 18, 32, 50,...,$ electrons in successive shells, displaying a basic aspect of the periodicity of the periodic table of elements.

Consider now the case $d_m\sim m$ with $r_m\sim m^2$, and assume $r_1=d_1\sim\frac{1}{Z}$. The electron density $\rho_m$ in $S_m$, assumed to be spherically symmetric, then satisfies

• $\rho_m r_m^2 d_m\sim m^2$, from which follows that
• $\rho_m\sim \frac{m^3}{r_m^3}$.     (1)

We now compute the following characteristics of this model:

1. $M^3\sim Z$, that is $M\sim Z^{\frac{1}{3}}$,
2. potential energy in $S_1\sim Z^2$,
3. potential energy in $S_m\sim m^2Z/r_m\sim Z/d_1\sim Z^2$,
4. total potential energy and thus total energy $\sim Z^{\frac{7}{3}}$.     (2)

We check that indeed there is room for $m^2$ electrons in shell $S_m$, because the volume of $S_m$ is $r_m^2d_m\sim m^5$, while the volume of an electron $\sim d_m^3\sim m^3$.

We observe that (2) fits with observations. We understand that the electronic density is distributed so that the potential energy and thus total energy in each full shell is basically the same, which may be viewed as a heavenly socialistic organization of the shell structure of an atom.

Numerical computation seeking the ground state energy by relaxation in the Schrödinger model of post 5, starting from an initial density distribution according to (1), shows good correspondence with observation, supporting the basic analysis of this post. Numbers will be presented in an upcoming post.

The basic aspect of this model, as a form of electron density model, is that electrons (or shells in the present spherically symmetric case) keep individuality by occupying different domains of space, which makes it possible to accurately represent electron-electron repulsion. This feature is not present in standard density models such as Thomas-Fermi and Density Functional Theory. In these models electrons lack individuality as parts of electron clouds, which makes it difficult to represent electron-electron repulsion ab initio.
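For completeness, the elementary sums behind items 1 and 4 of the list above, under the stated assumption of fully filled shells: the total electron number is $Z=\sum_{m=1}^M 2m^2=\frac{M(M+1)(2M+1)}{3}\sim\frac{2}{3}M^3$, so $M\sim Z^{\frac{1}{3}}$; and since by item 3 each of the $M$ shells contributes a potential energy $\sim Z^2$, the total scales as $M\cdot Z^2\sim Z^{\frac{1}{3}}Z^2=Z^{\frac{7}{3}}$, which is (2).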
Recall also that in the standard Schrödinger equations wave functions appear as multi-dimensional linear combinations of products of one-electron wave functions defined in all of space by separate spatial variables, thus with each electron "both nowhere and everywhere" without individuality, which requires a statistical interpretation of the wave function as a multi-dimensional uncomputable monster.

Another basic aspect of the presented model is continuity of electron density across inter-electron or inter-shell boundaries for the electron configuration of ground states. This allows atoms to have stable ground states as non-dissipative periodic states of minimal energy.

Notice further that the size of the atom as $r_M\sim Z^{-\frac{1}{3}}$, with decreasing size as $Z$ increases, corresponds to the observed decrease of size moving to the right in each row of the periodic table.

Friday, September 18, 2015

Ripples in the Fabric of Space and Time?

The code word of modern physics is:
• fabric of space and time
• observe gravitational waves—ripples in the fabric of space and time.

If we dare to ask what the meaning of "fabric of space and time" may be, we get the following illuminating lesson by leading physicists:
• First of all, space-time is not a fabric. Space and time are not tangible 'things' in the same way that water and air are. It is incorrect to think of them as a 'medium' at all.
• No physicist or astronomer versed in these issues considers space-time to be a truly physical medium, however, that is the way in which our minds prefer to conceptualize this concept, and has done so since the 19th century.
• We really do not know what space-time is, other than two clues afforded by quantum mechanics and general relativity.
• Space-time does not claim existence in its own right, but only as a structural quality of the [gravitational] field. (Einstein)
• Space and time coordinates are just four out of many degrees of freedom we need, to specify a self-consistent theory. What we are going to have [in any future Theory of Everything] is not so much a new view of space and time, but a de-emphasis of space and time. (Steven Weinberg)
• In the theory of gravity, you can't really separate the structure of space and time from the particles which are associated with the force of gravity [such as gravitons]. The notion of a string is inseparable from the space and time in which it moves. (Michael Green)

The punch line of this educational experience is presented in this way:
• So, the question about what happens to space-time when a particle moves through it at near the speed of light is answered by saying that this is the wrong question to ask. Just because the brain can construct a question doesn't mean that the question has a physical answer!

We understand that LIGO in its search for "ripples in the fabric of space and time" is studying "the wrong question" and thus can be viewed as a study into the "fabric of fantasy" which has become such a fundamental part of modern physics demanding full devotion by the sharpest brains of modern physicists (see also here).

Thursday, September 17, 2015

LIGO: Absurdity of Big Physics

The Advanced LIGO Project has now been launched as the largest single experiment ever funded by NSF at $0.365 billion:
• The LIGO scientific and engineering team at Caltech and MIT has been leading the effort over the past seven years to build Advanced LIGO, the world's most sensitive gravitational-wave detector.
• Gravitational waves were predicted by Albert Einstein in 1916 as a consequence of his general theory of relativity, and are emitted by violent events in the universe such as exploding stars and colliding black holes.
• Experimental attempts to find gravitational waves have been ongoing for over 50 years, and they haven't yet been found. They're both very rare and possess signal amplitudes that are exquisitely tiny.
• Although earlier LIGO runs revealed no detections, Advanced LIGO, also funded by the NSF, increases the sensitivity of the observatories by a factor of 10, resulting in a thousandfold increase in observable candidate objects.
• The original configuration of LIGO was sensitive enough to detect a change in the lengths of the 4-kilometer arms by a distance one-thousandth the diameter of a proton; this is like accurately measuring the distance from Earth to the nearest star—over four light-years—to within the width of a human hair.
• Advanced LIGO, which will utilize the infrastructure of LIGO, is much more powerful.
• The improved instruments will be able to look at the last minutes of the life of pairs of massive black holes as they spiral closer together, coalesce into one larger black hole, and then vibrate much like two soap bubbles becoming one.
• In addition, Advanced LIGO will be used to search for the gravitational cosmic background, allowing tests of theories about the development of the universe only $10^{-35}$ seconds after the Big Bang.

Read these numbers: the accuracy of old LIGO was
• the diameter of a human hair over a distance of 4 light-years,
• $10^{-35}$ seconds after the Big Bang,
and yet not the slightest little gravitational wave signal was recorded from even the most violent large-scale phenomena thinkable. The conclusion should be clear: there are no gravitational waves. After all, why should there be any? By Einstein's general relativity, which nobody claims to grasp?

But this is not the way Big Physics works: the fact that nothing was found by the infinitely sensitive LIGO requires an even more infinitely sensitive Advanced LIGO at a cost of half a billion to be built by eager physicists, and after Advanced LIGO has found nothing, funding for an Advanced Advanced LIGO will be requested, and so on...but why are taxpayers supplying this Big Money?

Saturday, September 5, 2015

Gerard 't Hooft: Improved Understanding of Quantum Mechanics Needed

Gerard 't Hooft is one of the Nobel Laureates in Physics who is not happy with the present state of understanding of quantum mechanics and seeks to do something about it. 't Hooft starts out in Determinism beneath Quantum Mechanics with:
• The need for an improved understanding of what Quantum Mechanics really is, needs hardly be explained in this meeting.
• My primary concern is that Quantum Mechanics, in its present state, appears to be mysterious.
• It should always be the scientists' aim to take away the mystery of things.
• It is my suspicion that there should exist a quite logical explanation for the fact that we need to describe probabilities in this world quantum mechanically.
• This explanation presumably can be found in the fabric of the Laws of Physics at the Planck scale.
• However, if our only problem with Quantum Mechanics were our desire to demystify it, then one could bring forward that, as it stands, Quantum Mechanics works impeccably.
• It predicts the outcome of any conceivable experiment, apart from some random ingredient. This randomness is perfect.
There never has been any indication that there would be any way to predict where in its quantum probability curve an event will actually be detected.
• Why not be at peace with this situation?
• One answer to this is Quantum Gravity. Attempts to reconcile General Relativity with Quantum Mechanics lead to a jungle of complexity that is difficult or impossible to interpret physically. In a combined theory, we no longer see "states" that evolve with "time", we do not know how to identify the vacuum state, and so on.
• What we need instead is a unique theory that not only accounts for Quantum Mechanics together with General Relativity, but also explains for us how matter behaves.
• We should find indications pointing towards the correct unifying theory underlying the Standard Model, towards explanations of the presumed occurrence of supersymmetry, as well as the mechanism(s) that break it. We suspect that deeper insights in what and why Quantum Mechanics is, should help us further to understand these issues.

't Hooft thus acknowledges that quantum mechanics is mysterious, which all prominent physicists do, but 't Hooft is not at peace with this situation, since after all the essence of science is understanding, although most of his colleagues seem to have accepted once and for all that quantum mechanics cannot be understood and cannot be reconciled with general relativity. 't Hooft then proceeds to seek a determinism behind quantum mechanics in the form of cellular automata (also here).

I am pursuing another route to an understandable form of quantum mechanics as analog computation with finite precision, which in a way connects to 't Hooft's cellular automata, but is expressed by Schrödinger-type wave equations in a continuum mechanics framework. In this framework the finite precision computation makes a difference between smooth (strong) solutions and non-smooth (weak) solutions of the wave equations: smooth solutions satisfy the wave equations exactly (with infinite precision), while non-smooth solutions satisfy the equations only in a weak sense with finite precision and loss of information as a form of dissipative radiation. This allows the ground state of an atom, as a smooth solution without dissipation, to be stable over time, while an excited state, as a non-smooth solution, will return to the ground state under dissipative radiation. The situation is analogous to that described in my work together with Johan Hoffman on fluid mechanics, with turbulent solutions as non-smooth dissipative solutions of formally inviscid Euler equations, which allowed us to resolve d'Alembert's paradox (J Math Fluid Mech 2008) and formulate a new theory of flight (to appear in J Math Fluid Mech 2015), among other things.
Wednesday, September 2, 2015

Finite Element Quantum Mechanics 5: 1d Model in Spherical Symmetry

The new Schrödinger equation I am studying in this sequence of posts takes the following form, in spherical coordinates with radial coordinate $r\ge 0$ in the case of spherical symmetry, for an atom with kernel of charge $Z$ at $r=0$ with $N\le Z$ electrons of unit charge distributed in a sequence of non-overlapping spherical shells $S_1,...,S_M$ separated by spherical surfaces of radii $0=r_0<r_1<r_2<...<r_M=\infty$, with $N_j>0$ electrons in shell $S_j$ corresponding to the interval $(r_{j-1},r_j)$ for $j=1,...,M,$ and $\sum_j N_j = N$: Find a complex-valued differentiable function $\psi (r,t)$ depending on $r\ge 0$ and time $t$, satisfying for $r>0$ and all $t$,
• $i\dot\psi (r,t) + H(r,t)\psi (r,t) = 0$              (1)
where $\dot\psi = \frac{\partial\psi}{\partial t}$ and $H(r,t)$ is the Hamiltonian defined by
• $H(r,t) = -\frac{1}{2r^2}\frac{\partial}{\partial r}(r^2\frac{\partial }{\partial r})-\frac{Z}{r}+ V(r,t)$,
• $V(r,t)= 2\pi\int\vert\psi (s,t)\vert^2\min(\frac{1}{r},\frac{1}{s})R(r,s,t)s^2\,ds$,
• $R(r,s,t) = (N_j -1)/N_j$ for $r,s\in S_j$ and $R(r,s,t)=1$ else,
• $4\pi\int_{S_j}\vert\psi (s,t)\vert^2s^2\, ds = N_j$ for $j=1,...,M$.                  (2)

Here $-\frac{Z}{r}$ is the kernel-electron attractive potential and $V(r,t)$ is the electron-electron repulsive potential, computed using the fact that the potential $W(s)$ of a spherical uniform surface charge distribution of radius $r$ centered at $0$ of total charge $Q$ is given by $W(s)=Q\min(\frac{1}{r},\frac{1}{s})$, with a reduction for a lack of self-repulsion within each shell given by the factor $(N_j -1)/N_j$. The $N_j$ electrons in shell $S_j$ are thus homogenised into a spherically symmetric charge distribution of total charge $N_j$.

This is a free boundary problem readily computable on a laptop, with the $r_j$ representing the free boundary separating shells of spherically symmetric charge distribution of intensity $\vert\psi (r,t)\vert^2$, and a free boundary condition asking continuity and differentiability of $\psi (r,t)$.

Separating $\psi =\Psi +i\Phi$ into real part $\Psi$ and imaginary part $\Phi$, (1) can be solved by explicit time stepping with (sufficiently small) time step $k>0$ and given initial condition (e.g. as ground state):
• $\Psi^{n+1}=\Psi^n-kH\Phi^n$,
• $\Phi^{n+1}=\Phi^n+kH\Psi^n$,
for $n=0,1,2,...,$ where $\Psi^n(r)=\Psi (r,nk)$ and $\Phi^n(r)=\Phi (r,nk)$, while stationary ground states can be computed by the iteration
• $\Psi^{n+1}=\Psi^n-kH\Psi^n$,
• $\Phi^{n+1}=\Phi^n-kH\Phi^n$,
while maintaining (2).

A remarkable fact is that this model appears to give ground state energies as minimal eigenvalues of the Hamiltonian for both ions and atoms for any $Z$ and $N$ within a percent or so, or alternatively ground state frequencies from direct solution in time dependent form. Next I will compute excited states and transitions between excited states under exterior forcing.

Specifically, what I hope to demonstrate is that the model can explain the periods of the periodic table corresponding to the following sequence of numbers of electrons in shells of increasing radii: 2, (2, 8), (2, 8, 8), (2, 8, 18, 8), (2, 8, 18, 18, 8)..., which, truth be told, lacks a convincing explanation in standard quantum mechanics (according to E. Scerri among many others).
The basic idea is thus to represent the total wave function $\psi (r,t)$ as a sum of shell wave functions with non-overlapping supports in the different shells, requiring $\psi (r,t)$ and thus $\vert\psi (r,t)\vert^2$ to be continuous across inter-shell boundaries as free boundary condition, corresponding to continuity of charge distribution as a classical equilibrium condition.

I have also with encouraging results tested this model for $N\le 10$ in full 3d geometry without spherical shell homogenisation, with a wave function as a sum of electronic wave functions with non-overlapping supports separated by a free boundary determined by continuity of wave function including charge distribution.

We compare with the standard (Hartree-Fock-Slater) Ansatz of quantum mechanics with a multi-dimensional wave function $\psi (x_1,...,x_N,t)$ depending on $N$ independent 3d coordinates $x_1,...,x_N,$ as a linear combination of wave functions of the multiplicative form
• $\psi_1(x_1,t)\times\psi_2(x_2,t)\times ....\times\psi_N(x_N,t)$,
with each electronic wave function $\psi_j(x_j,t)$ with global support (non-zero in all of 3d space). Such multi-d wave functions with global support thus depend on $3N$ independent space coordinates and as such defy both direct physical interpretation and computability, as soon as $N>1$, say. One may argue that since such multi-d wave functions cannot be computed, it does not matter that they have no physical meaning, but the net output appears to be nil, despite the declared immense success of standard quantum mechanics based on this Ansatz.
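To make the stepping above concrete, here is a minimal sketch (my own toy code, not the author's) of the ground-state relaxation $\Psi^{n+1}=\Psi^n-kH\Psi^n$ for the simplest case of a single electron ($N=1$, so the shell repulsion $V$ vanishes), using the standard substitution $u=r\psi$ so that the radial Laplacian becomes a plain second derivative. Grid size, time step and cutoff are ad hoc choices; for $N=1$ the overall normalization is immaterial since the problem is linear.

# Minimal sketch (not the author's code) of the ground-state relaxation described above,
# for one electron around a kernel of charge Z, written in terms of u = r*psi.
import numpy as np

Z = 2.0                      # kernel charge (the He+ ion as a test case)
r_max, n = 20.0 / Z, 1000    # radial cutoff and number of grid points
r = np.linspace(r_max / n, r_max, n)
h = r[1] - r[0]
k = 0.4 * h**2               # explicit step, small enough for stability

u = r * np.exp(-r)           # initial guess for u = r*psi

def H_u(u):
    """Radial Hamiltonian acting on u = r*psi: -u''/2 - (Z/r)*u, with u = 0 at both ends."""
    upp = np.zeros_like(u)
    upp[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    upp[0] = (u[1] - 2 * u[0]) / h**2        # uses u(0) = 0
    upp[-1] = (-2 * u[-1] + u[-2]) / h**2    # uses u(r_max) = 0
    return -0.5 * upp - (Z / r) * u

for step in range(200000):
    u = u - k * H_u(u)                       # Psi <- Psi - k*H*Psi
    u /= np.sqrt(np.trapz(u**2, r))          # keep the charge normalized

E = np.trapz(u * H_u(u), r)                  # Rayleigh quotient of the relaxed state
print("computed ground state energy:", E)    # exact hydrogenic value is -Z**2/2 = -2.0 here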
Two-electron Ground State of a Spin-Independent Hamiltonian is a singlet

Jun 8, 2012 #1

The problem is from Ashcroft & Mermin, Ch. 32, #2(a). (This is for self-study, not coursework.)

The mean energy of a two-electron system with Hamiltonian

[tex]\mathcal{H} = -\frac{\hbar^2}{2m}(\nabla_1^2 + \nabla_2^2) + V(r_1, r_2)[/tex]

in the state ψ can be written (after an integration by parts) in the form:

[tex]E = \int d{\bf r}_1 d{\bf r}_2 \left[\frac{\hbar^2}{2m}\{|\nabla_1\psi|^2 + |\nabla_2\psi|^2\} + V({\bf r}_1,{\bf r}_2)|\psi|^2 \right][/tex]

Show that the lowest value the above expression assumes over all normalized antisymmetric differentiable wavefunctions ψ that vanish at infinity is the triplet ground-state energy E_t, and that when symmetric functions are used the lowest value is the singlet ground-state energy E_s.

2. Relevant equations

Just basic knowledge of quantum mechanics? The proof should be elementary it seems...

3. The attempt at a solution

Before proving this theorem, it seemed natural to me that the ground state should be singlet because if the spin state is triplet, it is symmetric so the spatial wavefunction is anti-symmetric, and thus the existence of a node indicates that it has higher energy than the singlet wavefunction. But I don't know how to proceed from the statement of the problem. Naive application of the Euler-Lagrange equation gives me Schrodinger's equation with zero energy. I don't know how I should use the parity of the wavefunction since it is enclosed by the absolute signs.

Thank you for your help in advance.

Mar 16, 2015 #2

Hi wc2351, it just so happens that I just solved this problem for the class I am TAing. You have the right idea for the proof, but let me lay down the final steps in order for the benefit of anyone who might read it. The key is that the two-electron Schrödinger equation for the orbital wavefunction [itex]\psi(\mathbf{r}_1,\mathbf{r}_2)[/itex] can be viewed as the single-particle Schrödinger equation in 6 dimensions. Therefore, all we know about the single particle wavefunction also applies here.

I) As you point out, the orbital ground state cannot have any nodes because a node costs more kinetic energy than it can decrease the potential energy.

II) Because the Schrödinger equation is real, the eigenfunctions can always be chosen real. More precisely, if [itex]\psi[/itex] is a solution to the time-independent Schrödinger equation, then so is [itex]\psi + \psi^*[/itex]. Because the orbital ground state wavefunction has no nodes, it can then always be chosen positive.

III) As a bonus, this shows that the orbital ground state wavefunction is unique, because two positive wavefunctions cannot be orthogonal.

IV) Because the potential is symmetric under interchange of particles, we must have:
[itex]|\psi(\mathbf{r}_1,\mathbf{r}_2)|^2 = |\psi(\mathbf{r}_2,\mathbf{r}_1)|^2[/itex]
But the wavefunction is real so:
[itex]\psi(\mathbf{r}_1,\mathbf{r}_2)^2= \psi(\mathbf{r}_2,\mathbf{r}_1)^2[/itex]
and it can be chosen positive so we can take the square root:
[itex]\psi(\mathbf{r}_1,\mathbf{r}_2)= \psi(\mathbf{r}_2,\mathbf{r}_1)[/itex]
i.e. the orbital ground state is symmetric.

V) By Pauli exclusion, the spin ground state must therefore be antisymmetric. But for two electrons, the only antisymmetric spin state is the singlet.

That completes the proof.
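A small numerical illustration (mine, not from the thread): two particles on a 1D grid in a box with a soft repulsive interaction. Diagonalizing the spatial Hamiltonian and sorting the low-lying states by exchange parity shows the lowest state is exchange-symmetric, so the lowest antisymmetric (triplet-type) spatial state lies strictly higher, as the argument above says. The grid size, interaction strength and softening are arbitrary choices.

# Two particles in a 1D box with repulsion: compare lowest symmetric vs antisymmetric states.
import numpy as np

n = 40
L = 1.0
x = np.linspace(0, L, n + 2)[1:-1]          # interior points, psi = 0 on the walls
h = x[1] - x[0]

# single-particle kinetic energy, -1/2 d^2/dx^2, Dirichlet boundaries
T = (np.diag(np.full(n, 1.0)) - 0.5 * np.diag(np.ones(n - 1), 1)
     - 0.5 * np.diag(np.ones(n - 1), -1)) / h**2

I = np.eye(n)
X1, X2 = np.meshgrid(x, x, indexing="ij")
V = 5.0 / (np.abs(X1 - X2) + 0.1)           # softened repulsion; strength and softening are arbitrary

H = np.kron(T, I) + np.kron(I, T) + np.diag(V.ravel())
E, psi = np.linalg.eigh(H)

def exchange_parity(v):
    """<psi| swap |psi> for a normalized vector on the (x1, x2) grid."""
    m = v.reshape(n, n)
    return np.sum(m * m.T)

symmetric = [E[i] for i in range(10) if exchange_parity(psi[:, i]) > 0.9]
antisymmetric = [E[i] for i in range(10) if exchange_parity(psi[:, i]) < -0.9]

print("lowest symmetric (singlet-type) energy  :", symmetric[0])
print("lowest antisymmetric (triplet-type) energy:", antisymmetric[0])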
Varieties of Emergence

David J. Chalmers
Department of Philosophy
University of Arizona
Tucson, AZ 85721.

[[Written for the Templeton Foundation workshop on emergence in Granada, August 2002. Given the informal nature of the workshop, I haven't been especially careful with citations and such, but I should note up front that not much of what follows is fundamentally original with me. I hope that nevertheless there is something useful at least in the way I have put things together.]]

Two concepts of emergence

The term "emergence" has the potential to cause no end of confusion in science and philosophy, as it is used to express two quite different concepts. We can label these concepts strong emergence and weak emergence. Both of these concepts are extremely important, but it is vital to keep them separate. As far as I can tell, the papers for the Granada workshop are about evenly divided between papers on strong emergence and papers on weak emergence, so there is a danger of miscommunication here.

We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain. Strong emergence is the notion of emergence that is most common in philosophical discussion of emergence, and is the notion invoked by the "British emergentists" of the 1920s.

We can say that a high-level phenomenon is weakly emergent with respect to a low-level domain when truths concerning that phenomenon are unexpected given the principles governing the low-level domain. Weak emergence is the notion of emergence that is most common in recent scientific discussion of emergence, and is the notion that is typically invoked by proponents of emergence in complex systems theory. (See Bedau 1997 for a nice discussion of the notion of weak emergence.)

These definitions of strong and weak emergence are first approximations, which might later be refined. But they are enough to exhibit the key differences between the notions. As just defined, cases of strong emergence will likely also be cases of weak emergence (although this depends on just how "unexpected" is understood). But cases of weak emergence need not be cases of strong emergence. It often happens that a high-level phenomenon is unexpected given principles of a low-level domain, but is nevertheless deducible in principle from truths concerning that domain.

The emergence of high-level patterns in cellular automata — a paradigm of emergence in recent complex systems theory — provides a clear example. If one is given only the basic rules governing a cellular automaton, then the formation of complex high-level patterns (such as gliders) may well be unexpected, so these patterns are weakly emergent. But the formation of these patterns is straightforwardly deducible from the rules (and initial conditions), so these patterns are not strongly emergent. Of course, to deduce the facts about the patterns in this case may require a fair amount of calculation, which is why their formation was not obvious to start with. Nevertheless, upon examination these high-level facts are a straightforward consequence of low-level facts. So this is a clear case of weak emergence without strong emergence.

Strong emergence has much more radical consequences than weak emergence. If there are phenomena that are strongly emergent with respect to the domain of physics, then our conception of nature needs to be expanded to accommodate them.
That is, if there are phenomena whose existence is not deducible from the facts about the exact distribution of particles and fields throughout space and time (along with the laws of physics), then this suggests that new fundamental laws of nature are needed to explain these phenomena.

The existence of phenomena that are merely weakly emergent with respect to the domain of physics does not have such radical consequences. The existence of unexpected phenomena in complex biological systems, for example, does not on its own threaten the completeness of the catalog of fundamental laws found in physics. As long as the existence of these phenomena is deducible in principle from a physical specification of the world (as in the case of the cellular automaton), then no new fundamental laws or properties are needed: everything will still be a consequence of physics.

So if we want to use emergence to draw conclusions about the structure of nature at the most fundamental level, it is not weak emergence but strong emergence that is relevant. Of course weak emergence may still have important consequences for our understanding of nature. Even if weakly emergent phenomena do not require the introduction of new fundamental laws, they may still require in many cases the introduction of further levels of explanation above the physical level, in order to make these phenomena maximally comprehensible to us. Further, by showing how a simple starting point can have unexpected consequences, the existence of weakly emergent phenomena can be seen as showing that a simple physicalist picture of the world need not be overly reductionist, but rather can accommodate all sorts of unexpected richness at higher levels.

In a way, the philosophical morals of strong emergence and weak emergence are diametrically opposed. Strong emergence, if it exists, can be used to reject the physicalist picture of the world as fundamentally incomplete. By contrast, weak emergence can be used to support the physicalist picture of the world, by showing how all sorts of phenomena that might seem novel and irreducible at first sight can nevertheless be grounded in underlying simple laws. In what follows, I will say a little more about both strong and weak emergence.

Strong emergence

We have seen that strong emergence, if it exists, has radical consequences. The question that immediately arises, then, is: Are there strongly emergent phenomena? My own view is that the answer to this question is yes. I think there is exactly one clear case of a strongly emergent phenomenon, and that is the phenomenon of consciousness. We can say that a system is conscious when there is something it is like to be that system: that is, when there is something it feels like from the system's own perspective. It is a key fact about nature that it contains conscious systems; I am one such. And there is reason to believe that the facts about consciousness are not deducible from any number of physical facts. I have argued this case at length elsewhere (Chalmers 1996, 2002) and will not repeat the case here. But I will mention two well-known avenues of support. First, it seems that a colorblind scientist given complete physical knowledge about brains could nevertheless not deduce what it is like to have a conscious experience of red. Second, it seems logically coherent in principle that there could be a world physically identical to this one, but lacking consciousness entirely, or containing conscious experiences different from our own.
If these claims are correct, it appears to follow that facts about consciousness are not deducible from physical facts alone. If this is so, then what follows? I think that even if consciousness is not deducible from physical facts, states of consciousness are still systematically correlated with physical states. In particular, it remains plausible that in the actual world, the state of a person's brain determines their state of consciousness, in the sense that duplicating the brain state will cause the conscious state to be duplicated too. That is, consciousness still supervenes on the physical domain. But importantly, this supervenience holds only with the strength of laws of nature (in the philosophical jargon, it is natural or nomological supervenience). In our world, it seems to be a matter of law that duplicating physical states will duplicate consciousness; but in other worlds with different laws, a system with the same physical state as me might have no consciousness at all. This suggests that the lawful connection between physical processes and consciousness is not itself derivable from physical laws, but instead involves further basic laws of its own. These are what we might call fundamental psychophysical laws. I think this provides a good general model for strong emergence. We can think of paradigm strongly emergent phenomena as being systematically determined by low-level facts without being deducible from those facts. In philosophical language, they are naturally but not logically supervenient on low-level facts. In any case like this, fundamental physical laws need to be supplemented with further fundamental laws to ground the connection between low-level properties and high-level properties. Something like this seems to be what the British emergentist C.D. Broad had in mind, when he invoked the need for "trans-ordinal laws" connecting different levels of nature. Are there other cases of strong emergence, besides consciousness? I think that there are no other clear cases, and that there are fairly good reasons to think that there are no other cases. Elsewhere (Chalmers 1996; Chalmers and Jackson 2001) I have argued that given a complete catalog of physical facts about the world, supplemented by a complete catalog of facts about consciousness, a Laplacean superbeing could in principle deduce all the high-level facts about the world, including the high-level facts about chemistry, biology, economics, and so on. If this is right, then phenomena in this domain may be weakly emergent from the physical, but they are not strongly emergent (or if they are strongly emergent, this strong emergence will derive wholly from a dependence on the strongly emergent phenomena of consciousness). One might wonder about cases in which high-level laws, say in chemistry, are not obviously derivable from low-level laws of physics. How can I know now that this is not the case? Here, one can reply by saying that even if the high-level laws are not deducible from the low-level laws, it remains plausible that they are deducible (or nearly so) from the low-level facts. For example, if one knows the complete distribution of atoms in space and time, it is plausible that one can deduce from there the complete distribution of chemical molecules, whether or not the laws governing molecules are immediately deducible from the laws governing atoms. So any emergence here is weaker than the sort of emergence that I suggest is present in the case of consciousness. 
Still, this suggests the possibility of an intermediate but still "radical" sort of emergence, in which high-level facts and laws are not deducible from low-level laws (combined with initial conditions). If this intermediate sort of emergence exists, then if our Laplacean superbeing is armed only with low-level laws and initial conditions (as opposed to all the low-level facts throughout space and time), it will be unable to deduce the facts about some high-level phenomena. This will presumably go along with a failure to be able to deduce even all the low-level facts from low-level laws plus initial conditions (if the low-level facts were derivable, the demon could deduce the high-level facts from there). So this sort of emergence entails a sort of incompleteness of physical laws even in characterizing the systematic evolution of low-level processes.

The best way of thinking of this sort of possibility is as involving a sort of downward causation. It requires basic principles saying that when certain high-level configurations occur, certain consequences will follow. (These are what McLaughlin 1992 calls configurational laws.) These consequences will themselves either be cast in low-level terms, or will be cast in high-level terms that put strong constraints on low-level facts. Either way, low-level laws will be incomplete as a guide to both the low-level and the high-level evolution of processes in the world. (In such a case, one might respond by introducing new, highly complex low-level laws to govern evolution in these special configurations, allowing low-level laws to be complete once again. But the point of this sort of emergence will still remain: it will just have to be rephrased, by saying that non-configurational low-level laws are an incomplete guide to the evolution of processes. See Meehl and Sellars 1956 for related ideas here.)

I don't think there is anything incoherent about the idea of this sort of downward causation. (Jaegwon Kim [e.g. Kim 1992, 1999] argues against downward causation, but I'm not sure to what extent we disagree — something to discuss at the workshop.) I don't know whether there are any examples of it in the actual world, however. While it's certainly true that we can't currently deduce all high-level facts and laws from low-level laws plus initial conditions, I don't know of any compelling evidence for high-level facts and laws (outside the case of consciousness) that are not deducible in principle. Others may know more about this than me, however.

Perhaps the most interesting potential case of downward causation is in the case of quantum mechanics, at least on certain "collapse" interpretations thereof. On these interpretations, there are two principles governing the evolution of the quantum wavefunction: the linear Schrödinger equation, which governs the standard case, and a nonlinear measurement postulate, which governs special cases of "measurement". In these cases, the wavefunction is held to undergo a sort of "quantum jump" quite unlike the usual case. A key issue is that no-one knows just what the criteria for a "measurement" are; but it is clear that for this interpretation to work, measurements must involve certain highly specific criteria, most likely at a high level. If so, then we can see the measurement postulate as itself a sort of configurational law, involving downward causation. Of course in this case the configurational law is in effect already built into the standard formulation of the theory, so quantum mechanics on such an interpretation already involves emergent behavior.
Both of these can be seen as "strong" varieties of emergence in that they involve in-principle nondeducibility and novel fundamental laws. But they are quite different in character. If I am right about consciousness, then it is a case of an emergent quality, while if the relevant interpretations of quantum mechanics are correct, then it is more like a case of emergent behavior. One can in principle have one sort of radical emergence without the other. If one has emergent qualities without emergent behavior, one has an "epiphenomenalist" picture on which there is a new fundamental quality that plays no causal role with respect to the lower level. If one has emergent behavior without emergent qualities, one has a picture of the world on which the only fundamental properties are physical, but on which their evolution is governed in part by high-level configurational laws.

One might also in principle have both emergent qualities and emergent causation together. If so, one has a picture on which a new fundamental quality is itself involved in laws of "downward causation" with respect to low-level processes. This last option can be illustrated by combining the cases of consciousness and quantum mechanics discussed above, as in the familiar interpretations of quantum mechanics according to which it is consciousness itself that is responsible for wavefunction collapse. On this picture, the emergent quality of consciousness is not epiphenomenal, but plays a crucial causal role.

My own view is that there is just one sort of emergent quality (relative to the physical domain), namely consciousness. I don't know whether there is any emergent causation, but it seems to me that if there is any emergent causation, quantum mechanics is the most likely locus for it. If both sorts of emergence exist, it is natural to examine the possibility of a close connection between them, perhaps along the lines mentioned in the last paragraph. For now, however, I think the question remains wide open.

Weak emergence

Weak emergence does not yield the same sort of radical metaphysical expansion in our conception of the world as strong emergence, but it is no less interesting for that. I think it is vital for understanding all sorts of phenomena in nature, and in particular to understanding biological, cognitive, and social phenomena. Others can address those issues better than I can, however. Instead, I'll conclude by attaching something I wrote a number of years ago (as a graduate student in 1990) but never published. This was in effect a meditation on clarifying and refining the notion of weak emergence, as it applies to a number of familiar examples.

Emergence is a tricky concept. It's easy to slide it down a slippery slope, and turn it into something implausible and easily dismissable. But it's not easy to delineate the interesting middle ground in between. Two unsatisfactory definitions of emergence, at either end of the spectrum:

(1) Emergence as "inexplicable" and "magical". This would cover high-level properties of a system that are simply not deducible from its low-level properties, no matter how sophisticated the deduction. There is little evidence for this sort of emergence, except perhaps in the difficult case of consciousness, but let's leave that aside for now. All material properties seem to follow from low-level physical properties.
This is not usually the sort of "emergence" intended by people who invoke the notion in contemporary scientific discussions, but it is near enough to the neighborhood that it often leads to confusion.

(2) Emergence as the existence of properties of a system that are not possessed by any of its parts. This, of course, is so ubiquitous a phenomenon that it's not deeply interesting. Under this definition, file cabinets and decks of cards (not to mention XOR gates) have plenty of emergent properties — so this is surely not what we mean.

The challenge, then, is to delineate a concept of emergence that falls between the overly radical (1) and the overly general (2). After all, serious people do like to use the term, and they think they mean something interesting by it. It probably will help to focus on a few core examples of "emergence":

(A) The game of Life: High-level patterns and structure emerge from simple low-level rules.
(B) Connectionist networks: High-level "cognitive" behaviour emerges from simple interactions between dumb threshold logic units.
(C) The operating system (Hofstadter's example): The fact that overloading occurs just around when there are 35 users on the system seems to be an emergent property of the system.
(D) Evolution: Intelligence and many other interesting properties emerge over the course of evolution by genetic recombination, mutation and natural selection.

Note that in all these cases, the "emergent" properties are in fact deducible (perhaps with great difficulty) from the low-level properties (perhaps in conjunction with knowledge of initial conditions), so a more sophisticated concept than (1) is required. Another stab at a definition might be:

(3) Emergent = "deducible but not reducible". Biological and psychological laws and properties are frequently said not to be reducible to physical laws and properties. For many reasons, not the least being that the high-level laws/properties in question might be found associated with all kinds of different physical laws/properties as substrates. (A universe without protons and electrons might nevertheless include learning and memory.)

There are some problems with this definition, though. Firstly, it's not clear what is gained by trying to explicate emergence in terms of the almost-equally-murky concept of "reduction". Secondly, it seems to let in some not-paradigmatically-emergent phenomena, and it's not clear how some emergent phenomena like (A) or (C) would fit this definition. I think that (3) picks out a very interesting class, but it's not quite the class we're after. It's on the right track, though, I think. The notion of reduction is intimately tied to the ease of understanding one level in terms of another. Emergent properties are usually properties that are more easily understood in their own right than in terms of properties at a lower level.

This suggests an important observation: Emergence is a psychological property. It is not a metaphysical absolute. Properties are classed as "emergent" based at least in part on (1) the interestingness to a given observer of the high-level property at hand; and (2) the difficulty of an observer's deducing the high-level property from low-level properties. The properties of XOR are an obvious consequence of the properties of its parts. Emergent properties aren't. We might as well give this a number:

(4) Emergent high-level properties are interesting, non-obvious consequences of low-level properties.

This still can't be the full story, though.
Every high-level physical property is a consequence of low-level properties, usually non-obviously. It feels unsatisfactory, for instance, to say that computations performed by a COBOL program are an emergent property relative to the low-level circuit operations — at least this feels much less "emergent" than a connectionist network. So something is missing. The trouble seems to lie with the complex, kludgy organization of the COBOL circuits. The low-level stuff may be simple enough, but all the complexity of the high-level behaviour is due to the complex structure that is given to the low-level mechanisms (by programming). Whereas in the case of connectionism or the game of life it feels that we have simplicity in both low-level mechanisms and their organization. So in those cases, we have much more of a "something for nothing" feel. Let's try for another number: (5) Emergence is the phenomenon wherein complex, interesting high-level function is produced as a result of combining simple low-level mechanisms in simple ways. I think this is much closer to a good definition of emergence. Note that COBOL programs, and many biological systems, are excluded by the requirement that not only the mechanisms but their principles of combination be simple. (Of course simplicity, complexity and interestingness are psychological concepts, at least for now, though we might try to explicate them in terms of Chaitin-Kolmogorov-Solomonoff complexity if we felt like it. My intuition is that this is likely to prove a little simplistic, although Chaitin has an interesting paper that attempts to derive a notion of the "organization" of a system using similar considerations.) And note also that most things that satisfy this definition should also satisfy (4) — due to our feeling that simple principles should have simple consequences (or else complex but uninteresting consequences, like random noise). Any complex, interesting consequence is likely to be non-obvious. This does indeed fit in with the feeling that emergence is a "something for nothing" phenomenon — though in a more subtle and satisfactory way than set forth in (1), for instance. It's a phenomenon whereby "something stupid buys you something smart". And most of our examples fit. The game of Life and connectionist networks are obvious: interesting high-level behaviour as a consequence of simple dynamic rules for low-level cell dynamics. In evolution, the genetic mechanisms are very simple, but the results are very complex. (Note that there is a small difference, in that in the latter case the emergence is diachronic, i.e. over time, whereas in the first two cases the emergence is synchronic, i.e. not over time but over levels present at a given time.) We're still not completely there — it's not clear how (C), the operating system example, fits into this paradigm of emergence. But throwing in a smidgen of teleology should get us the rest of the way. I.e., we have to notice that everything here has to be relativized to design. So we design the game of Life according to certain simple principles, but complex, interesting properties leap out and surprise us. Similarly for the connectionist network — we only design it at a low level (though in this case we hope that complex high-level properties will emerge). Whereas in the COBOL case — and in the case of much traditional AI — you only get out what you put in (N.B. I'm not necessarily knocking this: at least here, I'm trying to explicate emergence, not to defend it). 
And now the operating system example fits in well. The design principles of the system in this case are quite complex — unlike the other cases that fit (5) above — but still the figure "35" is not a part of that design at all. So:

(6) Emergence is the phenomenon wherein a system is designed according to certain principles, but interesting properties arise that are not included in the goals of the designer.

Notice the appearance of the word "goal" — this is important, any design is goal-relative. So the notion now is quite teleological. I notice that Russ Abbott makes a similar point in a recent posting. Notice, however, that as we've conceded that emergence is a psychological property, we're able to construe teleology in a psychological, non-absolute way. So for our purposes here, we only need the appearance of teleology. This is nice, because it allows us to include systems where strictly speaking, "design" doesn't apply at all. In evolution, for instance, there is no "designer", but it is easy to treat evolutionary processes as processes of design. On more than one level. We can view evolution as teleological at the level of the gene — as in Dawkins' theory, for instance. Then the appearance of complex, interesting high-level properties such as intelligence is quite emergent.

We also can reconstrue evolution as teleological at the level of the organism (this is perhaps a more straightforward Darwinian view of things). On this construal, the most salient adaptive phenomena like intelligence are no longer emergent, but the goal of the design process. However, this view does open up the possibility of other kinds of emergent phenomena: firstly, non-selected-for byproducts of the evolutionary process (such as Gould and Lewontin's "Spandrels"); secondly and more intriguingly, it allows an explanation for why consciousness seems emergent. Raw consciousness may not have been selected for, but it somehow emerges as a byproduct of selection for adaptive processes such as intelligence.

It's probably foolish to search for a definitive construal of "emergence": like most psychological concepts, it probably is best construed as a "family resemblance" — each of the "definitions" outlined above might play some role. Personally, I'm happiest with a combination of (5) and (6) — with (5) being the "core" variety of emergence, and (6) being a more general variety of which (5) is a special case.

Bedau, M. 1997. Weak emergence. Philosophical Perspectives 11:375-399.
Broad, C.D. 1925. The Mind and its Place in Nature. Routledge.
Chalmers, D.J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
Chalmers, D.J. 2002. Consciousness and its place in nature.
Chalmers, D.J. & Jackson, F. 2001. Conceptual analysis and reductive explanation. Philosophical Review 110:315-61.
Kim, J. 1992. The nonreductivist's trouble with mental causation. In (J. Heil & A. Mele, eds) Mental Causation. Oxford University Press.
Kim, J. 1999. Making sense of emergence. Philosophical Studies 95:3-36.
McLaughlin, B.P. 1992. The rise and fall of British emergentism. In (A. Beckermann, H. Flohr, & J. Kim, eds) Emergence or Reduction?: Prospects for Nonreductive Physicalism. De Gruyter.
What is the Science Behind God's Fist - The Research on Rogue Waves

written by: Dr. Crystal Cooper • edited by: Ricky • updated: 6/29/2011

Scientists are not content with the mere observation of the mysterious phenomena that are rogue waves, but are carrying out intensive research. This article examines how optics, neural networks, and quantum physics are related to rogue wave research.

We started talking about the physics of rogue waves in our previous article and will continue the discussion here. The old, linear models of hydrodynamics neither account for nor predict the existence of monster waves. The theory that they are just combinations or superpositions of small ones that form during storms does not explain their sudden appearance in calm waters, for example. Physicists, mathematicians, engineers, and oceanographers now have several different newer models to explain their existence.

NOAA once had a program where rogue waves were studied in great detail. Researchers there found that the lack of predictability happens due to the fact that most measurements are based on ocean wave models as stationary random Gaussian processes. A stationary process has a probability distribution which is the same for all times and all positions. A Gaussian process has a probability of occurrence that is based on the Gaussian or normal distribution, also known as a "bell curve". The NOAA researchers studied rogue wave data at various places around the world and proposed different non-linear models.

Current Research

What follows is an overview of current-day research into the science of extreme waves.

• Nonlinear Models: Researchers at the University of Massachusetts believe that sea-floor topography, near-surface currents, and the wind are major factors in their genesis. They have constructed several nonlinear models which they use as the basis for numerical simulations.
• Neural Networks: At Texas A&M, researchers have found a mathematical model, based on neural networks and data from buoys, that is able to make predictions of wave heights off the coast of the United States for up to 24 hours. This model is used in coastal areas of Maine and Texas, and also in Alabama.
• Schrödinger's equation: This is used in quantum physics to explain the behavior of atoms and particles. Researchers in Norway have successfully used the nonlinear version to model the physical characteristics and sudden appearance of monster waves.
• Optics: Some researchers are studying optics to come up with viable models. At UCLA's Henry Samueli School of Engineering and Applied Science, they have developed experiments and mathematical models for optical rogue waves that they believe are applicable to hydrodynamics. Their work also uses the nonlinear Schrödinger equation as a basis. Optical rogue waves are easier to create experimentally and detect than their ocean counterparts.

In the next part of this series, we will examine monster waves and naval architecture.
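To give a flavor of the nonlinear Schrödinger modeling mentioned in the last two bullets, here is a minimal split-step Fourier sketch (my own illustration, not code from any of the groups named above). A plane wave seeded with weak noise undergoes modulation instability and throws up short-lived peaks well above the background, which is roughly the mechanism these rogue-wave models exploit; all parameters and the initial condition are arbitrary choices.

# Split-step Fourier integration of the focusing NLSE: i psi_t + (1/2) psi_xx + |psi|^2 psi = 0
import numpy as np

N, L = 1024, 100.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dt, steps = 0.005, 8000

rng = np.random.default_rng(0)
psi = 1.0 + 0.01 * rng.standard_normal(N)        # unit background plus small noise

half_linear = np.exp(-0.5j * k**2 * dt / 2)      # half-step of the dispersive part
max_amp = 0.0
for _ in range(steps):
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))
    psi *= np.exp(1j * np.abs(psi)**2 * dt)      # full nonlinear step
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))
    max_amp = max(max_amp, np.abs(psi).max())

print("background amplitude ~ 1, largest transient peak:", max_amp)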
Theoretical Chemistry

Theoretical chemistry is the discipline that uses quantum mechanics, classical mechanics, and statistical mechanics to explain the structures and dynamics of chemical systems and to correlate, understand, and predict their thermodynamic and kinetic properties. Modern theoretical chemistry may be roughly divided into the study of chemical structure and the study of chemical dynamics. The former includes studies of: (1) electronic structure, potential energy surfaces, and force fields; (2) vibrational-rotational motion; and (3) equilibrium properties of condensed-phase systems and macromolecules. Chemical dynamics includes: (1) bimolecular kinetics and the collision theory of reactions and energy transfer; (2) unimolecular rate theory and metastable states; and (3) condensed-phase and macromolecular aspects of dynamics.

A critical issue crossing all boundaries is the interaction of matter and radiation. Spectroscopic experiments are used as both structural and dynamic probes and to initiate chemical processes (as in photochemistry and laser-induced chemistry), and such experiments must be understood theoretically. There are also many subfields of theoretical chemistry—for example, biomedical structure-activity relationships, the molecular theory of nuclear magnetic resonance spectra, and electron-molecule scattering—that fit into two or more of the areas listed. Another source of overlap among the categories is that some of the techniques of theoretical chemistry are used in more than one area. For example, statistical mechanics includes the theory and the set of techniques used to relate macroscopic phenomena to properties at the atomic level, and it is used in all six subfields listed. Furthermore, the techniques of quantum mechanics and classical-mechanical approximations to quantum mechanics are used profitably in all six subfields as well. Condensed-phase phenomena are often treated with gas-phase theories in instances in which the effects of liquid-phase solvent or solid-state lattice are not expected to dominate. There are many specialized theories, models, and approximations as well. Because quantum and statistical mechanics are also parts of physics, theoretical chemistry is sometimes considered a part of chemical physics. There is no clear border between theoretical physical chemistry and theoretical chemical physics.

Three Modes of Science

Modern science is sometimes said to proceed by three modes—experiment, theory, and computation. This same division may be applied to chemistry. From this point of view, theoretical chemistry is based on analytical theory, whereas computational chemistry is concerned with predicting the properties of a complex system in terms of the laws of quantum mechanics (or classical approximations to quantum mechanics, in the domains in which such classical approximations are valid) that govern the system's constituent atoms or its constituent nuclei and electrons, without using intermediate levels of analytical chemical theory. Thus, in principle, computational chemistry assumes only such basic laws as the Schrödinger equation, Newton's laws of motion, and the Boltzmann distribution of energy states. In practice, though, computational chemistry is a subfield of theoretical chemistry, and predictions based on approximate theories, such as the dielectric continuum model of solvents, often require considerable computer programming and number crunching.
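Since the passage above names the Boltzmann distribution of energy states among the basic laws that computational chemistry builds on, here is a small illustrative snippet (mine, not from the article) that computes equilibrium populations for a few hypothetical conformer energies.

# Illustrative only: Boltzmann populations of hypothetical conformers from relative energies.
import numpy as np

kB = 0.0019872041          # Boltzmann (gas) constant in kcal/(mol*K)
T = 298.15                 # temperature in K
energies = np.array([0.0, 0.5, 1.2, 2.0])   # hypothetical relative conformer energies, kcal/mol

weights = np.exp(-energies / (kB * T))
populations = weights / weights.sum()
for E, p in zip(energies, populations):
    print(f"conformer at {E:.1f} kcal/mol: {100 * p:.1f} %")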
The number of subfields of chemistry in which significant progress can be made without large-scale computer calculations is dwindling to zero. In fact, computational advances and theoretical understanding are becoming more and more closely linked as the field progresses. Computational chemistry is sometimes called molecular modeling or molecular simulation.

Electronic Structure

Perhaps the single most important concept in theoretical chemistry is the separation of electronic and nuclear motions, often called the Born-Oppenheimer approximation, after the seminal work of Max Born and Robert Oppenheimer (1927), although the basic idea must also be credited to Walter Heitler, Fritz London, Friedrich Hund, and John Slater. The critical facts that form a basis for this approximation are that electrons are coupled to nuclei by Coulomb forces, but electrons are much lighter—by a factor of 1,800 to 500,000—and thus, under most circumstances, they may be considered to adjust instantaneously to nuclear motion. Technically we would describe the consequence of this large mass ratio by saying that a chemical system is usually electronically adiabatic.

When electronic adiabaticity does hold, the treatment of a chemical system is greatly simplified. For example, the H2 molecule is reduced from a four-body problem to a pair of two-body problems: one, called the electronic structure problem, considers the motion of two electrons moving in the field of fixed nuclei; and another, called the vibration-rotation problem or the dynamics problem, treats the two nuclei as moving under the influence of a force field set up by the electronic structure. In general, because the energy of the electronic subsystem depends on the nuclear coordinates, the electronic structure problem provides an effective potential energy function for nuclear motion. This is also called the potential energy hypersurface. The atomic force field (i.e., the set of all the forces between the atoms) is the negative gradient of this potential energy function. Thus, when the Born-Oppenheimer approximation is valid and electronic motion is adiabatic, the end result of electronic structure theory is a potential energy function or atomic force field that provides a starting point for treating vibrations, equilibrium properties of materials, and dynamics. Robert Mulliken, Roald Hoffmann, Kenichi Fukui, John Pople, and Walter Kohn won Nobel Prizes in chemistry for their studies of electronic structure, including molecular orbital theory. Some important problem areas in which the Born-Oppenheimer separation breaks down are photochemical reactions involving visible and ultraviolet radiation and electrical conductivity. Even for such cases, though, it provides a starting point for more complete treatments of electronic-nuclear coupling.

In the subfield of theoretical dynamics, the most important unifying concept is transition state theory, which was developed by Henry Eyring, Eugene Wigner, M. G. Evans, and Michael Polanyi. A transition state is a fleeting intermediate state (having a lifetime on the order of 10 femtoseconds) that represents the hardest-to-achieve configuration of a molecular system in the process of transforming itself from reactants to products. A transition state is sometimes called an activated complex or a dynamical bottleneck. In the language of quantum mechanics, it is a set of resonances or metastable states, and in the language of classical mechanics, it is a hypersurface in phase space.
Transition states are often studied by semiclassical methods as well; these methods represent a hybrid of quantum mechanical and classical equations. Transition state theory assumes that a good first approximation to the rate of reaction is the rate of accessing the transition state. Transition state theory is not useful for all dynamical processes, and in a more general context a variety of simulation techniques (often called molecular dynamics) are used to explain observable dynamics in terms of atomic motions.

Predictive Power

In the early days of theoretical chemistry, the field served mainly as a tool for understanding and correlating data. Now, however, owing to advances in computational science, theory and computation can often provide reliable predictions of unmeasured properties and rates. In other cases, where measurements do exist, theoretical results are sometimes more accurate than measured ones. Examples are the properties of simple molecules and reactions such as D + H2 → HD + H, or the heats of formation of reactive species. Computational chemistry often provides other advantages over experimentation. For example, it provides a more detailed view of phenomena such as the structure of transition states or a faster way to screen possibilities. An example of the latter is provided in the field of drug design, in which thousands of candidate molecules may be screened for their likely efficacy or bioavailability by approximate calculations—for example, of the electronic structure or free energy of desolvation—and, relying on the results of these calculations, candidates may be prioritized for synthesis and testing in laboratory studies.

In conclusion, theoretical chemistry, by combining tools of quantum mechanics, classical mechanics, and statistical mechanics, allows chemists to predict materials' properties and rates of chemical processes, even in many cases in which they have not yet been measured or even observed in the laboratory; whereas for processes that have been observed, it provides a deeper level of understanding and explanations of trends in the data.

SEE ALSO Computational Chemistry; Molecular Modeling; Quantum Chemistry.

Donald G. Truhlar

Atkins, P. W., and Friedman, R. S. (1996). Molecular Quantum Mechanics, 3rd edition. New York: Oxford University Press.
Baer, Michael, ed. (1985). Theory of Chemical Reaction Dynamics, Vol. 1. Boca Raton, FL: CRC Press.
Cramer, Christopher J. (2002). Essentials of Computational Chemistry: Theories and Models. New York: Wiley.
Eyring, Henry; Walter, John; and Kimball, George E., eds. (1944). Quantum Chemistry. New York: Wiley.
Irikura, Karl K., and Frurip, David J., eds. (1998). Computational Thermochemistry: Prediction and Estimation of Molecular Thermodynamics. Washington, DC: American Chemical Society.
Jensen, Frank (1999). Introduction to Computational Chemistry. New York: Wiley.
Leach, Andrew R. (2001). Molecular Modeling: Principles and Applications, 2nd edition. Upper Saddle River, NJ: Prentice Hall.
Levine, Raphael D., and Bernstein, Richard B. (1987). Molecular Reaction Dynamics and Chemical Reactivity. New York: Oxford University Press.
Lipkowitz, Kenny B., and Boyd, Donald B., eds. (1990–2001). Reviews in Computational Chemistry, Vols. 1–17. New York: VCH.
McQuarrie, Donald A. (1976). Statistical Mechanics. New York: Harper & Row.
Ratner, Mark A., and Schatz, George C. (2000). Introduction to Quantum Mechanics in Chemistry. Upper Saddle River, NJ: Prentice Hall.
Simons, Jack, and Nichols, Jeff (1997).
Quantum Mechanics in Chemistry. New York: Oxford University Press. Thompson, Donald L., ed. (1998). Modern Methods for Multidimensional Dynamics Computations in Chemistry. River Edge, NJ: World Scientific. Truhlar, Donald G.; Howe, W. Jeffrey; Hopfinger, Anthony J.; et al., eds. (1999). Rational Drug Design. New York: Springer. Other articles you might like: Follow Founder on our Forum or Twitter Also read article about Theoretical Chemistry from Wikipedia User Contributions:
Disclaimer: I do not know a whole terrible lot about the intricacies of either chaos theory or quantum mechanics, let alone the combination of the two; this is more a philosophical thing than a scientific one, and I know I get a lot of things wrong (on both sides).

Further disclaimer (thanks to ariels for the information): The 'snapshot' mentioned below is a well defined object in dynamics (its mathematical form containing firm proofs and a specific ontology). However, I think the point below still stands. Though the 'snapshot' as defined mathematically may be vastly different than the 'snapshot' the lay person is familiar with, I think Feyerabend would still argue that the very choosing of the term 'snapshot' is a metaphorical/rhetorical one, that cannot be encompassed by an easy rationality...

Applying Philosophy of Science: Feyerabendian and Lakatosian Analyses of Quantum Chaos

I will be discussing an article entitled "Chaos on the Quantum Scale" by Mason A. Porter and Richard L. Liboff from the November-December 2001 issue of American Scientist. The article discusses recent advances in attempts to model systems that behave chaotically on the quantum (sub-atomic) scale. It will be helpful to briefly summarize the main points of the article.

The first few introductory paragraphs relate quantum mechanics and chaos theory by placing emphasis on their respective uses of uncertainty. From this common point of uncertainty, the authors state that because scientists seem to 'find' chaotic phenomena at all scales, they cannot rule out the possibility of chaos at the sub-atomic level. The next section of the article is a brief history of chaos theory that describes the early work of Henri Poincaré and mentions the later work in the 1960's by meteorologist Edward Lorenz. They then explain that chaos has been found in many disparate disciplines of science, and once again reiterate that they cannot rule it out at the quantum level. Here they also mention possible applications of such quantum-level chaos in nanotechnology.

From here they move into the largest section of the article, the billiard-themed thought experiment/model. They move from a simple two-dimensional billiard table to increasingly more chaotic and quantum-like billiard tables. There is a two-dimensional table with a circular rail, a spherical 'table', a spherical table with wave-particles as 'balls', and finally, a spherical table with an oscillating boundary and with wave-particles of different frequencies. Within this section, they also explain the more technical aspects of their attempt to model quantum chaos. They explain their plotting methods (the Poincaré section) as well as their mathematical methods (the Schrödinger equation and Hamiltonians). With the final few examples they show us that they cannot as yet model true quantum chaos, but only semi-quantum chaos (which requires mathematics from the realm of classical physics as well as quantum mechanics). After this admission, they go on to describe in detail future applications that successful quantum chaotic modeling will have in nanotechnology, from superconducting quantum-interference devices (SQUIDs) to carbon nanotubes. The final sentence of the article sums up the general attitude of the authors: "As we have shown… this theory possesses beautiful mathematical structure and the potential to aid progress in several areas of physics both in theory and in practice" (Porter 537).
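As a side note for readers unfamiliar with what 'chaos' means here, the following is a minimal sketch (mine, not Porter and Liboff's) of the sensitivity to initial conditions that the article takes as chaos theory's basic assumption, using the logistic map as a stand-in for the billiard systems summarized above.

# Two trajectories of the logistic map x -> r*x*(1-x) that start
# almost identically but drift far apart within a few dozen iterations.
r = 4.0                  # a parameter value for which the map is chaotic
x, y = 0.2, 0.2 + 1e-10  # initial conditions differing by one part in 10^10

for step in range(60):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.3e}")
# The printed differences grow from about 1e-10 toward order 1,
# which is why long-term prediction of such systems fails.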
I shall now attempt to analyze the article in light of two very different 'theories' (though one can certainly not firmly be called a 'theory'): namely, those of Paul Feyerabend and Imre Lakatos. I will begin my discussion with Feyerabend's thought, and then move on to Lakatos. After these analyses, I will engage both authors with each other, and attempt to bring out certain problems in each of their 'theories' that I see myself.

Paul Feyerabend introduces the Chinese edition of his book Against Method by stating his thesis that:

the events, procedures and results that constitute the sciences have no common structure; there are no elements that occur in every scientific investigation but are missing elsewhere. Concrete developments… have distinct features and we can often explain why and how these features led to success. But not every discovery can be accounted for in the same manner, and procedures that paid off in the past may create havoc when imposed on the future. Successful research does not obey general standards; it relies now on one trick, now on another… (AM 1).

So, we can (and do) explain why certain scientific developments/revolutions do occur, but we should not expect these explanations to bud into theories, and we should definitely not expect that our explanations should apply in all cases. This inability of universally applicable theories to be universally applied is not a result of our inability to hit upon the correct theory, but is a result of the non-uniform character of what we call 'science'. Science is not a homogenous enterprise. It comprises everything from sociology to quantum mechanics. Before we can expect to have an absolute theory (which Feyerabend thinks is neither possible nor desirable) we would have to have an absolute definition of what 'science' is. (Here we can see the influence of Wittgenstein's idea of language games on Feyerabend's thought.) Perhaps science isn't something we can have a theory about.

So, it being understood that Feyerabend believes that 'science' is not homogenous, and that we can only explain individual cases with individual criteria, what processes would he think applicable in the article at hand? Obviously this is a difficult question to answer. I think a fruitful way of approaching the task is through a very un-Feyerabendian process. By seeing what he has done in the past (e.g. in his previous analyses of scientific 'developments') we may be able to surmise what he would be likely to note in our particular example.

In Feyerabend's analysis of Galileo (specifically in chapter 7 of Against Method) he emphasizes the role of rhetoric and 'propaganda' in scientific change. He states that:

Galileo replaces one natural interpretation by a very different and as yet (1630) at least partly unnatural interpretation. How does he proceed? How does he manage to introduce absurd and counterinductive assertions, such as the assertion that the earth moves, and yet get them a just and attentive hearing? One anticipates that arguments will not suffice – an interesting and highly important limitation of rationalism – and Galileo's utterances are indeed arguments in appearance only. For Galileo uses propaganda (AM 67).

So it seems that an analysis of non-argumentative (rhetorical) uses of language aided Feyerabend in his discussion of Galileo. Thus, one possibly fruitful method of analysis may be to search out similar uses of language in our article. Which is precisely what I will do.
Here is a good example of the use of non-rational, non-argumentative means of convincing someone of your point:

The trail of evidence towards a commingling of quantum mechanics and chaos started late in the 19th century, when … Henri Poincaré started working on equations to predict the positions of the planets as they rotated around the sun (Porter 532).

Here we are led to believe by Porter/Liboff that Poincaré's work is part of a 'trail of evidence' that provides support for their work ('the commingling of quantum mechanics and chaos'). By the appeal to an accepted authority (it is generally accepted in the chaos community that Poincaré is the 'father of chaos theory') we are supposed to lend further credence to their own work (though, as we are told in the last portion of the article, this work has not provided a true connection between the two theories). But is there, in Poincaré's work, any evidence of this commingling of chaos and quantum mechanics? Hardly. The 'evidence' they refer to is simply the birth of chaos theory. If we accept their claim, one might analogously state that my birth contains 'evidence' for whom I will marry in the future. (Putting aside genetic predisposition toward certain possible mates, this is absurd.) We cannot (rationally) justify the claim that the birth of chaos theory provides evidence for the future 'commingling' of that theory with quantum mechanics. It does, however, provide a nice segue for the authors into a historical summary of the birth of chaos theory. Rather than an argument, it is a literary device (like exaggeration, alliteration, etc.) that aids both the achievement of the authors' goal (describing quantum chaos) and making the text itself more fluid.

Staunch rationalists would argue (Feyerabend might say) that this example mistakes a literary device for a scientific argument, and that if we simply separated the two, the problem would dissolve. Feyerabend's position, however, is that we are unable to separate the two. He states in Against Method:

That interests, forces, propaganda and brainwashing techniques play a much greater role than is commonly believed in … the growth of science, can also be seen from an analysis of the relation between idea and action. It is often taken for granted that a clear and distinct understanding of new ideas precedes, and should precede, their formulation and institutional expression. (An investigation starts with a problem, says Popper.) First, we have an idea, or a problem, then we act, i.e. either speak, or build, or destroy. Yet this is certainly not the way in which small children develop. They use words … they play with them, until they grasp a meaning that has so far been beyond their reach… There is no reason why this mechanism should cease to function in the adult. We must expect, for example, that the idea of liberty could be made clear only by means of the very same actions, which were supposed to create liberty (AM 17).

Putting aside the theory of language acquisition proposed here, we see that Feyerabend believes that the form of our investigation is just as important as the content or result of it. Thus, we cannot understand an argument separately from the language it is phrased in, language that often contains suggestive (propagandistic) phrases. In other words, what you say is often inseparable from how you say it.

Analogies to real-world objects are also used by Porter/Liboff.
For example: "A buckyball has a soccer-ball shape…" (Porter 536); "Nanotubes can also vibrate like a plucked guitar string…" (Porter 537); and, "Such a plot represents a series of snapshots of the system under investigation" (Porter 534). These analogies appear to be used simply to enhance the more abstract qualities of the quantum-chaotic world the authors are describing, and make them more understandable. But it seems there is more going on here. If we view the article in the Feyerabendian sense that I have been developing above, the choice of metaphor can also affect the readers' conception of the 'ideas' that the authors are attempting to put across.

In particular, the 'snapshot' analogy seems suggestive to me. What the authors describe as 'snapshots' are Poincaré sections taken from higher-than-three-dimensional systems: in effect, two-dimensional plots that are, by a mathematical process, abstracted from 'multi-dimensional masses.' These are possibly some of the most theoretical objects ever created, yet the authors describe them as 'snapshots'. Obviously there are qualities of the Poincaré section that lend it to the comparison: both a snapshot and a Poincaré section are thought to be reports of a particular time and space. But other aspects of the comparison may (hopefully, for Porter/Liboff) lead the reader into accepting highly theoretical concepts as real objects, more so than they would have without the analogy. Obviously the creation of a photographic snapshot is itself based on theory, but it is one that we use (and accept) in everyday life, one that we accept without reservations. Not only that, but the real-life snapshot (as opposed to the Poincaré section snapshot) represents things which we already accept as existing in the real world. In comparing the Poincaré section to a snapshot, the authors attempt to further solidify the reality of the objects that the section represents. Rather than seeing the n-dimensional objects of the Poincaré section as abstract objects, we are now led to picture them as objects like our vacation slides, or wedding photos.

Imre Lakatos' great contribution to the history and philosophy of science (and the historiography of science) is the concept of the research programme. As a general illustration of the role of a research programme, the following quote may be helpful:

the great scientific achievements are research programmes which can be evaluated in terms of progressive and degenerating problemshifts; and scientific revolutions consist of one research programme superseding (overtaking in progress) another (Lakatos 115).

How can we apply such a methodology to the emergence of quantum-chaos? Well, to start with, we might ask just what research programme, or programmes, we are working with. Are quantum mechanics, chaos theory and quantum-chaos all individual research programmes, and, if so, how do we explain the emergence of quantum-chaos (a theory that contains elements of both quantum mechanics and chaos theory) in relation to the other two? I shall attempt to answer these two questions in order. To answer the first, we should define more firmly what Lakatos means by the term 'research programme'.
He states that:

The basic unit of appraisal must be not an isolated theory or conjunction of theories but rather a 'research programme', with a conventionally accepted (and thus by provisional decision 'irrefutable') 'hard core' and with a 'positive heuristic' which defines problems, outlines the construction of a belt of auxiliary hypotheses, foresees anomalies and turns them victoriously into examples, all according to a preconceived plan. The scientist lists anomalies, but as long as his research programme sustains its momentum, he may freely put them aside. It is primarily the positive heuristic of his programme, not the anomalies, which dictate the choice of his problems (Lakatos 116).

So, in order to determine whether or not our three 'categories' can be aptly described as research programmes, they must have a 'hard core' (which I take to mean principles or examples that one has to accept in order to work within the research programme), and also a 'positive heuristic' that determines what problems will be addressed (and how to address them). For brevity's sake I shall limit my discussion to the 'hard core' and the problem-determining function of the positive heuristic, while ignoring the role of anomalies in the negative determination of problems (a role that Lakatos, unlike Popper, believes is secondary to that of the positive heuristic).

Quantum mechanics definitely seems to have a 'hard core' that its adherents agree is irrefutable and essential to its elaboration. Historical examples of such an irrefutable core can be found in papers (from the late 19th century to the first quarter of the 20th) by Planck, Bohr, Einstein and others. These papers contain principles that form the unshakeable core of quantum mechanics even now. Here is just one example, which should suffice to illustrate the point:

Today we know that no approach which is founded on classical mechanics and electrodynamics can yield a useful radiation formula. … Planck in his fundamental investigation based his radiation formula… on the assumption of discrete portions of energy quanta from which quantum theory developed rapidly (Einstein 63).

So, quantum mechanical theory develops directly from Planck's assumption of quanta. Although this is an oversimplification, it does illustrate that there are basic assumptions which quantum theorists are unwilling to sacrifice. We have our 'hard core'; now the question is: does quantum mechanics have its own 'positive heuristic'? I think the easiest way to answer this is to rephrase the question slightly: has quantum mechanics generally determined its own problems positively (i.e. set out to solve them) before they are negatively determined by emergent anomalies? Obviously searching out the 'general' answer to this question is well beyond the scope of this essay, but finding a few examples can at least allow us to provisionally classify quantum mechanics as a research programme.

One example is the full, and accurate, derivation of Planck's law. Planck proposed the idea of quanta (discrete units of energy) in 1900, and the perfection of a law describing this idea was worked on until 1926. The idea of quanta was proposed as a basic tenet of quantum mechanics (it was 'anomalous' only for the then degenerating research programme of classical mechanics), though it could not be perfectly derived. So, setting it up as a problem, quantum mechanics attempted to 'solve' it (and eventually did).
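To give a concrete sense of the radiation-formula problem referred to in the Einstein quote above, here is a small illustrative sketch of my own (not part of the essay) that compares Planck's quantized spectral radiance with the classical Rayleigh-Jeans expression, which diverges at short wavelengths; the temperature chosen is an illustrative solar-surface value.

import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(wavelength, temperature):
    # Planck spectral radiance B(lambda, T) in W sr^-1 m^-3.
    a = 2.0 * H * C**2 / wavelength**5
    return a / (math.exp(H * C / (wavelength * KB * temperature)) - 1.0)

def rayleigh_jeans(wavelength, temperature):
    # Classical formula: agrees at long wavelengths, blows up at short ones.
    return 2.0 * C * KB * temperature / wavelength**4

T = 5800.0  # roughly the temperature of the solar surface, for illustration
for wl in (10e-6, 1e-6, 0.1e-6):  # 10 um, 1 um, 0.1 um
    print(f"{wl*1e6:5.1f} um  Planck: {planck(wl, T):.3e}   Rayleigh-Jeans: {rayleigh_jeans(wl, T):.3e}")
# The two expressions agree reasonably at long wavelengths and diverge wildly
# at short ones, which is the mismatch Planck's quanta were introduced to cure.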
The problem of splitting the atom, though it may have been motivated by outside political factors, was internally posed to quantum mechanics as well, and consequently solved as 'predicted' by theory. Undoubtedly, then, Lakatos would define quantum mechanics as a research programme, and not merely a theory contained within a larger research programme.

Can the same be said of chaos theory? Well, chaos theory seems to have its own 'hard core'. This much we can see from the Porter/Liboff article:

The theory's basic assumption is that some phenomena… depend intimately on a system's initial conditions, so that an imperceptible change in the beginning value of a variable can make the outcome of a process impossible to predict (Porter 532).

All applications of chaos theory work outward from this core principle, which is also historically situated (in the article) through the work of Poincaré:

Poincaré started working on equations to predict the positions of the planets as they rotated around the sun… Note the starting positions and velocities, feed them into a set of equations based on Newton's laws of motion, and the results should predict future positions. But the outcome turned Poincaré's expectations upside down. With only two planets under consideration, he found that even tiny differences in the initial conditions… elicited substantial changes in future positions (Porter 532).

So, like quantum mechanics, the hard core of chaos is situated historically in a few irrefutable examples and principles. For quantum mechanics, some examples of the core principles are the Heisenberg uncertainty principle and Planck's assumption of discrete quanta. The individuals most often recognized historically as exemplars of quantum mechanical theory are Einstein, Bohr, Born, and Ehrenfest, to name a few. These examples are constantly cited and referred to both pedagogically and in scientists' descriptions of the birth of their field. Chaos theory's core principle is that we cannot accurately predict the future state of a dynamical (i.e. chaotic) system. This principle is exemplified in the early work of Poincaré (which is generally seen as proto-chaotic) and the later meteorological studies of Lorenz (who is also mentioned by Porter/Liboff).

Now we move on to the question of whether or not chaos theory has a positive heuristic which determines the problems to be solved. It seems, at least prima facie (which is as far as such a limited study can go), that, unlike quantum mechanics (whose scope is internally limited to the 'quantum realm'), chaos theory has the potential to be applied to any system. In this respect, can it be considered a research programme? If it has historically been applied only within other research programmes (meteorology, electrodynamics, planetary motion, to name only a few mentioned in the article itself), it does not seem plausible that it can define its own problems and attempt to solve them in seclusion from other research programmes. Rather than a research programme, I propose that chaos theory is a self-contained theory (a modeling or mathematical tool) that functions within a variety of established and independent research programmes. On this view, it would appear that quantum-chaos, far from being an independent research programme, is the result of a development that is internal to the progressive research programme of quantum mechanics. Quantum-chaos is not an entirely new system of ideas, but a growth of new ideas within the boundaries of the quantum realm.
That is, without quantum mechanics, there would be no realm in which to create quantum-chaos, and no 'rules' with which to describe it.

Critique of Feyerabend and Lakatos

Now that we have seen a few of the ideas of Feyerabend and Lakatos in application (albeit forcefully), I shall move on to a critical engagement of the two, playing off their views (as well as my own) against one another. I will start with Lakatos. It seems that though the research programme is a valuable historiographical lens with which to view scientific history, it has obvious limitations. Although it enables the historian of science to encompass more examples than something like (what Lakatos calls) a 'conventionalist' historiography, it is by no means all-encompassing. The main problem that I see with his methodology is one that Lakatos states himself:

The methodology of research programmes – like any other theory of scientific rationality – must be supplemented by empirical-external history. No rationality theory will ever solve the problems like why Mendelian genetics disappeared in Soviet Russia in the 1950's, or why certain schools of research into genetic racial differences or into the economics of foreign aid came into disrepute in the Anglo-Saxon countries in the 1960's… (Lakatos 119).

So, like most other rationalist reconstructions of the history of science, his attempt must be supplemented by psychological, sociological and other explanations. The difference between a falsificationist like Popper and someone like Lakatos is that Lakatos at least admits that there are other factors in the history of science than rational ones. But for a rationalist project, whose aim is to explain all scientific change, this fundamental problem simply cannot be overcome. The problem is that the human agents in science (who, despite any talk of a 'third world', are key agents in scientific change) are never fully, or exclusively, rational. If we are bound by a purely rational reconstruction of the history of science, then the irrational in science (which Lakatos admits exists) will always elude our methodological understanding. Lakatos denies that any theory of scientific rationality can succeed in this task.

The problem of irrationality in science is one that I believe Feyerabend can overcome more easily. To him it seems that if a completely rational reconstruction (based on the rigorous application of a specific 'system') is bound to fail, then should we not look at the possibility of an irrational, even non-systematic explanation of the history of science? Obviously such an explanation could not be termed a 'methodology', but through something like it we could attempt to explain any historical stage of science. Such an irrational, anti-methodological approach is precisely what Paul Feyerabend calls for. Feyerabend's explanations do not rely on the constancy of a specific method or concept, but fluctuate based on the particular situation they are attempting to 'explain'. When talking about a series of lectures he had given at the London School of Economics, Feyerabend sketches out for us his intent:

My aim in the lectures was to show that some very simple and plausible rules and standards which both philosophers and scientists regarded as essential parts of rationality were violated in the course of episodes (Copernican Revolution; triumph of the kinetic theory; rise of quantum theory; and so on) they regarded as equally essential.
More specifically I tried to show (a) that the rules (standards) were actually violated and that the more perceptive scientists were aware of the violations; and (b) that they had to be violated. Insistence on the rules would not have improved matters, it would have arrested progress (SFS 13).

Feyerabend suggests here that not only are rules not always fruitful in science, but that strict adherence to those rules sometimes hinders its progress. The same can be said about the historiography of science. If we insist on strict adherence to specific rules in all cases, then not only are we going to get it 'wrong', but we may make it harder to get it 'right' (i.e. to produce more useful, less problematic historical descriptions).

So, we have discussed a specific problem with Lakatos' methodology of research programmes and ended up at the seeming inadequacy of all methodologies. But neither I, nor Feyerabend, believe that there are never times when rules can be applied fruitfully to historical analyses. Indeed, Lakatos' concept of the research programme seems to provide criteria that are more widely applicable than many others proposed before it. It does not fall prey to the rash assumption that science is strictly rational, though it admits science's rationality is all that it can explain. This is precisely what Feyerabend wants the rationalists (and particularly the other LSE rationalists) to admit: that we cannot always fit history into the box of rationality (regardless of whether the box is that of falsificationism or the methodology of research programmes). So, on the one hand, Lakatosian research programmes explain more than any other rationalist reconstruction can, but on the other hand, Lakatos admits that (unlike Feyerabend) he cannot explain irrationality in science.

How can I criticize Feyerabend? If I accused him of incoherence, or self-contradiction, he would take it as a compliment. If one can accept any standard at any time, depending upon the circumstances, then of course one can seem to be contradictory, he would say. I tend to agree with Feyerabend that no rules can be applied absolutely, for all time. But one might criticize him in his specific historical analyses. For instance, his emphasis on the rhetorical (non-rational) use of language and irrational 'methods' of Galileo and Copernicus may ignore some of the important rational features in their work. Though this problem may be inherent to an attack on rationalist reconstructions of science, I think that Feyerabend often ignores salient features of history simply because they are instances of rationality. That being said, I believe that Feyerabend's philosophy of science provides us with the mindset to build a number of unique perspectives on the history of science. He tells us that no method can work absolutely, but some methods can work sometimes. Our task is to think for ourselves and create our own interpretations of science, and not to rely on the grandiose systems of our predecessors.

References

Bennett, Jesse. The Cosmic Perspective, 1st edition. Addison Wesley Longman, New York, 1999.
Porter, Mason A., and Liboff, Richard L. "Chaos on the Quantum Scale." American Scientist 89(6), November-December 2001, pp. 532-537.
Mendelson, Jonathan, and Blumenthal, Elana. "Chaos Theory and Fractals", 2000-2001. URL: http://www.mathjmendl.org/chaos/index.html
O'Connor, J J, and Robertson, E F. "Early Quantum Mechanics", 1996. URL: http://www-history.mcs.st-andrews.ac.uk/history/HistTopics/The_Quantum_age_begins.html
Einstein, Albert. "On the Quantum Theory of Radiation", pp. 63-77 in Sources of Quantum Mechanics, ed. B. L. Van der Waerden. Dover Publications, New York, 1968.
Feyerabend, Paul. Against Method. Verso, New York, 1988 [1975]. (Referred to in the text as AM.)
Feyerabend, Paul. Science in a Free Society. New Left Books, London, 1978. (Referred to in the text as SFS.)
Lakatos, Imre. "History of Science and its Rational Reconstructions", pp. 107-127 in Scientific Revolutions, ed. Ian Hacking. Oxford University Press, New York, 1981.
Wittgenstein, Ludwig. Philosophical Investigations. Translated by G.E.M. Anscombe. (No publishing information provided.)
Saturday, May 20, 2017

Cosmo : supremely relaxing fishing video

The Seychelles are an angler's paradise – if you can actually get to them. Follow the crew of the Alphonse Fishing Co. as they wade the flats of the Cosmoledo Atoll, hoping for a shot at Giant Trevally.

Cosmoledo island with the GeoGarage platform

Friday, May 19, 2017

Terrifying 20m-tall 'rogue waves' are actually real

The Wave, painting by Ivan Aivazovsky

From BBC by Nic Fleming

Smashed portholes and flooded cabins on the upper decks. Rogue waves could safely be classified alongside mermaids and sea monsters. However, we now know that they are no maritime myths.

A wave is a disturbance that moves energy between two points. The most familiar waves occur in water, but there are plenty of other kinds, such as radio waves that travel invisibly through the air. Although a wave rolling across the Atlantic is not the same as a radio wave, they both work according to the same principles, and the same equations can be used to describe them. A rogue wave is one that is at least twice the "significant wave height", which refers to the average height of the highest one-third of waves over a given period of time.

The sceptics had got their sums wrong, and what was once folklore is now fact. This led scientists to altogether more difficult questions. Given that they exist, what causes rogue waves? More importantly for people who work at sea, can they be predicted?

Until the 1990s, scientists' ideas about how waves form at sea were heavily influenced by the work of British mathematician and oceanographer Michael Selwyn Longuet-Higgins. In work published from the 1950s onwards, he stated that, when two or more waves collide, they can combine to create a larger wave through a process called "constructive interference". According to the principle of "linear superposition", the height of the new wave should simply be the total of the heights of the original waves. On this view, a rogue wave can only form if enough waves come together at the same point.

However, during the 1960s evidence emerged that things might not be so simple. The key player was mathematician and physicist Thomas Brooke Benjamin, who studied the dynamics of waves in a long tank of shallow water at the University of Cambridge. With his student Jim Feir, Benjamin noticed that while waves might start out with constant frequencies and wavelengths, they would change unexpectedly shortly after being generated. Those with longer wavelengths were catching those with shorter ones. This meant that a lot of the energy ended up being concentrated in large, short-lived waves.

At first Benjamin and Feir assumed there was a problem with their equipment. However, the same thing happened when they repeated the experiments in a larger tank at the UK National Physical Laboratory near London. What's more, other scientists got the same results. For many years, most scientists believed that this "Benjamin-Feir instability" only occurred in laboratory-generated waves travelling in the same direction: a rather artificial situation. However, this assumption became increasingly untenable in the face of real-life evidence.

At 3am on 12 December 1978, a German cargo ship called The München sent out a mayday message from the mid-Atlantic. Despite extensive rescue efforts, she vanished never to be found, with the loss of 27 lives. A lifeboat was recovered.
Despite having been stowed 66ft (20m) above the water line and showing no signs of having been purposefully lowered, the lifeboat seemed to have been hit by an extreme force.

However, what really turned the field upside down was a wave that crashed into the Draupner oil platform off the coast of Norway shortly after 3.20pm on New Year's Day 1995. Hurricane winds were blowing and 39ft (12m) waves were hitting the rig, so the workers had been ordered indoors. No one saw the wave, but it was recorded by a laser-based rangefinder and measured 85ft (26m) from trough to peak. The significant wave height was 35.4ft (10.8m). According to existing assumptions, such a wave was possible only once every 10,000 years.

The Draupner giant brought with it a new chapter in the science of giant waves. When scientists from the European Union's MAXWAVE project analysed 30,000 satellite images covering a three-week period during 2003, they found 10 waves around the globe had reached 25 metres or more. "There must be another mechanism involved."

In the last 20 years or so, researchers like Chabchoub have sought to explain why rogue waves are so much more common than they ought to be. Instead of being linear, as Longuet-Higgins had argued, they propose that rogue waves are an example of a non-linear system. If waves interact in a non-linear way, it might not be possible to calculate the height of a new wave by adding the originals together. Instead, one wave in a group might grow rapidly at the expense of others.

When physicists want to study how microscopic systems like atoms behave over time, they often use a mathematical tool called the Schrödinger equation. It turns out that certain non-linear versions of the Schrödinger equation can be used to help explain rogue wave formation. In a 2016 study, Chabchoub applied the same models to more realistic, irregular sea-state data, and found rogue waves could still develop. "Having the design criteria of offshore platforms and ships being based on linear theory is no good if a non-linear system can generate rogue waves they can't cope with."

Still, not everyone is convinced that Chabchoub has found the explanation. "Chabchoub was examining isolated waves, without allowing for interference with other waves," says optical physicist Günter Steinmeyer of the Max Born Institute in Berlin. "It's hard to see how such interference can be avoided in real-world oceans."

Instead, Steinmeyer and his colleague Simon Birkholz looked at real-world data from different types of rogue waves. They looked at wave heights just before the 1995 rogue at the Draupner oil platform, as well as unusually bright flashes in laser beams shot into fibre optic cables, and laser beams that suddenly intensified as they exited a container of gas. Their aim was to find out whether these rogue waves were at all predictable. The pair divided their data into short segments of time, and looked for correlations between nearby segments. In other words, they tried to predict what might happen in one period of time by looking at what happened in the periods immediately before.

The results, which they published in 2015, came as a surprise to Steinmeyer and Birkholz. It turned out, contrary to their expectations, that the three systems were not equally predictable. They found oceanic rogue waves were predictable to some degree: the correlations were stronger in the real-life time sequence than in the shuffled ones.
There was also predictability in the anomalies observed in the laser beams in gas, but at a different level, and none in the fibre optic cables.

However, the predictability they found will be little comfort to ship captains who find themselves nervously eyeing the horizon as the winds pick up. "In principle, it is possible to predict an ocean rogue wave, but our estimate of the reliable forecast time needed is some tens of seconds, perhaps a minute at most," says Steinmeyer. "Given that two waves in a severe North Sea storm could be separated by 10 seconds, to those who say they can build a useful device collecting data from just one point on a ship or oil platform, I'd say it's already been invented. It's called a window."

The complexity of waves at sea is the result of the winds that create them. While ocean waves are chaotic in origin, they often organise themselves into packs or groups that stay together. In 2015 Themis Sapsis and Will Cousins of MIT in Cambridge, Massachusetts, used mathematical models to show how energy can be passed between waves within the same group, potentially leading to the formation of rogue waves. The following year, they used data from ocean buoys and mathematical modelling to generate an algorithm capable of identifying wave groups likely to form rogues.

Most other attempts to predict rogue waves have attempted to model all the waves in a body of water and how they interact. This is an extremely complex and slow process, requiring immense computational power. Instead, Sapsis and Cousins found they could accurately predict the focusing of energy that can cause rogues, using only the measurements of the distance from the first to last waves in a group, and the height of the tallest wave in the pack. "Instead of looking at individual waves and trying to solve their dynamics, we can use groups of waves and work out which ones will undergo instabilities," says Sapsis. He thinks his approach could allow for much better predictions. If the algorithm was combined with data from LIDAR scanning technology, Sapsis says, it could give ships and oil platforms 2-3 minutes of warning before a rogue wave formed.

Others believe the emphasis on waves' ability to catch other waves and steal their energy – which is technically called "modulation instability" – has been a red herring. "These modulation instability mechanisms have only been tested in laboratory wave tanks in which you focus the energy in one direction," says Francesco Fedele of Georgia Tech in Atlanta. "There is no such thing as a uni-directional stormy sea. In real life, oceans' energy can spread laterally in a broad range of directions."

Fedele and his colleagues used historic weather forecast data to simulate the spread of energy and ocean surface heights in the run-up to the Draupner, Andrea and Killard rogue waves, which struck in 1995, 2007 and 2014 respectively. Their models matched the measurements, but only when they factored in the irregular shapes of ocean waves. Because of the pull of gravity, real waves have rounded troughs and sharp peaks – unlike the perfectly smooth wave shapes used in many models. Once this was factored in, interfering waves could gain an extra 15-20% in height, Fedele found. "When you account for the lack of symmetry between crest and trough, and add it to constructive interference, there is an enhancement of the crest amplitudes that allows you to predict the occurrence observed in the ocean," says Fedele.
What's more, previous estimates of the chances of simple linear interference generating rogue waves only looked at single points in time and space, when in fact ships and oil rigs occupy large areas and are in the water for long periods. This point was highlighted in a 2016 report from the US National Transportation Safety Board, written by a group overseen by Fedele, into the sinking of an American cargo ship, the SS El Faro, on 1 October 2015, in which 33 people died. "If you account for the space-time effect properly, then the probability of encountering a rogue wave is larger," Fedele says.

Also in 2016, Steinmeyer proposed that linear interference can explain how often rogue waves are likely to form. As an alternative approach to the problem, he developed a way to calculate the complexity of ocean surface dynamics at a given location, which he calls the "effective" number of waves. "Predicting an individual rogue wave event might be hopeless or non-practical, because it requires too much data and computing power. But what if we could do a forecast in the meteorological sense?" says Steinmeyer. "Perhaps there are particular weather conditions that we can foresee that are more prone to rogue wave emergence."

Steinmeyer's group found that rogue waves are more likely when low pressure leads to converging winds; when waves heading in different directions cross each other; when the wind changes direction over a wide range; and when certain coastal shapes and subsea topographies push waves together. They concluded that rogue waves could only occur when these and other factors combined to produce an effective number of waves of 10 or more. Steinmeyer also downplays the idea that anything other than simple interference is required for rogue wave formation, and agrees that wave shape plays a role. However, he disagrees with Fedele's view that sharp peaks can have a significant impact on wave height. "Their main role is that ocean waves are not perfect sine waves, but have more spikey crests and depressed troughs. However, what we calculated for the Draupner wave is that the effect of non-linearities on wave height was in the order of a few tens of centimetres."

In fact, Steinmeyer thinks that Longuet-Higgins had it pretty much right 60 years ago, when he emphasised basic linear interference as the driver of large waves, rogue or otherwise. But not everyone agrees. In fact, the argument over exactly why rogue waves form seems set to rumble on for some time. Part of the issue is that several kinds of scientists are studying them – experimentalists and theoreticians, specialists in optical waves and fluid dynamics – and they have not as yet done a good job of integrating their different approaches. There is no sign that a consensus is developing. But it is an important question to solve, because we will only be able to predict these deadly waves when we understand them.

Thursday, May 18, 2017

North Sea wind power hub: A giant wind farm to power all of north Europe

North Sea Infrastructure

The future development of a North Sea energy system up to approx. 2050 will require a rollout, coordinated at European level, of interlinked offshore interconnectors, i.e. a so-called interconnection hub, combined with large-scale wind power. Any surplus wind power could be converted into other forms of energy, or stored. Situating this interconnection hub on a modularly constructed island in a relatively shallow part of the North Sea would result in significant cost savings.
These are the starting points for a proposed efficient, affordable and reliable energy system on the North Sea, which will contribute to European objectives being met. This vision does not preclude the option of providing renewably generated power from the wind farms to nearby oil and gas platforms to reduce Europe's CO2 emissions.

From Ars Technica by William Steel

The harnessing of energy has never been without projects of monolithic scale. From the Hoover Dam to the Three Gorges—the world's largest power station—engineers the world over have recognised that with size come advantages. The trend is clear within the wind power industry too, where the tallest wind turbines now tower up to 220m, with rotors spinning through an area greater than that of the London Eye, generating electricity for wind farms that can power whole cities.

While the forecast for offshore wind farms of the future is for ever-larger projects featuring ever-larger wind turbines, an unprecedented plan from electricity grid operators in the Netherlands, Germany, and Denmark aims to rewrite the rulebook on offshore wind development.

A proposed North Sea power link island, as conceived by TenneT, with a map of the North Sea showing the location of the Dogger Bank and the possible interconnectors

The proposal is relatively straightforward: build an artificial island in the middle of the North Sea to serve as a cost-saving base of operations for thousands of wind turbines, while at the same time doubling up as a hub that connects the electricity grids of countries bordering the North Sea, including the UK. In time, more islands may be built too, daisy-chained via underwater cables to create a super-sized array of wind farms tapping some of the best wind resources in the world.

"Don't be mistaken, this is really a very large, very ambitious project—there's nothing like it anywhere in the world. We're taking offshore wind to the next level," Jeroen Brouwers, spokesperson for the organisation that first proposed the plan, Dutch-German transmission system operator (TSO) TenneT, tells Ars Technica. "As we see it, each island could facilitate approximately 30 gigawatts (GW) of offshore wind energy; but the concept is modular, so we could establish multiple interconnected islands, potentially supporting up to 70 to 100GW."

The London Array

To add some context to those figures, consider that the world's largest offshore wind farm in operation today, the London Array, has a maximum capacity of 630MW (0.63GW), and that all the wind turbines installed in European waters to date amount to a little over 12.6GW. The Danish TSO Energinet says 70GW could supply power for some 80 million Europeans.

Undoubtedly ambitious, the North Sea Wind Power Hub—as the project is titled—is nevertheless being taken seriously by key stakeholders. The project was the centre of attention at the seminal North Seas Energy Forum held in Brussels at the end of March. There, the consortium behind the project (Dutch-German TSO TenneT, alongside the Danish TSO Energinet) took the opportunity to sign a memorandum of understanding (MoU) that will drive the project forward over the coming decades.

Dagmara Koska, a member of the cabinet of the EU vice-president in charge of the Energy Union (Maroš Šefčovič), tells Ars Technica: "We're incredibly supportive of the project and welcome the MoU.
The agreement demonstrates commitment to a very exciting prospect; one that stands to create a lot of synergies to benefit the growth of renewable energy in northern Europe."

On the intentions of the Wind Power Hub, Koska says: "From our perspective, the project fully reflects the spirit of the North Seas Energy Cooperation—the political agreement signed last year to facilitate deployment of offshore renewable energy alongside interconnection capacity across the region. As Maroš Šefčovič said at the signing, it's an ingenious solution."

The London Array wind farm is the largest in operation, with 175 wind turbines generating enough power for close to half a million UK homes annually.

A paradigm shift

The North Sea Wind Power Hub represents a fundamentally new approach to the development of offshore wind; one that tackles multiple challenges faced by the wind industry head on and capitalises on economies of scale in a bid to deliver access to the wind resources of the North Sea at reduced costs. Something of a case of necessity being the mother of invention, Brouwers explains that the Wind Power Hub concept is a response to a looming problem faced by the wind industry: "At the moment, offshore wind is focused on sites relatively close to shore where development costs are lower. The problem is that there's not space for the 150GW of offshore wind power that the EU has called for. There are other industrial and economic interests in those near-shore regions—fishing, shipping lanes, military areas and so on. This pushes things farther out to sea, but the costs can rapidly rise as you move to deeper waters. The solution? Create near-shore costs, or even lower, out at sea."

Construction of offshore wind farms is a highly complex logistical and engineering operation.

So how would the Wind Power Hub deliver on this objective? Well, the wind farms envisioned by the project wouldn't be dissimilar from those we see today, but their proximity and connection to artificial "power link islands" represents a substantial departure from the conventional model for offshore wind. "The idea is that islands as large as six square kilometres would feature a harbour, a small airstrip, transmission infrastructure, and all equipment necessary to maintain the surrounding wind farms, alongside accommodation and workshops for staff," Brouwers says.

London Array construction

These novel features would open up a lot of possibilities for wind power developers and operators. With a base of operations out at sea—complemented with storage of components, assembly lines, and other logistical assets—the installation of wind turbines would be more convenient, efficient, and ultimately cheaper than is achieved by today's methods, which rely on specialised ships journeying out from ports. Savings on installation would be coupled with reduced expenditure over the twenty-year lifetime of wind turbines, too.

Operations and maintenance of offshore wind turbines: a crucial, albeit expensive, affair that stands to be transformed with a base of operations located out at sea.

Onshore, wind farms require a lot of support. But in harsh marine environments, that need is paramount. Operations and maintenance, or O&M, is key to ensuring turbines avoid downtime and remain productive. By convention (and presently also by necessity) offshore O&M is run out of ports; it is logistically complex and pricey, easily representing some 20% of a wind turbine's levelised cost of energy (LCOE), and increasing with distance from shore.
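To make the LCOE figure quoted above easier to interpret, here is a minimal back-of-the-envelope sketch of how a levelised cost of energy is computed and how an O&M share of roughly 20% enters it. The numbers are purely illustrative assumptions of mine (a hypothetical 1 GW farm), not real project data.

def lcoe(capex, annual_opex, annual_energy_mwh, lifetime_years=20, discount_rate=0.06):
    # Levelised cost of energy: discounted lifetime costs / discounted lifetime energy.
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_energy_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy  # currency units per MWh

# Illustrative numbers for a hypothetical 1 GW offshore farm (not real project data):
capex = 3.0e9                  # upfront capital cost
opex = 65e6                    # annual O&M, chosen so it lands near the ~20% share cited above
energy = 1_000 * 8760 * 0.45   # 1 GW at an assumed 45% capacity factor, in MWh per year

total = lcoe(capex, opex, energy)
no_om = lcoe(capex, 0.0, energy)
print(f"LCOE with O&M: {total:.1f} per MWh")
print(f"O&M share of LCOE: {100 * (total - no_om) / total:.0f}%")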
O&M is a permanent fixture on the wind industry's list of areas within which it aims to lower expenditure, and is highlighted as such by the International Renewable Energy Agency, which reports: "It is clear that reducing O&M costs for offshore wind farms remains a key challenge and one that will help improve the economics of offshore wind."

"In contrast to what we see today," says Brouwers, "operating from an island on the doorstep of the wind farms would be a game-changer in terms of reducing costs and simplification of O&M activities."

Subsea DC cables would not only export power from the wind farms, but would also serve as interconnectors between countries bordering the North Sea.

High Voltage Direct Current

Alongside savings on installation and reductions on O&M, a third major cost-saving feature of the Wind Power Hub concerns grid connections—the electrical infrastructure that links wind farms with electricity grids. Typically, grid connection is a significant cost component in offshore wind, representing between 15 and 30% of the capital costs for an offshore wind farm, with costs creeping higher the farther from shore you go. Like O&M, grid connection is a cost component that holds potential for improvement.

With the Wind Power Hub, instead of alternating current (AC) cables taking electricity from a wind farm to grids onshore—the typical arrangement we see today—the output of multiple wind farms would be directed to a power link island. There, electricity would be aggregated, conditioned for transmission, and then dispatched to the onshore grids of the North Sea countries. It's a setup that would reduce the number of export cables running to individual wind farms, and enable cost-effective use of high-voltage direct current (DC) transmission, which boasts the added benefit of reduced losses compared to AC transmission.

International electricity interconnections are the set of lines and substations that allow the exchange of energy between neighbouring countries and generate a number of advantages in connected countries.

North Sea Super Grid: The key to sustainable energy in Europe

As significant as the North Sea Wind Power Hub would be in terms of clean energy production and cost reduction of offshore wind power, the broader proposition for the concept goes beyond island-building and supporting wind farms. It would provide a solution to one of the central challenges in transitioning to a sustainable future. As Brouwers says: "When we talk about the transition towards 100% sustainable energy production, it's simply not possible from a national point of view. We need to consider things on a European level, and we need the infrastructure to transport the renewable electricity to where it is needed."

The inherent difficulty with renewable energy is its intermittency: power generation relies on variable resources like the Sun and wind that we cannot control. It's an immutable characteristic of renewables, and one that creates problems for grids trying to balance supply and demand and ensure efficient use of generated electricity. At least part of the solution is interconnectors—cables that function as long-distance energy conduits across and between electricity grids. Interconnectors allow for electricity generated in one region to be transmitted to another, and allow countries to import and export electricity. The UK, for example, has interconnectors with France (2GW), the Netherlands (1GW), Northern Ireland (500MW), and the Republic of Ireland (500MW).
"Without interconnectors we're not able to balance supply and demand and that's crucial for the energy transition. It's absolutely key," explains the EU Energy Union's Koska. "We have cables between some North Sea countries already, but considering the amount of renewables coming online in the region, it's not enough if we are to optimise use of resources available."

The imperative and current efforts to establish a European super grid are part of another story for another day, but the significance of interconnectors is neatly outlined in the YouTube video above from the Spanish TSO Red Eléctrica. In this matter of interconnectors and energy distribution, the Wind Power Hub would serve an extraordinarily valuable purpose; one Koska describes as "a clear response to needs of the European grid, and the goals set by the European Union that would contribute to a crucial part of the energy transition."

As noted earlier, undersea cables would transmit electricity from islands to countries bordering the North Sea, but the same DC cables would also function as interconnectors between those nations. Something similar is already under development in the Baltic Sea, where the Combined Grid Solution will connect Danish and German electrical grids via the Kriegers Flak wind farm. The Wind Power Hub applies a similar logic, albeit connecting via islands and not wind farms, and on a much grander scale. The Netherlands, Denmark, Germany, the UK, Norway and Belgium are all potential players in this new North Sea grid.

Construction of Mischief island by China has resulted in some 1,379 acres of land. Specialized ships involved in the construction process can be seen in this image. The dark lines seen connected to ships are floating pipes that pump sediment to be deposited. Photo: CSIS Asia Maritime Transparency Initiative / Digital Globe

Building islands

Construction of islands is nothing new. Prominent examples of the practice come from China and Dubai. Although motivated by radically different intentions (in the former instance, to establish a military presence in waters of the South China Sea; in the latter, to support luxurious hotels and residences), both nations have demonstrated the validity of creating artificial islands to varying specifications. In the simplest of terms, island-building involves dumping a huge amount of rock and sediment on the seabed until an island emerges. In reality, a little more finesse and a significant amount of engineering skill goes into the process. Acumen here means that islands may be built to survive waves, storms, and erosion, as well as ensure that the newly minted land can physically support whatever is destined to be built on the island.

Expertise will be especially critical for islands of the North Sea Wind Hub, where the northerly climate and rough waters of the North Sea offer up considerable challenges. Still, with the Netherlands party to the project, there will be no shortage of world-class engineers on hand to deliver solutions. The Dutch have a long history in land reclamation and have been at the helm of some of the most prominent examples of island building around the world, including those of Dubai.

A European wind power infographic produced by WindEurope in 2016.

The task ahead

The North Sea Wind Power Hub is a vast, multinational project that won't just pop up overnight. Brouwers notes that the consortium imagines a first island could be realised by 2035.
Project literature frames the project as one providing a vision for joint European collaboration out to 2050. "It's a long-term project, but it's important to begin now and that the industry knows what's on the horizon," says Brouwers.

For their part, numerous bodies within the European wind industry have acknowledged and expressed optimism about the project. Andrew Ho, senior offshore wind analyst of the wind power trade association WindEurope, tells Ars Technica: "Setting out a long term ambition for offshore wind provides a great signal to the wind sector. It's not governments that are behind the target yet, it's TSOs laying out the vision—but it's still important to know that they see a big role for offshore wind in the future of European energy. The reality is we need a lot more clean energy if we're going to decarbonise and really commit to the actions of COP21. For that, we need the technologies that can deliver vast amounts of clean power with relatively stable output—and that's what offshore wind gets you. The wind industry would certainly be ready to deliver the volume of offshore wind envisioned by the Wind Power Hub."

Ho emphasised that the wind industry's activities over the forthcoming decade will lay the groundwork for the Wind Power Hub's success: "The project would give us a pathway from 2030 to 2050, but we're missing policy targets for 2023 to 2030. To explore the project's full potential we need to support development through the next decade to ensure we're fully cost competitive with other sources of energy in the period leading up to 2030."

As the industry works towards reducing costs, the consortium will busy itself with more practical matters. Brouwers explains: "The next steps involve feasibility studies. We're also underway in collaborating with environmental groups about the construction of the islands and in talks with infrastructure companies beyond the energy sector, of the sort that would provide critical insight on the project. There's certainly a lot of work ahead of us."

The North Sea Wind Power Hub is an unquestionably mammoth project. But in so being it aptly reflects the enormity of the challenges we face in tackling climate change. Many would contend that we already have the technologies necessary for transitioning to a sustainable energy system. The Wind Power Hub project reminds us that boldly pursuing the extraordinary, and resolving to commit to collaborative solutions, are traits that will serve us well in the application of those technologies.

Wednesday, May 17, 2017

How an uninhabited island got the world's highest density of trash

with the highest density of plastic debris reported anywhere on the planet

From National Geographic by Laura Parker

No one lives there. It is about as far away from anywhere and anyone on Earth. All of it is trash, most of it plastic.

One researcher claims that a hermit crab that has made its home in a blue Avon cosmetics pot is a 'common sight' on the island. The plastic is very old and toxic, and is damaging to much of the island's diverse wildlife.

"Although alarming, these values underestimate the true amount of debris, because items buried 10 cm below the surface and particles less than 2 mm and debris along cliff areas and rocky coastlines could not be sampled," Lavers and a colleague wrote in their study, published Tuesday in the scientific journal, Proceedings of the National Academy of Sciences.
Henderson Island, with the GeoGarage platform a coral atoll in the south Pacific, is just 14.5 square miles (37.5 square km), and the nearest cities are some 3,000 miles (4,800 km) away  Henderson Island has the highest density of plastic debris in the world, with 3,570 new pieces of litter washing up on its beaches every day. Jenna Jambeck, a University of Georgia environmental engineering professor, who was one of the first scientists to quantify ocean trash on a global scale, was not surprised that Lavers and Bond discovered plastic in such abundance on Henderson. Jambeck’s 2015 study concluded that 8 million tons of trash flow into the ocean every year, enough to fill five grocery store shopping bags for every foot of coastline on Earth. “One of the most striking moments to me while working in the field was when I was in the Canary Islands, watching microplastic being brought onto the shore with each wave,” she says. “There was an overwhelming moment of ‘what are we doing?’ It’s like the ocean is spitting this plastic back at us. So I understand when you’re there on the beach on Henderson, it’s shocking to see.” The Henderson research ranks with earlier discoveries of microplastics in places so remote, such as embedded in the deep ocean floor or in Arctic sea ice, that finding plastic in such abundance touched a nerve. Links : Tuesday, May 16, 2017 The incredible 'x-ray' map of the world's oceans that reveals the damage mankind has done to them Darker colors, which can be seen in the East China and the North Seas, for example, show just where the ocean has been hit hardest. Source: NGM Maps, 'Spatial and Temporal Changes in Cumulative Human Impacts on the World's Ocean,' Ben S. Halpern and others, Nature Communications; UNEP-WCMC World Database on Protected Areas (2016 From DailyMail by Cheyenne MacDonald • Study used satellite images and modelling software, to compare cumulative impact in 2008 and 2013 • Over this span of time, researchers found that nearly two-thirds of the ocean shows increased impact • These impacts stem from fishing, shipping, or climate change – and some areas are experiencing all three The stunning map comes from the April 2017 issue of National Geographic magazine, based on data from a recent study published to Nature Communications, and the World Database on Protected Areas. Darker colors, which can be seen in East China and the North Seas, for example, show just where the ocean has been hit hardest. ‘The ocean is crowded with human uses,’ the authors explain in the paper. ‘As human populations continue to grow and migrate to the coasts, demand for ocean space and resources is expanding, increasing the individual and cumulative pressures from a range of human activities. ‘Marine species and habitats have long experienced detrimental impacts from human stressors, and these stressors are generally increasing globally.’ Using satellite images and modelling software, the researchers calculated the cumulative impact of 19 different types of human-caused stress on the ocean, comparing the effects seen in 2008 with those occurring five years later. The map above reveals the cumulative human impact to marine ecosystems as of 2013, based on 19 anthropogenic stressors. 
Shades of red indicate higher impact scores, while blue shows lower scores This revealed that nearly two-thirds (66 percent) of the ocean, and more than three-quarters (77 percent) of coastal areas experienced increased human impact, which the researchers note are ‘driven mostly by climate change pressures.’ ‘A lot of the ocean is getting worse, and climate change in particular is driving a lot of those changes,’ lead author Ben Halpern told National Geographic. While the Southern Ocean was found to be subjected to a ‘patchy mix’ of increases and decreases, the researchers found that other areas, especially the French territorial holdings in the Indian Ocean, Tanzania, and the Seychelles, saw major increases. Just 13 percent of the ocean saw a decrease in human impact over the years included in the study. These regions were concentrated in the Northeast and Central pacific, along with the Eastern Atlantic, according to the researchers. In a comprehensive study analyzing changes over a five-year period, researchers found that nearly two-thirds of the ocean shows increased impact.’The graphic shows (a) the difference from 2013 to 2008, with shades of red indicating an increase, while blue shows decrease. It also reveals (b) the 'extreme combinations of cumulative impact and impact trend' Links : Monday, May 15, 2017 Netherlands NLHO layer update in the GeoGarage platform 1 new inset added see GeoGarage news Changes to Traffic Separation Scheme TSS to be implemented on 1st June, 2017  Prelimary notice of changes of the shipping routing Southern North Sea (Belgium and Netherlands).  changes in charts (see NTMs Berichten aan Zeevarenden week 17 / 15 / 09) New Zealand Linz layer update in the GeoGarage platform 7 nautical raster charts updated China revises mapping law to bolster claims over South China Sea land, Taiwan China claims they aren't military bases, but their actions say otherwise.  From JapanTimes China’s National People’s Congress Standing Committee, a top law-making body, passed a revised version of China’s surveying and mapping law intended to safeguard the security of China’s geographic information, lawmakers told reporters in Beijing. Hefty new penalties were attached to “intimidate” foreigners who carry out surveying work without permission. President Xi Jinping has overseen a raft of new legislature in the name of safeguarding China’s national security by upgrading and adding to already broad laws governing state secrets and security. Laws include placing management of foreign nongovernmental organizations under the Security Ministry and a cybersecurity law requiring that businesses store important business data in China, among others. Overseas critics say that these laws give the state extensive powers to shut foreign companies out of sectors deemed “critical” or to crack down on dissent at home. The revision to the mapping law aims to raise understanding of China’s national territory education and promotion among the Chinese people, He Shaoren, head spokesman for the NPC Standing Committee, said, according to the official China News Service. When asked about maps that “incorrectly draw the countries boundaries” by labeling Taiwan a country or not recognizing China’s claims in the South China Sea, He said, “These problems objectively damage the completeness of our national territory.” China claims almost all the South China Sea and regards neighboring self-ruled Taiwan as a breakaway province. 
The new law increases oversight of online mapping services to clarify that anyone who publishes or distributes national maps must do so in line with relevant national mapping standards, He said. The rise of technology companies which use their own mapping technology to underpin ride-hailing and bike-sharing services made the need for revision pressing, the official Xinhua News Agency said Tuesday. Foreign organizations that wish to carry out mapping or surveying work within China must make clear that they will not touch upon state secrets or endanger state security, according to Song Chaozhi, deputy head of the State Bureau of Surveying and Mapping. Foreign individuals or groups who break the law could be fined up to 1 million yuan ($145,000), an amount chosen to “intimidate,” according to Yue Zhongming, deputy head of the NPC Standing Committee’s legislation planning body.  According to MoT, China cleared the wreckage of stranded fishing boat on Scarborough Shoal to ensure the security of navigation. China’s Southeast Asian neighbors are hoping to finalize a code of conduct in the South China Sea, but those working out the terms remain unconvinced of Beijing’s sincerity. Signing China up to a legally binding and enforceable code for the strategic waterway has long been a goal for claimant members of the Association of Southeast Asian Nations. But given the continued building and arming of its artificial islands in the South China Sea, Beijing’s recently expressed desire to work with ASEAN to complete a framework this year has been met with skepticism and suspicion. The framework seeks to advance a 2002 Declaration of Conduct (DOC) of Parties in the South China Sea, which commits to following the United Nations Convention on the Law of the Sea (UNCLOS), ensuring freedom of navigation and overflight, and “refraining from action of inhabiting on the presently uninhabited islands, reefs, shoals, cays, and other features.” The South China Sea Dispute – An Update, Lecture Delivered on April 23, 2015 at a forum sponsored by the Bureau of Treasury and the Asian Institute of Journalism and Communications at the Ayuntamiento de Manila. But the DOC was not stuck to, especially by China, which has built seven islands in the Spratly archipelago.It is now capable of deploying combat planes on three reclaimed reefs, where radars and surface-to-air missile systems have also been installed, according to the Asia Maritime Transparency Initiative think tank. Beijing insists its activities are for defense purposes in its waters. Malaysia, Taiwan, Brunei, Vietnam and the Philippines, however, all claim some or all of the resource-rich waterway and its myriad of shoals, reefs and islands. Finalizing the framework would be a feather in the cap for the Philippines, which chairs ASEAN this year. Manila has reversed its stance on the South China Sea, from advocating a unified front and challenging Beijing’s unilateralism, to putting disputes aside to create warm ties. Philippine President Rodrigo Duterte has opted not to press China to abide by an international arbitration decision last year that ruled in Manila’s favor and invalidated Beijing’s sweeping South China Sea claims. There will be no mention of the Hague ruling in an ASEAN leaders’ statement at a summit in Manila on Saturday, nor will there be any reference to concerns about island-building or militarization that appeared in last year’s text, according to excerpts of a draft. 
The map’s most valuable and relevant feature is found on the upper left section where a cluster of land mass called “Bajo de Masinloc” and “Panacot” – now known as Panatag or Scarborough Shoal – located west of the Luzon coastline  (see YouTube : An ancient map is reinforcing Manila's arbitration victory against China on the disputed South China Sea.) Duterte said Thursday that he sees no need to gather support from his neighbors about the July 2016 landmark decision. His predecessor, Benigno Aquino III, brought the territorial disputes to the Permanent Court of Arbitration in The Hague in 2013 amid China’s aggressive assertion of its claims in the South China Sea by seizing control of Scarborough Shoal located less than about 300 km (200 miles) from the Philippines’ Luzon island, and harassment of Philippine energy surveillance groups near the Reed Bank, among others. While the arbitration case was heard, China completed a number of reclamation projects on some of the disputed features and fortified them with structures, including those military in nature. China did not participate in the arbitration hearing, and does not honor the award, insisting it only seeks to settle the matter bilaterally with the Philippines. Duterte had said he will confront China with the arbitral award at a proper time during his administration, which ends in 2022, especially when Beijing starts to extract mineral and gas deposits. He rejected the view that China can be pressed by way of international opinion, saying, “You are just dreaming.” The Philippines, meanwhile, has completed an 18-day scientific survey in the South China Sea to assess the condition of coral reefs and draw a nautical map of disputed areas. Two survey ships, including an advanced research vessel acquired from the United States, conducted surveys around Scarborough Shoal and on three islands, including Thitu, in the Spratly group, National Security Adviser Hermogenes Esperon said Thursday. “This purely scientific and environmental undertaking was pursued in line with Philippine responsibilities under the U.N. Convention of the Law of the Sea to protect the marine biodiversity and ensure the safety of navigation within the Philippines’ EEZ,” Esperon said in a statement. He gave no details of the findings from the reef assessments and nautical mapping of the area, which was carried out between April 7 to 25. Links : Sunday, May 14, 2017 Here on the way from Ushuaia Argentina to Piriapolis in Uruguay. Footage by Arved Fuchs, Felix Hellmann and Heimir Harðarson. The Dagmar Aaen was employed for the fishing industry until 1977.
Tuesday, August 19, 2014

Maldacena's bound on statistical significance

In "Geometry and Quantum Mechanics", Maldacena (JM) reminds us of the obvious and old observation that the spacetime inside the black hole interior (i.e. the lifetime and the Lebensraum of the poor infalling observers) is limited, which inevitably seems to affect the accuracy and reliability of the experiments. Such limitations are often described in terms of the usual uncertainty relations. Inside the hole, you can't measure the energy more accurately than with the \[ \Delta E = \frac{\hbar}{2\,\Delta t} \] error margin, and similarly for the momentum, and so on. But Juan chose to phrase his speculative ideas about the universal bound in a more invariant and more novel way, using the notion of entropy.

A person who is falling into a black hole and wants to make a measurement must be sufficiently different from the vacuum. But after she is torn apart, hung by her balls, and destroyed (note that I am politically correct and "extra" nice to the women so I have used "she"), the space she has once occupied is turned into the vacuum. The vacuum inside a black hole of a fixed mass is more generic, so the "emptying" means that the total entropy goes up. Juan says that the relative entropy obeys \[ S(\rho|\rho_{\rm vac}) = \Delta K - \Delta S \geq 0. \] Because we know that once she's destroyed at the singularity, the entropy jumps at least by her entropy, it is logical – and Juan is tempted – to interpret the life and measurements inside the black hole, and not just the fatal end, as a process in which she approaches the equilibrium. So it's not possible to perform a sophisticated, accurate, and/or reliable experiment without sending something in. And if we send something in, the entropy will increase.

An explicit inequality that Maldacena conjectured is the following inequality for the statistical significance: \[ p \gt \exp(-S). \] That's a formula written in the convention where the \(p\)-value is close to zero. If you prefer to talk about "\(P=\)99% certainty", you would write the same thing as \[ P \lt 1-\exp(-S). \] The certainty is less than 100% minus the exponential of the negative entropy, and I suppose that by \(S\), Juan only means the entropy of the object. That entropy is still huge, which means that the statement above is very weak. The entropy of a human being exceeds \(10^{26}\) (in the dimensionless units of nats or, almost equivalently, in the less natural but more well-known bits), so the deviation from 100% is just \(\exp(-10^{26})\), which is a really small number, morally closer to the inverse googolplex than the inverse googol.

There may be stronger inequalities like that. And I also suspect that many such inequalities could be applicable generally – outside the context of black hole interiors. Have you ever encountered such inequalities or proved them? Note that the \(p\)-value encoding the statistical significance is the probability of a false positive. If we're constrained to live in a finite-dimensional Hilbert space where all basis vectors get ultimately mixed up with each other or something else, it's probably impossible to be any more certain than \(1-\exp(-S)\) that your microstate isn't some particular one. But there are just \(\exp(S)\) basis vectors in the relevant Hilbert space, and one of them may be right even if the "null hypothesis" holds, whatever it is. I am essentially trying to say that \(\exp(-S)\) is the minimum probability of a false positive.
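To get a sense of how weak the conjectured bound is for a macroscopic observer, it helps to spell out the arithmetic (a back-of-the-envelope illustration, not part of Maldacena's conjecture itself): \[ S \sim 10^{26} \quad\Rightarrow\quad p_{\min} = e^{-S} = 10^{-S/\ln 10} \approx 10^{-4.3\times 10^{25}}, \] so the allowed certainty \(P \lt 1 - e^{-S}\) may differ from 100% only somewhere around the \(10^{25}\)-th decimal place. The inequality would only start to bite for systems carrying very few nats of entropy, i.e. very few degrees of freedom.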
If someone thinks that she can formulate such comments more clearly or construct some evidence if not a conclusive proof (or proofs to the contrary), I will be very curious. If you allow me to return to the black hole interior issues: It seems to me that these "bounds on accuracy or significance" haven't played an important role in the recent firewall wars. But they're still likely to be a part of any complete picture of the black hole interior. For example, it's rather plausible that all the arguments (and instincts) directed against the state dependence violate these bounds. Juan tends to say that the rules of quantum mechanics may become approximate or inaccurate or emergent inside the black hole, and so on. He even says that "because the time is emergent inside, so is probably the whole quantum mechanics". Well, the answer may depend on which rule of quantum mechanics we exactly talk about. But quite generally, I don't believe that there can be any modification of quantum mechanics, even in the mysterious black hole interiors. In particular, the inequalities sketched by Maldacena himself might be derivable from orthodox quantum mechanics itself. And I would be repeating myself if I were arguing that ideas like ER-EPR and state dependence agree with all the postulates of quantum mechanics. Also, if we sacrifice the exact definition of time as a variable that state vectors or operators depend on – and we do so e.g. in the S-matrix description of string theory – it doesn't really mean that we deform quantum mechanics, does it? If we lose time, we no longer describe the evolution from one moment to another and we get rid of the explicit form of the Heisenberg or Schrödinger equations. But the "true core" of quantum mechanics – linearity and Hermiticity of operators, unitarity of transformation operators, and Born's rule – remain valid. What breaks down inside the black hole is the idea that exactly local degrees of freedom capture the nature of all the phenomena. But unlike locality, quantum mechanics doesn't break down. I should perhaps emphasize that even locality is only broken "spontaneously" – because the black hole geometry doesn't allow us to use the Minkowski spacetime as an approximation for the questions we want to be answered. 1. They're government workers. Of course 80% are going to require an operating system that was designed for mental defectives! Frankly, I'm surprised that the number is not even higher. I guess that's an indication of the partial success that the LiMux developers had in dumbing down the system to government-worker level -- a difficult task. I guess that Munich, in anticipation of the change, is transferring the budget for hiring a competent IT staff to purchasing third-party virus-protection software. "Penguins belong to the South Pole, not to European or American buildings." Except, apparently, Google datacenters. You do know that Google Web Server (which feeds this blog) runs on Linux, don't you? 2. Linux is fast and tight, Windows is pretty. I did a 3-month calculation of a growing crystal lattice. Knoppix (boot from CD) ran 30% faster than Windows, AMD ran 30% faster than Intel. Knoppix in AMD still ran three months - but the log-log plot of the output was longer, Past 32 A radius ran in blades. Theoretical slope is -2. The fun is in the intercept (smaller is better) and the bandwidth. Unix is not unfriendly, but it is selective about who its friends are. "the Linux solution is very expensive because it requires lots of custom programming." Bespoke vs. 
off the rack. 3. Nope, I am using Linux since over 20 years, and I am in trouble only whenever I have to use a computer with Windows installed :-) 4. This is silly. Germany is (unlike Greece and others) a very well functioning country with a healthy equilibrium between the commercial and government sector. So the people who work for the government are in principle the very same kind of people who work in the private sector, too. The government sector has a different way how it's funded - it's stealing money from the productive citizens via the so-called "taxes" - but that doesn't really affect the work that the employees are doing there. I think that the Google web server running this server should be moved to the South Pole, too. ;-) 5. I just cannot envision any modification of quantum mechanics whatsoever. I’ll bet that lubos is correct here. 6. "Time" is a whore concept. No reason to believe QM depends on its survival. 7. Interesting point that one cannot perform a measurement absent a source and a sink. If everything is at equilibrium, one can build a thermometer and read it, but not calibrate it to assign the output meaning. 8. Sadly, Windows taught people that (1) Computers should be pretty and should be so easy a 3 year old could use them and (2) Computers should crash all the time. People expect lousy performance and don't care, as long as Facebook and Twitter come up most of the time. I don't use Windows at all now. I use open source software. I fully admit that most people have not the training nor the ambition to do this. I pay nothing for my software and my computer works the way I want it to. I find Windows too confining. On the other hand, for those who want pretty, sparkly screens, and no thought required, Windows is the way to go. 9. OK but having used Linux for 20 years should be classified as a medical disorder. ;-) 10. It's only strange because the "technical people" have been penetrated by anti-market zealots who suppress everyone else. It's much stranger to be a fan of such a thing. Unix is a system from the 1960s that should be as obsolete today as the cars or music from the 1960s. But it's not obsolete especially because its modern clones have been promoted by a political movement. Unix, like Fortran and other things, should share the fate of Algol, Cobol, Commodore 64 OS, and many other things, and go to the dumping ground of the history where it has belonged for quite some time. 11. There is nothing wrong for a system to be usable by a 3-year-old. Coffee machines, toasters, and vacuum cleaners have the same property. Kids are ultimately the best honest benchmarks to judge whether software is constructed naturally. When kids may learn it, it really means that an adult is spending less energy with things that could also be made unnecessarily complicated, and it's a good thing. My Windows 7 laptop hasn't crashed for a year since I stopped downloading new and new graphics drivers etc. I had freezes due to Mathematica's insane swapping to the disk - when it should say "I give up" instead - but that's a different thing. 12. "So the people who work for the government are in principle the very same kind of people who work in the private sector, too." Ah ... so can you show me the private sector equivalent, in principle, of the Potsdam Institute for Climate Impact Research? ;-) The United States also is a very well-functioning country with a healthy equilibrium between the commercial and government sector. 
(In fact, I would argue that the US is less socialist than Germany.) Surely, during your time in the US you must have been forced to deal with the New Jersey or Massachusetts DMV? (Here I use the generic term -- in New Jersey it's called the MVC, while in Massachusetts it's the RMV.) If not, consider yourself very fortunate. There's a little bit of Greece in every government bureaucracy. (In the US, we have to tell them not to defecate in the hall -- http://www.newser.com/story/189036/epa-to-workers-stop-pooping-in-the-hall.html -- yeah.) These are the folks who prefer a platform that is better suited for gaming, entertainment, and viruses than getting quality work done. Hence, I agree with you, I think that Munich is leaning toward making the right decision. 13. Sure, I can. The commercial sector is literally drowning in similar šit, too. Try e.g. 14. Your taking of COBOL out to the dumping ground of history may be a bit premature. It's still actively being used in bluechip industries such as banking, insurance, and telecommunications. As far as new development goes it's rarely (if ever) used in GUI type applications but remains popular for high volume backend transaction processing in the bluechip industries. My guess is that your recent Bank of America transactions were touched by COBOL at some point, most likely in the mission critical application of updating your account. Not that I don't agree with your sentiment, it's just that it's incredibly difficult to get rid of. The business case for replacing existing backend systems with a more modern platform are usually weak. 15. Keyboards and mice should theoretically be obsolete too, but after playing with tablets for a couple of years, many people are moving back to laptops and even desktops for "real work". Linux having its origins in the 1960's is not an argument at all against it. 16. LOL, right, it surely feels like the two debit cards were attempted to be sent to me by a COBOL robot. ;-) I understand it's hard to get rid of things when lots of stuff has been written in an old framework. 17. Eelco HoogendoornAug 19, 2014, 10:57:00 PM 'What I am really stunned by is the unbelievably complicated culture of installing things on Linux.' Indeed. The only thing such accomplishes is making people feel clever because they haxxored their computer with 1337 compilars. In the real world of people trying to get stuff done, such nonsense is known as a lack of encapsulation, which is simply objectively bad software design. 18. Wow, what a highly emotional and non-factual piece. I come here for science news, but the credibility of the blog just plummeted. So three year old user friendliness is the main criterion for municipal desktop operating systems? Where did this criterion come from? If valid, there are several Linux distributions dedicated to three year olds. Dou Dou, for example. Come on Lubos you can de better. Where is the meat (facts)? 19. Have people who struggled with Linux run Windows computers for a long time before switching to a different operative system? Are there people who have always run Linux machines and never used Windows, but still feel unhappy about the Linux user experience. Just wondering because my mother started using computers when she was 60 yo, and she always found it pretty straightforward to use. Only time she tried to use Windows she found it pretty disgusting and user-unfriendly. 20. Lubos is a theorist. 
All theorists use Windows, while most all experimentalists use Linux (Scientific Linux is the official OS of Fermilab and CERN). I'll let someone else explain the reasons. 21. I think I get it already. Theorists tax the Operating System as lightly as a three-year-old, whereas experimentalists need the system for real work. 22. Dear Eelco, thanks for making these observations clear with some adult terminology! ;-) 23. I think it is true to some extent and there is nothing to be ashamed of. Of course that theorists often use computers in similar ways as writers (of literature), not really to compute, and they don't want to waste their time by forcing computers to do elementary things because computers are supposed to make things simpler, not harder. Experimenters do lots of complicated things with computers so they may sacrifice some friendliness without increasing the amount of wasted time by too high a percentage. For the Kaggle contest, I had to recreate an Ubuntu virtual machine because it seemed like the most plausible if not only way to install software that helps one produce competitive scores. By now, someone has ported it to Windows. I would probably prefer it but my experience with things like Visual Studio etc. is really non-existent, due to my Linux training, so the Linux path could have been easier for me due to the historical coincidences, too. 24. "it's been my point for years that the movement to spread Linux on desktop is an ideological movement" The reverse is true. Computing in the free world is subject to market forces. Linux has won hands down everywhere except for the Desktop where MS Office addicted persons obstruct innovation. Political and objective reasoning has placed Linux everywhere except the desktop. Grandmothers, children and some theorists have been well served on Desktop Linux for a decade or more. I invite you to drill down to the objective reasons why that is. We will probably never know the truth about Munich IT management decisions, but the wider market tells a clear and dramatic story in favour of open (but profit making) systems. If you find being called out for lack of meat obnoxious then I am sorry. This article happens to be the the first protein lacking I have seen by you, Thank you for the Reference Frame. 25. Desktop - and increasingly more often, mobile platforms - are the places where the actual work is being done and where the actual relevant features of operating systems are being tested. It's unambiguously clear that for the operating systems to do their work well, they should be profit-driven, company-protected systems. Whether the source is open or closed isn't too important. What's important is that a company has a financial interest to make it work. So Apple is doing the same thing for iOS and Google for Android that Microsoft is doing for Windows. The underlying mechanisms that make all these things usable are completely analogous and they require capitalism. 26. You call the sharing of IT ideas, architecture and open core modules "socialism". By the same token you are a rabid socialist for openly discussing your physics theories. By all means let Apple and Microsoft tinker with buttons and pixels to accommodate the increasingly dumbed down populations, but let the core architecture be defined by the Open Source world. This massively benefits the corporate world as well as the rest of humanity, which is why the corporate world all use Open solutions in one way or another. 27. 
Yes, I am an insane socialist donating intellectual assets of multi-million values to others for free. But that's less unethical than to be forcing others to use unusable products. 28. It may be several hundred thousand generations behind the most obsolete flying saucer dimensional transfer management system in the galaxy, but .NET is the greatest thing in the known universe for sure. Do the Linux bug dwellers have anything remotely like this? I don't know since I haven't looked but I seriously doubt it. Congratulations to the officials of Munich city who have belatedly achieved common sense. 29. Hmm, think you have been brainwashed by microsoft, Lubos---there are plenty of uses for Linux...even Google uses a lightly morphed version, as does Android, etc...here is a partial list of surprising adopters from Wikipedia: --lots of free compilers as well for developers and programmers. 30. I have never communicated with Microsoft or read any of its opinions - unfortunately, I would say - so I couldn't have been "brainwashed by Microsoft". I am not saying that people aren't using all kinds of other products, and so am I. Concerning mobile OSes, I have devices with iOS, Android, as well as Windows Phone, and Android is the most expensive one. I am just warning against the political movement that is trying to force different systems upon desktop users whose majority clearly and voluntarily prefers Microsoft Windows as the market conditions unambiguously show. 31. Unlike benchtop chemistry and biology, physics can be mostly taught online, with engineers later being hired to do experiments. I sure would like Lubos to join an online university to create video lectures, at both advanced and entry level physics. -=NikFromNYC=-, Ph.D. in chemistry (Columbia/Harvard) 32. Honest question: What's so great about it? Can you explain or give an example? Thanks. 33. I have to say that I fail to see the Linux world as some sort of sinister kabal that is forcing innocents to use unusable systems. Look at the Linux desktop market share, and you can at least say that they have failed. Windows is great for Microsoft-style word processing and spreadsheets. Perhaps it's even OK for TeX/LaTeX, if there's a decent and easy to install distribution for it (I know there is one for OSX, not sure about Windows). Linux seems popular for scientific computing, and where such users want a more polished and easy to use system for their work laptop/desktop, they choose OSX, which gives you Unix underneath and a polished user interface on top. That's why a progressive household would have all three operating systems on their computers. I know mine does. :) 34. OT: Which reminds me ... I'm feeling nostalgic. It's many decades since every other word in those horrible computer trade magazines seemed to be about the 'goto' statement and 'spaghetti code'. Now all is silent — as far as I know anyway. Oh, how I miss the tedium of it all! Anyone care to rekindle the exquisite ennui? Hey, how about a discussion on punched cards versus paper tape? :) Incidentally, as far as operating systems go, I mostly use Windows simply because, reluctantly, that was all that was made available to me at one point (more accurately it started with that awful DOS), but I got used to it and I can do all I need to do with it. But most of all I use it these days because I'm buggered if I'm going to spend any time looking up the kind of stuff that I lost interest in and forgot about years ago just to make a change for the sake of Greater F#cking Spartan. 
Also VBA behind Excel can be very handy for a quickie, a little like a fast shag behind the bicycle shed. Just the ticket sometimes. :) P.S. Many years ago, but again long past my interest date, I surprised myself by reading Bjarne Stroustrup's book on the genesis of C++ (I forget the title) and found it fascinating. I'm pretty sure I'm fully cured now though. :) 35. I just noticed that Microsoft is currently in the process of shifting its German operational center to - München, Schwabing. Now that they are becoming a big tax payer over there, it seems inconvenient for the municipal government to run on Linux. After all, Linux won't finance any pleasure ('amigo') trips for the local politicians, Microsoft perhaps does ... 36. Absent a source and a sink of time... everything happens? Or nothing happens? The event horizon is when happening stops? Can entropy be static? 37. "Suggestions the council has decided to back away from Linux are wrong, according to council spokesman Stefan Hauf." Some meat: 38. Dear FlanObrien, the committee to review the computing in the city was probably built by the executive power in the city which is why one should also respect the interpretation of the executive power, and not the council, why it was done. 39. Believing the world should run on the level of three-year-olds is really very disturbing. It may also explain why social has become more and juvenile over time. I figure if you need pretty pictures and shiny baubles, you're not really looking for a computer. More like an electronic playmate. It's interesting that your Vista computer worked so well. Mine crashed, despised the peripherals (all of which I replaced) and drove me to buy an Apple to escape the Microsoft curse. Maybe I just really use my computer more than most and expect it to function like I want it, not like a three-year-old wants it. I'm a grown-up now. I want a grown-up computer.
måndag 19 december 2016

New Quantum Mechanics 21: Micro as Macro

The new quantum mechanics as realQM explored in this sequence of posts offers a model for the microscopic physics of atoms which is of the same form as the classical continuum mechanical models of macroscopic physics, such as Maxwell's equations for electro-magnetics, Navier's equations for solid mechanics and the Navier-Stokes equations for fluid mechanics, in terms of deterministic field variables depending on a common 3d space coordinate and time. realQM thus describes an atom with $N$ electrons as a nonlinear system of partial differential equations in $N$ electronic wave functions depending on a common 3d space coordinate and time. On the other hand, the standard model of quantum mechanics, referred to as stdQM, is Schrödinger's equation as a linear partial differential equation for a probabilistic wave function in $3N$ spatial coordinates and time for an atom with $N$ electrons.

With realQM the mathematical models for macroscopic and microscopic physics thus have the same form, and the understanding of physics can then take the same form. Microphysics can then be understood to the same extent as macrophysics. On the other hand, the understanding of microphysics according to stdQM is viewed to be fundamentally different from that of macroscopic physics, which effectively means that stdQM is not understood at all, as acknowledged by all prominent physicists.

As an example of the confusion about this difference, consider what is commonly viewed to be a basic property of stdQM, namely that there is a limit to the accuracy with which both position and velocity can be determined on atomic scales, as expressed in Heisenberg's Uncertainty Principle (HUP). This feature of stdQM is compared with the situation in macroscopic physics, where the claim is that both position and velocity can be determined to arbitrary precision, thus making the case that microphysics and macrophysics are fundamentally different. But the position of a macroscopic body cannot be precisely determined by one point coordinate, since a macroscopic body is extended in space and thus occupies many points in space. No single point determines the position of an extended body. There is thus also a Macroscopic Uncertainty Principle (MUP). The argument is then that if the macroscopic body is a pointlike particle, then both its position and velocity can have precise values and thus there is no MUP. But a pointlike body is not a macroscopic body, and so the argument lacks logic.

The idea supported by stdQM that the microscopic world is so fundamentally different from the macroscopic world that it can never be understood, thus may well lack logic. If so, that could open up microscopic physics to the understanding of human beings with experience from macroscopic physics. If you think that there is little need of making sense of stdQM, recall Feynman's testimony:

• We have always had a great deal of difficulty understanding the world view that quantum mechanics represents. At least I do, because I'm an old enough man that I haven't got to the point that this stuff is obvious to me. Okay, I still get nervous with it ... You know how it always is: every new idea, it takes a generation or two until it becomes obvious that there's no real problem. I cannot define the real problem, therefore I suspect that there is no real problem, but I'm not sure there's no real problem. (Int. J. Theoret. Phys. 21, 471 (1982).)
It is total confusion if it is totally unclear whether there is a problem or no problem, while it is totally clear that nobody understands stdQM... Recall that stdQM is based on a linear multi-dimensional Schrödinger equation, which is simply picked from the sky using a black-magic ad hoc formalism, which could be anything, and is then taken as a revelation about real physics when interpreted by reversing the black magic. This is like scribbling down a sign/equation at random without intentional meaning, and then giving the sign/equation an interpretation as if it had an original meaning, which may well be meaningless, instead of expressing a meaning in a sign/equation to discover consequences and deeper meaning.

fredag 16 december 2016

New Quantum Mechanics 20: Shell Structure

Further computational exploration of realQM supports the following electronic shell structure of an atom: Electrons are partitioned into an increasing sequence of main spherical shells $S_1, S_2,\ldots,S_M$, with each main shell $S_m$ subdivided into two half-spherical shells, each of which for $m>2$ is divided in two angular directions into $m\times m$ electron domains, thus with a total of $2m^2$ electrons in each full shell $S_m$. The case $m=2$ is special, with the main shell divided radially into two subshells, which are each divided into half-spherical subshells, each of which is finally divided azimuthally into $2\times 2$ electron domains per $S_2$ subshell, again with a total of $2m^2$ electrons in each main shell $S_m$ when fully filled, for $m=1,\ldots,M$; see the figures below. This gives the familiar sequence 2, 8, 18, 32, ... as the number of electrons in each main shell.

[Figures: 4-electron subshell of $S_2$; 8-shell as a variant of the full $S_2$ shell; $9=3\times 3$ half-shell of $S_3$.]

The electron structure can thus be described as follows, with parentheses around main shells and the radial subshell partition within the parentheses:

• (2)+(4+4)
• (2)+(4+4)+(2)
• ...
• (2)+(4+4)+(4+4)
• (2)+(4+4)+(8)+(2)
• ...
• (2)+(4+4)+(18)+(2)
• ...
• (2)+(4+4)+(18)+(8)

Below we show computed ground state energies assuming full spherical symmetry with a radial resolution of 1000 mesh points, where the electrons in each subshell are homogenised azimuthally, with the electron subshell structure indicated and table values in parentheses.
Notice that the 8-electron main shell structure is repeated, so that in particular Argon with 18 electrons has the form (2)+(4+4)+(4+4):

Lithium (2)+1: -7.55 (-7.48), 1st ionisation: (0.2)
Beryllium (2)+(2): -15.14 (-14.57), 1st ionisation: 0.5 (0.35)
Boron (2)+(2+1): -25.3 (-24.53), 1st ionisation: 0.2 (0.3)
Carbon (2)+(2+2): -38.2 (-37.7), 1st ionisation: 0.5 (0.4)
Nitrogen (2)+(3+2): -55.3 (-54.4), 1st ionisation: 0.5 (0.5)
Oxygen (2)+(3+3): -75.5 (-74.8), 1st ionisation: 0.5 (0.5)
Fluorine (2)+(3+4): -99.9 (-99.5), 1st ionisation: 0.5 (0.65)
Neon (2)+(4+4): -132.4 (-128.5), 1st ionisation: 0.6 (0.8)
Sodium (2)+(4+4)+(1): -165 (-162)
Magnesium (2)+(4+4)+(2): -202 (-200)
Aluminium (2)+(4+4)+(2+1): -244 (-243)
Silicon (2)+(4+4)+(2+2): -291 (-290)
Phosphorus (2)+(4+4)+(3+2): -340 (-340)
Sulphur (2)+(4+4)+(4+2): -397 (-399)
Chlorine (2)+(4+4)+(3+4): -457 (-461)
Argon (2)+(4+4)+(4+4): -523 (-526)
Calcium (2)+(4+4)+(8)+(2): -670 (-680)
Titanium (2)+(4+4)+(10)+(2): -848 (-853)
Chromium (2)+(4+4)+(12)+(2): -1039 (-1050)
Iron (2)+(4+4)+(14)+(2): -1260 (-1272)
Nickel (2)+(4+4)+(16)+(2): -1516 (-1520)
Zinc (2)+(4+4)+(18)+(2): -1773 (-1795)
Germanium (2)+(4+4)+(18)+(2+2): -2089 (-2097)
Selenium (2)+(4+4)+(18)+(4+2): -2416 (-2428)
Krypton (2)+(4+4)+(18)+(4+4): -2766 (-2788)
Xenon (2)+(4+4)+(18)+(18)+(4+4): -7355 (-7438)
Radon (2)+(4+4)+(18)+(32)+(18)+(4+4): -22800 (-23560)

We see good agreement even with the crude approximation of azimuthal homogenisation used in the computations. To see the effect of the subshell structure, we compare Neon (2)+(4+4) with Neon (2)+(8) without the (4+4) subshell structure, which has a ground state energy of -153, much lower (more negative) than the observed -128.5. We conclude that somehow the (4+4) subdivision of the second shell is preferred over a subdivision without subshells. The difference between (8) and (4+4) is the homogeneous Neumann condition acting between subshells, tending to increase the width of the shell and thus increase the energy. The deeper reason for this preference remains to be described, but intuition suggests that it relates to the shape or size of the domain occupied by an electron. With subshells, electron domains are obtained by subdivision in both the radial and azimuthal directions, while without subshells there is only azimuthal/angular subdivision of each shell.

We observe that ionisation energies, which are of similar size in different shells, become increasingly small compared to ground state energies, and thus are delicate to compute as the difference between the ground state energies of atom and ion.

Here are sample outputs for Boron and Magnesium as functions of the distance $r$ from the kernel along the horizontal axis:

We observe that the red curve, depicting the shell charge $\psi^2(r)r^2\,dr$ per shell radius increment $dr$, is roughly constant in the radius $r$, as a possible emergent design principle. More precisely, $\psi(r)\sim \sqrt{Z}/r$ matches with $d_m\sim m^2/Z$ and $r_m\sim m^3/Z$, with $d_m$ the width of shell $S_m$, and thus the width of the subshells of $S_m$ scaling with $m/Z$, and thus the width of electrons in $S_m$ scaling with $m/Z$. We thus have $\sum_m m^2\sim M^3\sim Z$, and with $d_m\sim m^2/Z$ the atomic radius $\sum_m d_m\sim M^3/Z\sim 1$ is basically the same for all atoms, in accordance with observation.
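The $2m^2$ counting rule described above is easy to tabulate. The following short Python sketch (an illustrative aid only, not part of the realQM computations reported here) lists the per-shell occupancies and the cumulative electron numbers obtained when the main shells are filled in order:

```python
# Shell occupancies in the shell picture described above:
# a full main shell S_m holds 2*m^2 electrons.

def shell_occupancy(m):
    """Number of electrons in a fully filled main shell S_m."""
    return 2 * m * m

def occupancies(M):
    """Occupancies of the main shells S_1 .. S_M."""
    return [shell_occupancy(m) for m in range(1, M + 1)]

occ = occupancies(4)
print("per-shell occupancies:", occ)          # [2, 8, 18, 32]

total = 0
for m, n in enumerate(occ, start=1):
    total += n
    print(f"S_1..S_{m} filled in order -> {total} electrons")
# Cumulative totals 2, 10, 28, 60; note that the configurations listed
# above fill the shells in a different (interleaved) order, so these are
# not the noble-gas atomic numbers.
```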
Further, the kernel potential energy and thus the total energy in $S_m$ scales with $Z^2/m$ and the total energy by summation over shells scales with $\log(M)Z^2\sim \log(Z)Z^2$, in close correspondence with $Z^{\frac{1}{3}}Z^2$ by density functional theory. Recall that the electron configuration of stdQM is based on the eigen-functions for Schrödinger's equation for the Hydrogen atom with one electron, while as we have seen that of realQM rather relates to spatial partitioning. Of course, eigen-functions express some form of partitioning, and so there is a connection, but the basic problem may concern partitioning of many electrons rather than eigen-functions for one electron. torsdag 8 december 2016 Quantum Mechanics as Theory Still Without Meaning Yet another poll (with earlier polls in references) shows that physicists still today after 100 years of deep thinking and fierce debate show little agreement about the stature of quantum mechanics as the prime scientific advancement of modern physics. The different polls indicate that less than 50% of all physicists today adhere to the Copenhagen Interpretation, as the main text book interpretation of quantum mechanics. This means that quantum mechanics today after 100 years of fruitless search for a common interpretation, remains a mystery without meaning. Theory without interpretation has no meaning and science without meaning cannot be real science. If only 50% of physicists would agree on the meaning of the basic text book theories of classical physics embodied in Newton/Lagranges equations of motion, Navier's equation for solid mechanics, Navier-Stokes equations for fluid dynamics and Maxwell's equations for electromagnetic, that would signify a total collapse of classical physics as science and subject of academic study. But this not so: classical physics is the role model of science because there is virtually no disagreement on the formulation and meaning of these basic equations. But the polls show that there is no agreement on the role and meaning of Schrödinger's equation as the basis of quantum mechanics, and physicists do not seem to believe this will ever change. This is far from satisfactory from scientific point of view. This is my motivation to search for a meaningful quantum mechanics in the form of realQM presented in recent posts. Of course you may say that for many reasons my chances of finding some meaning are very small, but science without meaning cannot be real science. PS Lubos Motl, as a strong proponent of a textbook all-settled Copenhagen interpretation defined by himself, reacts to the polls with • The foundations of quantum mechanics were fully built in the 1920s, mostly in 1925 or at most 1926, and by 1930, all the universal rules of the theory took their present form...as the Copenhagen interpretation. If you subtract all these rules, all this "interpretation", you will be left with no physical theory whatsoever. At most, you will be left with some mathematics – but pure mathematics can say nothing about the world around us or our perceptions. • In virtually all questions, the more correct answers attracted visibly greater fractions of physicists than the wrong answers. Lubos claims that more correct views, with the true correct views carried by only Lubos himself, gathers a greater fraction than less correct views, and so everything is ok from Lubos point of view. But is greater fraction sufficient from scientific point of view, as if scientific truth is to be decided by democratic voting? 
Shouldn't Lubos ask for 99.9% adherence to his one and only correct view, if physics is to keep its position as the king of sciences? Or is modern physics instead to be viewed as the root of modernity through a collapse of the classical ideals of rationality, objectivity and causality?
Why are diatomic oxygen molecules STILL reactive, especially with metallic elements like sodium and copper, even at room temperature? You would think that, since the two oxygen atoms already have the much-needed 8 valence electrons once they have bonded with each other, they wouldn't need to react with anything else, but that doesn't seem to be the case.

• 7 Having a full octet for an atom is much like breathing for a man: important for sure, but it's not the only thing they need. Feb 16 '18 at 12:49
• If you take a look at the MO diagram for oxygen, you will see that oxygen is a di-radical, as we call it. It has two unpaired electrons, while the oxide (so a single oxygen atom now) would have all orbitals (s and p) filled, which is more stable. Feb 16 '18 at 16:29
• 1 I don't understand what you mean by "STILL reactive". "Still" suggests that you think the situation has been going on for too long ("I asked you to do X and you still haven't done it!") but that doesn't make sense here. What other scenario (even if hypothetical) are you comparing with? Feb 16 '18 at 16:51
• I personally think the weird case is not that O2 is reactive, but that N2 isn't. – Joshua Feb 16 '18 at 17:35
• @Joshua it takes a lot to overcome a triple bond – mbrig Feb 16 '18 at 18:02

To estimate reactivity solely based on the octet rule is a crude, (semi-)empirical approach in the first place. Atoms and molecules don't know anything about their number of electrons in shells; the formation of molecules is based on the minimization of the (free) energy of the system. The octet rule gives a hint as to what kind of systems the potential energy may be small for, but does not say anything about the relative energies of two systems that both fulfil the octet rule. To "exactly" predict the reactivity, you would have to solve the Schrödinger equation for both states of your system, find the energy eigenvalues, and consider entropic contributions. Furthermore, you would need to "sample" all possible pathways of your reaction, to check whether the energy barriers are small enough to be overcome at a given temperature. (To be really exact, you would still need to estimate the entropic contributions.) Yet a chemist's intuition can often predict reactivity very well.

Concerning your specific problem, one possible qualitative explanation would be that the octet rule is not fulfilled for copper, but only for oxygen, before the reaction happens. Taking a look at the structure of copper(II) oxide, on the other hand, shows that it fulfils the octet rule for both copper and oxygen (ignoring the d-electrons of copper). Furthermore, the reaction enriches the electron density on the strongly electronegative oxygen while reducing the electron density on the "electropositive" copper, which also stabilizes the structure of copper oxide.

• 1 I really can't agree with the explanation based on copper's octet being satisfied. A good model should have predictive power, and if the loss of the two 4s electrons is really thermodynamically favourable, then surely copper would react with - say - hydrogen to form $\ce{CuH2}$, or nitrogen to form $\ce{Cu3N2}$. – orthocresol Feb 17 '18 at 0:38
• The octet rule has some predictive power. But basically it is as I said: you have to calculate the relative energies of competing systems.
The octet rule is widely applicable in organic chemistry, where you will hardly find thermodynamically stable compounds in which the octet rule is not satisfied. But on the other hand, not every compound you can write down on paper satisfying the octet rule will be easy to synthesize or stable in the presence of oxygen. – vk_s Feb 17 '18 at 8:26

The octet rule is a very inadequate rule for understanding overall reactivity. Ultimately you need to understand two things to know whether reactivity is likely: the thermodynamics of any possible reaction; and the kinetics of any possible reaction mechanism for the reaction.

The formation of many oxides is thermodynamically favourable. That's why many compounds burn in an atmosphere containing oxygen. The formation of sodium oxide, carbon dioxide and many metal oxides is thermodynamically favourable. Carbon-containing things will burn in air, and metals will often react (quickly, like sodium, or slowly, like iron). Some will burn spontaneously, like caesium; others will take a little encouragement.

The second factor that matters is whether there is an easy way for the oxidation reaction to happen. Aluminium, for example, is very reactive but rapidly forms a protective layer of strong aluminium oxide (alumina) on the surface, preventing further reaction; iron is relatively reactive but won't rust easily unless contaminants are present; gold won't react at all. Carbon needs to be heated to start the reaction but will burn by itself when it does start. Where reactions have easy-to-access mechanisms and some thermodynamic advantage, oxygen is very reactive.

Another thing to remember about oxygen is that it has some reaction mechanisms that are less than obvious and easier than expected for a diatomic molecule. Unlike nitrogen, for example, where thermodynamically favourable reactions are inhibited by its strong triple bond (and filled octet), an oxygen molecule is actually a diradical: despite the apparent filled octets you would expect from simple electron counting, the molecule has two unpaired electrons (this requires a bit of molecular orbital theory to explain and is one of the observations that simple electron octet counting doesn't explain well). Radicals tend to be more reactive than paired electron orbitals (or filled octets).

In short, an octet-counting view is too simplistic to explain oxygen's reactivity, and you need more sophisticated ways of looking at the electronic structure. And don't forget that you also need to know the thermodynamics and the kinetics of potential reaction mechanisms to get any useful predictions of whether reactivity is likely.

Oxygen (O2) generally exists as a diradical, i.e. the two oxygen atoms are bonded to each other through a single bond, with the two remaining electrons sitting one on each oxygen atom as radicals. This structural feature makes oxygen act as a strong oxidizing agent. Even in our body, oxygen coordinates through its radicals, and this structural arrangement allows oxygen to act as a strong oxidant in many places.

• Saying that O2 is a diradical rationalises the fact that it is reactive, but that doesn't make the statement a true fact!! – Karl Apr 12 '18 at 18:04
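The molecular-orbital argument made in the comments and answers above can be made concrete with a small counting script. This is only an illustrative sketch of textbook MO filling (the valence orbital ordering for O2 and the Aufbau/Hund filling rules are assumed here, not computed from first principles); it reproduces the two unpaired pi* electrons and the bond order of 2 that make O2 a diradical:

```python
# Minimal Aufbau/Hund filling of the valence MO diagram of O2.
# Assumed orbital ordering for O2 (sigma_2p below pi_2p); each entry is
# (label, degeneracy, is_bonding).
ORBITALS = [
    ("sigma_2s",  1, True),
    ("sigma*_2s", 1, False),
    ("sigma_2p",  1, True),
    ("pi_2p",     2, True),
    ("pi*_2p",    2, False),
    ("sigma*_2p", 1, False),
]

def fill(n_electrons):
    """Fill orbitals bottom-up, singly occupying degenerate orbitals first (Hund's rule)."""
    occupation = {}
    for label, degeneracy, _ in ORBITALS:
        slots = [0] * degeneracy
        for _ in range(2 * degeneracy):
            if n_electrons == 0:
                break
            i = min(range(degeneracy), key=lambda k: slots[k])
            slots[i] += 1
            n_electrons -= 1
        occupation[label] = slots
    return occupation

occ = fill(12)  # 2 x 6 valence electrons for O2
unpaired = sum(1 for slots in occ.values() for e in slots if e == 1)
bonding = sum(sum(occ[label]) for label, _, is_bonding in ORBITALS if is_bonding)
antibonding = sum(sum(occ[label]) for label, _, is_bonding in ORBITALS if not is_bonding)

print(occ)                                          # ..., 'pi*_2p': [1, 1], ...
print("unpaired electrons:", unpaired)              # 2 -> the diradical
print("bond order:", (bonding - antibonding) / 2)   # 2.0
```

For a genuine electronic-structure check one would run, for example, an unrestricted Hartree-Fock or DFT calculation on triplet O2, but the counting above already captures why plain octet bookkeeping misses the two unpaired electrons.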
Book review, 7 June 2003

It's a kind of magic

Quantum: A guide for the perplexed by Jim al-Khalili, Weidenfeld & Nicolson, £18.99, ISBN 0297843052

Reviewed by Marcus Chown

"THERE is something fascinating about science," Mark Twain famously wrote. "One gets such wholesale returns of conjecture out of such a trifling investment of fact." Twain's words sprang to mind as I was leafing through Jim Al-Khalili's Quantum: A guide for the perplexed. For, never in the history of science can so trifling an investment of fact have spawned so fabulous a wealth of extraordinary consequences as in the case of quantum theory.

The central fact is remarkably easy to state: in the microscopic world of atoms and their constituents, particles can behave like waves and waves like particles. There are a few quantitative details, of course. The waves in question are peculiar, abstract things. They have an amplitude represented by a complex number, they propagate in space and time according to the Schrödinger equation, and the squared magnitude of the amplitude at any place is the probability of finding a particle there if anyone decides to look.

Among the consequences of this mere handful of facts is that atoms can be in many places at once, penetrate "impenetrable" barriers, "know" about each other instantly even when on different sides of the universe, and do things with total disregard for cause and effect – arguably the most shocking and unsettling of all the consequences of their "wave-particle" nature. Throw a few more facts into the pot, such as the existence of spin and the utter impossibility of telling apart two electrons, two photons and so on, and you get lasers, superconductors, liquids that can ...
I am reading up on the Schrödinger equation and I quote:

Possible duplicate: physics.stackexchange.com/q/44003/2451 – Qmechanic Dec 11 '13 at 13:48

2 Answers

Sorry, I found David Z's answer a bit confused just when discussing the crucial point:

"Since the two functions ψ(x) and ψ(−x) satisfy the same equation, you should get the same solutions for them, except for an overall multiplicative constant; in other words, $\psi(x) = a\,\psi(-x)$. Normalizing ψ requires that |a|=1, which leaves two possibilities: a=+1 (even parity) and a=−1 (odd parity)."

The first part "Since the two functions... multiplicative constant" is generally false without an important further requirement that is not guaranteed here. It is indeed true under the hypothesis that the eigenspace of the Hamiltonian operator with eigenvalue $E$ we are considering is one-dimensional. However, this is not the case in general. Finally, the remaining part of the statement above ("Normalizing ... parity") is incorrect anyway as it stands: normalization just requires $|a|=1$.

Let me propose an alternative answer. First of all, one introduces the parity transformation, $P: {\cal H} \to {\cal H}$, where ${\cal H} = L^2(\mathbb{R})$, defined as follows, without referring to any Hamiltonian operator:
$$(P\psi)(x):= \eta_\psi \psi(-x)\:.$$
Above, $\eta_\psi$ is a complex number with $|\eta_\psi|=1$. It is necessary to leave this possibility open because, as is well known in QM, states are wavefunctions up to a phase, so that $\phi$ and $e^{i\alpha} \phi$ are indistinguishable as states and, physically, we can only handle states.

As the map $P$ is (1) bijective and (2) preserves the probabilities of transition between states, it is a so-called quantum symmetry. A celebrated theorem by Wigner guarantees that every quantum symmetry can be represented by either a unitary or an antiunitary operator (depending on the nature of the symmetry itself). In the present case, all this means that it must be possible to fix the map $\psi \mapsto \eta_\psi$ so that $P$ becomes linear (or anti-linear) and unitary (or anti-unitary). As a matter of fact, $P$ becomes unitary if $\eta$ is assumed to be independent of $\psi$. So we end up with the unitary parity operator:
$$(P\psi)(x):= \eta \psi(-x), \quad \psi \in L^2(\mathbb{R}),$$
where $\eta \in \mathbb{C}$ with $|\eta|=1$ is any fixed number.

We can make our choice of $\eta$ more precise by requiring that $P$ is also an observable, that is, $P=P^\dagger$. It is immediate to verify that this happens only for $\eta = \pm 1$. It is a matter of convenience to fix the sign. We henceforth assume $\eta=1$ (nothing in what follows would change with the other choice). We have our parity observable/symmetry given by:
$$(P\psi)(x):= \psi(-x), \quad \psi \in L^2(\mathbb{R}).$$

What is the spectrum of $P$? As $P$ is unitary, the elements $\lambda$ of the spectrum must verify $|\lambda|=1$. As $P$ is self-adjoint, the spectrum has to belong to the real line. We conclude that the spectrum of $P$ contains $\{-1,1\}$ at most. Since these are discrete points, they must be proper eigenvalues with associated proper eigenvectors (I mean: things like Dirac's delta are excluded). It is impossible that the spectrum contains $1$ only or $-1$ only, otherwise we would have $P=I$ or $P=-I$ respectively, which is evidently false. We have found that $P$ has exactly two eigenvalues, $-1$ and $1$.
At this point we can define a state, represented by $\psi$, to have even parity if $P\psi = \psi$ or odd parity if $P\psi = -\psi$.

Let us come to the problem with our Hamiltonian. If $V(x) = V(-x)$, by direct inspection one immediately sees that:
$$[P, H] = 0\:.$$
Assuming that the spectrum of $H$ is a pure point spectrum (otherwise we can restrict ourselves to the Hilbert space associated with the point spectrum of $H$, disregarding that associated with the continuous one), a known theorem assures that there is a Hilbert basis of simultaneous eigenvectors of $H$ and $P$. If $\psi_E$ is such a common eigenvector (associated with the eigenvalue $E$ of $H$), it must verify either $P\psi_E = \psi_E$ or $P\psi_E = -\psi_E$, namely:
$$\psi_E(-x) = \psi_E(x) \quad\text{or, respectively,}\quad \psi_E(-x) = -\psi_E(x).$$
To conclude, I stress that it is generally false that an eigenvector of $H$ has definite parity. If the eigenspace of the given eigenvalue has dimension $\geq 2$, it is easy to construct counterexamples. It is necessarily true, however, if the considered eigenspace of $H$ has dimension $1$.
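The argument is easy to check numerically. The following is a minimal sketch (my own illustration, not part of the original answer): it discretizes a Hamiltonian with an even potential on a symmetric grid, verifies that it commutes with the parity matrix, and shows that eigenvectors belonging to non-degenerate eigenvalues come out with definite parity. The grid size, the harmonic potential and all names are arbitrary choices made only for the example.

```python
# Minimal numerical sketch: H = -1/2 d^2/dx^2 + V(x) with an even potential,
# checked against the parity operator (P psi)(x) = psi(-x) on a symmetric grid.
import numpy as np

n, L = 201, 10.0                     # odd number of points keeps the grid symmetric about x = 0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

V = 0.5 * x**2                       # even potential: V(x) = V(-x)
main = 1.0 / dx**2 + V               # finite-difference kinetic term plus potential on the diagonal
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

P = np.fliplr(np.eye(n))             # parity matrix: reverses the grid, (P psi)_i = psi_{n-1-i}

print("||[P, H]|| =", np.linalg.norm(P @ H - H @ P))   # ~ 0 because the potential is even

E, psi = np.linalg.eigh(H)
parity = np.sum(psi * (P @ psi), axis=0)               # <psi_k | P | psi_k> for each eigenvector
for k in range(5):
    print(f"E_{k} = {E[k]:.4f},  <P> = {parity[k]:+.3f}")   # alternates +1, -1, +1, ... here
```

Because the 1D spectrum in this example is non-degenerate, every eigenvector has definite parity, exactly as stated above; for a degenerate eigenvalue a generic basis of the eigenspace would not.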
Durham e-Theses

Modern approaches to the exchange-correlation problem

Peach, Michael Joseph George (2009) Modern approaches to the exchange-correlation problem. Doctoral thesis, Durham University.

Kohn-Sham density functional theory (DFT) is the most prevalent electronic structure method in chemistry. Whilst formally exact, in practice it affords reasonable accuracy with reasonable computational cost and is the method of choice when considering molecules of non-trivial size. The key quantity is the exchange-correlation energy functional, the exact form of which is unknown. Approximate exchange-correlation functionals, particularly B3LYP and PBE, are routinely applied to chemical problems. However, it is not possible to guarantee a given accuracy in advance, nor is there a systematic means of obtaining a more accurate answer. Existing functionals are applied to ever more challenging problems and the accuracy required of them is continually increasing; the need for more accurate functionals is one of the major challenges in electronic structure theory. This thesis focuses on several approaches that attempt to address this issue.

In chapter 1 the electronic structure problem is outlined and discussed in terms of the Schrödinger equation and solutions involving wavefunctions. In chapter 2, the formal foundations of DFT are presented and methods of approximating the exchange-correlation functional are introduced. A promising new direction for developing exchange-correlation functionals, through attenuation of the exchange term, is introduced and discussed in detail in chapter 3. The accuracy of such functionals is investigated and compared to that obtained from conventional approaches, with a particular emphasis on the dependence on the attenuation parameters. It is then demonstrated that attenuated functionals offer the prospect of significantly improved descriptions of excitation energies, particularly for those of charge-transfer character.

Application of attenuated functionals to excitation energies that are problematic for conventional functionals is undertaken in chapter 4. Insight into the conflicting performance of conventional methods for different charge-transfer excitations is provided through a consideration of the overlap between the orbitals involved in an excitation. Through this overlap quantity, a diagnostic test is proposed that enables a user to judge in advance the reliability of excitation energies from conventional functionals.

Attenuated functionals are then applied to other difficult properties in chapter 5. Firstly they are used to study the bond length alternation and band gap in polyacetylene and polyyne oligomers and infinite chains. Then they are used to calculate nuclear magnetic resonance parameters in both main-group and first-row transition metal systems, through the theoretically rigorous optimised effective potential method.

An entirely different approach to functional development is considered in chapter 6, where the adiabatic connection formalism is introduced as an alternative method of obtaining the exchange-correlation functional. For a series of two-electron systems, exact input data is used to determine the applicability of a number of simple mathematical forms in modelling the exact adiabatic connection. The conclusions from these simple systems are then used to provide insight into the possibility of using this approach in functional development.
Item Type: Thesis (Doctoral)
Award: Doctor of Philosophy
Thesis Date: 2009
Copyright: Copyright of this thesis is held by the author
Deposited On: 08 Sep 2011 18:24
Nanoelectronic Modeling Lecture 11: Open 1D Systems - The Transfer Matrix Method

By Gerhard Klimeck (1), Dragica Vasileska (2), Samarth Agarwal (3), Parijat Sengupta (3)
1. Purdue University
2. Electrical and Computer Engineering, Arizona State University, Tempe, AZ
3. Electrical and Computer Engineering, Purdue University, West Lafayette, IN

The 1D time-independent Schrödinger equation can be easily solved analytically in segments of constant potential energy through the matching of the wavefunction and its derivative at every interface at which there is a potential change. The previous lectures showed the process for a single step potential change and a single potential barrier, which consists of two interfaces. The process can be generalized to an arbitrary number of interfaces with the transfer matrix approach, which enables the cascading of interfaces through simple matrix multiplication. The transfer matrix approach is analytically exact, and "arbitrary" heterostructures can apparently be handled through the discretization of potential changes. The approach appears to be quite appealing. However, it is inherently unstable for realistically extended devices which exhibit electrostatic band bending or include a large number of basis sets.

Cite this work

Researchers should cite this work as follows:
• Gerhard Klimeck; Dragica Vasileska; Samarth Agarwal; Parijat Sengupta (2009), "Nanoelectronic Modeling Lecture 11: Open 1D Systems - The Transfer Matrix Method,"
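As a concrete illustration of the cascading just described, here is a minimal sketch, not taken from the lecture itself, that builds a 2x2 matching matrix at every interface of a piecewise-constant potential, multiplies the matrices together, and extracts the transmission probability for a single rectangular barrier. Units are chosen so that hbar = m = 1, and the barrier parameters are made up for the example.

```python
# Minimal transfer-matrix sketch for a piecewise-constant potential (hbar = m = 1).
# Each interface contributes a 2x2 matrix relating plane-wave coefficients (A, B)
# on its two sides; the matrices are cascaded by plain matrix multiplication.
import numpy as np

def k_of(E, V):
    """Wavevector in a region of constant potential V (imaginary if E < V)."""
    return np.sqrt(2.0 * (E - V) + 0j)

def M(k, x):
    """Matching matrix at position x for psi = A e^{ikx} + B e^{-ikx}: rows are (psi, psi')."""
    return np.array([[np.exp(1j * k * x),             np.exp(-1j * k * x)],
                     [1j * k * np.exp(1j * k * x), -1j * k * np.exp(-1j * k * x)]])

def transmission(E, V, x_if):
    """V: potentials of the N+1 regions; x_if: positions of the N interfaces."""
    k = [k_of(E, v) for v in V]
    T = np.eye(2, dtype=complex)
    for j, xj in enumerate(x_if):
        # coefficients to the right of interface j in terms of those to the left
        T = np.linalg.solve(M(k[j + 1], xj), M(k[j], xj)) @ T
    B0 = -T[1, 0] / T[1, 1]              # no incoming wave from the right: B_N = 0
    AN = T[0, 0] + T[0, 1] * B0          # incident amplitude A_0 = 1
    return abs(AN) ** 2 * (k[-1] / k[0]).real

# Single rectangular barrier of height 0.3 and width 2 between flat leads.
V = [0.0, 0.3, 0.0]
x_if = [0.0, 2.0]
for E in (0.1, 0.2, 0.4, 0.8):
    print(f"E = {E:.2f}  T = {transmission(E, V, x_if):.4f}")
```

For thick barriers or long devices, the growing exponential inside classically forbidden regions enters exactly this cascading and overwhelms the decaying one, which is the numerical instability the abstract mentions.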
Truncated CI is not Size Extensive

As we have previously pointed out, full CI --being the matrix formulation of the Schrödinger equation--is an exact theory for nonrelativistic electronic structure problems. If we truncate the CI (either in the one-electron or N-electron space), we no longer have an exact theory. Of course either of these truncations will introduce an error in the wavefunction, which will cause errors in the energy and all other properties. One particularly unwelcome result of truncating the N-electron basis is that the CI energies obtained are no longer size extensive or size consistent.

These two terms, size extensive and size consistent, are used somewhat loosely in the literature. Of the two, size extensivity is the most well-defined. A method is said to be size extensive if the energy calculated thereby scales linearly with the number of particles N. The word "extensive" is used in the same sense as in thermodynamics, when we refer to an extensive, rather than an intensive, property. A method is called size consistent if it gives an energy $E_A + E_B$ for two well separated subsystems A and B. While the definition of size extensivity applies at any geometry, the concept of size consistency applies only in the limiting case of infinite separation. In addition, size consistency usually also implies correct dissociation into fragments; this is the source of much of the confusion arising from this term. Thus restricted Hartree-Fock (RHF) is size extensive, but it is not necessarily size consistent, since it cannot properly describe dissociation into open-shell fragments. It can be shown that many-body perturbation theory (MBPT) and coupled-cluster (CC) methods are size extensive, but they will be size consistent only if they are based on a reference wavefunction which dissociates properly.

As previously stated, truncated CI's are neither size extensive nor size consistent. A simple (and often used!) example is sufficient to make the point. Consider two noninteracting hydrogen molecules. If the CISD method is used, then the energy of the two molecules at large separation will not be the same as the sum of their energies when calculated separately. In order for this to be the case, we would have to include quadruple excitations in the supermolecule calculation, since local double excitations could happen simultaneously on A and B.

We would tend to think that size extensivity and size consistency are important, physical properties that all quantum mechanical models should have (indeed, full CI, an exact theory, has these properties), but perhaps they are not as essential as all that. Duch and Diercksen have claimed that "making size extensivity the most important requirement of quantum chemical methods, although it does not guarantee correct physical description, seems to be based not that much on physical as on esthetical criteria" [24]. Indeed, they show that quantum mechanics is a "holistic" theory, not well-suited toward the description of separated subsystems: Hilbert space of antisymmetric, many particle functions, describing the total system, can not be decomposed into separate subspaces. Consider two systems, $S_A$ and $S_B$, with $N_A$ and $N_B$ electrons, respectively. Each system is described by its own function, $\Psi_A$ antisymmetric in $N_A$ particles and $\Psi_B$ in $N_B$.
Assuming that both functions are normalized to unity, it is easy to show that the product function $\Psi_{AB} = \Psi_A \Psi_B$ is always "far" from the antisymmetric function $\Psi = {\cal A} \Psi_{AB}$, as measured by the overlap $\langle \Psi_{AB} \vert \Psi \rangle$ or the norm of the difference, $2 - \sqrt{2} \leq \vert\vert \Psi_{AB} - \Psi \vert\vert^2 \leq 2$.

Such arguments notwithstanding, it is clear that the fraction of the correlation energy recovered by a truncated CI will diminish as the size of the system increases, making it a progressively less accurate method. There have been many attempts to correct the CI energy to make it size extensive. The most widely-used (and simplest) of these methods is referred to as the Davidson correction [25], which is

$$\Delta E_{DC} = E_{SD}(1 - c_0^2) \qquad (4.26)$$

where $E_{SD}$ is the basis set correlation energy recovered by a CISD procedure. This correction approximately accounts for the effects of "unlinked quadruple" excitations (i.e. simultaneous pairs of double excitations). The multireference version [24] of this correction is

$$\Delta E_{DC} = \left( 1 - \sum_{i \in {\rm Ref}} \vert c_i\vert^2 \right) (E_{MRCI} - E_{MR}) \qquad (4.27)$$

where $E_{MRCI}$ is the multireference CI energy and $E_{MR}$ is the energy obtained from the set of references (the MCSCF energy if the references are obtained as all references in an MCSCF procedure). We have simply replaced the CISD correlation energy in equation (4.26) with the analogous multireference correlation energy, and we have replaced $c_0^2$ with the analogous sum of squares of all the reference coefficients. There are a number of other size extensivity corrections, and most of them do not take any significant amount of computation. Reference [24] provides a nice comparison of several of the more common CI size extensivity correction methods. We should also mention that Malrieu and co-workers have presented a self-consistent dressing of the Hamiltonian which gives size extensive results for selected CI procedures [26].

C. David Sherrill
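A minimal sketch of how Eqs. (4.26) and (4.27) are applied in practice follows; the function names and all numerical values are invented purely for illustration and are not from the notes above.

```python
# Davidson-type size-extensivity corrections, Eqs. (4.26) and (4.27).
def davidson_correction(e_corr_cisd, c0):
    """Delta E_DC = E_SD * (1 - c0^2), with E_SD the CISD correlation energy."""
    return e_corr_cisd * (1.0 - c0 ** 2)

def mr_davidson_correction(e_mrci, e_mr, ref_coeffs):
    """Delta E_DC = (1 - sum_i |c_i|^2) * (E_MRCI - E_MR), summed over reference coefficients."""
    weight = sum(abs(c) ** 2 for c in ref_coeffs)
    return (1.0 - weight) * (e_mrci - e_mr)

# Hypothetical numbers: a CISD correlation energy of -0.21 hartree with c0 = 0.97
# adds roughly (1 - 0.97^2) ~ 6% of that as an estimate of the missing quadruples.
print(davidson_correction(-0.21, 0.97))
print(mr_davidson_correction(-100.25, -100.02, [0.95, 0.20]))
```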
Dismiss Notice Join Physics Forums Today! Consciousness, Determinism and the Many Worlds view 1. Oct 8, 2005 #1 There are a few threads about determinism here and a few about interpretations of quantum mechanics, so I thought I'd start one that combines them. Determinism is nicely defined by the Stanford Encyclopedia of Philosophy as: Ref: http://plato.stanford.edu/entries/determinism-causal/" [Broken] I think one of the more contentious phrases used in this definition is the term, "natural law" which is explained this way: What exactly does "laws of nature" entail? I think its important to first recognize we're not talking about "supernatural laws" when referring to determinism. The point is, are any laws of nature not deterministic? From reading this article, and many others like it, quantum mechanics is always mentioned as a non-deterministic law. Here, the Stanford Encyclopedia mentions it: In conclusion, the article does not resolve whether nature is deterministic or not. We can say quantum mechanics is 'deterministic' but that doesn't address a number of issues, namely the random nature of radioactive decay nor why one possible evolution of the Schrödinger equation is observed and another is not. This is also discussed in the article. If nature has a 'random' element to it or not however, is less of a concern regarding consciousness for two reasons. There are two more immediate considerations. 1) At the macroscopic scale, such things as quantum interactions seem to cancel out. Even if they are statistical or random, does that have anything to do with consciousness? Generally this seems to be dismissed as a red herring. Who cares if things are truly random or not at the microscopic scale of an atom? Our brains, the contention is, are governed not by quantum interactions, but by gross interactions between millions or billions of molecules at any given 'switch' junction. So regardless of whether there is any truly random mechanism in the universe, by the time you add up all the large numbers of interactions, the mind is deterministic in the sense that it is not governed by individual molecular interactions, but by enormous numbers of interactions which may be very (exceedingly) slightly chaotic, but far from random. 2) Even if quantum mechanics provides for a random mechanism, and even if our brain utilizes this mechanism to function, a switch of some type that produces a random outcome is not special enough in any way to explain why consciousness should emerge. One can insert a pseudo-random switch into a computer, so computationalism is not discouraged by an indeterminate or random mechanism. If our brains operate on a macroscopic scale where quantum interactions cancel out, and even if it doesn't, even if there are random mechanisms that our brains rely on to function, the idea that the universe is essentially deterministic remains a valid concept with respect to consciousness. Given this perspective on the subject, and only this perspective on the subject, it seems it doesn't matter if the world is deterministic or not. Now I still have to add though, that there is one concept regarding QM that still seems to sit on the sidelines waiting to be recognized (though I certainly don't claim to be the only one that recognizes it). Let's assume for one minute that the many worlds theory of quantum mechanics is true. 
Let's say that for every possible interaction at the molecular level, all possible interactions actually occur and the world splits off into multiple worlds on another dimension. The first point to make is that this theory is completely deterministic in the sense that all possible futures exist as a function of a past event. This is an important point. In the many worlds theory, we have all possible worlds existing at the same time, and the sum of all these worlds is deterministic, not random at all. The second point then, and the biggest question of all is that if all these worlds actually exist, why are we only conscious of a single world? Is there a mechanism or law of nature that results in our being aware of only a single world? Last edited by a moderator: May 2, 2017 2. jcsd 3. Oct 9, 2005 #2 "Another exponential process like this could involve sodium-dependent action potentials. It is possible that the entry of a single sodium ion could depolarize the membrane enough to admit more sodium ions, causing more depolarization etc. in a runaway process, producing the action potential as a collapsed macroscopic event." http://www.neuroscience.com/manuscripts-1996/1996-011-miller/1996-011-miller.html [Broken] Last edited by a moderator: May 2, 2017 4. Oct 9, 2005 #3 But we ARE conscious of all worlds (at least conscious of all worlds where we exist). I am typing my reply to this thread right now, but if MW is correct then there is another world where I am not typing any reply to this thread. In each of these worlds, and in all other worlds where I exist, I am conscious of the existence of that world. The parallel worlds "communicate" however only at the level at which quantum states remain coherent - lose the coherency and the communication bewteen the worlds is lost. There is no reason to expect our consciousness (a very non-coherent phenomenon) to be able to surmount the barriers between these parallel worlds, any more than we expect any non-coherent phenomenon to surmount the barrier between the worlds. 5. Oct 9, 2005 #4 Tournesol, thanks for the link. I read it through and found it fascinating. Do you think quantum mechanical mechanisms in the brain are generally disregarded or do you think the debate between classical versus quantum mechanical mechanisms in the brain has yet to be decided? That is, do philosophers and scientists still debate the possibility of QM being utilized by the brain or is that generally poo-poo'ed? I read an article about the Japanese neutrino detector called Super Kamiokande and heard the photomultiplier tubes were able to detect a single photon emitted when a neutrino interacted with the water. Regarding your quote, if the membrane is able to react to a single sodium ion because of a "runaway process" involving other sodium ions, it seems that membrane is doing something very similar to the photomultiplier tubes in the neutrino detector. I've not heard of this possible QM mechanism in the brain, so thanks for pointing it out. 6. Oct 9, 2005 #5 Movinging finger, thanks for that. Yes, you're absolutely right. Each separate world has an independent and conscious observer unable to interact with any other world. I see I didn't explain the punchline very well unfortunately, because that wasn't what I was getting at. I'll try to elaborate as I see now I didn't get the idea out very well. Let's forget about the many worlds view for a moment and assume there is only one world. 
When we talk about a deterministic universe we also must acknowledge there could be QM mechanisms that result in a genuinely random event. Our brains may or may not make use of such a mechanism, and even if they do, could we not simply put similar pseudo-random number generators into a computer in order to perform a similar task? They may not be identical because computers have no way of creating real random numbers, but we could come close and thus with a random number, a computer should be able to perform everything a brain can, assuming a brain is strictly a fancy, biological computer. Random quantum events or random numbers in a computer would have the same affect of simply making the future slightly less predictable. Again, this assuming there is only one world. Would you agree with that? Now I want to elaborate on a possible difference between simply having a less than deterministic future because of some random noise and the possiblility of real quantum mechanical interactions in the brain. Let's use the Schrödinger's cat experiment. We'll call the box the cat is in Schrödinger's box. The box as you know has the ability to isolate a macroscopic chunk of matter from the universe. What if we put a computer inside this box with its own power supply. A computer is completely deterministic in the classical sense of the word, so I believe one must conclude that if we allow it to operate while inside the box, we can leave it for days, months, even years and it will be in a state which is completely determinate. Also, it won't have the ability to 'branch off' and create multiple universes, or at least if we say it does, if there are branches for quantum mechanical interactions between air molecules bouncing around the box, all those universes will be identical from the perspective of the computer. The air may be in a different state, but the computer won't be. Even if we put in a random number generator, there is no difference because even random number generators in computers are not really random, they are only pseudo-random. The point is I believe the computer will always end up in the same state regardless of how long it is allowed to run, within reason and as long as no quantum mechanical interactions occur that create macroscopic changes to the computer. Computers are immune to quantum mechanical interactions. From the perspective of the computer, there is only 1 world, not many. Now put a person in the box and perform the same thought experiment. If a brain is computational, if the only interactions in the brain depend on the interactions of trillions of ions and there are no quantum mechanical interactions that affect the macroscopic function of the brain, then the same result should occur. The person should end up in a deterministic state. But if there are interactions in the brain which depend on QM, then we should see the entire person go into a state of superposition very quickly. From the perspective of the person in the box though, there is only one universe, the one he is in. I guess what I'm getting at is, from the perspective of a single universe (ie: not the many worlds view) it doesn't seem as if there's any benefit to having a random mechanism in the brain. Sure the timeline may not be completely deterministic, but I can't see any significant benefit to a random mechanism in the brain nor any significant difference between a computer and a brain. What benefit would a random quantum mechanical event have that a pseudo-random computation wouldn't? 
However, if we consider the possibility that there are many worlds, then it seems there is a significant difference. The observer seems to go into superposition yet be only aware of a single universe, whereas a computer has no ability to do this without a QM mechanism. I realize I'm grasping at something that isn't as tangible as I'd like it to be here. Intuitively it seems to me the many worlds view is unique and says something about the observer and consciousness that other interpretations miss. I wonder if any of this makes sense others. 7. Oct 10, 2005 #6 Hi TE Could be, but on the other hand QM may be 100% deterministic but still unpredictable (eg hidden variables). I agree with all that, except I do not believe that quantum randomness or any other kind of randomness is a significant factor in information processing agents, human or otherwise. To my mind the whole world could be 100% deterministic at all levels. OK, I follow this I think, but I have to say that I believe in the decoherence explanation of wavefunction collapse – thus Schroedinger’s cat (being a decoherent system) is never in a superimposed “dead and alive” state. (ps the reason why computers are “immune to QM interactions” is the same, because they are macroscopic decoherent systems). I understand what you are saying, but I disagree with the initial assumption that a decoherent system (cat, human brain or computer) can remain in a coherent superposition of quantum states. I agree that randomness does not bestow any special powers on the brain, but I guess for different reasons. I believe the world operates deterministically (ie the world is ontically deterministic), and any apparent random behaviour is purely because of our subjective perspective (ie the world is epistemically indeterminable). If we accept the MW assumption then I speculate it may be the case that information transfer between worlds is possible only in coherent systems (most of which will be microscopic quantum objects), and decoherence (introduced into most systems including the human brain when we go to macroscopic scales) destroys the ability to transfer information between the worlds – this would explain why each conscious agent is aware of only one world. Last edited: Oct 10, 2005 8. Oct 11, 2005 #7 The age-old-dispute between determninism and free-will is centered on the idea of elbow-room or alternative possibilities -- the idea that there is more than one thing you can possible do under a given set of circumstances. If determinism is true, this is automatically impossible, so what QM indeterminacy might supply is confirmation of the traditional concept of FW. Last edited: Oct 11, 2005 9. Oct 11, 2005 #8 I'd be interested in your reasoning. I suspect there's been quite a bit written on this topic. If you or anyone has a reference that would be best. Granted, having a truly random mechanism such as radioactive decay provides an indeterminate future, but I don't think one can say it provides for free will. It only provides for a future which is unknown. Free will and consciousness seem IMHO to require something more than determinate or random mechanisms. As a small proof, if for example you ran a computer program which was allegedly conscious and had free will because it incorporated a truly random set of switches, then one could: 1. Record all switch positions throughout the computer 2. Replace all random switches with deterministic switches that mimicked the recording 3. Rerun the program using the deterministic switches. 
The end result would be a computer whose switches duplicated exactly the original 'random computer'. If the original random computer thought it had free will, then this deterministic computer, which duplicated exactly everything that random computer did, must also believe it has free will. The conclusion then is that simply having a random mechanism is of no real consequence regarding free will. It may actually produce a random future, but it is no better than a deterministic one at producing free will. Perhaps I'm getting off track by suggesting there's a difference when considering the MW theory, in fact I'm sure now I've gotten off track. It may highlight slightly better the difference between pseudo-random mechanisms such as are found in computers and truly random mechanisms, but that is a secondary point IMO. I feel there must be something other than deterministic or random mechanisms at play. These are "fundamental" laws. They are causal mechanisms in the sense that they act locally and directly on neighboring parts of the universe. They do not create any kind of integrating or large scale effects, they only produce local effects on neighboring bits of the universe. Strangely, there is some speculation, especially within condensed matter physics, that there is something more to it. Robert Laughlin is a Nobel laureate and I believe his opinion is essentially that we're missing something, so to speak. I believe he's suggesting there is some organizational mechanism that operates above and beyond our reductionist views of determinism. Here's what he says: Ref: http://www.physics.lsa.umich.edu/nea/special/ford04.asp [Broken] Note: Laughlin is not commenting on consciousness here, he is referring only to experiments in condensed matter physics that support his claim. Exactly what those "principles of organization that ... are transcendent" are is unknown, but I have to believe they are responsible for consciousness, because I feel the reductionist view which says essentially that "A and B interact this way and that's all that happens" is insufficient to support phenomena known to occur such as consciousness, and this reductionist view will shortly be proved insufficient. Last edited by a moderator: May 2, 2017

10. Oct 11, 2005 #9
Can you please explain how indeterminacy endows free will? If I toss a coin, and then base my decision purely on the outcome of the coin toss (the outcome of the coin toss is to all intents and purposes an indeterministic outcome), does that mean my decision is a free will decision?

11. Oct 11, 2005 #10
My reasoning is basically as follows: A random outcome can effectively be generated by tossing a coin. Does anyone seriously believe that the equivalent of "coin tossing" goes on in the brain to generate "free will decisions"? If so, can they explain how it is that tossing a coin can endow free will upon the decision-making process? If anyone believes that "random events in the brain" bestow any special powers on the brain then I humbly suggest the onus is on that person to substantiate such a claim, rather than for me to refute it. I agree completely. Again (in the case of free will, whatever that might be) I agree completely. If you are suggesting simply that consciousness is an emergent phenomenon then yes, I agree. Can you please define clearly what you mean by "free will" in the above context? Last edited: Oct 11, 2005

12. Oct 12, 2005 #11
I have not claimed that indeterminism is a *sufficient* criterion for FW, only a necessary one.
Except that it actually doesn't. Why not? A necessary condition for FW is Alternative Possibilities, and indeterminism provides that. (Considerably) more details here:-

13. Oct 12, 2005 #12
No (although it does allow for one ingredient of FW, ie Alternative Possibilities). If you base your decisions on an external deterministic mechanism, such as following a script, that isn't FW either. To say that FW is based on an indeterministic mechanism does not mean any mechanism will do. Last edited: Oct 12, 2005

14. Oct 12, 2005 #13
Thank you. In order to understand the logic here, I need to ask some questions, I hope that is OK. In the linked article, free will is defined as "the power or ability to perform actions, at least some of which are not brought about necessarily and inevitably by external circumstances". May I ask: "external" is being measured relative to..... what? (ie where is the boundary, what is it that is internal, what is it that is external?)

15. Oct 12, 2005 #14
Hi Tournesol. I have read through this article several times, and I simply cannot find anywhere in the article any suggested mechanism whereby indeterminism endows free will to homo sapiens. The article talks of the "Darwinian model"; is this supposed to be the model which explains how free will arises from indeterminism in the brain?

16. Oct 12, 2005 #15
The main problem with most "indeterminacy" models of free will decision-making is that introducing indeterminacy arbitrarily into the decision-making process leads not to free will, but to capricious (irrational) behaviour. This model may seem at first sight to provide a means of generating free will. There is a random element (the random idea generator) combined with a deterministic element (the sensible idea selector). The model is thus claimed to be both indeterministic and yet not capricious (it makes rational choices). But does it endow free will? Can we say whether this model actually provides free will? I cannot answer that question without making an assumption on the definition of free will, and that is a notoriously difficult thing to do. (I do not believe the definition of free will given in the link is self-consistent). However, what I CAN do is to show how this model applies to "machine" free will. Imagine that we have a deterministic computer-based decision-making machine. We now add to this machine a "random idea generator" which generates random ideas based on a truly indeterministic process (possibly powered by some quantum-based device). The RIG generates alternate ideas for action, inputs these ideas to the computer, and the computer then decides which of these ideas to turn into action. The computer is performing the role of the SIS. IF it is true that the Darwinian model endows free will, THEN it also follows that the machine we have just created also has free will. Would you agree? If not, why not? Last edited: Oct 12, 2005

17. Oct 12, 2005 #16
Free will is a feature of consciousness which allows a conscious individual the ability to decide between various courses of action. If consciousness doesn't exist, free will can't exist regardless of determinate or indeterminate mechanisms. The question then is, "Can determinate and indeterminate mechanisms alone provide for consciousness?" A reductionist would say that's all we have available, so the argument centers on which ones are required for free will. If one says the reductionist POV is incomplete, then something else is required for consciousness and without it, we have no free will.
In this case, determinate and indeterminate mechanisms are insufficient to provide for free will because those mechanisms alone cannot provide for consciousness.

18. Oct 12, 2005 #17
I am open to the possibility of artificial intelligence, so I am open to the possibility of artificial FW. (Note that when we want computers to be "creative", we do indeed use real or pseudo-random number generators).

19. Oct 12, 2005 #18
Sorry, I'm getting that thought experiment (TE) mixed with another. You're right. This TE wasn't intended to prove an indeterminate mechanism is not needed for FW. This thought experiment shows that an indeterminate mechanism is not a prerequisite for consciousness. My mistake. From the perspective of a reductionist, consciousness can be explained by deterministic mechanisms, and thus to add "free will" some will say an indeterminate mechanism is required while others (compatibilists) may suggest no such mechanism is needed.

20. Oct 15, 2005 #19
Thank you - but with respect this is not a "definition" of free will, it simply describes one of the features of free will (unless you wish to claim that ALL THERE IS to free will is the ability to consciously decide between various courses of action?)

21. Oct 15, 2005 #20
Hi Tournesol, would you say that the machine we have just created above (by combining a deterministic computer with a random idea generator) now has "free will"?
onsdag 2 september 2015 Finite Element Quantum Mechanics 5: 1d Model in Spherical Symmetry The new Schrödinger equation I am studying in this sequence of posts takes the following form, in spherical coordinates with radial coordinate $r\ge 0$ in the case of spherical symmetry, for an atom with kernel of charge $Z$ at $r=0$ with $N\le Z$ electrons of unit charge distributed in a sequence of non-overlapping spherical shells $S_1,...,S_M$ separated by spherical surfaces of radii $0=r_0<r_1<r_2<...<r_M=\infty$, with $N_j>0$ electrons in shell $S_j$ corresponding to the interval $(r_{j-1},r_j)$ for $j=1,...,M,$ and $\sum_j N_j = N$: Find a complex-valued differentiable function $\psi (r,t)$ depending on $r≥0$ and time $t$, satisfying for $r>0$ and all $t$, • $i\dot\psi (r,t) + H(r,t)\psi (r,t) = 0$              (1) where $\dot\psi = \frac{\partial\psi}{\partial t}$ and $H(r,t)$ is the Hamiltonian defined by • $H(r,t) = -\frac{1}{2r^2}\frac{\partial}{\partial r}(r^2\frac{\partial }{\partial r})-\frac{Z}{r}+ V(r,t)$, • $V(r,t)= 2\pi\int\vert\psi (s,t)\vert^2\min(\frac{1}{r},\frac{1}{s})R(r,s,t)s^2\,ds$, • $R(r,s,t) = (N_j -1)/N_j$ for $r,s\in S_j$ and $R(r,s,t)=1$ else, • $4\pi\int_{S_j}\vert\psi (s,t)\vert^2s^2\, ds = N_j$ for $j=1,...,M$.                  (2) Here $-\frac{Z}{r}$ is the kernel-electron attractive potential and $V(r,t)$ is the electron-electron repulsive potential computed using the fact that the potential $W(s)$ of a spherical uniform surface charge distribution of radius $r$ centered at $0$ of total charge $Q$, is given by $W(s)=Q\min(\frac{1}{r},\frac{1}{s})$, with a reduction for a lack of self-repulsion within each shell given by the factor $(N_j -1)/N_j$. The $N_j$ electrons in shell $S_j$ are thus homogenised into a spherically symmetric charge distribution of total charge $N_j$. This is a free boundary problem readily computable on a laptop, with the $r_j$ representing the free boundary separating shells of spherically symmetric charge distribution of intensity $\vert\psi (r,t)\vert^2$ and a free boundary condition asking continuity and differentiability of $\psi (r,t)$.    Separating $\psi =\Psi +i\Phi$ into real part $\Psi$ and imaginary part $\Phi$, (1) can be solved by explicit time stepping with (sufficiently small) time step $k>0$ and given initial condition (e.g. as ground state): • $\Psi^{n+1}=\Psi^n-kH\Phi^n$,  • $\Phi^{n+1}=\Phi^n+kH\Psi^n$,  for $n=0,1,2,...,$ where $\Psi^n(r)=\Psi (r,nk)$ and $\Phi^n(r)=\Phi (r,nk)$, while stationary ground states can be solved by the iteration • $\Psi^{n+1}=\Psi^n-kH\Psi^n$,  • $\Phi^{n+1}=\Phi^n-kH\Phi^n$,  while maintaining (2). A remarkable fact is that this model appears to give ground state energies as minimal eigenvalues of the Hamiltonian for both ions and atoms for any $Z$ and $N$ within a percent or so, or alternatively ground state frequencies from direct solution in time dependent form. Next I will compute excited states and transitions between excited states under exterior forcing. Specifically, what I hope to demonstrate is that the model can explain the periods of the periodic table corresponding to the following sequence of numbers of electrons in shells of increasing radii: 2, (2, 8), (2, 8, 8), (2, 8, 18, 8), (2, 8, 18, 18, 8)... which to be true lacks convincing explanation in standard quantum mechanics (according to E. Serri among many others). 
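To make the ground-state iteration above concrete, here is a minimal sketch of my own, much simplified relative to the post: a single electron in the kernel potential $-Z/r$ alone, with no electron-electron shell potential $V(r,t)$, no shells and no free boundary. Using the substitution $u(r)=r\psi(r)$, the radial Hamiltonian becomes $Hu=-\tfrac{1}{2}u''-\tfrac{Z}{r}u$, and the damped update $\psi^{n+1}=\psi^n-kH\psi^n$ followed by renormalization relaxes the state towards the ground state. Grid parameters, step size and the initial guess are arbitrary choices for the example.

```python
# Gradient-flow style ground-state iteration psi <- psi - k*H*psi on a radial grid,
# for a single electron in the potential -Z/r (simplified, single-shell-free sketch).
import numpy as np

Z = 1.0
n, rmax = 1000, 25.0
r = np.linspace(rmax / n, rmax, n)        # grid avoids r = 0
dr = r[1] - r[0]

def apply_H(u):
    Hu = np.empty_like(u)
    Hu[1:-1] = -0.5 * (u[2:] - 2 * u[1:-1] + u[:-2]) / dr**2
    Hu[0] = -0.5 * (u[1] - 2 * u[0]) / dr**2        # boundary condition u(0) = 0
    Hu[-1] = -0.5 * (-2 * u[-1] + u[-2]) / dr**2    # boundary condition u(rmax) = 0
    return Hu - (Z / r) * u

u = r * np.exp(-0.7 * r)                  # rough initial guess
u /= np.sqrt(np.sum(u**2) * dr)
k = 0.4 * dr**2                           # small step keeps the explicit update stable

for step in range(30000):
    u = u - k * apply_H(u)                # psi^{n+1} = psi^n - k H psi^n
    u /= np.sqrt(np.sum(u**2) * dr)       # re-impose the normalization constraint

E = np.sum(u * apply_H(u)) * dr           # Rayleigh quotient: ~ -Z^2/2 = -0.5 hartree
print("ground-state energy estimate:", E) # up to discretization error
```

The full model of the post applies this same kind of update while also maintaining the shell normalization constraint (2) and the electron-electron potential $V(r,t)$, which is what makes it a free boundary problem.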
The basic idea is thus to represent the total wave function $\psi (r,t)$ as a sum of shell wave functions with non-overlapping supports in the different shells, requiring $\psi (r,t)$ and thus $\vert\psi (r,t)\vert^2$ to be continuous across inter-shell boundaries as a free boundary condition, corresponding to continuity of charge distribution as a classical equilibrium condition. I have also, with encouraging results, tested this model for $N\le 10$ in full 3d geometry without spherical shell homogenisation, with a wave function as a sum of electronic wave functions with non-overlapping supports separated by a free boundary determined by continuity of the wave function including the charge distribution.

We compare with the standard (Hartree-Fock-Slater) Ansatz of quantum mechanics with a multi-dimensional wave function $\psi (x_1,...,x_N,t)$ depending on $N$ independent 3d coordinates $x_1,...,x_N,$ as a linear combination of wave functions of the multiplicative form
• $\psi_1(x_1,t)\times\psi_2(x_2,t)\times ....\times\psi_N(x_N,t)$,
with each electronic wave function $\psi_j(x_j,t)$ having global support (non-zero in all of 3d space). Such multi-d wave functions with global support thus depend on $3N$ independent space coordinates and as such defy both direct physical interpretation and computability, as soon as $N>1$, say. One may argue that since such multi-d wave functions cannot be computed, it does not matter that they have no physical meaning, but the net output appears to be nil, despite the declared immense success of standard quantum mechanics based on this Ansatz.
Viewpoint: Dissipative Stopwatches

Christine Muschik, Institute of Photonic Sciences, Av. Carl Friedrich Gauss, 3, 08860 Castelldefels, Barcelona, Spain

Published March 11, 2013 | Physics 6, 29 (2013) | DOI: 10.1103/Physics.6.29

Precisely Timing Dissipative Quantum Information Processing
M. J. Kastoryano, M. M. Wolf, and J. Eisert
Published March 11, 2013 | PDF (free)

Figure 1 (APS/C. Muschik): (Top) In conventional time evolution of quantum systems, different input states |ψin〉 lead to different output states |ψout〉 if a unitary operation U is applied. (Bottom) Quantum state preparation by reservoir engineering takes a different approach. If the interaction of the system with a bath is engineered such that |ψfin〉 is the unique steady state of the dissipative process, then the system is driven into this state irrespective of the initial state initiating the evolution paths ρ1, ρ2, etc.

Figure 2: Using the protocols developed by Kastoryano et al. [1], dissipative quantum information processing can be performed by engaging Markov processes in a precisely time-ordered fashion. Imagine a relay race in which one runner hands a baton to a second runner. In a similar way, quantum states evolving according to Liouville operators L1 and L2 can trigger each other. In this example runner L1 starts at t0 and ends at t1, where runner L2 takes over until t2 (shown as the finish line, but this process could be repeated over more sequences). Normally, dissipative evolutions cannot be timed in that way, but the protocols developed by Kastoryano et al. allow one to perform dissipative operations sequentially at specific points in time during well-defined time windows.

When riffle-shuffling a deck of 52 cards, seven shuffles are necessary to arrive at a distribution of playing cards that is, to a large degree, independent of the initial ordering. The fact that initial correlations survive if the deck is only shuffled a few times and disappear suddenly after seven shuffles is well known by magicians, who use this phenomenon in card tricks to amaze their audience. In a paper in Physical Review Letters [1], Michael Kastoryano of the Free University of Berlin, Germany, and colleagues show how this effect can be leveraged for quantum information processing.

Quantum information science uses phenomena such as superpositions and entanglement to devise quantum devices capable of performing tasks that cannot be achieved classically. These applications are typically based on unitary dynamics (that is, time evolutions that are governed by the Schrödinger equation). One big practical problem hindering the operation of such devices in the quantum regime is dissipation caused by the interaction of the system with its environment. In the last several years, a new approach to quantum information processing has led to a rethinking of the traditional concepts that rely on unitary dynamics alone and avoid dissipation unconditionally: instead, these new protocols harness dissipative processes for quantum information science. Actively using dissipation in a controlled way opens up interesting new possibilities and has important advantages: dissipative protocols are robust and, as explained below, allow one to prepare a desired quantum state, irrespective of the initial state of the system. However, the underlying processes are intrinsically probabilistic and time independent.
In general, it is therefore not clear how to incorporate them in the existing framework of unitary quantum information processing. One route around this difficulty is to embrace the probabilistic and time-independent nature of these processes and use specifically designed dissipative architectures (see, for example, Ref. [2]), but so far there are very few, and these schemes are conceptually very different from unitary protocols. Protocols based on unitary dynamics typically require precise timing and operations, which are conditioned on previous ones. The work by Kastoryano et al. shows how dissipative processes can be timed and used in a conventional way without losing the specific advantages of dissipative schemes [1].

The unitary time evolution (Fig. 1) of a pure quantum state |ψ⟩ under a Hamiltonian Ĥ is governed by the Schrödinger equation iħ d|Ψ(t)⟩/dt = Ĥ|Ψ(t)⟩. Accordingly, a unitary time evolution U(t) = e^(−iĤt/ħ) always transforms a pure state |Ψin⟩ into another pure state |Ψout⟩ = U(t)|Ψin⟩. Consider, for example, a system with spin states |g⟩ and |e⟩. The Hamiltonian Ĥ = ħκ(|g⟩⟨e| + |e⟩⟨g|) causes the spin to flip. If acting for a time t = π/(2κ), it transforms |g⟩ into |e⟩ and |e⟩ into |g⟩.

Dissipative processes, in contrast, can turn a pure state into a mixed one that is described by a density matrix ρ, representing a statistical mixture. A dissipative time evolution is governed by a master equation (i.e., an equation of motion for the reduced density operator of a subsystem that interacts with an environment; the dynamics of the system is obtained by tracing out the degrees of freedom of the environment). Here we consider Markov processes (that is, “memoryless” ones) that are described by a time-independent Liouvillian master equation dρ/dt = L(ρ) with L(ρ) = Γ(2âρâ† − â†âρ − ρâ†â), with jump operator â and rate Γ. It is instructive to consider, for example, the jump operator â = |g⟩⟨e|. The corresponding master equation causes a system in state |e⟩ to relax to |g⟩ at a rate Γ. A dissipative process of this type can be understood as applying the jump operator â with a certain probability p, which is determined by Γ. If we prepare our system in state |e⟩ and wait a short while, we don't know whether there has already been a quantum jump |e⟩ → |g⟩ or not, hence the resulting quantum state is a mixed one, ρmixed = p|g⟩⟨g| + (1 − p)|e⟩⟨e|. (The steady state can still be a pure one, |g⟩ in this example.) Because of this intrinsically probabilistic feature, dissipative processes are difficult to time. Kastoryano and colleagues develop new tools that allow one to use dissipative processes in such a way that the desired transitions occur at very well-defined points in time (Fig. 2).

The key to dissipative protocols is to tailor the interaction between the system and a bath such that a specific desired jump operator â is realized. This jump operator is chosen such that the target state ρfin is the unique steady state of the dissipative evolution: L(ρfin) = 0. For dissipative quantum computing [3], the result of the calculation is encoded in ρfin, and if the goal is quantum state engineering, ρfin can be, for example, an entangled [4] or topological state [5]. This state is reached regardless of the initial state of the system. Should the system be disturbed, the dissipative dynamics will bring it back to the steady state ρfin. This is impossible for unitary dynamics. In the unitary case, imperfect initialization inevitably leads to deviations from the desired final state.
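The relaxation just described is simple to simulate. Below is a minimal numerical sketch (mine, not from the Viewpoint) that integrates the master equation with jump operator â = |g⟩⟨e| for a two-level system by plain Euler steps and confirms that an arbitrary initial state is driven into |g⟩⟨g|, the unique steady state; the rate, step size and initial state are arbitrary illustrative choices.

```python
# Two-level Lindblad relaxation: d rho/dt = Gamma * (2 a rho a^dag - a^dag a rho - rho a^dag a)
# with jump operator a = |g><e|. Any initial state relaxes to |g><g|.
import numpy as np

g = np.array([1.0, 0.0])                 # |g>
e = np.array([0.0, 1.0])                 # |e>
a = np.outer(g, e)                       # jump operator |g><e|
Gamma, dt, steps = 1.0, 1e-3, 5000

def lindblad_rhs(rho):
    ad = a.conj().T
    return Gamma * (2 * a @ rho @ ad - ad @ a @ rho - rho @ ad @ a)

# start from an arbitrary pure superposition (|g> + |e>)/sqrt(2)
psi0 = (g + e) / np.sqrt(2)
rho = np.outer(psi0, psi0.conj()).astype(complex)

for n in range(steps):
    rho = rho + dt * lindblad_rhs(rho)   # simple Euler step, adequate for this illustration

print("population of |e>:", rho[1, 1].real)   # decays as exp(-2*Gamma*t), essentially 0 here
print("population of |g>:", rho[0, 0].real)   # ~ 1: the steady state |g><g|
```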
Therefore “reliable state preparation,” the second of the five criteria that DiVincenzo established for a scalable quantum computer [6], was for a long time considered a fundamental requirement for quantum information processing. Kastoryano et al. extend the toolbox of dissipative quantum information processing by introducing devices for timing this type of process exactly. More specifically, they introduce schemes for (i) preparing a quantum state during a specified time window and (ii) for triggering dissipative operations at specific points in time. The authors use these tools for demonstrating a dissipative version of a one-way quantum computation scheme [7]. The central ingredient that is used here is a very interesting mechanism called the “cutoff phenomenon.” Stochastic processes that have this cutoff quality exhibit a sharp transition in convergence to stationarity. They do not converge smoothly to the stationary distribution during a certain period (if the initial state is far away from the stationary state) but instead converge abruptly (exponentially fast in the system size) at a specific point in time. This behavior was first recognized in classical systems [8]. An intriguing classical example is the shuffling of playing cards outlined above. This type of mechanism has now been investigated in the quantum setting. In earlier work [9] by some of the authors of the new paper, the cutoff phenomenon is introduced for quantum Markov processes using the notions of quantum information theory. This provides a quantitative tool to study the convergence of quantum dissipative processes. In Ref. [1], Kastoryano et al. employ this quantum-version of the cutoff phenomenon to start and end dissipative processes at specific points in time, thus allowing for the integration in the framework of regular quantum information architectures. In future, these kinds of tools might also become important for new dissipative quantum error correcting schemes and may point towards new insights regarding passive error protection. Quantum reservoir engineering is a young and rapidly growing area of research. Several protocols have been developed including dissipative schemes for quantum computing [3], quantum state engineering [10], quantum repeaters [2], error correction [11], and quantum memories [12]. Dissipative schemes for quantum simulation [13] and entanglement generation [4] have already been experimentally realized. Actively using dissipative processes is a conceptually interesting direction with important practical advantages. The results presented in Ref. [1] add new tools for exploiting and engineering dissipative processes and provide the linking element for incorporating dissipative methods into the regular framework of quantum information processing. 1. M. J. Kastoryano, M. M. Wolf, and J. Eisert, “Precisely Timing Dissipative Quantum Information Processing,” Phys. Rev. Lett. 110, 110501 (2013). 2. K. G. H. Vollbrecht, C. A. Muschik, and J. I. Cirac, “Entanglement Distillation by Dissipation and Continuous Quantum Repeaters,” Phys. Rev. Lett. 107, 120502 (2011). 3. F. Vertraete, M. Wolf, and J. I. Cirac, “Quantum Computation and Quantum-State Engineering Driven by Dissipation,” Nature Phys. 5, 633 (2009). 4. H. Krauter, C. A. Muschik, K. Jensen, W. Wasilewski, J. M. Petersen, J. Ignacio Cirac, and E. S. Polzik, “Entanglement Generated by Dissipation and Steady State Entanglement of Two Macroscopic Objects,” Phys. Rev. Lett. 107, 080503 (2011). 5. S. Diehl, E. Rico, M. A. Baranov, and P. 
Zoller, “Topology by Dissipation in Atomic Quantum Wires,” Nature Phys. 7, 971 (2011).
7. R. Raussendorf and H. J. Briegel, “A One-Way Quantum Computer,” Phys. Rev. Lett. 86, 5188 (2001).
8. P. Diaconis, “The Cutoff Phenomenon in Finite Markov Chains,” Proc. Natl. Acad. Sci. U.S.A. 93, 1659 (1996).
9. M. J. Kastoryano, D. Reeb, and M. M. Wolf, “A Cutoff Phenomenon for Quantum Markov Chains,” J. Phys. A 45, 075308 (2012).
10. S. Diehl, A. Micheli, A. Kantian, B. Kraus, H. P. Büchler, and P. Zoller, “Quantum States and Phases in Driven Open Quantum Systems with Cold Atoms,” Nature Phys. 4, 878 (2008).
11. J. Kerckhoff, H. I. Nurdin, D. S. Pavlichin, and H. Mabuchi, “Designing Quantum Memories with Embedded Control: Photonic Circuits for Autonomous Quantum Error Correction,” Phys. Rev. Lett. 105, 040502 (2010); J. P. Paz and W. H. Zurek, “Continuous Error Correction,” Proc. R. Soc. London A 454, 355 (1998).
12. F. Pastawski, L. Clemente, and J. Ignacio Cirac, “Quantum Memories Based on Engineered Dissipation,” Phys. Rev. A 83, 012304 (2011).
13. J. T. Barreiro, M. Müller, P. Schindler, D. Nigg, T. Monz, M. Chwalla, M. Hennrich, C. F. Roos, P. Zoller, and R. Blatt, “An Open-System Quantum Simulator with Trapped Ions,” Nature 470, 486 (2011).

About the Author: Christine Muschik

Christine Muschik is an Alexander von Humboldt postdoctoral fellow at the Institute of Photonic Sciences (ICFO) in Barcelona, where she joined the quantum optics theory group led by Maciej Lewenstein. She received her Ph.D. at the Max Planck Institute for Quantum Optics, where she worked with Ignacio Cirac in close collaboration with Eugene Polzik at the Niels Bohr Institute. Her thesis on quantum information processing with atoms and photons was completed within the international Ph.D. program of excellence QCCC (Quantum Computing, Control and Communication), supported by the ENB (Elite Network of Bavaria). Her current research interests are in theoretical quantum optics and quantum nanophotonics.
Erwin Schrödinger

From Wikipedia, the free encyclopedia

Erwin Schrödinger (Erwin Rudolf Josef Alexander Schrödinger, 12 August 1887, Vienna-Erdberg – 4 January 1961, Vienna) was an Austrian physicist and theoretical biologist. He was one of the founding fathers of quantum theory and won the Nobel Prize in Physics in 1933.

Life

Schrödinger went to the Academic Gymnasium from 1898 to 1906. Afterwards he studied mathematics and physics in Vienna and worked on his habilitation from 1910. He was a soldier in World War I. Afterwards he held professorships in Zürich, Jena, Breslau and Stuttgart. In 1920 he married. In 1927 he went to Berlin to succeed Max Planck. After the takeover of power by the Nazis, Schrödinger left Germany and got a new professorship in Oxford. In 1933 he was awarded the Nobel Prize. Three years later he returned to Austria and became a professor in Graz. In 1938 he had to leave Austria, because the Nazis had taken over the government. He went to Dublin and became director of the School for Theoretical Physics. In 1956 he returned to Vienna and got a professorship for Theoretical Physics. He died of tuberculosis in 1961.

Important work

Schrödinger's most important work is wave mechanics, a formulation of quantum mechanics, and especially the Schrödinger equation. He also worked in the field of biophysics. He invented the concept of negentropy and helped to develop molecular biology.
Does Philosophy Make You a Better Scientist? By Sean Carroll | July 6, 2009 9:27 am Steve Hsu pulls out a provocative quote from philosopher of science Paul Feyerabend: It’s probably true that the post-WWII generations of leading physicists were less broadly educated than their pre-war counterparts (although there are certainly counterexamples such as Murray Gell-Mann and Steven Weinberg). The simplest explanation for this phenomenon would be that the center of gravity of scientific research switched from Europe to America after the war, and the value of a broad-based education (and philosophy in particular) has always been less in America. Interestingly, Feyerabend seems to be blaming philosophers themselves — “the withdrawal of philosophy into a `professional’ shell” — rather than physicists or any wider geosocial trends. But aside from whether modern physicists (and maybe scientists in other fields, I don’t know) pay less attention to philosophy these days, and aside from why that might be the case, there is still the question: does it matter? Would knowing more philosophy have made any of the post-WWII giants better physicists? There are certainly historical counterexamples one could conjure up: the acceptance of atomic theory in the German-speaking world in the late nineteenth century was held back considerably by Ernst Mach‘s philosophical arguments. On the other hand, Einstein and Bohr and their contemporaries did manage to do some revolutionary things; relativity and quantum mechanics were more earth-shattering than anything that has come since in physics. The usual explanation is that the revolutionary breakthroughs simply haven’t been there to be made — that Feynman and Schwinger and friends missed the glory days when quantum mechanics was being invented, so it was left to them to move the existing paradigm forward, not to come up with something revolutionary and new. Maybe, had these folks been more conversant with their Hume and Kant and Wittgenstein, we would have quantum gravity figured out by now. Probably not. Philosophical presuppositions certainly play an important role in how scientists work, and it’s possible that a slightly more sophisticated set of presuppositions could give the working physicist a helping hand here and there. But based on thinking about the actual history, I don’t see how such sophistication could really have moved things forward. (And please don’t say, “If only scientists were more philosophically sophisticated, they would see that my point of view has been right all along!”) I tend to think that knowing something about philosophy — or for that matter literature or music or history — will make someone a more interesting person, but not necessarily a better physicist. This might not be right, though. Maybe, had they been more broad and less technical, some of the great physicists of the last few decades would have made dramatic breakthroughs in a field like quantum information or complexity theory, rather than pushing harder at the narrow concerns of particle physics or condensed matter. Easy to speculate, hard to provide much compelling evidence either way. CATEGORIZED UNDER: Philosophy, Science • iggy2112 I find it hard to believe that basic philosophical rigor wouldn’t help theoretical physics. One has to have a feel for whether or not the group of assumptions or presuppositions are logically consistent before building a theory on top of them. • Neal J. 
King I wouldn’t blame Mach for holding back atomic theory; but rather the lack of competing philosophers of science at the time. • Sam C I’m not convinced that Feyerabend meant that reading Hume would help one make breakthroughs in physics. What he’s complaining about is that physicists who only know physics are ‘uncivilized savages’: without the ugly colonial language, they’re stunted human beings who aren’t using a large part of their intelligence. The same is true, I’d say and I think Feyerabend would also say, of philosophers who only know philosophy. There’s more to life than physics. Indeed, there’s more to being a scientist than physics: there’s skeptical curiosity, and imagination, and joy in difficult thought, all of which are developed by doing philosophy. • Carolune Interesting entry – your first paragraph (after the quote) sounds almost verbatim like this very interesting digression by Lee Smolin for Canada’s CBC’s Ideas series: How to think about Science: Episode 23. I can only recommend it for a very interesting in-depth reflection about the importance of broader interests and philosophy in the progress of science. Lee Smolin rawks… :) • Chris W. The influence of Mach on Einstein is fascinating, not least because it is so full of irony. Mach was unmoved by the analysis of Brownian motion, and had little use for special or general relativity. And yet, Julian Barbour has argued that general relativity is a nearly perfect realization of Mach’s principle in dynamics, or at least, a particular statement of the essential content of Mach’s principle. (By the way, Mach was also Wolfgang Pauli’s godfather.) • George Musser A lot of physicists, especially those working on quantum foundations and non-string-theoretic approaches to quantum gravity, do say they find value in philosophy. And surely you have benefited from Huw Price’s work. • gyokusai Ha, I also wanted to reference Smolin, but Carolune (#4) beat me to it. In his Trouble With Physics, Smolin makes quite a convincing case I think that, with different ways of thinking, we might indeed have developed different ideas by now and pushed the frontiers instead of having such a long period of consolidating. Plus, more or less, getting stuck. And Smolin quotes Feyerabend too, I remember. • Chris W. Philosophical presuppositions don’t matter, except when they do. Adopting the attitude “shut up and calculate” works, except when it doesn’t. • Chris W. Also see the discussion in this post and the subsequent comments. • Matt Leifer As someone who occasionally works on the foundations of quantum theory, I must say that I have found the philosophical literature in this area to be tremendously helpful in promoting clear thinking about things like the measurement problem and Bell’s theorem. It is one of the few areas where there is considerable interation between the scientific and philosophical communities, and where that interaction has definitely led to conceptual progress. Of course, many would argue about whether this is really science, but let’s not get into that right now. Lately I have been thinking that a similar degree of interaction between the cosmology/strings community and philosophers could be very fruitful, especially with the whole landscape/anthropic principle/probability measures business. 
It is not simply a matter of having philosophers criticizing the whole idea of the program (although I am sure that would happen quite a lot), but more usefully they could analyze specific proposals in order to check whether thay are consistent and well-founded. I think it would be a good thing if these ideas were analyzed with the same sort of philosophical rigor that has been applied to Bell’s theorem for example. • Tod R. Lauer It depends on how you look at this question. A subtle issue is why who works on what, where they get their ideas and inspiration, how they persevere in the face of difficulties, contraindications and so on. In this case, it’s possible that a training in philosophy, as much is it is a discipline that focuses on how one thinks about thinking, may provide utility on an individual or idiosyncratic basis. But then, I’m not sure that I would elevate it above any other source of inspiration, and it’s harder for me to imagine that any specific philosophical ideas, schools or thought, etc. would have any general objective usefulness, certainly not in any literal translation: philosophy -> physics. Clearly, Feynman, who disdained philosophy, did just fine without it. He was also very clear about his sources of inspiration, and how he drew on his own talents for deep creative thought. At the same time, just as everyone talks with an accent, whether they think so or not, he clearly had an exceptionally strong philosophy of science that colored how he looked at the world… • Aw, Heck! As a former student of philosophy, I agree with Sean the study of philosophy can make anyone a more interesting person but not necessarily a better physicist. And the quote from Feyerabend demonstrates it won’t necessarily make you a better person either. I also agree that “professional” philosophy has withdrawn into a shell and alienated the general population, but I think it’s a shell of arrogance and not one of professionalism. Furthermore I find Feyerabend’s characterization of educated non-philosophers as “uncivilized savages” to be indicative of this very arrogance. If philosophy is ever to encourage non-philosophers to explore the field once again, it won’t be with language like this. • FUG Philosophy is unavoidable in physics. Science is, essentially, the best epistemological method to date (not that I’m biased). I don’t think you’re becoming a “barbarian” if you don’t formally study philosophy (I always cringe whenever I read that term in philosophy), but studying philosophy, like mathematics, will help you become a better physicist because you will become a better thinker. And, hell, thinking is pretty frick-tastically sweet. • daisyrose I am not sure one should be insulted to be called an *uncivilized savage* especially if one has imagination, a little leisure and an expression of intuition with out denting or shaving off the nose of a truth. • steeleweed Europeans place more value than we do on wide-spread learning. Used to be that British businesses hired the English or History or Philosophy major, but I suspect that in recent years they have been emulating America and going for the MBA-types. Recall IBM hiring in the 1960s – any degree was acceptable but you needed some degree. It proved you had enough discipline to accomplish something – and they didn’t want someone uneducated to rise high the company and deal with upper-level types only to embarrass himself and IBM by his lack of education. 
The chief sub-discipline of Philosophy that all scientists need is Logic, and it wouldn't hurt the rest of the human race either. Generally, 5% think, the other 95% believe they think but don't really have a clue. Some schools are beginning to teach 'life skills', but I've never seen Thinking 101 and we need that even more. Am contemplating writing a book: Thinking for Dummies. • RPF The profound truth about philosophy is that it is completely and utterly useless. In philosophy you can defend any stance you want and it doesn't mean anything. It is just a terrible waste of time. The few deep and useful insights about reality which can be achieved by reasoning alone you are very likely to develop anyway if you are smart, without wasting time on philosophical studies. This is why philosophers are so arrogant and alienated – at some point in their career they realize that philosophy is empty talk, that their investigations are meaningless and that it's Science, not philosophy, that allows one to discover fundamental truths about reality; this is also why many philosophers resent science. What's more, some philosophical ideas can actually have a negative impact on physics; one example is the lack of belief in objective reality, which can make one naively reject the possibility of gaining further understanding of a system purely on philosophical grounds. This is the case with Bohr and his absurd Copenhagen interpretation, which has stood in the way of progress ever since it was developed. As a result QM is a mixture of very good ideas and complete nonsense. Physicists certainly should have deep insight and understanding, but this comes from studying the foundations of physics and mathematics, not philosophy, which is useless. • Chris W. How very Wittgensteinian, RPF. I think that good physicists are generally willing to consider the ideas of anyone who actually cares about the problems that arise in physics, and has taken some trouble to carefully study the background of those problems. A number of scholars specializing in philosophy as a professional discipline over the past couple of centuries have indeed fit that description. ( "There are only two kinds of music, good and bad." — Duke Ellington ) • Julianne RPF — even if one can "defend any stance you want", the mental discipline involved in actually carrying out the defense is indeed useful training, in much the same way that upper level abstract mathematics is (e.g. I don't use group theory as an astronomer, but I learned a hell of a lot about how to think from setting up mathematical proofs while learning it.) Likewise for philosophy — in grad school I was friends with a pack of philosophers, and man alive, they were smaaaaart. Deadly rigorous in arguments, with a level of logical power that scientists tend to think that only they have. (They were primarily philosophers of language, which might lend itself more to this kind of thinking — I have no idea if the "what is truth" subset of the discipline is similar). Regardless, I don't find this to be that distinct from theoretical particle physics, which spends 99% of its effort calculating the effects of theories that are not the actual laws of nature. Sure, once in a while you get lucky and predict the neutrino, but most of the models they're exploring are not accurate representations of the real universe. However, they gain insight and develop new methodologies by carrying out those explorations, making them "useless, but worthwhile".
• spyder From the perspective of a historian/philosopher of religion and philosophy, i would like to see more scientists (across the disciplines) express themselves and their research to a greater public, and to maximize their efforts they would benefit from philosophy coursework. The impact of science on philosophy has been monumental; philosophical rigor demands that philosophers engage the sciences as sources of objectifiable and verifiable givens. A solid, hermeneutically-whole philosophy requires the integration of physicists latest discoveries and the ongoing efforts of astronomers to understand the universe. What a profoundly and staggeringly stupid three sentences… • Peter Beattie And that from the crackpot who wrote, in Against Method: Obviously, philosophy doesn’t even make you a better philosopher. As to the “withdrawal” quote. That he trashes Feynman of all people for lacking in philosophical depth only goes to show what this kind of bigoted, condescending hagiography is: complete bullshit. If that’s what you mean by philosophy, then scientists, or indeed anyone, should pay no attention. Philosophy teaches how to think well. Read Feynman’s The Meaning of It All, and you will instantly have improved your thinking. That’s applied philosophy for you. And it lasts. People still talk about Feynman everywhere you look. Feyerabend — well, not so much. • Cunctator thanks for the very gracious and sensible reply to a comment that was all but gracious, honestly close to the dangerously ignorant. Yes, as a ‘philosopher’ (whatever that might be) I do take the issue a bit personally when it comes to statements like ‘we do useful cool stuff, while you are a bunch of wankers’. Our science did not pup out of the earth as a full-formed discipline, but its the product of millennia of intellectual development, big part of which ‘scientists’ did not even exist. The word itself was ‘invented’ by William Wheewell in the mid-1800s. Before that scientists were–guess what RPF–natural philosophers. Not knowing, or worse not even considering relevant this historical evolution, might not make you less competent in what you’ve been trained to do, but certainly will make you a less rigorous, conscious, open-minded and insightful scientist. And i believe these are traits of a great scientist. In fact, I slightly resent this post from Sean, since—even if presented in a ‘neutral’ tone—this kind of polemics only end up fueling (as we see clearly from the comments here) this ‘two cultures’/'science wars’ attitude, which is only detrimental: both to philosophers and to scientists. If i may, I’d redirect you to a blogpost of mine, on this issue, written some weeks ago: • hackenkaus Bohr, Einstein, Schrodinger, Boltzmann and Mach all spoke German. Clearly later generations of physicists have been hobbled by their practice of thinking in English. Feyerabend is clearly an idiot. I’ve never even heard of him. • citrine The training a Philosophy student gets from constructing and presenting a viewpoint within a rigorously defined, consistent framework could be very useful when interpreting the results of calculations. Especially when it comes to describing non-intuitive phenomena, knowing the pitfalls, ambiguities and limitations of language could alert a theorist as to the way the interpretations are expressed. • Haelfix I took a class in philosophy as an undergraduate. 
I have yet to utilize any concept or method learned there (other than rudimentary logic, which the greeks knew and every physicist in the world knows) a single time in my career as a physicist. I challenge any physicist to lay down an example where his knowledge of some esoteric philosophy actually improved his work as a physicist in a way that wouldn’t be immediately obvious to a specialist without that acumen. The only example I can think off, is probably Einstein’s obsession with Mach’s ideas during the formulation of GR. The irony is that it was the least well motivated part of that picture and is now known to be superflous (at least if phrased in the most obvious way). • Timon of Athens Yes, he has argued that. And, as usual, he’s wrong. Re: Huw Price: he has indeed done excellent work, and has really made a contribution to our understanding of the Arrow of time. His work is a prime example of the kind of things that philosophers can do that would really be useful to physicists. On the other hand, *part* of the reason for his success is simply the fact that, not being a physicist, he is free of all the accumulated strata of quasi-sociological baloney that impede physicists who want to think about this issue. I recall Sean C. reporting on a talk he gave [was it at Santa Cruz? Sorry, I forget] where he was met with opposition that I can only describe as mind-boggling stupidity. It was based not on physics — the physics point being made, such as it was, was trivial — but on sociology-of-physics resentments ["you pointy-headed cosmologists think you know thermodynamics better than us?"]. All that junk could have no influence on somebody like Huw Price, because he would probably find it hard even to imagine. So philosophers might be useful simply because they are immune to all that sort of rubbish. [Of course, they have their own sociological problems....] • greg Complete rubbish. • George “Feyerabend is clearly an idiot” Exactly. Thank you. • Jacob Russell Philosophers in no way ignore or exclude science … check out any number of past discussions here… • Thomas Even though (and maybe because) Weinberg indeed seems well educated in classical philosophy he makes a persuasive argument (in “Dreams of a Final Theory”) with regard to the unreasonable uselessness of philosophy in physics. • onymous It’s funny how people point to non-string approaches in quantum gravity as a community heavily influenced by philosophy, and see this as a positive thing. Those approaches are so far universally dead ends, telling us nothing about the real world or about consistent toy theories that demonstrably reduce to general relativity in flat space. String theory, on the other hand, at least gives us the latter. And it’s been, as far as I can see, relatively free of philosophizing. Even a somewhat philosophical idea like holography was dreamed up by ‘t Hooft and Susskind for technical reasons, and only very concrete realizations of it like Strominger and Vafa’s entropy counting or, most spectacularly, AdS/CFT have really made it a concrete part of our understanding of theoretical physics. People can get inspiration for big-picture, hand-wavy ideas wherever they want, and philosophy might be useful for that, but at the end of the day it’s nearly always calculation, hard work, slow accumulation of technical knowledge, and relatively conservative approaches that lead to progress in science. 
• diotimajsh Haelfix, in response to I offer another Albert Einstein example: he explicitly said that studying David Hume helped give him a mindset that was capable of forming so radical a theory as relativity. Something about Hume’s thorough, skeptical attitude toward everything helped Einstein to so profoundly challenge the dominant theories in physics, I believe. (Einstein also studied Kant’s *Critique of Pure Reason* at a fairly young age, although to my knowledge he doesn’t mention it as an influence. Of course, Kant thought that Newton had everything right, and that the basics of physics could be derived a priori; but Kant also offered some other surprising challenges to the conventional thought of his own time.) • Haelfix Hmm, and what mindset is that exactly? To be skeptical and not take an authorities word for a result and instead to check things for yourself? Hardly what I would call a novel or penetrating philosophy, even (perhaps especially) in Einstein’s time. Its also pretty much drilled into every scientists brain since childbirth. Mach’s idea of relational space is more what I had in mind. Thats very much a well thought out idea that could in principle have had distinct physical consequences. It turns out that nature doesnt work that way (modulo definition quibbles) but it wasn’t a bad idea to try. • Bee Well, I’m not a big philosopher – by and large I find it to be too many words – but I think it is always useful to learn different perspectives on our work. What some people have mentioned above, ‘clear thinking,’ ‘rigor,’ ‘consistency’ etc, I frankly found more of that in the maths than in the philosophy department. In fact, I found mathematicians to be the better philosophers, but maybe that’s just me. To be honest, a lot of the discussion about philosophy strikes me as a redirection of “questions you’re not supposed to ask” if you’re a physics student. And you’re not supposed to ask them because your prof won’t be able to answer them. Or if he does, he’ll tell you to go to the philosophy department, that being meant in a condescending way and as a discouragement. Some physics students actually go to the philosophy department. But in my experience few find what they were looking for. (Howard Burton tells a little story about that in his book). Thus, I don’t think philosophy makes you a better scientists. But openmindedness and not getting discouraged when searching for answers does. • Simon Kiss Isn’t this thread missing an obvious point: namely, the relationship between physics and ethics? While physics may not be as plagued with ethical debates as biology or medical research has, it has not in been absent! (Atomic weapons anyone?). Nor is this a thing of the past. Physicists are working on all sorts of rocketry and satellite technology that may be put to questionable purposes. Doesn’t a familiarity with ethics (a substantial sub-discipline of philosophy) make physicists better prepared to navigate the murky ethical questions that arise when mixing technology, science, and the state? • commenter I would have to question this. It seems to me that American scientists are often more aware and appreciative of broader intellectual developments outside their own field than their European counterparts. You yourself, Sean, are an example, with posts like this and numerous others. 
By contrast, scientists I know from other countries are often oblivious to everything outside their own narrow subdiscipline, and sometimes seem puzzled as to why anyone would ever want to study anything else. And never mind philosophy and the humanities; I know one Cambridge-educated mathematician who had never heard of either Steven Weinberg or Richard Dawkins! When you think about it, this makes some sense in light of the way the respective educational systems work. In Europe, so I understand, the "two cultures" are separated from each other at a very young age, and one basically studies nothing but one's specialty; whereas in the U.S., people usually have to study broadly even at the university level. • Socrates I am sure that being a narrow-minded physicist is the worst thing in the world. But the problem is that most of the contemporary guys in physics are really ignorant of many different areas of human knowledge - and if some lunatic says something weird - another brilliant theory (I agree that most of the time these so-called theories are not even wrong :) ), most of the physicists will say that he/she is the usual I-know-everything-better-than-you-stupid-guys kind of scientist, not appreciated by the wide public… But what if, one day, somebody says something really different - something strange - and it turns out to be right? Then what… will we ignore him/her, only because he/she got his inspiration from an ancient Greek philosopher? • Bee Socrates: In my experience the vast majority of physicists don't care where you get your inspiration from. They tend to find it somewhat suspicious though when people make a big deal about it. Doesn't matter what it is, whether it's your belief in God, surfing, or philosophy. Where you got the idea might make for a nice story in your memoirs but is completely irrelevant for the question whether it will work. Most of the 'really different/something strange' people aren't ignored because they got their inspiration from weird sources, but because they fail to establish the necessary contact to the prevailing knowledge and thus to show their ideas are not in conflict with evidence. • Román Erwin Schrödinger: "We have inherited from our forefathers the keen longing for unified, all-embracing knowledge. The very name given to the highest institutions of learning reminds us, that from antiquity and throughout many centuries the universal aspect has been the only one to be given full credit. But the spread, both in width and depth, of the multifarious branches of knowledge during the last hundred odd years has confronted us with a queer dilemma. We feel clearly that we are only now beginning to acquire reliable material for welding together the sum total of all that is known into a whole; but, on the other hand, it has become next to impossible for a single mind fully to command more than a small specialized portion of it." The last quote by Borges is completely applicable to science. We have lost the capability of linking our different branches of knowledge, and this is quite wrong. This semi-religious discourse of science, like "science is the only way of knowledge", I don't know where it comes from, but it is no different from any Catholic dogma. This arrogance is heading us the wrong way. • Bee Cee A couple things: First, I think philosophy has made a fair dent in psychology–especially cognitive science–and linguistics. I frequently see a number of philosophers cited in research articles in those fields.
Philosophers of physics do seem to have made something of an impression in foundational research on quantum mechanics and on the arrow of time. I think, however, they tend to be a lot more interested in Bohmian mechanics and GRW than are most physicists. Second: I think we need to make a rough-and-ready distinction between analytic and continental philosophers. Analytic philosophers tend to be extremely naturalistic and to consider themselves intellectually aligned with math and science. Many of the great analytic philosophers doubled as mathematicians or logicians –Frege, Russell, Tarski, Putnam, Kripke, Quine etc., and many philosophers, myself included, studied math as undergraduates. Continental philosophers, like Feyerabend, Heidegger, Nietzsche, Sartre, etc., tend to be much more humanities oriented: literary, historical, and frequently just obscure. I think it’s unfortunately the latter that most non-philosophers associate with philosophy. This could be why so many scientists think philosophers are hostile to and ignorant of what they do. • Raymond I personally think that any help a scientist can get would be useful, especially considering how bad scientists are at making strong arguments. Take Dawkins for example when it comes to his book “The God Delusion.” The man makes wonderful points and focuses on a good deal of “evidence,” but when it comes down to constructing a solid argument about his view or even against the view of others, he is no better at it than a college student in an introductory level philosophy class. Not to mention, modern scientists in particular seem to be blinded by their own models of the universe as well as absolutely convinced that “their” way is the path we should all be on. Perhaps by studying up on what some of the so-called “philosophers” have to say would do them well in learning a bit of humility when it comes to dishing out their “answers” as well. On that same note, perhaps if the scientists who were working on developing technology for nuclear weapons had actually done a little thinking outside of their scientific padded cells, they would have realized that their invention and work would be used to commit mass murder. In that case, as well as many others (working for government agencies, military contracts, matters of “national security,” etc.), a consideration of scientific ethics via philosophy would have served them well. • Pingback: links for 2009-07-07 | • Landru Re: Cunctator at 21 above Thanks for the pointer to your Hypertiling blog. I read through about half a dozen posts there, and I have to say that you seem to spend a lot of your time being annoyed by critics and commentators that you think are under-informed or ignorant (e.g. Lawrence Krauss, Adam Frank, etc.) and comparatively little time explaining what it is that your own field actually achieves. This may be fine for a readership that already appreciates academic philosophy, but since you seem interested in engaging a wider audience (as evidenced by your posting here) I would urge you to seize the educational opportunity and explain to non-expert CV readers what it is that you do all day. Here’s a simple, sharp version of this question: Why should anyone believe that physics is different from nonsense? Simple: your GPS works. Done. There is a great deal in the modern world that a non-expert can see, read and touch to confirm that science is not just made-up nonsense (or at least that it’s not _all_ made-up nonsense) but has some purchase in reality. 
If your taste tends toward the lower tech than GPS, you can also check this classic post from Julianne on the physics of chocolate: It’s worth reading all the way to the last few sentences. So the corresponding, simple question for you is: why should anyone (such as RPF at 16) believe that academic philosophy is other than nonsense? What can a non-expert see, read or touch in the world to confirm that philosophy is not just a made-up game? If you can answer this question in a calm, lucid, readable manner then you might go a long way to answering the question posed by Sean in the original post here, and educate a few people as well. [One possible answer to "How do we know philosophy is not nonsense?" might be "Because I know some smart people who think it's not," similar to the sentiment by Julianne at 18 above. This is certainly a non-zero answer, and I would guess would be the answer given most frequently by non-experts. But I think it's a bit weak, and hope/trust that you can do better by explaining actual content and not just citing authority.] • Raymond For Landru: The question of how to “confirm” that philosophy is not “nonsense” is almost a perfect example of how scientists tend to see the world as containing “truths” that can be confirmed only by experiment, evidence, etc. Unfortunately for science, though, there is no direct answer or evidence that can support a question like, “Why should I not kill another person?” A question like this must be approached in a different way and instead has to be reasoned out to a point where an individual is forced with a series of choices. You cannot just say, “Well, killing somebody is bad,” or “Killing somebody is condemned by [insert religious text here],” because while these responses might be satisfactory up front, they are very poor arguments in general. At some point the individual must make a choice, a choice that is based upon a number of reasons that they have also chosen or observed or what not. So for example, I choose not to kill somebody else because I know that by doing so I would be terminating their life and thus denying them of a chance to grow. I have no care for the “law” or for the idea that killing other people is “bad,” but instead direct my focus to my own personal philosophy of a love ethic, which I chose to pursue some time ago. If I were to kill another person, though, I would be violating that choice of a love ethic, thus negating my original choice. In any case, while the internal consistency of science and scientific principles may be wonderfully confirmed by things such as a GPS working, the very ideas of confirmation, answers, nonsensical information, etc. are not necessarily applicable (at least not in the same way) to philosophy. Any philosopher who tries to convince you that philosophy is not nonsense, though, is missing the point that philosophy does not strive for confirmation but for strong internal consistency via argument, reason, etc. And every philosophy is based on choices by an individual to continue with an idea or reason as well as how it is applied in their every day life. • Pingback: the philosopher and the physicist › nemski • Pingback: Please stand by while I get up to date. « Shores of the Dirac Sea • Cunctator thanks for spending time reading my ramblings, and thanks even more for the provacative but polite comment. I will try to give you a half-decent answer. First of all, some things must be made clearer: 1) I do not in any way deny the effectiveness of science. 
I am a user of technology, and deeply interested in several scientific fields, I read scientific publications and I do not ever secretly think that it’s ‘actually really rubbish’. I remember ‘The Physics of Chocolate’ and I actually wanted to try it myself :) Yet, if I am interested in understanding and evaluating critiques to science and technology is because I generally like not to take anything for granted, and always ask some extra question, even in the face of ‘it works!’ (yes, this is part of a philosophical training, the annoying ‘but why?’ question. 2) Philosophy is a much broader category than science, if nothing else for the reason that 2 scientists, no matter how different in interests, will always find common ground in a set of methodological assumptions. Philosophers don’t, since (one of) their job(s) is to question and rebuild the ontological and epistemological theories that underlie any kind of methodological assumption. Specifically, as someone already observed in a comment above, much care should be given in drawing a line between analytic and continental philosophy. I do not want to get into this controversy here, especially when it comes to judging their merit with the only criterion of ‘how much impact does it have on actual real life’ (both could be criticized and defended in this regard, but let me just say that the more ‘naturalistic/logical/mathematical’ approach of analytic philosophy does not in any way transparently correspond to a more direct efficaciousness and ‘real-life’ relevance…), but it is simply important to keep in mind that in their methods and–mainly–aims the two are often miles apart. 3) I am not the best exemplar of ‘philosopher’ to answer this question of yours, since my ideas are particular and many, many ‘philosophers’ would disagree with me. I am aware of the often polemic tone of my posts, but this is caused by one main point, the same point which constitutes: I do not want to proselytize, I do not want to convince anyone that MY ideas are good. I simply want to indicate that different kinds of training allow for different kinds of expertise. (For example, yes, I am irritated when i see a physicist discussing about international politics on a major newspaper, because his being a VIP physicist [the reason why he's got access to such a newspaper] does not in any way make him an expert). Similarly, and this is my main point, I do not see why one should *start* from the assumption that philosophy is a bunch of crap. I understand the intuitive appeal of the phenomenic evidence of something which just ‘works’ that science can give (i.e. your GPS example) but the lack of this specific kind of ‘in your face’ evidence should not be enough to trash the whole of philosophy. You cannot ask: so if philosophy works, show me some new philosophy based PC, or some new philosophy based fridge. The two disciplines are different. Feyerabend was a quite eccentric guy, and many of his ideas and statements are considered extreme even in the philosophical community, he was an exceptional thinker in his own way, but let’s not make a paradigmatic example out of him (Fritz Zwicky was kind of an ass too, but we tend to respect his scientific intuitions nonetheless). Once again, philosophy is hard to define, but as an intellectual quest has indirectly produced major historical revolutions, the scientific one included. Philosophy does not offer a finite product, but redefines the limits of what we as human beings think and therefore produce. 
Another example: Ted Nelson was trained as a philosopher and as a sociologist, and yet, his theoretical work deeply influenced his technical one, both of which contributed to our own understanding of the Net. The circle that goes from theoretical thinking and material effect is continuous and constant throughout history, which is why separation is a negative stance. Philosophy, as I see it, is a meta-tool, one used to help other disciplines (other tools of human understanding of the world) to either clarify (or eventually criticize) their aims or to evaluate (or eventually criticize) their results. Useful, but a tool nonetheless. Philosophers as scientists are highly trained individuals, but none should be morally, intellectually or institutionally prioritized. That is what I criticized this kind of ‘two cultures’ wars. Let’s keep hostilities aside, and let’s try to study a bit of each other’s discipline, for the results can only be better. I do not want to claim the intellectual superiority of ‘philosophers’, just as much as I fight against any other kind of undeserved prestige that is often attributed to different ‘intellectuals’, scientists included. My own view being: yes to study philosophy can (if not necessarily *will*) make you a better scientist (and be careful here, since ‘scientist’ is a name that encompasses people from the physicist to the maritime biologist), just as studying ‘science’ (again, all of them) can make you a better philosopher. I have been too prolix as it often happens. I hope this reply is somewhat useful to you. If we keep cordial tones, I’d be very happy to keep discussing this issue further. Best regards • Giotis The two fields are related and complement one another. With philosophy we talk to ourselves about nature but with science we talk to nature directly, or at least we try to. • Jonathan Vos Post Notwithstanding #11 and #26, “Feynmans, the Schwingers” is an interesting pairing, precisely because Dirac was able to unify their major results at the foundations of Field Theory, and taught that unification in his famous course on Relativistic QM and field theory, recently available in a new edition on arXiv. I didn’t know Schwinger, but I consider it an oversimplification to say that Feynman rejected Philosophy. He may have intentionally rejected a lot of Philosophy, just as he rejected a lot of univerwsity protocol (i.e. he wouldn’t serve on committees), because so much seemed to be, to a pragmatic theory-builder and problem solver and teacher, a waste of time. Clearly, Feynman was acutely aware of some philosophical conerns, such as Epistemology (he was very good as a teacher in showing exactly how we know what we know). The attacks from outside the sciences, on Darwin and Einstein, whatever the psychological causation, seem to me more about their philosophical claims than their scientific claims. The converse question is more acute. Why have so many Philosophers failed to educate themselves on the breakthroughs in Mathematics, Physics, Biology, and observational Cosmology, which seem, even to the general public, to bear on ancient central problems of Philosophy? • Kevin First, someone asked for discrete examples of philosophy’s usefulness, a la GPS for engineering/physics. Somewhat off topic of the post, but a legitimate question that is (all too often disdainfully) brought up by scientists, and should be addressed. 
A direct analogy may or may not be possible (saying a GPS “works” and saying a legal system “works” are two different statements, and a comment section isn’t the forum to define and expound upon this), but examples of philosophical ideas influencing and changing the nature of human society abound. A short list of examples: empiricism and the underpinnings of the modern scientific method, political philosophy and the shift from monarchy to democratic government in many countries, abolition/civil rights/feminism and the progress towards equal rights for all people, legal philosophy and jurisprudence. Does trial by judge/jury “work” in the same way a GPS does? No; but I think we can convince ourselves they are more just, fair, and efficient than burning all of the accused and assuming Satan will protect the guilty. As to the discussion about whether it helps physicists, this discussion might be broadened even further: Does breadth of education/personal interest help people who work on deep, highly specialized topics? I argue that it often does, but not necessarily, or always directly; and the benefits can come (not an exclusive list) from a useful skill, useful knowledge applied to a new situation, or simply a different interpretation of things. I’ve heard it often stated, and agree, that it is useful for an experimentalist to deeply understand the theory behind their investigations, and that a good handle on an experiment can help a theorist interpret a new result. So, is it useful to go further than understanding the “other side” of physics, into becoming, not an expert, but a facile amateur, in another field? I think so. For example, drawing diagrams is often incredibly useful when discussing an experiment; well-drawn, detailed diagrams are generally even more useful, and I think many physicists would indirectly benefit from a short course in drawing or drafting. Certainly some meetings I’ve been to where someone draws the same thing multiple times or constantly changes it until it is right would go a little quicker :) . Math learned in pretty much any context tends to have a broad range of applications. Philosophy may not directly help a physicist deal with some specialized physical system, but the logical methods available from philosophy may often indirectly help (skepticism, an understanding of logic and fallacy, ability to argue a point rigorously). Hope that added to the discussion somewhat. • Low Math, Meekly Interacting The argument, it would seem, is that the value of philosophy to science is to help scientists learn how to think rigorously. Apparently, without a strong philosophical foundations, scientists are bereft of this ability. Nothing that I have observed corroborates this supposition. If one lacked the ability to think before, defending ones work against the bracing critique of scientific peers hones those skills very quickly, or the scientist simply fails. There is the extra benefit in science that an argument, no matter how logically sound, fails if experiment cannot validate it. Having to meet that standard appears to hone the mind rather effectively as well. • Chris W. Experiment doesn’t validate or refute the argument. It validates (provisionally) or refutes the premises of the argument, assuming the argument is sound. That’s why scientists need to learn how reason rigorously, and elucidate their premises. That said, they’re better off if they learn this from their peers, in the context of studying scientific problems. 
However, if sloppy reasoning and reliance on implicit premises becomes endemic, then other people—eg, philosophers, mathematicians—certainly have the right to butt in and point it out. There are other habits (and skills) that ought to be discussed here—the formulation of problems and questions, and the critical analysis of problem formulations. • Ponder Stibbons This ignores the sociological pressures, which a few others above have eluded to, that often cause scientists to unquestioningly accept certain patterns of thought. One’s peers aren’t going to launch critiques that don’t even cross their minds. I don’t know what you mean by ‘fails’ but the history of science is replete with examples where theories were known to conflict with experimental evidence but were not rejected. (For examples, see for example Lakatos’ The Methodology of Scientific Research Programmes.) Pretty much no one working in the history/philosophy of science accepts naive falsificationism anymore, because it just does not mesh with the history of science — the process of evaluating theories is much, much more complicated than that. • Ponder Stibbons I also note that although Matt Leifer at #10 and Sean himself have admitted the value of philosophers’ work to their own research, many in this thread are still questioning if any philosophical work at all is useful to physics. Should we then conclude that Matt and Sean are not doing physics? Or that they are unable to judge what is useful to them? • Chris W. [I just missed the comment editing timeout, hence the re-post.] There are other habits (and skills) that ought to be discussed here—the formulation of problems and questions, and the critical analysis of problem formulations. In his conversations with Einstein while at the Institute for Advanced Study, Shiing-Shen Chern was struck by how much time and effort Einstein spent in considering problem formulations, which for him was a comparatively minor issue in mathematical work; clear problem formulations were generally available, and one struggled primarily with finding a path to a solution. (See the Einstein centenary volume edited by Harry Woolf, 1981.) • Ponder Stibbons Yes. There is definitely much more space to ask ‘forbidden questions’ in the philosophy of physics community, than there is in the professional physics community. Many people working in the philosophy of physics now are lapsed physics students who became frustrated with the tunnel vision of much of the physics community, and jumped ship. • Lee Smolin This post and the discussion around it brings out very clearly the importance of a diversity of approaches to scientific problems. Let us just look at the facts, without taking sides. It happens to be the case that some physicists find philosophy and history of science important sources of ideas, inspiration and critical thinking. Others do not find them important. It also happens to have been the case that from the beginnings of physics till the 1940s the leading physicists were those familiar with the philosophical tradition-Einstein, Poincare, Boltzmann, Mach, Bohr, Heisenberg, Pauli, Schroedinger, etc. And before them, Newton, Leibniz, Copernicus, Kepler, Galileo… To answer the challenge in 24 -as to whether any physicist has gotten something valuable from philosophy-just read the introductions to the books by these great physicists. 
What one sees there is that what these great scientists got from their intimate knowledge of the philosophical tradition was not primarily the sharpening of their thinking: instead they understood the problems they attacked-such as the natures of space, time and motion, the properties of matter, the nature and role of forces, the existence or not of atoms, causality, etc-as having arisen and been defined within the philosophical tradition. And they saw themselves as contributing to the continuation of the history of inquiry into these basic questions. After World War II there was a switch in style and methodology of physics, and the dominant theorists were people such as Fermi and Feynman who did not find philosophy and history useful. And the proof that it was not useful is that it was they, and not their more philosophically minded colleagues, who solved the key problems of their era. I would conclude from this that at different periods different kinds of styles are needed to succeed in the problems physicists face at the time, and so people who have the required style dominate in any period. To invent relativity and quantum theory, let alone calculus or classical mechanics, a more philosophically informed style was needed, while to develop QED and applications of existing theories such as condensed matter physics and nuclear physics, a less philosophical style is needed. I think the evidence shows that people who invent theories tend often-but not universally-to find inspiration in the history of thinking about basic questions like space and time, whereas those who develop existing theories do not need that particular kind of inspiration. Feynman didn't find philosophy useful, but, for all his greatness, was more a developer than an inventor. (He once expressed to me his disappointment at having never actually invented a theory from scratch-apart from his theory of the V-A current, which he said he regarded as a limited success because he missed the gauge bosons.) David Finkelstein, who certainly is an inventor, and, as a byproduct, discovered the meaning of black hole horizons, topological conservation laws in field theory and quantum groups, used to say that he liked to study history and philosophy of science because knowing the history of a question gave him a running start. The fact is that some of us feel that way, and some of us don't. If we can agree about this we can avoid unresolvable arguments for and against the usefulness of philosophy and history of science. Is it too much to ask that those who don't feel the need for philosophy accept as colleagues deserving of respect those of us who do? At the very least, I hope you have enough respect for the history of our subject to take seriously how the greatest physicists-such as Newton, Boltzmann, Einstein, Bohr, Heisenberg, Schroedinger etc-thought about the importance of philosophy. At the same time, those of us who feel the need to situate our work in the light of the philosophical tradition should respect the fact that such great physicists as Fermi and Feynman felt no such necessity-and neither do many of our contemporaries. The only interesting question is then which style is needed to make progress on the problems that face us now. I have argued that a more philosophical style is needed to solve the great problems of quantum gravity and unification and, while I have detailed reasons for this, I think we can all agree that the proof will be in who makes the breakthroughs that resolve the big problems before us.
Feyerabend, whose quote started this off, is certainly a problematic figure. What he was not, as suggested in 39 above by BeeCee, was a continental philosopher. He was trained in physics and philosophy in Vienna, by descendents of logical positivists, and then got a PhD in London under Popper. Any attempt to parse a quote of his might take into account the fact that he often deliberately played the provocateur. Having discussed with him several times I can report that his detailed knowledge of theoretical physics was way above that of analytic philosophers I had met in graduate school, such as Nelson Goodman and Hilary Putman. The first time I met him, he asked me some technical questions about the interplay of renormalizability and symmetry breaking in the Weinberg-Salam model. What Feyerabend did do was to puncture claims of Popper and others to explain how science works-how it is that scientific knowledge increases over time. His book, Against Method, and other writings, attacked the claims by Popper and others that the answer was reliance on a particular method. The impression I went away with from our conversations was that he deeply admired the successes of science, but cared that we not rely on false claims about why science was so successful. Feyerabend’s contribution was thus mainly negative-he left us with the problem of how, if there is no consistent scientific method, it is that science does progress. This, if I may say so, was the problem I tried to address in my own recent book, which is why the key Chapter 17 features Feyerabend. What was certainly true was that he saw himself, with reason, as someone more deeply educated in both science and philosophy than most of our generation, and he found our work consequently lacked depth. And he was no kinder to philosophers than to physicists-as evidenced by the title of an essay he wrote about contemporary philosophy: “From Incompetent Professionalism to Professionalized Incompetence—the Rise of a New Breed of Intellectuals.” So my hope would be that whether we agree or disagree with Feyerabend-we can all agree that science as a whole is stronger and will progress faster if we can tolerate a diversity of approaches to key questions including the one under discussion here-of the importance of philosophy for physics. • Kevin Reiterating my previous point, an understanding of philosophy will not necessarily, directly help every physicist, but it can help some, and honestly, how could it hurt? And all of that aside, it’s pretty interesting stuff. The history of science is undeniably related to philosophical developments, and the history and reasoning behind scientists adopting empiricism over a priori logic and revealed knowledge is quite interesting. These days, most every scientist accepts and is trained in empiricism, a distinctly philosophical theory of knowledge. Whether or not it is taught directly, or placed into its historical and philosophical context during instruction, this philosophy underpins all of modern science, and it is a shame that so many scientists will disdainfully dismiss, often from ignorance (not directed at any author/commenter personally), an entire field of study that has occupied people for thousands of years and forms the foundation of science. Many would rather sit comfortably in their worldview that empiricism is “obviously” correct, rather than spend the time to understand the historical debates and reasoning of many highly intelligent people that led to empiricism’s adoption into the scientific method. 
Too many arrogantly assume that they would have thought that way anyway, even if it wasn’t drilled into them in every science class taken in their lifetime, and implicitly assuming themselves to be somehow better than many brilliant people throughout history who struggled with the relations and hierarchy between different kinds of knowledge. But how many of us can justify the scientific method with arguments as to why experimental evidence is better than a priori reasoning or revelation, without resorting to “Well clearly, …”, “Obviously, …”, or the classic move of quoting the achievements of science while downplaying or denigrating real and relevant achievements of other fields of study? Studying some philosophers – Aristotle, Hume, Locke, among many – would help us to be able to form these arguments more convincingly. Whether that leads to better scientific results is not easily answered, but that doesn’t necessarily imply uselessness. At the very least, being able to discuss philosophy on the level of an informed amateur makes us more well-rounded and more interesting conversationalists. That’s good enough for me. • FSN In some sense, modern physicists are also philosophers, since they always discuss things like interpretation of quantum mechanics, origin of the universe, the nature of time, among other things. In many of these discussions no equations are involved, just arguments. I guess that the forthcoming book by Sean will contain a lot of philosophy inside. When we want to talk about the nature of time, we will necessarily end up doing philosophy, even if the motivation comes from highly technical equations. So, I think that at the end of the day, much of the modern physics discussed today make physicists better philosophers and, as a feedback effect, better physicists. • Cunctator @Lee Smolin The tone and content of your comment finds me in complete agreement: to quote you ‘Is it too much to ask that those who don’t feel the need for philosophy accept as colleagues deserving of respect those of us who do?’ I think we should all ask ourselves this question. I was actually thinking about Sean’s book, and now that you mention it I can’t help but ask some questions about that. !) As you said, it is a book about the nature of time, it’s got to have philosophy inside. Now, note this: de we make this assumption by assuming that time is not a material substance reducible to components, i.e. is not a possible object of pure scientific inquiry, but in need of purely logico/metaphysical analysis (hence the philosophy) OR do we make such assumption because, whatever the nature of time might be and whether or not we can give a naturalistic explanation of it, it STILL has repercussions for our own existential experience (hence the philosophy)? 2) (and connected to 1) how far does a physicist go in including philosophical speculations about Time in a physics book about time? And what kind of philosophy? Here the divide between the ‘two philosophies’ is large: if the analytic school, specifically in philosophy of science, has dealt at length with the problem of time, of flow of time, duration, eternity etc (in a characteristic logical way), continental philosophers have been equally fascinated with the topic–including Kant, Husserl, Heidegger, Derrida and Levinas–, if in a characteristically ‘existential’ way. 
Assuming that we can make this distinction (I am not completely sure we can) between 1) a purely physical/reductionist account of time, 2) a logico/metaphysical account of time and 3) an existential account of time, what motivates the inclusion of one approach in a book based on another one? Is there anything to gain in fusing the different ones? If you ask me, yes there is. A somewhat paltry example: were Einstein’s intuitions about the observer-relativity of time physical, logical or existential? Or all of them? My point being: doesn’t this example of a ‘frontier question’ such as the nature of time give us a clear view of the necessity of a multidisciplinary approach to problems concerning nature? Doesn’t the collaboration of physics and philosophy help in this inquiry? What we seem to forget is that ANY scientific problem was once a ‘frontier question’, and that often some degree of philosophical speculation helped towards a clear picture of that problem. • Aurelius I think formal–in particular Bayesian–epistemology and confirmation theory could be of use to physicists, or at least of interest to them. Bayesian philosophers often try to give, say, a formal characterization and measure of how E confirms H, and I think they’ve at least made some progress on Hume’s (old) problem of induction. Physicists might also be interested in Goodman’s New Riddle of Induction, which, imho, is way cooler than Hume’s. For those interested in the latter, check out the section on the Grue paradox: • Chris W. From Cunctator: I don’t want to start an extended debate here, but the implied identification of “material substance reducible to components” with “possible object of pure scientific inquiry” strikes me as clearly untenable. It is especially evident in quantum field theory, not to mention general relativity, that older notions of the “material” have been largely transcended in 20th century physics. Even the alleged reliance on reductionism obscures a much more nuanced reality. Both physicists and engineers are of necessity engaged in synthesis as well as analysis, and in the discovery and understanding of novelty that arises in complex “composed” systems, whether as actually constructed artifacts or as theoretical constructs. This novelty may often be cause for dismay, but complexity isn’t obliged to stay within the bounds intended by its supposed designers. (This has been a perennial preoccupation of thoughtful software designers and developers, as well as academic computer scientists.) In this connection, I strongly recommend that everyone read this wonderful essay by particle physicist Chris Quigg, “Nature’s Greatest Puzzles” (arxiv:hep-ph/0502070). [PS: Regarding the content of his book, Sean Carroll is certainly an example of a physicist who is both interested and well-informed in the literature and longstanding concerns of philosophy.] • Low Math, Meekly Interacting I don’t think anyone should be forbidden the methods that help them, but Feyerabend would apparently argue that a firm grounding in philosophy is essential. This is the most important, and most controversial, point in question. Some people find coffee and cigarettes very helpful, but that doesn’t mean they need such augmentation to function, or even that it’s very good for them overall. I don’t operate in the rarefied world of particle and cosmological theory, so obviously the problems I and my peers routinely encounter are very different. Nor have I personally known an active particle theorist (he left academia for a govt.
administrator job), only active solid-staters. At any rate, none of them know any more philosophy than I do, and I’d find it difficult to assert their ignorance has had any discernible impact on their ability to be successful and productive. But then again, all the scientists I know are operating on a firm theoretical foundation, and concern themselves with generating and interpreting experimental data, which is always in ready and ample supply. Not so for the theorist attempting to unify the forces of nature, or elucidate the universe’s origins. Perhaps these days, for that, you really do need philosphy. But like coffee and cigarettes, will the crutch ultimately kill you? That I do wonder about a great deal when I see yet another iteration of this debate. • Cunctator I agree completely, my generalization was only a rather blunt one in order to define a ‘criterion of difference’ between science and philosophy. On the other hand if things in 20th century science became really more nuanced than reductionist classical mechanics (as indeed they are) it is only a reason more to propose a collaboration between science and philosophy. Thanks for the article, I’ll certainly read it. • Craig Callender I’m a philosopher of science especially interested in physics, and I think this is a very interesting question. Obviously I’d hope that the answer is Yes, but I must say that the evidence is mixed. The most I’d argue for is that it certainly can help in isolated instances. The training philosophers receive in logic, but also certain norms in the field (‘unflinchingly following the argument where it leads’ and such), certainly can clear up confusion and help isolate otherwise hidden assumptions. The historical fact F refers to seems right. Reading Gerald Holton’s “Do Scientists Need a Philosophy?”, he points out that Einstein, Bohr, Planck, Heisenberg, Minkowski, Boltzmann, and so on had a fairly common philosophical upbringing and later continued interest in reading Plato, Hume, Poincaré, Mach, Duhem, Russell, etc. They also read the philosophical physicists such as Ampere, Helmholtz, Hertz, Eddington, Jeans, and more. By contrast, Holton points out that Sheldon Glashow was asked what he and his cohort read outside of science and he named sci fi, Velikovski, and L. Ron Hubbard! Of course, back then, the philosophers were more connected to the science: Carnap, Neurath, Frank, Bridgman, Reichenbach (one of his supervisors was Einstein). Yet it’s not clear that knowing or following the precepts of the logical positivists helped science. Cases that seem clearer where philosophy has helped might be (it’s been argued) that Einstein’s reading of Hume and Poincaré opened the door to questioning absolute simultaneity, the influence of Bacon’s philosophy on early modern science, and (M. Friedman argues) philosopher’s discussions of infinitesimals on Newton. Physicists in Feynman and Glashow’s generations turned away from philosophy, but one thing that hasn’t been said is that philosophy turned away from physics too in Glashow’s formative years. The ‘linguistic turn’ in philosophy would look pretty stale and barren to an outsider (and many insiders); that kind of philosophy would hardly be a likely source of inspiration for physics. Fortunately, philosophy is now recovering from the linguistic turn and many of us are learning physics and interacting with physicists. 
Exchanges with physicists on the measurement problem, the problem of time in quantum gravity, the direction of time, and the meaning of gauge freedom, have all recently been very productive. This has resulted in many joint physics-philosophy books, conferences, comments, visiting scholars, and so on. And I think/hope that many of the physicists involved would think these exchanges have been worthwhile. Just don’t read Feyerabend… • Paul Stankus To all, and particularly Kevin at 57. above, One of the points made in Lee Smolin’s excellent book (it makes a great gift, btw, for scientists and non-scientists alike) is that (i) when the history of ideas are taught in physics classes, the “true” thread is highlighted while all the “blind alleys” are ignored and airbrushed out; and (ii) this is regrettable, because day-to-day, working physicists would be able to do a better job if they did have a knowledge of alternative ideas in history. I see the suggestion of Kevin at 57. as kind of a larger-scale version of this idea, that physicists would be well-advised to understand/appreciate empiricism and the scientific method as part of a broader pageant of intellectual history. Phrased this way it sounds appealing; but honestly I’m not yet convinced. As a analogy, I have reliable access to clean drinking water as a result of the work, some of it brilliant, by many people over several centuries. But while it might be fair for me to offer a silent prayer of thanks to John Snow and Louis Pasteur every time I turn the tap to get a drink, it works just the same if I don’t. Similarly, I can use the philosophical stance of empiricism that I absorbed as a student (just as Kevin says) to accomplish science, even without knowing the intellectual history that led up to it. Would I be a better scientist if I knew more history and philosophy? Maybe so; but one important clue is that the people who trained me, and the people who pay me, to do science didn’t/don’t seem to think so. Perhaps, as several people have mentioned above, the more interesting question is the reverse: not whether scientists can do better by knowing philosophy, but rather why scientists who are ignorant of philosophy and history can do a reasonable job at all. Ideas? • Chris W. Paul, let me pick up your analogy to reliance on reliable access* to clean drinking water. Suppose certain people in positions of influence embarked on an effort—motivated by malice, ideology, incompetence, or cynical self-interest—to dismantle or neglect the infrastructure and regulatory systems that ensure reliable access to clean drinking water, and used every social, political, and economic means at their disposal to undermine even the expectation that we should have it. Wouldn’t the knowledge of how we came to have it in the first place become more relevant? And wouldn’t the complacent acceptance that we have it, without much understanding of what it took to achieve that, prove useful in undermining that expectation? One thing to keep in mind is that the founders of empiricism were of necessity philosophers, and conscious of the fact, because they were engaged in formulating and promulgating a philosophy, and doing so often in the face of concerted opposition by people who were every bit as smart as they were. In my opinion, the greatest scandal of philosophy is that, while all around us the world of nature perishes—and not the world of nature alone—philosophers continue to talk, sometimes cleverly sometimes not, about the question of whether the world exists…. 
We all have our philosophies, whether or not we are aware of this fact, and our philosophies are not worth very much. But the impact of our philosophies upon our actions and our lives is often devastating. This makes it necessary to try to improve our philosophies [and science] by criticism. This is the only apology for the continued existence of philosophy which I am able to offer.    — Karl R. Popper (* About 16 months ago I spent a few nights in a very nice hotel in a large Asian city. Other members of the group I was with, who had spent much time in that city, advised me to avoid even so much as moistening or rinsing my toothbrush with the hotel’s tap water, and to use bottled water instead. I took their word that I could trust the latter.) • Arjen Dijksman “Only when they must choose between competing theories do scientists behave like philosophers.” ~ Thomas Kuhn. When we have to choose for example between competing interpretations of quantum mechanics that give the same predictions, it is quite useful to put forward philosophical arguments. • Neal J. King Physics, as well as other aspects of natural science, is thinking about nature. Philosophy is thinking about thinking. Physicists need to be able to do enough thinking about their thinking to recognize when their thinking about nature is not working properly. This is particularly important when the foundations on which they build are being shaken, as during the development (and pre-development) of quantum theory, and the house has to be built “from the top down“. Analogously, a bicyclist on a long trip needs to be able to tell when he has a flat tire, and how to fix it; maybe he needs to know a little bit about how glue responds to temperature. He does not need to know details about the manufacturing of rubber tires. Feynman, Glashow, et al. have lived and worked during a period that would be described, in Kuhn’s framework, as “normal science”: We have been able to depend on quantum mechanics and relativity the whole time. There have been occasional proposals to turn everything over, but they haven’t really been needed. By contrast, when quantum theory was being developed, everybody knew that big changes were badly needed, but it wasn’t clear in which direction. Physicists grasped onto what philosophical guidance they could get for hints as to how to proceed. For example, one idea that several of the leading lights seized upon was to “drop concepts that don’t actually appear in the phenomena”, like the concept of a trajectory. This is such a vague idea that it’s hard to see how it could inspire anything, but it got Heisenberg to thinking about the Fourier components of the dynamical variables p and q, instead of thinking about momentum and position directly; and this led him to matrix mechanics. I think this explains why contemporary physicists have not been that interested in philosophy – excepting those who are specifically interested in understanding QM more deeply, rather than in using it as the basis upon which to explain more phenomena. QM seems to be working fine, so there’s no need to pull out the patch kit. • Chris Philosophy is science. Empirical study and reason are the modes of science that began with the Lovers of Knowledge who were the first philosophers, and thus, the question, “would they be better physicists with philosophy,” is incoherant. They cannot be physicists without being philosophers. It is a necessary condition. 
It just so happens that the modern academic conception of philosophy tends to deal with the history of thought, or with ethics — like, biomedical morality — and thus draws closer to social sciences in practice due to the emphasis on literature, but the very act of formulating and testing a hypothesis remains a philosophical endeavor by definition. It is a shame, and a sham, that philosophy has been pigeonholed and removed from modern scientific practices which are direct derivations of the first science. And it seems that in this process, scientists have lost their connection to the tradition of rational inquiry which began with philosophy, and have degraded science from an exploration where the journey is the value, to a results based practice, where the ability to think is second to the ability to produce. Does that mean that you will be a better scientist if you read Descartes? Not directly because the specific quanta contained in the work of a long dead Frenchmen will not likely be relevant to your modern inquiry, but the modes of thought and the inquiry into what can be known, is at the core of all the sciences, and if you think like Descartes did — again, not in the specific but in the general sense — you will be better. It is no surprise that the best philosophers — Descartes work in optics serves this point well — were also darn good scientists. Nothing disheartens me more to hear someone studying science say something like, “I don’t like philosophy.” It shows that in their studies, they are not learning to think but learning to regurgitate. Certainly they may be able to conduct the experiments which will provide them a career, but because they do not examine the reasons or the core of their pursuit — the motivation, i.e. the unanswerable — many become little more than uncreative automatons. I think that the best scientists — “wait, what if it was a double helix?” :) — are always creative. • Claire C Smith In very different types of physics, an example, electromagnetism/engineering and then in contrast, cosmology, both appear to benefit from good thinking, regardless of what subject set they are from, but the first here, is perhaps more hands on, less theoretical. The thinking used in philosophy may apply more to cosmology but the use of applied logic, which is essentially a field of maths – formal logic, more to engineering. Both inductive and deductive thinking, be it whether the use of equations are used or not express these methods, seem to act as a way forward. These methods then, combined with theoretical physics thinking, critical thinking, all the way up to the thinking used in meta physics, seem all to be very beneficial to physics. A few pictures/diagrams wouldn’t go amiss. • lucy Sorry to be commenting on this a bit late, but this post really interested me – i’m a maths/physics student who’s found philosophy of physics to be consistently interesting and useful – not just in a vague ‘improving your thinking skills way’, but for specific questions in physics. For example: - Newtonian mechanics does not always have to be deterministic – this is surprisingly little known, see e.g. Norton’s dome example . Or see Earman’s Primer on Determinism for wider discussion on the compatibility of determinism with general relativity and quantum physics. - Interpretations of QM. 
Lots to choose from, but for example dealing with the problem of probabilities in the many-worlds interpretation, or the discussion of the assumptions and results of Bell’s theorem, and whether it only poses a problem for hidden variable theories. Chapter 8 of Huw Price’s book is very good at discussing these two. OK, these may not exactly be at the practical end of physics, but they still look like proper physics to me. • Peter Shor I have two things to say. First: was Feynman really anti-philosophy? There’s his famous quote about not asking “but how can it be like that?” But this quote was directed at students, for whom it was actually very good advice; he later disregarded his own advice. When I was at Caltech, around 1980, he gave a lecture about how maybe negative probabilities could solve the EPR paradox and get around Bell’s inequality. It obviously didn’t work, since he didn’t publish this result, but I think the fact that he was thinking about it means he wasn’t really anti-philosophy at heart. He may not have said anything positive about the philosophers of physics who were his contemporaries, but I’m not sure I can really fault him in this. Second: Does philosophy help? Maybe in some circumstances it hurts. Bohr was certainly well grounded in philosophy, and it (logical positivism in particular) seems to have played a role in his development of the Copenhagen Interpretation. When I think about the question of “why wasn’t quantum information theory discovered earlier,” I think some part of the answer has to do with the degradation of the Copenhagen Interpretation into what Mermin labeled the “shut up and calculate” interpretation. Of course, this may be because later generations of physicists didn’t have any background in philosophy (and later generations of philosophers didn’t have any background in physics). • Des Greene The rate of change of scientific theory has been so fast in the last century that it has perforce left the world of philosophers in ignorant darkness. They hear of the outline of theory but can find deeper understanding too difficult given its complexity and the limited time they can afford to study it. Both science and philosophy are losing out in this! Philosophy is still (largely) in a Newtonian world. Traditional logic may be a thing of the past. Quantum logic may be the more general paradigm. Scientists, for their part, are too busy keeping up with developments to consider the broader aspects of their theories. Clearly there is great need for a middle ground – maybe academic programs need altering. • Paul Stankus Feynman authored what must be one of the most sweeping philosophical syntheses ever seen (this is from one of the latter-day essay collections, and I’m paraphrasing very approximately): “Human history has two fundamental pivots. The first is the invention of writing, which allows you to learn someone else’s ideas without that person being physically present and alive; the second is the invention of science, which allows you to [reliably, systematically, objectively] sort out the valid ideas from the faulty ones.” One can quibble with the details, but it’s certainly a grand vision. The latter-day Feynman himself keeps the sentiment up from beyond the grave here: • Chris W. This anecdote probably has something to do with Feynman’s attitude towards professional (academic) philosophers and the humanities generally. Also see this Amazon reader review.
• Low Math, Meekly Interacting It’s not clear to me why having a fundamentally agnostic, one might call it “instrumentalist”, attitude towards quantum mechanics precludes the development of quantum information theory. It seems to me one could ignore ontological matters related to superpositions entirely and still regard its mathematical description as the most accurate representation of subatomic reality so far contrived. From that mathematical description other things could naturally follow, couldn’t they? Why must one ponder what it all “means” to make use of qubits and invent quantum computational algorithms? I could see how it might help in certain circumstances, but I don’t see the necessity. • AM19 So based on the comments the verdict seems to be if you are logically challenged or afraid to question the authority then you should probably learn a bit of philosophy, of course not just any philosophy especially not anything modern, preferentially things that Einstein or other physicists enjoyed. If you don’t have such problems then you might give it a try in your free time, who knows maybe you will benefit somehow. I think the problem here is that yes, learning philosophy might improve your thinking, but so can learning math, engineering, chemistry, and a lot of other stuff. Such answers do not take into account that our time for learning is limited. The question should be posed differently, you are a physicist and you have limited time to study, would it be a good idea to skip a semester of physics and learn philosophy instead? Will it make you a better physicist? Based on the comments above the answer to this question seems to be no. • Sam Meyerson It’s hard to see how some philosophical training would be harmful to us as scientists. Still on the whole I agree with the thoughtful remarks by Weinberg (in “Dreams of a Final Theory”), wherein he concludes that professional philosophy has “been of no help” in the development of modern physics. Pace Lee Smolin (#56), I suspect that the reason why the great physicists of yore, through the first third of the 20th century, were better educated in philosophy than we are today is simply because now there is so much more physics – and science in general – to learn. Given the choice of learning group theory, organic chemistry, or Kripke semantics, it’s pretty clear which is least likely to impact our research careers. Indeed, I would suggest that a physicist studying QCD phenomenology, a biologist working on bacterial evolution, or a chemist working on peptide synthesis will virtually never in their entire careers find it necessary to consult what philosophers have to say about epistemology, modal logic, or even philosophy of science. When we do import ideas from outside our chosen field, it is usually from other fields of science or mathematics. One can easily find citations to the biology, chemistry, and mathematics literature in physics articles, but very rarely to anything in philosophy journals. In recent years there has been (in my opinion) a positive trend whereby philosophers are first receiving high level (e.g. PhD) training in science before moving into philosophy, or independently make a serious effort to learn the relevant science. Professor Callender (#64) is an example of an academic philosopher who is admirably well-versed in physics and mathematics. (Another who comes to mind is John Earman.) 
Such philosophers are in a good position to seriously engage with the scientific literature, and I am hopeful that there will be increasing opportunities for exchange between our communities. We scientists already reason “philosophically”, and, we like to think, abductively (i.e. inference to the best explanation). We even employ modal logic, reflecting on necessity and possibility, and have been doing so since before Kripke. The formalistic approach to logic employed by many modern philosophers may be stimulating but I suspect it is rather barren in terms of its potential for physics. And just look at the sort of nonsense that passes for rigorous philosophy of religion today. (To their credit, many philosophers view philosophy of religion as an unwanted stepchild.) Ultimately, I would endorse the view Einstein articulated shortly after he broke with the verificationists, that scientific theories are “free creations of the human spirit”. I’m an instrumentalist at heart — I believe that the test for any theory is how well it explains and predicts observations. If a given theory is astonishingly predictive but metaphysically troubling, I would say that our metaphysics is in need of a tune-up. Our prephilosophical notions are, very plausibly, a partial product of our evolutionary path. Is there, to this very day, a philosophically tidy and uncontested interpretation of quantum mechanics? Getting back to Einstein, I agree with him that the creative act — the “spark of genius” — defies any philosophical description or categorization. It seems quite possible that, for a variety of reasons, a crucial scientific advance might result from a blunder, or from not following the philosopher’s rules of inference. Our philosophizing about science may help us better contextualize our work, but it rarely if ever is responsible for essential *scientific* insights. • Sam Meyerson • Paul This may be regrettable in some circumstances but usually it is not; most don’t want to spend time learning failed theories, there are too many working ones to learn. However I believe today we have tools to get around this problem – the internet. I hope that eventually we can develop a database of free scientific knowledge maintained by scientists themselves, a tree of human knowledge. Such a database should contain not only theories considered current but also other possible alternatives and reasons why they were discarded. All theories should be ranked by their plausibility but still every one should be accessible and documented, preferably with links to publications. This would allow people to see what is wrong with each approach and save them from reinventing the square wheel. This would also allow easy resurrection of abandoned theories when new data changes the picture. Now if such a database were also accompanied by (properly organized) forums it would make a great place for exchange and discussion of scientific ideas, greatly facilitating collaboration between scientists. I think such a development is inevitable eventually but the sooner the better. • Enrique The problem with the lack of a philosophical culture (or personality) in the post-war physicists is that one’s always adopting a philosophical position regardless of our phi. culture or consciousness about it. This then translates as: “Scientists from the post-war era are really following philosophical positions from someone else, maybe unconsciously, and produce their work inside these philosophies that remain unquestioned for the time being”.
I think physics is a product of the thought just as philosophy and with many obvious and not so obvious intersections. So I believe the quotation is pertinent because it denounces not a lack of studies but a lack of critical conscience about the philosophies implicit in the work of the physicist and, as a consecuence, a descent in the quality of the physics produced in the areas in which you need to change your paradigm to get solutions to long-time unsolved problems like the marriage between QFT and GR. • Pingback: Moonlit Minds « Moonlit Minds • Dr. Who I once had a subordinate tell me that I couldn’t see the forest for the trees. It was humbling, foremost, because I respected this individual more than all others under my direction (hello GN). It’s easy to get so absorbed in work, that you lose track of the bigger picture. Philosophy is essential to some, and not at all to others. Does it have a place in physics. Absolutely, for some, and absolutely not, for others. However, expanding one’s horizons will always be beneficial, not just because it will make you more interesting on a personal level. But, because personal levels always spill over into work. Does this mean you should take a class in Philosophy? Absolutely not! If you’re interested, there are plenty of books to self-study. Who will be the next Einstein or Bohr? It won’t be someone who’s so focused that he/she can’t see the forest for the trees. • uncle sam Better use of philosophy might at least keep scientists from indulging in fallacious explanatory/pseudoexplanatory schemes like the idea that decoherence can resolve the collapse problem in quantum mechanics. Decoherence is a false path IMHO to understanding why our world isn’t found to be composed of superpositions. IOW, decoherence can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion at Tyrannogenius (Dish on MWH and decoherence. I think that the deco-con is a circular argument and has other flaws. It indulges several fallacies in the form it is often touted. I accept that decoherence can affect the patterns or information status etc. of hits and the interaction of waves. It has a role. And yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway, and that entanglement is part of the issue and I don’t deal with that here. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals. One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector. Well, that argument is fallacious for many reasons. 
First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!) And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot. The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! Without the supervention of a special collapse process, the DM has to be just a tabulation of the chances of having various amplitudes, not of the “probabilities” that only collapse can create IMHO. There wouldn’t be any “hits” to even be trying to “explain.” Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation. Yeah, how can they “collapse”; well who knows, and cheating isn’t the right way to deal with it. Better an honest mystery than a dishonest “solution.” • Glenn Borchardt This is all well and good, but we must remember that it is impossible to teach someone anything that his job requires him not to know. If physicists and cosmologists really understood the philosophy behind quantum mechanics, relativity, and the Big Bang Theory, they would have to look elsewhere for employment. • Glenn Borchardt My analysis was based on the fact that the strange goings on in modern physics are solidly based on the philosophy of idealism, which is inherent in the works of all the philosophers cited in the discussion. There was hardly a hint that there might be a problem with that approach. In particular, there was no discussion of how and when to drop the ideality and replace it with materialism. 
Previously, I have been reluctant to criticize idealism because it definitely has its place in science. I use mathematical idealism and ideal models in my professional work all the time. These idealizations, however, should be slaves to science, not the other way around as in modern physics. For instance, we can invent more than three dimensions, but that does not give existence to more than x, y, z dimensions. We need to be able to distinguish clearly between the real and the ideal. The discussion so far has lacked a recognition of the importance of the philosophical struggle that has taken place in science in relation to the one in the greater society. In “The Ten Assumptions of Science” and “The Scientific Worldview” I framed that struggle, not as a battle between materialism and idealism, but as the opposition between determinism and indeterminism. I did this to establish a modern determinism (univironmental determinism) as the philosophical goal for scientists as well as for those interested in the scientific worldview. We can discard indeterminism altogether, but we can never discard idealism. We just need to put it in its proper place. • Sam Meyerson As a rule I don’t read books if the author feels compelled to add his degree after his name on the cover. • Eric Perlmutter In defense of the late Ernst Mach: his work on the principle of inertia and related topics remains at the center of contemporary discussions, not only on the philosophy of general relativity, but on its fundamental interpretation as it relates to the physical structure of spacetime. This latter issue must be embedded in our theory of quantum gravity, if not taken as a guiding principle — the loop quantum sector has certainly taken these issues seriously — so perhaps Mach comes out even. In general, I find physics to be the closest science to philosophy; I think we sell ourselves short to say that the one cannot inform the other. • Pingback: Does Philosophy Make You a Better Scientist? « Perpetual Optimism • Pingback: Speculative Science and Speculative Philosophy « Hyper tiling
Rigid rotor

The rigid rotor is a mechanical model of rotating systems. An arbitrary rigid rotor is a 3-dimensional rigid object, such as a top. To orient such an object in space requires three angles, known as Euler angles. A special rigid rotor is the linear rotor requiring only two angles to describe, for example of a diatomic molecule. More general molecules are 3-dimensional, such as water (asymmetric rotor), ammonia (symmetric rotor), or methane (spherical rotor).

Linear rotor

The linear rigid rotor model consists of two point masses located at fixed distances from their center of mass. The fixed distance between the two masses and the values of the masses are the only characteristics of the rigid model. However, for many actual diatomics this model is too restrictive since distances are usually not completely fixed. Corrections on the rigid model can be made to compensate for small variations in the distance. Even in such a case the rigid rotor model is a useful point of departure (zeroth-order model).

Classical linear rigid rotor

The classical linear rotor consists of two point masses m1 and m2 (with reduced mass μ = m1m2/(m1 + m2)) a distance R apart. The rotor is rigid if R is independent of time. The kinematics of a linear rigid rotor is usually described by means of spherical polar coordinates, which form a coordinate system of R3. In the physics convention the coordinates are the co-latitude (zenith) angle θ, the longitudinal (azimuth) angle φ and the distance R. The angles specify the orientation of the rotor in space. The kinetic energy T of the linear rigid rotor is given by

\[ T = \frac{1}{2}\mu R^2\left(\dot\theta^2 + \sin^2\theta\,\dot\varphi^2\right) = \frac{1}{2}\mu\left(h_\theta^2\,\dot\theta^2 + h_\varphi^2\,\dot\varphi^2\right), \]

where h_θ = R and h_φ = R sin θ are scale (or Lamé) factors. Scale factors are of importance for quantum mechanical applications since they enter the Laplacian expressed in curvilinear coordinates. In the case at hand (constant R)

\[ \nabla^2 = \frac{1}{R^2\sin\theta}\,\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial}{\partial\theta}\right) + \frac{1}{R^2\sin^2\theta}\,\frac{\partial^2}{\partial\varphi^2}. \]

The classical Hamiltonian function of the linear rigid rotor is

\[ H = \frac{1}{2\mu R^2}\left(p_\theta^2 + \frac{p_\varphi^2}{\sin^2\theta}\right). \]

Quantum mechanical linear rigid rotor

The linear rigid rotor model can be used in quantum mechanics to predict the rotational energy of a diatomic molecule. The rotational energy depends on the moment of inertia for the system, I. In the center of mass reference frame, the moment of inertia is equal to:

\[ I = \mu R^2, \]

where μ is the reduced mass of the molecule and R is the distance between the two atoms. According to quantum mechanics, the energy levels of a system can be determined by solving the Schrödinger equation:

\[ \hat H\,\psi = E\,\psi, \]

where ψ is the wave function and Ĥ is the energy (Hamiltonian) operator. For the rigid rotor in a field-free space, the energy operator corresponds to the kinetic energy[1] of the system:

\[ \hat H = -\frac{\hbar^2}{2\mu}\,\nabla^2, \]

where ħ is the reduced Planck constant and ∇² is the Laplacian. The Laplacian is given above in terms of spherical polar coordinates. The energy operator written in terms of these coordinates is:

\[ \hat H = -\frac{\hbar^2}{2I}\left[\frac{1}{\sin\theta}\,\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\,\frac{\partial^2}{\partial\varphi^2}\right]. \]

This operator appears also in the Schrödinger equation of the hydrogen atom after the radial part is separated off. The eigenvalue equation becomes

\[ \hat H\,Y_\ell^m(\theta,\varphi) = \frac{\hbar^2}{2I}\,\ell(\ell+1)\,Y_\ell^m(\theta,\varphi). \]

The symbol Y_ℓ^m(θ, φ) represents a set of functions known as the spherical harmonics. Note that the energy does not depend on m. The energy E_ℓ = (ħ²/2I) ℓ(ℓ+1) is (2ℓ+1)-fold degenerate: the functions with fixed ℓ and m = −ℓ, −ℓ+1, …, ℓ have the same energy. Introducing the rotational constant B, we write,

\[ E_\ell = B\,\ell(\ell+1) \qquad\text{with}\qquad B \equiv \frac{\hbar^2}{2I}. \]

In the units of reciprocal length the rotational constant is,

\[ \bar B \equiv \frac{B}{hc} = \frac{h}{8\pi^2 c I}, \]

with c the speed of light. If cgs units are used for h, c, and I, B̄ is expressed in wave numbers, cm−1, a unit that is often used for rotational-vibrational spectroscopy. The rotational constant B̄(R) depends on the distance R.
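Since the level formula above involves only a handful of constants, a short numerical illustration may be useful. The following Python sketch computes the reduced mass, moment of inertia, rotational constant and first few rotational levels of a rigid diatomic rotor; the carbon-monoxide masses and bond length are approximate literature values assumed here purely for illustration, not data taken from this article.

```python
# Rotational constant and energy levels of a rigid diatomic rotor.
# Illustrative sketch; the CO bond length and atomic masses below are
# approximate values used only as an example.
import math

h    = 6.62607015e-34    # Planck constant, J s
hbar = h / (2 * math.pi)
c    = 2.99792458e10     # speed of light in cm/s, so B comes out in cm^-1
u    = 1.66053907e-27    # atomic mass unit, kg

m1, m2 = 12.000 * u, 15.995 * u     # 12C and 16O (approximate)
R = 1.128e-10                       # C-O bond length in m (approximate)

mu = m1 * m2 / (m1 + m2)            # reduced mass
I  = mu * R**2                      # moment of inertia
B_cm = h / (8 * math.pi**2 * c * I) # rotational constant in wave numbers

print(f"B = {B_cm:.3f} cm^-1")
for l in range(5):                  # E_l = B l(l+1), (2l+1)-fold degenerate
    print(l, f"E = {B_cm * l * (l + 1):.3f} cm^-1", f"degeneracy = {2 * l + 1}")
```

With these inputs the rotational constant comes out near 1.9 cm−1, the right order of magnitude for a light diatomic molecule.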
Often one writes B_e = B(R_e), where R_e is the equilibrium value of R (the value for which the interaction energy of the atoms in the rotor has a minimum). A typical rotational spectrum consists of a series of peaks that correspond to transitions between levels with different values of the angular momentum quantum number (ℓ). Consequently, rotational peaks appear at energies corresponding to an integer multiple of 2B̄.

Selection rules

Rotational transitions of a molecule occur when the molecule absorbs a photon [a particle of a quantized electromagnetic (em) field]. Depending on the energy of the photon (i.e., the wavelength of the em field) this transition may be seen as a sideband of a vibrational and/or electronic transition. Pure rotational transitions, in which the vibronic (= vibrational plus electronic) wave function does not change, occur in the microwave region of the electromagnetic spectrum. Typically, rotational transitions can only be observed when the angular momentum quantum number changes by 1 (Δℓ = ±1). This selection rule arises from a first-order perturbation theory approximation of the time-dependent Schrödinger equation. According to this treatment, rotational transitions can only be observed when one or more components of the dipole operator have a non-vanishing transition moment. If z is the direction of the electric field component of the incoming electromagnetic wave, the transition moment is,

\[ \langle \psi_2 \,|\, \mu_z \,|\, \psi_1 \rangle = \int \psi_2^{\,*}\,\mu_z\,\psi_1\, d\tau. \]

A transition occurs if this integral is non-zero. By separating the rotational part of the molecular wavefunction from the vibronic part, one can show that this means that the molecule must have a permanent dipole moment. After integration over the vibronic coordinates the following rotational part of the transition moment remains,

\[ \mu_z(\ell',m';\,\ell,m) = \mu \int_0^{2\pi}\! d\varphi \int_0^{\pi} Y_{\ell'}^{m'}(\theta,\varphi)^{*}\,\cos\theta\; Y_{\ell}^{m}(\theta,\varphi)\,\sin\theta\, d\theta. \]

Here μ cos θ is the z component of the permanent dipole moment. The moment μ is the vibronically averaged component of the dipole operator. Only the component of the permanent dipole along the axis of a heteronuclear molecule is non-vanishing. By the use of the orthogonality of the spherical harmonics Y_ℓ^m(θ, φ) it is possible to determine which values of ℓ, m, ℓ′, and m′ will result in nonzero values for the dipole transition moment integral. This constraint results in the observed selection rules for the rigid rotor:

\[ \Delta m = 0, \qquad \Delta\ell = \pm 1. \]

Non-rigid linear rotor

The rigid rotor is commonly used to describe the rotational energy of diatomic molecules but it is not a completely accurate description of such molecules. This is because molecular bonds (and therefore the interatomic distance R) are not completely fixed; the bond between the atoms stretches out as the molecule rotates faster (higher values of the rotational quantum number ℓ). This effect can be accounted for by introducing a correction factor known as the centrifugal distortion constant D̄ (bars on top of various quantities indicate that these quantities are expressed in cm−1):

\[ \bar E_\ell = \bar B\,\ell(\ell+1) - \bar D\,\ell^2(\ell+1)^2, \qquad \bar D = \frac{4\bar B^3}{\bar\omega^2}, \]

where ω̄ is the fundamental vibrational frequency of the bond (in cm−1). This frequency is related to the reduced mass and the force constant (bond strength) of the molecule according to

\[ \bar\omega = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}. \]

The non-rigid rotor is an acceptably accurate model for diatomic molecules but is still somewhat imperfect. This is because, although the model does account for bond stretching due to rotation, it ignores any bond stretching due to vibrational energy in the bond (anharmonicity in the potential).
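The selection rule Δℓ = ±1 combined with the level formulas above fixes the positions of the pure-rotational absorption lines: for a rigid rotor the line for ℓ → ℓ+1 sits at 2B̄(ℓ+1), so the lines are equally spaced by 2B̄, while centrifugal distortion lowers each line by 4D̄(ℓ+1)³. The sketch below illustrates this; the values of B̄ and ω̄ are round numbers of plausible magnitude assumed for illustration, not constants quoted in this article.

```python
# Line positions of a pure rotational (absorption) spectrum, with and without
# centrifugal distortion. Sketch only; B and omega are illustrative numbers.
B = 1.9        # rotational constant, cm^-1
omega = 2100   # fundamental vibrational wavenumber, cm^-1
D = 4 * B**3 / omega**2   # centrifugal distortion constant, cm^-1

def E_rigid(l):
    return B * l * (l + 1)

def E_nonrigid(l):
    return B * l * (l + 1) - D * l**2 * (l + 1)**2

# Allowed absorption lines obey the selection rule delta-l = +1, so each line
# sits at E(l+1) - E(l) = 2B(l+1) - 4D(l+1)^3.
for l in range(8):
    rigid = E_rigid(l + 1) - E_rigid(l)
    nonrigid = E_nonrigid(l + 1) - E_nonrigid(l)
    print(f"l={l}->{l+1}:  rigid {rigid:7.3f} cm^-1   non-rigid {nonrigid:7.3f} cm^-1")
```

The rigid lines march up in equal steps of 2B̄; the distortion term pulls the high-ℓ lines progressively below those positions.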
Arbitrarily shaped rigid rotor

An arbitrarily shaped rigid rotor is a rigid body of arbitrary shape with its center of mass fixed (or in uniform rectilinear motion) in field-free space R3, so that its energy consists only of rotational kinetic energy (and possibly constant translational energy that can be ignored). A rigid body can be (partially) characterized by the three eigenvalues of its moment of inertia tensor, which are real nonnegative values known as principal moments of inertia. In microwave spectroscopy—the spectroscopy based on rotational transitions—one usually classifies molecules (seen as rigid rotors) as follows:
• spherical rotors
• symmetric rotors
• oblate symmetric rotors
• prolate symmetric rotors
• asymmetric rotors
This classification depends on the relative magnitudes of the principal moments of inertia.

Coordinates of the rigid rotor

Different branches of physics and engineering use different coordinates for the description of the kinematics of a rigid rotor. In molecular physics Euler angles are used almost exclusively. In quantum mechanical applications it is advantageous to use Euler angles in a convention that is a simple extension of the physical convention of spherical polar coordinates.

The first step is the attachment of a right-handed orthonormal frame (3-dimensional system of orthogonal axes) to the rotor (a body-fixed frame). This frame can be attached arbitrarily to the body, but often one uses the principal axes frame—the normalized eigenvectors of the inertia tensor, which always can be chosen orthonormal, since the tensor is symmetric. When the rotor possesses a symmetry-axis, it usually coincides with one of the principal axes. It is convenient to choose as body-fixed z-axis the highest-order symmetry axis.

One starts by aligning the body-fixed frame with a space-fixed frame (laboratory axes), so that the body-fixed x, y, and z axes coincide with the space-fixed X, Y, and Z axis. Secondly, the body and its frame are rotated actively over a positive angle α around the z-axis (by the right-hand rule), which moves the y- to the y′-axis. Thirdly, one rotates the body and its frame over a positive angle β around the y′-axis. The z-axis of the body-fixed frame has after these two rotations the longitudinal angle α (commonly designated by φ) and the colatitude angle β (commonly designated by θ), both with respect to the space-fixed frame. If the rotor were cylindrical symmetric around its z-axis, like the linear rigid rotor, its orientation in space would be unambiguously specified at this point. If the body lacks cylinder (axial) symmetry, a last rotation around its z-axis (which has polar coordinates α and β) is necessary to specify its orientation completely. Traditionally the last rotation angle is called γ. The convention for Euler angles described here is known as the z-y′-z″ convention; it can be shown (in the same manner as in this article) that it is equivalent to the z-y-z convention in which the order of rotations is reversed. The total matrix of the three consecutive rotations is the product

\[ \mathbf{R}(\alpha,\beta,\gamma) = \mathbf{R}_z(\alpha)\,\mathbf{R}_y(\beta)\,\mathbf{R}_z(\gamma). \]

Let r(0) be the coordinate vector of an arbitrary point P in the body with respect to the body-fixed frame. The elements of r(0) are the 'body-fixed coordinates' of P. Initially r(0) is also the space-fixed coordinate vector of P.
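A minimal numerical sketch of the rotation convention just described may help (active rotations, composed as R(α, β, γ) = R_z(α) R_y(β) R_z(γ)); the angle values are arbitrary examples. It also anticipates the point made below: for a point initially on the space-fixed Z-axis the γ-rotation drops out and the result is the familiar spherical-polar direction.

```python
# Composition of the three Euler rotations R(alpha, beta, gamma) = Rz(alpha) Ry(beta) Rz(gamma).
# Sketch of the convention described above; angle values are arbitrary examples.
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def euler_rotation(alpha, beta, gamma):
    return Rz(alpha) @ Ry(beta) @ Rz(gamma)

alpha, beta, gamma = 0.3, 1.1, -0.7      # example angles in radians
R = euler_rotation(alpha, beta, gamma)

# A point initially on the space-fixed Z-axis ends up along
# (cos(alpha) sin(beta), sin(alpha) sin(beta), cos(beta)): the usual spherical
# polar direction with phi = alpha, theta = beta; gamma has no effect here.
r0 = np.array([0.0, 0.0, 1.0])
print(R @ r0)
print([np.cos(alpha) * np.sin(beta), np.sin(alpha) * np.sin(beta), np.cos(beta)])
```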
Upon rotation of the body, the body-fixed coordinates of P do not change, but the space-fixed coordinate vector of P becomes,

\[ \mathbf{r}(\alpha,\beta,\gamma) = \mathbf{R}(\alpha,\beta,\gamma)\,\mathbf{r}(0). \]

In particular, if P is initially on the space-fixed Z-axis, it has the space-fixed coordinates

\[ \mathbf{R}(\alpha,\beta,\gamma)\begin{pmatrix}0\\ 0\\ r\end{pmatrix} = \begin{pmatrix} r\cos\alpha\sin\beta\\ r\sin\alpha\sin\beta\\ r\cos\beta \end{pmatrix}, \]

which shows the correspondence with the spherical polar coordinates (in the physical convention). Knowledge of the Euler angles as function of time t and the initial coordinates r(0) determine the kinematics of the rigid rotor.

Classical kinetic energy

The following text forms a generalization of the well-known special case of the rotational energy of an object that rotates around one axis. It will be assumed from here on that the body-fixed frame is a principal axes frame; it diagonalizes the instantaneous inertia tensor I(t) (expressed with respect to the space-fixed frame), i.e.,

\[ \mathbf{R}(\alpha,\beta,\gamma)^{-1}\,\mathbf{I}(t)\,\mathbf{R}(\alpha,\beta,\gamma) = \begin{pmatrix} I_1 & 0 & 0\\ 0 & I_2 & 0\\ 0 & 0 & I_3 \end{pmatrix}, \]

where the Euler angles are time-dependent and in fact determine the time dependence of I(t) by the inverse of this equation. This notation implies that at t = 0 the Euler angles are zero, so that at t = 0 the body-fixed frame coincides with the space-fixed frame. The classical kinetic energy T of the rigid rotor can be expressed in different ways:
• as a function of angular velocity
• in Lagrangian form
• as a function of angular momentum
• in Hamiltonian form.
Since each of these forms has its use and can be found in textbooks we will present all of them.

Angular velocity form

As a function of angular velocity T reads,

\[ T = \frac{1}{2}\bigl(I_1\,\omega_x^2 + I_2\,\omega_y^2 + I_3\,\omega_z^2\bigr) \qquad\text{with}\qquad \begin{pmatrix}\omega_x\\ \omega_y\\ \omega_z\end{pmatrix} = \begin{pmatrix} -\sin\beta\cos\gamma\,\dot\alpha + \sin\gamma\,\dot\beta\\ \sin\beta\sin\gamma\,\dot\alpha + \cos\gamma\,\dot\beta\\ \cos\beta\,\dot\alpha + \dot\gamma \end{pmatrix}. \]

The vector ω = (ω_x, ω_y, ω_z) on the left hand side contains the components of the angular velocity of the rotor expressed with respect to the body-fixed frame. It satisfies equations of motion known as Euler's equations (with zero applied torque, since by assumption the rotor is in field-free space). It can be shown that ω is not the time derivative of any vector, in contrast to the usual definition of velocity.[2] The dots over the time-dependent Euler angles on the right hand side indicate time derivatives. Note that a different rotation matrix would result from a different choice of Euler angle convention used.

Lagrange form

Backsubstitution of the expression of ω into T gives the kinetic energy in Lagrange form (as a function of the time derivatives of the Euler angles). In matrix-vector notation,

\[ T = \frac{1}{2}\begin{pmatrix}\dot\alpha & \dot\beta & \dot\gamma\end{pmatrix}\,\mathbf{g}\,\begin{pmatrix}\dot\alpha\\ \dot\beta\\ \dot\gamma\end{pmatrix}, \]

where g is the metric tensor expressed in Euler angles—a non-orthogonal system of curvilinear coordinates—

\[ \mathbf{g} = \begin{pmatrix} I_1\sin^2\beta\cos^2\gamma + I_2\sin^2\beta\sin^2\gamma + I_3\cos^2\beta & (I_2-I_1)\sin\beta\sin\gamma\cos\gamma & I_3\cos\beta\\ (I_2-I_1)\sin\beta\sin\gamma\cos\gamma & I_1\sin^2\gamma + I_2\cos^2\gamma & 0\\ I_3\cos\beta & 0 & I_3 \end{pmatrix}. \]

Angular momentum form

Often the kinetic energy is written as a function of the angular momentum L of the rigid rotor. With respect to the body-fixed frame it has the components L = (L_1, L_2, L_3), and can be shown to be related to the angular velocity,

\[ L_i = I_i\,\omega_i \qquad (i = 1, 2, 3). \]

This angular momentum is a conserved (time-independent) quantity if viewed from a stationary space-fixed frame. Since the body-fixed frame moves (depends on time) the components L_i are not time independent. If we were to represent L with respect to the stationary space-fixed frame, we would find time independent expressions for its components. The kinetic energy is expressed in terms of the angular momentum by

\[ T = \frac{L_1^2}{2I_1} + \frac{L_2^2}{2I_2} + \frac{L_3^2}{2I_3}. \]

Hamilton form

The Hamilton form of the kinetic energy is written in terms of generalized momenta

\[ \begin{pmatrix} p_\alpha\\ p_\beta\\ p_\gamma \end{pmatrix} \equiv \begin{pmatrix} \partial T/\partial\dot\alpha\\ \partial T/\partial\dot\beta\\ \partial T/\partial\dot\gamma \end{pmatrix} = \mathbf{g}\,\begin{pmatrix}\dot\alpha\\ \dot\beta\\ \dot\gamma\end{pmatrix}, \]

where it is used that the g is symmetric. In Hamilton form the kinetic energy is,

\[ T = \frac{1}{2}\begin{pmatrix} p_\alpha & p_\beta & p_\gamma \end{pmatrix}\,\mathbf{g}^{-1}\,\begin{pmatrix} p_\alpha\\ p_\beta\\ p_\gamma \end{pmatrix}, \]

with the inverse metric tensor g−1. This inverse tensor is needed to obtain the Laplace-Beltrami operator, which (multiplied by −ħ²/2) gives the quantum mechanical energy operator of the rigid rotor.
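As a quick consistency check of the classical expressions above, the kinetic energy computed from the body-fixed angular velocity components should coincide with the Lagrange form built from the metric tensor g. The following sketch does this numerically for arbitrary example values of the moments of inertia, the Euler angles and their time derivatives.

```python
# Numerical cross-check of the kinetic-energy forms given above: T from the
# body-fixed angular velocity equals T from the metric tensor g in Euler angles.
# Sketch with arbitrary example numbers.
import numpy as np

I1, I2, I3 = 1.0, 2.5, 4.0          # principal moments of inertia (example)
a, b, g_ = 0.4, 1.2, -0.9           # Euler angles alpha, beta, gamma
da, db, dg = 0.7, -0.2, 1.3         # their time derivatives

# Body-fixed angular velocity components.
wx = -np.sin(b) * np.cos(g_) * da + np.sin(g_) * db
wy =  np.sin(b) * np.sin(g_) * da + np.cos(g_) * db
wz =  np.cos(b) * da + dg
T_omega = 0.5 * (I1 * wx**2 + I2 * wy**2 + I3 * wz**2)

# Metric tensor in Euler angles and the Lagrange form of T.
g = np.array([
    [I1*np.sin(b)**2*np.cos(g_)**2 + I2*np.sin(b)**2*np.sin(g_)**2 + I3*np.cos(b)**2,
     (I2 - I1)*np.sin(b)*np.sin(g_)*np.cos(g_),
     I3*np.cos(b)],
    [(I2 - I1)*np.sin(b)*np.sin(g_)*np.cos(g_),
     I1*np.sin(g_)**2 + I2*np.cos(g_)**2,
     0.0],
    [I3*np.cos(b), 0.0, I3]])
qdot = np.array([da, db, dg])
T_lagrange = 0.5 * qdot @ g @ qdot

print(T_omega, T_lagrange)   # the two numbers agree
```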
The classical Hamiltonian given above can be rewritten to the following expression, which is needed in the phase integral arising in the classical statistical mechanics of rigid rotors,

\[ H = \frac{1}{2I_1}\left[\frac{\cos\gamma}{\sin\beta}\bigl(p_\alpha - \cos\beta\,p_\gamma\bigr) - \sin\gamma\,p_\beta\right]^2 + \frac{1}{2I_2}\left[\frac{\sin\gamma}{\sin\beta}\bigl(p_\alpha - \cos\beta\,p_\gamma\bigr) + \cos\gamma\,p_\beta\right]^2 + \frac{p_\gamma^2}{2I_3}. \]

Quantum mechanical rigid rotor

As usual quantization is performed by the replacement of the generalized momenta by operators that give first derivatives with respect to its canonically conjugate variables (positions). Thus,

\[ p_\alpha \;\longrightarrow\; -i\hbar\,\frac{\partial}{\partial\alpha}, \]

and similarly for p_β and p_γ. It is remarkable that this rule replaces the fairly complicated function p_α of all three Euler angles, time derivatives of Euler angles, and inertia moments (characterizing the rigid rotor) by a simple differential operator that does not depend on time or inertia moments and differentiates to one Euler angle only. The quantization rule is sufficient to obtain the operators that correspond with the classical angular momenta. There are two kinds: space-fixed and body-fixed angular momentum operators. Both are vector operators, i.e., both have three components that transform as vector components among themselves upon rotation of the space-fixed and the body-fixed frame, respectively. The explicit form of the rigid rotor angular momentum operators is given here (but beware, they must be multiplied with ħ). The body-fixed angular momentum operators are written as P_x, P_y, P_z. They satisfy anomalous commutation relations. The quantization rule is not sufficient to obtain the kinetic energy operator from the classical Hamiltonian. Since classically p_β commutes with cos β and sin β and the inverses of these functions, the position of these trigonometric functions in the classical Hamiltonian is arbitrary. After quantization the commutation does no longer hold and the order of operators and functions in the Hamiltonian (energy operator) becomes a point of concern. Podolsky[1] proposed in 1928 that the Laplace-Beltrami operator (times −ħ²/2) has the appropriate form for the quantum mechanical kinetic energy operator. This operator has the general form (summation convention: sum over repeated indices—in this case over the three Euler angles q¹ = α, q² = β, q³ = γ):

\[ \hat H = -\frac{\hbar^2}{2\,|g|^{1/2}}\;\frac{\partial}{\partial q^i}\;|g|^{1/2}\,g^{ij}\;\frac{\partial}{\partial q^j}, \]

where |g| is the determinant of the g-tensor:

\[ |g| = I_1\,I_2\,I_3\,\sin^2\beta. \]

Given the inverse of the metric tensor above, the explicit form of the kinetic energy operator in terms of Euler angles follows by simple substitution. (Note: The corresponding eigenvalue equation gives the Schrödinger equation for the rigid rotor in the form that it was solved for the first time by Kronig and Rabi[3] (for the special case of the symmetric rotor). This is one of the few cases where the Schrödinger equation can be solved analytically. All these cases were solved within a year of the formulation of the Schrödinger equation.)

Nowadays it is common to proceed as follows. It can be shown that Ĥ can be expressed in body-fixed angular momentum operators (in this proof one must carefully commute differential operators with trigonometric functions). The result has the same appearance as the classical formula expressed in body-fixed coordinates,

\[ \hat H = \frac{P_x^2}{2I_1} + \frac{P_y^2}{2I_2} + \frac{P_z^2}{2I_3}. \]

The action of the P_i on the Wigner D-matrix is simple. In particular

\[ P^2\,D^{j}_{m'm}(\alpha,\beta,\gamma)^{*} = \hbar^2\,j(j+1)\,D^{j}_{m'm}(\alpha,\beta,\gamma)^{*}, \]

so that the Schrödinger equation for the spherical rotor (I = I_1 = I_2 = I_3) is solved with the (2j+1)²-fold degenerate energy equal to ħ² j(j+1)/(2I). The symmetric top (= symmetric rotor) is characterized by I_1 = I_2. It is a prolate (cigar shaped) top if I_3 < I_1 = I_2. In the latter case we write the Hamiltonian as

\[ \hat H = \frac{1}{2}\left[\frac{P^2}{I_1} + P_z^2\left(\frac{1}{I_3} - \frac{1}{I_1}\right)\right], \]

and use that

\[ P_z^2\,D^{j}_{mk}(\alpha,\beta,\gamma)^{*} = \hbar^2\,k^2\,D^{j}_{mk}(\alpha,\beta,\gamma)^{*}, \]

so that

\[ \hat H\,D^{j}_{mk}(\alpha,\beta,\gamma)^{*} = E_{jk}\,D^{j}_{mk}(\alpha,\beta,\gamma)^{*} \qquad\text{with}\qquad E_{jk} = \frac{\hbar^2\,j(j+1)}{2I_1} + \hbar^2 k^2\left(\frac{1}{2I_3} - \frac{1}{2I_1}\right). \]

The eigenvalue E_{j0} is (2j+1)-fold degenerate, for all eigenfunctions with m = −j, −j+1, …, j have the same eigenvalue. The energies with |k| > 0 are 2(2j+1)-fold degenerate.
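In rotational-constant form the symmetric-top eigenvalues read E(j, k) = B j(j+1) + (A − B) k², with A = ħ²/(2I₃) and B = ħ²/(2I₁), so A > B for a prolate top. The short sketch below tabulates levels and degeneracies; the numerical constants are illustrative assumptions only, not values from this article.

```python
# Energy levels of a prolate symmetric top, E(j, k) = B j(j+1) + (A - B) k^2,
# with rotational constants A = hbar^2/(2 I3), B = hbar^2/(2 I1).
# Sketch with illustrative constants (in cm^-1).
A, B = 5.2, 0.85   # example rotational constants, A > B for a prolate top

def E(j, k):
    return B * j * (j + 1) + (A - B) * k**2

for j in range(4):
    for k in range(j + 1):
        deg = (2 * j + 1) if k == 0 else 2 * (2 * j + 1)
        print(f"j={j} |k|={k}  E={E(j, k):7.3f} cm^-1  degeneracy={deg}")
```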
This exact solution of the Schrödinger equation of the symmetric top was first found in 1927.[3] The asymmetric top problem (I_1 ≠ I_2 ≠ I_3) is not exactly soluble. In molecular quantum mechanics, the numerical solution of the rigid-rotor Schrödinger equation is discussed in Section 11.2 on pages 240-253 of an inexpensive textbook.[4]

Direct experimental observation of molecular rotations

For a long time, molecular rotations could not be directly observed experimentally. Only measurement techniques with atomic resolution made it possible to detect the rotation of a single molecule.[5][6] At low temperatures, the rotations of molecules (or part thereof) can be frozen. This could be directly visualized by scanning tunneling microscopy; i.e., at higher temperatures the stabilization could be explained by the rotational entropy.[6] The direct observation of rotational excitation at the single-molecule level was achieved recently using inelastic electron tunneling spectroscopy with the scanning tunneling microscope. The rotational excitation of molecular hydrogen and its isotopes was detected.[7][8]

See also

1. ^ a b Podolsky, B. (1928). "Quantum-Mechanically Correct Form of Hamiltonian Function for Conservative Systems". Phys. Rev. 32 (5): 812. Bibcode:1928PhRv...32..812P. doi:10.1103/PhysRev.32.812.
2. ^ Chapter 4.9 of Goldstein, H.; Poole, C. P.; Safko, J. L. (2001). Classical Mechanics (third ed.). San Francisco: Addison Wesley Publishing Company. ISBN 0-201-65702-3.
3. ^ a b R. de L. Kronig and I. I. Rabi (1927). "The Symmetrical Top in the Undulatory Mechanics". Phys. Rev. 29 (2): 262–269. Bibcode:1927PhRv...29..262K. doi:10.1103/PhysRev.29.262.
4. ^ Molecular Symmetry and Spectroscopy, 2nd ed. Philip R. Bunker and Per Jensen, NRC Research Press, Ottawa (1998) [1] ISBN 9780660196282
5. ^ J. K. Gimzewski; C. Joachim; R. R. Schlittler; V. Langlais; H. Tang; I. Johannsen (1998), "Rotation of a Single Molecule Within a Supramolecular Bearing", Science, vol. 281, no. 5376, pp. 531–533, Bibcode:1998Sci...281..531G, doi:10.1126/science.281.5376.531, PMID 9677189
6. ^ a b Thomas Waldmann; Jens Klein; Harry E. Hoster; R. Jürgen Behm (2012), "Stabilization of Large Adsorbates by Rotational Entropy: A Time-Resolved Variable-Temperature STM Study", ChemPhysChem, vol. 14, no. 1, pp. 162–169, doi:10.1002/cphc.201200531, PMID 23047526
7. ^ S. Li, A. Yu, F. Toledo, Z. Han, H. Wang, H. Y. He, R. Wu, and W. Ho, Phys. Rev. Lett. 111, 146102 (2013). http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.146102
8. ^ F. D. Natterer, F. Patthey, and H. Brune, Phys. Rev. Lett. 111, 175303 (2013). http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.175303

General references

• D. M. Dennison (1931). "The Infrared Spectra of Polyatomic Molecules Part I". Rev. Mod. Phys. 3 (2): 280–345. Bibcode:1931RvMP....3..280D. doi:10.1103/RevModPhys.3.280. (Especially Section 2: The Rotation of Polyatomic Molecules).
• Van Vleck, J. H. (1951). "The Coupling of Angular Momentum Vectors in Molecules". Rev. Mod. Phys. 23 (3): 213–227. Bibcode:1951RvMP...23..213V. doi:10.1103/RevModPhys.23.213.
• McQuarrie, Donald A (1983). Quantum Chemistry. Mill Valley, Calif.: University Science Books. ISBN 0-935702-13-X.
• Goldstein, H.; Poole, C. P.; Safko, J. L. (2001). Classical Mechanics (Third ed.). San Francisco: Addison Wesley Publishing Company. ISBN 0-201-65702-3. (Chapters 4 and 5)
• Arnold, V. I. (1989). Mathematical Methods of Classical Mechanics. Springer-Verlag. ISBN 0-387-96890-3. (Chapter 6).
• Kroto, H. W. (1992). Molecular Rotation Spectra. New York: Dover. • Gordy, W.; Cook, R. L. (1984). Microwave Molecular Spectra (Third ed.). New York: Wiley. ISBN 0-471-08681-9. • Papoušek, D.; Aliev, M. T. (1982). Molecular Vibrational-Rotational Spectra. Amsterdam: Elsevier. ISBN 0-444-99737-7.
Technology Quarterly | Mar 24th 2001 edition

Making materials atom by atom

No more trial-and-error alchemy: materials can now be created with any desired properties—one atom at a time

IN A modern-day hunt for the Philosopher's Stone, scientists are swapping the painstaking and messy processes of the laboratory bench for the neater, cleaner world of the virtual. Instead of experimenting with chemicals in a search for better materials, Gerbrand Ceder at the Massachusetts Institute of Technology is using the laws of physics to calculate new ones. Dr Ceder has rejected the arcane delights of rainbow-hued salts and Bunsen burners in favour of 20 computers purring away in parallel. Starting from the Schrödinger equation—a mathematical formula that predicts how systems of atoms behave—he uses the rules of quantum mechanics to compute the properties of a theoretical structure. Then he adds an atom here, removes one there, until he hits on a material with the properties manufacturers are demanding.

Normally, materials scientists use trial and error to carry out experiments in vitro (ie, in glassware), combining educated guesswork and chemical skill to cook up a new compound. With luck, they stumble upon something exciting, but most of the time the conditions vary sufficiently in each experiment to condemn them to a tedious and often fruitless search. By contrast, the joy of designing materials in virtuo (ie, with software) is that every variable can be precisely controlled at the click of a mouse. Hit the enter key, wait until the computer spews out its results and, in a matter of minutes, the researcher knows whether the new structure is a waste of effort or not.

The technique is not as simple as it sounds. The Schrödinger equation, which describes the interactions of electrons around a central atomic nucleus, demands that every electron affects every other. This means the calculations quickly become too complicated for even a powerful computer to handle once an arrangement involves more than a few atoms. To the rescue comes "density functional theory", a 30-year-old concept (for which a Nobel prize was awarded) that allows the electrons in many-atom systems to be treated independently. Only recently, however, has the method become practical, thanks to vastly more powerful computers. It is now possible to compute the quantum-mechanical properties of atomic systems that are in effect infinite.

Dr Ceder has already used the method to design a metal oxide which gives batteries a longer life. In a paper soon to be published in the Journal of Applied Physics, he describes how he and his student, Eric Wu, are employing the technique to help with the design of receivers for mobile telephones. Firms making third-generation mobile phones want their receivers to be highly frequency selective, so they can cope with their allotted frequency band without suffering from interference. To do this, they need materials that absorb microwaves in an extremely narrow range. Unfortunately, no one is really sure what causes materials to absorb microwaves over too wide a frequency range. It could be defects due to missing atoms, thermal vibrations of the crystal lattice, or even the boundaries between the small, imperfectly formed crystals that make up the material. By creating a perfect crystal on a computer and then testing it, Dr Ceder and Mr Wu have deduced that thermal vibrations are not the problem. That is good news: defects in the crystal, unlike ambient temperature, can be fixed.
Surprisingly, few companies whose bread and butter is materials science have yet picked up on the techniques. Ford Motor Company is an exception. Recently, the firm set up its own computational materials group. One of Ford's challenges is to improve the fuel economy of its vehicles while meeting emissions regulations. The problem is that metals, such as platinum, which are currently used in catalytic converters to remove pollutants from exhaust fumes, become ineffective when the engine's air/fuel mixture is tuned for fuel economy. By calculating the catalytic properties of materials on a computer, the group is learning what it is that makes other materials, such as copper zeolites, more effective under these conditions. Designing the right catalyst is a key to building a less thirsty car. A virtue from in virtuo, indeed.

This article appeared in the Technology Quarterly section of the print edition under the headline "Making materials atom by atom"
Java visualisations for quantum mechanics

Wave packet dispersion

This applet visualizes the behaviour of a free wave packet changing with time. The wave number can be set by the user. In contrast to the "classical" model, a quantum mechanical particle can "dissolve" with time. However, for macroscopic items this takes a very long time.

Probability current in the one-dimensional potential well

WStromdichte - One may observe solutions of the time-dependent Schrödinger equation for a particle in a potential well. By superimposing the first 10 eigenfunctions, one can observe the dynamics of the quantum particle. Many properties of this system can be calculated analytically (see exercise 8 in "Theoretische Physik II SS08") and it can be regarded as a simplified quantum-dot system.

Two-level system

This Java applet displays the behaviour of a two-level quantum system stimulated by different types of pulses. It describes the transition of an electron induced by the excitation pulse. The laser is a realisation of that system.

Electron-phonon interaction

An extension of the two-level quantum system: damped Rabi oscillations. In contrast to the basic version, this applet allows one to observe effects which occur, e.g., in semiconductors with indirect transitions.

Mathematica visualisations for quantum mechanics

The Schrödinger equation defines the shape and distribution of the electron orbitals. This Java WebStart application lets the user change the first three quantum numbers and provides a three-dimensional view of the orbitals, which can be scaled and rotated.

This Java application visualizes the spherical harmonic functions in 3D and in 2D slices. The spherical harmonic functions play an essential role in describing angular momentum in quantum mechanics. Furthermore, they describe the angular dependence of hydrogen atom orbitals.

Vary the height of a potential well or barrier and observe the scattering of a traversing Schrödinger wave. In contrast to the applet seen above, a finite height of the well is used.
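The "dissolving" of the free wave packet described in the first applet above is simply the quantum-mechanical spreading of a Gaussian wave packet. As a rough illustration (our addition, not part of the applets listed here; the numbers and names are only examples), the following Python sketch evaluates the textbook width formula sigma(t) = sigma0 * sqrt(1 + (hbar t / (2 m sigma0^2))^2) for an electron and for a macroscopic 1 g mass, which shows why the effect is invisible for macroscopic items.

import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant, in J*s

def packet_width(t, sigma0, mass):
    """Standard deviation of a free Gaussian wave packet after time t (seconds)."""
    return sigma0 * np.sqrt(1.0 + (hbar * t / (2.0 * mass * sigma0**2))**2)

# electron localized to 1 nanometre: spreads to tens of nanometres within a picosecond
print(packet_width(1e-12, sigma0=1e-9, mass=9.109e-31))   # ~ 5.8e-8 m

# 1 gram mass localized to 1 micrometre: essentially unchanged after a year
print(packet_width(3.15e7, sigma0=1e-6, mass=1e-3))       # ~ 1e-6 m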
Kolmogorov equation

Read PDF version | Download Code

1 Introduction

We are interested in the numerical discretization of the Kolmogorov equation [12]
\begin{equation}\label{kolmo}
\left\{
\begin{array}{ll}
\partial_t f - \mu \partial_{xx} f - v(x) \partial_y f = 0, & (x,y)\in\mathbb{R}^2,\ t>0,\\
f(x,y,0) = f_0(x,y), & (x,y)\in\mathbb{R}^2,
\end{array}
\right.
\end{equation}
where $\mu>0$ is a diffusion coefficient (the viscosity) and $v$ a potential function. This is one example of the degenerate advection-diffusion equations which have the property of hypo-ellipticity (see for instance [6, 13, 14]), ensuring the $C^\infty$ regularity of solutions for $t>0$ ([6]). In the present case, the generator of the semigroup is constituted by the superposition of the operators $\mu \partial_{xx}$ and $v(x) \partial_y$. Despite the presence of a first order term, which could lead to transport phenomena and, consequently, to a lack of smoothing, the regularizing effect is ensured by the fact that the commutator of these two operators is non-trivial, allowing one to gain regularity in the variable $y$. A full characterization of hypo-ellipticity can be found in [6].

Solutions of (1) also exhibit decay properties as $t\to \infty$. This is a manifestation of hypo-coercivity (in the sense developed by Villani [13], [14]), a byproduct of the hidden interaction of the two operators entering the generator of the semigroup. In the particular case $\mu=1$ and $v(x)=x$, using the Fourier transform, the fundamental solution of (1) (starting from an initial Dirac mass $\delta_{(x_0, y_0)}$) can be computed explicitly, yielding the following anisotropic Gaussian kernel
\begin{equation}\label{kernel}
K_{(x_0, y_0)}(x,y,t) = \frac{1}{3\pi^2 t^2} \exp \bigg[-\frac{1}{\pi^2}\left( \frac{3|y-(y_0+tx_0)|^2}{t^3}+ \frac{3(y-(y_0+tx_0))(x-x_0)}{t^2} + \frac{|x-x_0|^2}{t}\right)\bigg],
\end{equation}
which exhibits different diffusivity and decay scales in the variables $x$ and $y$. In view of the structure of the fundamental solution, one can deduce the following decay rates:
\begin{equation}\label{decay}
\| f(t)\|_{L^2}+ \sqrt t\, \| \partial_x f(t) \|_{L^2}+ t^{\frac 32} \|\partial_y f(t)\|_{L^2}\leq C \|f_0\|_{L^2}
\end{equation}
for solutions with initial data $f_0$ in $L^2$. Similar decay properties can be predicted by scaling arguments, due to the invariance properties of the equation in (\ref{kolmo}). These decay properties are of an anisotropic nature, with different rates in the $x$- and $y$-directions. Indeed, in the $x$-direction, as in the classical heat equation, we observe a decay rate of the order of $t^{-1/2}$, while, in the $y$-variable, the decay is of order $t^{-3/2}$.

Obtaining these decay properties by energy methods has been a challenging topic, of particular interest when dealing with more general convection-diffusion models that do not allow the explicit computation of the kernel. In this effort, the asymptotic behavior of the Kolmogorov equation and several other relevant kinetic models was investigated intensively through the concept and techniques of hypo-coercivity, which allow one to make explicit the hidden diffusivity and dissipativity of the involved operators (see [13], [14] and the references therein). The literature on the asymptotic behaviour of models related to the Kolmogorov equation is huge. We refer for instance to [8], [9], [2] for earlier works, and to [4], [5] for more recent approaches.
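The scaling argument alluded to above can be made explicit; the following display is a short sketch of it (our addition, for the particular case $\mu=1$, $v(x)=x$). Setting
\[
f_\lambda(x,y,t) := f(\lambda x, \lambda^3 y, \lambda^2 t), \qquad \lambda>0,
\]
one checks that
\[
\partial_t f_\lambda - \partial_{xx} f_\lambda - x\,\partial_y f_\lambda
= \lambda^2\big(\partial_t f - \partial_{xx} f - x\,\partial_y f\big)(\lambda x,\lambda^3 y,\lambda^2 t) = 0,
\]
so whenever $f$ solves the equation, so does $f_\lambda$. The natural variables are therefore $x/\sqrt{t}$ and $y/t^{3/2}$, which is consistent with the gain of a factor $t^{-1/2}$ for $\partial_x f$ and $t^{-3/2}$ for $\partial_y f$ in the decay estimate (3).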
Roughly speaking, it is by now well known that, by constructing well-adapted Lyapunov functionals through variations of the natural energy of the system, one can make the dissipativity properties of the semigroup emerge and then obtain the sharp decay rates. These techniques have also been developed in other contexts such as partially dissipative hyperbolic systems (see [1]).

In [10], Porretta and Zuazua introduce a numerical scheme that preserves this hypo-coercivity property at the numerical level, uniformly with respect to the mesh-size parameters. The issue is relevant from a computational point of view since, as has been observed in a number of contexts (wave propagation, dispersivity of Schrödinger equations, conservation laws, etc. [15], [7]), convergence in the classical sense of numerical analysis (a property that concerns finite-time horizons) is not sufficient to ensure that the asymptotic behavior of the PDE solutions is captured correctly. The fact that the numerical approximation schemes preserve the decay properties of continuous solutions can be considered as a manifestation of the property of numerical hypo-coercivity.

In [3], Foster et al. introduce a numerical scheme which preserves the long time behavior of solutions to the Kolmogorov equation. Their method is based on a self-similar change of variables that transforms the Kolmogorov equation into a new form, so that the problem of designing structure preserving schemes for the original equation amounts to building a standard scheme for the transformed equation. They also present an analysis of the operator splitting technique for the self-similar method and numerical results for the described scheme.

Here, instead, we investigate this behavior using the characteristics-Galerkin finite element method (through FreeFem++ [11]) and, in particular, we compare the results to those obtained in [3].

2 Description of the numerical scheme

At the numerical level, we employ a finite element method based on the characteristics-Galerkin technique and, for the sake of simplicity, we use the FreeFem++ software ([11]). As described above, the solution of Equation (1) not only diffuses in the $x$-direction, through the diffusion operator $\mu\partial_{xx} f$, but also spreads in the $y$-direction, due to the transport part $\partial_t f - v(x) \partial_y f$. We treat the two effects, transport and diffusion, separately, using the method of characteristics, recalled hereafter, for the equation $\partial_t f - v(x) \partial_y f = 0$, and linear or quadratic finite elements to discretise the diffusion term.

2.1 Transport

Let us consider the following scalar two-dimensional transport equation
\begin{equation}\label{transport}
\partial_t f + \bm c \cdot \nabla f = g, \quad \bm c \in\mathbb{R}^2, \quad \textrm{in } \Omega \times (0,T),\ \Omega\subset\mathbb{R}^2,
\end{equation}
for some function $g$. Let $(x,y,t)\in \mathbb{R}^2 \times \mathbb{R}^+$. This transport equation can be written using the total derivative
\begin{equation}\label{td}
\frac{d}{ds} f(\bm X_{x,y,t}(s),s) = g
\end{equation}
if and only if the curve $(\bm X_{x,y,t}(s),s)$ satisfies the system of ordinary differential equations
\begin{equation}\label{odec}
\left\{
\begin{array}{ll}
\frac{d}{ds}\bm X_{x,y,t}(s) = \bm c(\bm X_{x,y,t}(s),s), & \forall s\in(0,t), \\
\bm X_{x,y,t}(t) = (x,y).
\end{array}\right.
\end{equation}
Under suitable assumptions on $\bm c$, the problem is well defined and there exists a unique solution $\bm X_{x,y,t}$ to (6), called the characteristic curve reaching (or passing through) the point $(x,y)$ at time $t$. Since, in general, we cannot compute explicitly the solution of equation (6), hence of (4), we look for an approximate solution. Denoting by $\delta t>0$ the time step and setting $t_{n+1} = t_n + \delta t$, a simple way to approximate the solution of Equation (4) is to perform a backward convection by the method of characteristics
\begin{equation}\label{approxMOC}
\frac{1}{\delta t} \left(f^{n+1}(x,y)-f^{n}(\bm X_{x,y,t_{n+1}}(t_n))\right) = g^n(x,y),
\end{equation}
where $f^n(x,y) = f(x,y,t_n)$ and $\bm X_{x,y,t_{n+1}}(t_n)$ is an approximation, as shown below, of the solution at time $t_n=n \delta t$ of the ordinary differential equation (6) for $s\in(t_n,t_{n+1})$ with the final data $\bm X_{x,y,t_{n+1}}(t_{n+1}) = (x,y)$. Assuming $f$ regular enough, by Taylor expansion one can write
$$f^n(\bm X_{x,y,t_{n+1}}(t_{n}))=f^n(\bm X_{x,y,t_{n+1}}(t_{n+1})) - \delta t \ \bm c((x,y),t_n) \cdot \nabla f^n(x,y) + O(\delta t^2).$$
Applying also a Taylor expansion to the function $t\mapsto f^n((x,y)-t\, \bm c((x,y),t_n))$, we get
$$f^n((x,y)-\delta t \ \bm c((x,y),t_n)) = f^n(\bm X_{x,y,t_{n+1}}(t_{n+1})) - \delta t \ \bm c((x,y),t_n) \cdot \nabla f^n(x,y) + O(\delta t^2),$$
and therefore one can approximate $f^n(\bm X_{x,y,t_{n+1}}(t_{n}))$ by $f^n((x,y)- \delta t \ \bm c((x,y),t_n))$. For the sake of clarity, in the sequel we denote by $\bm X(t)$ the characteristic curve passing through the point $(x,y)$ at time $t$.

2.2 Numerical algorithm

For numerical purposes, we consider Equation (1) in $\Omega \subset \mathbb{R}^2$ with homogeneous Neumann boundary conditions. Keeping in mind the method of characteristics, Equation (1) can be written
\begin{equation}\label{kolmo2}
\left\{
\begin{array}{ll}
\frac{d}{dt} f(\bm X(t)) - \mbox{div}(A \nabla f) =0, & (x,y)\in\Omega,\ t\in (0,T),\ T>0,\\
A \nabla f \cdot \bm n =0, & \textrm{on } \partial\Omega,\\
f(x,y,0) =f_0(x,y), & (x,y)\in\Omega,
\end{array}
\right.
\end{equation}
where $\bm n$ stands for the outward unit normal to $\Omega$ and, for all $s\in(0,t)$, $\bm X$ is the solution of
\begin{equation}\label{odekolmo}
\left\{
\begin{array}{ll}
\frac{d}{ds}\bm X(s) = \bm v(\bm X(s)), & \forall s\in(0,t), \\
\bm X(t) = (x,y).
\end{array}\right.
\end{equation}
Here we use the notations $\bm v = \left(\begin{array}{c} 0\\-v \end{array}\right)$ and $A = \left(\begin{array}{cc} \mu& 0\\0&0 \end{array}\right)$. Formally, one can then write, for any $\varphi \in V$, $V$ being a suitable functional space, the weak form of Equation (8) as follows:
\begin{equation}\label{weakform1}
\int_{\Omega} \frac{d}{dt} f(\bm X(t))\, \varphi \ dx\, dy + \int_{\Omega} A \nabla f \cdot \nabla \varphi \ dx\, dy = 0.
\end{equation}
Let $t_0<t_1<\ldots < t_M = T$ denote the discrete times with $t_n = n \delta t$, where $\delta t$ denotes the time step, and set $M = T/\delta t$. Using the method of characteristics for the total derivative (see Section 2.1), the weak form (10) can be approximated by
\begin{equation*}
\int_{\Omega} \frac{1}{\delta t}\left(f^{n+1}-f^n \circ \bm X^n \right)\varphi \ dx\, dy + \int_{\Omega} A \nabla f^{n+1} \cdot \nabla \varphi \ dx\, dy = 0,
\end{equation*}
or
\begin{equation}\label{weakform2}
a(f^{n+1},\varphi) = \frac{1}{\delta t}\left(f^n\circ \bm X^n,\varphi\right),
\end{equation}
where
$$a(f,\varphi) = \frac{1}{\delta t}(f,\varphi) + (A\nabla f,\nabla\varphi).$$
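Before turning to the FreeFem++ implementation, the backward-convection step above can be illustrated on a uniform grid. The following Python/NumPy sketch (our addition, not part of the FreeFem++ workflow; the grid, interpolation routine and function names are illustrative choices only) implements the first-order approximation $f^n\circ\bm X^n(x,y)\approx f^n\big((x,y)-\delta t\,\bm v(x,y)\big)$ with $\bm v=(0,-v(x))$ and $v(x)=x$, using linear interpolation in the $y$-direction.

import numpy as np

def backward_convect(f, x, y, dt):
    """One semi-Lagrangian transport step for f_t - x * f_y = 0.

    The foot of the characteristic through (x_i, y_j) at the new time level is
    approximated by (x_i, y_j + dt * x_i), and f^n is evaluated there by
    linear interpolation along y (constant extrapolation outside the grid).
    """
    fX = np.empty_like(f)
    for i, xi in enumerate(x):
        # velocity field (0, -v(x)) with v(x) = x, so the foot is at y + dt * x_i
        yfoot = y + dt * xi
        fX[i, :] = np.interp(yfoot, y, f[i, :])
    return fX

# small demonstration on a Gaussian initial datum
x = np.linspace(-10.0, 10.0, 201)
y = np.linspace(-10.0, 10.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
f0 = np.exp(-X**2 - Y**2)

dt = 0.01
f1 = backward_convect(f0, x, y, dt)   # transported field after one time step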
Here $(\cdot,\cdot)$ denotes the inner product in $L^2(\Omega)$. Therefore, denoting by $\tau_h$ a partition of $\Omega$ into triangles and by $V_h$ the $P_k$ finite element space (of degree $k$), the discrete weak form of problem (8) reads: find $\{f_h^n\}_{n=0}^{M} \subset V_h$, with $M = T/\delta t$, such that for $n=0,\dots, M-1$,
$$a(f_h^{n+1},\varphi_h) = \frac{1}{\delta t}\left(f_h^n\circ\bm X^n,\varphi_h\right), \quad \forall \varphi_h \in V_h.$$
The FreeFem++ script corresponding to the problem may be written as follows.

FreeFem++ code:

// This program solves the Kolmogorov equation
//   f_t - mu*f_xx - v(x)*f_y = 0   on Omega x (0,Tf]
// with homogeneous Neumann (free) boundary conditions,
// using the characteristics-Galerkin finite element method.
// f   ==> unknown scalar function
// phi ==> test function
// v   ==> scalar potential function

// Omega : square mesh [-10,10]x[-10,10]
real aa = 10;
real x0 = -aa, x1 = aa;
real y0 = -aa, y1 = aa;
int m = 100;                              // number of elements per side
mesh Th = square(m, m, [x0+(x1-x0)*x, y0+(y1-y0)*y]);

real Tf = 10, dt = 0.01, mu = 1;          // final time, time step, viscosity parameter
func v = x;                               // potential v(x) = x

fespace Vh(Th, P1);                       // P1 linear finite elements
Vh f0 = exp(-x^2-y^2),                    // initial data
   f,                                     // unknown at the new time level
   phi;                                   // test function

for (real t = 0; t <= Tf; t = t + dt)
{
  // backward convection along the characteristics, velocity field (0,-v)
  Vh c = convect([0, -v], -dt, f0);

  // implicit diffusion step: weak form (11)
  solve Kolmogorov(f, phi)
    = int2d(Th)( f*phi/dt + mu*dx(f)*dx(phi) )
    - int2d(Th)( c*phi/dt );

  f0 = f;                                 // update for the next time step
}

3 Numerical experiment

In this section we present a test case from [3], for which we can compare with an exact solution of the Kolmogorov Equation (1) with $\mu=1$ and $v(x)=x$. In particular, we compare our results to the ones obtained in [3]. For this numerical test case, we have used linear finite elements. The initial value problem (1) with the initial data $f_0(x,y) = \exp(-x^2-y^2)$ admits the following exact solution
$$f_{ex}(x,y,t) = \frac{\exp\left(-\frac{(3+3t^2 +4t^3)x^2 +6t(1+2t)xy+3(1+4t)y^2}{3+12t+4t^3 +4t^4}\right)}{\sqrt{1 + 4 t + 4/3\, t^2 + 4/3\, t^4}} \ .$$
As done in [3], for each numerical test we have considered the time interval and the problem domain to be respectively $[0, T=10]$ and $\Omega = [-10, 10] \times [-10,10]$. The time step is kept constant, equal to $\delta t = 0.01$, and the number of triangles along each side of the domain is given by $m=50$, $100$ and $150$. As one can see in the video above, the support of the function grows beyond the problem domain in the given time interval and interacts with the boundary conditions. This interaction, since we do not use transparent boundary conditions here, increases the error, as one can also observe in Figure 1. We also show the time evolution of $\|f(\cdot,t)-f_{ex}(\cdot,t)\|_2$, $\|\partial_xf(\cdot,t)\|_2$, $\|\partial_y f(\cdot,t)\|_2$ and
$$D(t)=\left(\| f(t)\|_{L^2}+ \sqrt t\, \| \partial_x f(t) \|_{L^2}+ t^{\frac 32} \|\partial_y f(t)\| _{L^2}\right)/\|f_0\|_{L^2} \ ,$$
in Figure 2; a sketch of how these diagnostics can be evaluated is given below. The $L_2$ error at time $T=10$ is approximately $0.0072$, as one can see in Figure 2(a). Moreover, the error $\int_0^T \| f(\cdot,t)-f_{ex}(\cdot,t)\|_2 \, dt$ is approximately of order $0.02$. We also observe, due to the interaction with the boundary conditions, that the errors increase noticeably in each numerical experiment at time $t\approx 8.5$. Therefore, in order to compute the numerical order of convergence, we have computed, for each $m$, $m\mapsto \max_t\left( \|f(\cdot,t)-f_{ex}(\cdot,t)\|_2\right)$. We find an order of almost $1$, which is satisfactory. Finally, we have computed the quantity $D(t)$, for which we numerically show that the constant in the decay estimate is $C=1$, as shown in Figure 2(d).

Movie 1: Numerical simulation of the test case.
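As a complement to the FreeFem++ computation, the error norms and the decay quantity $D(t)$ of Figure 2 can be evaluated on a uniform grid directly from the exact solution quoted above. The following Python/NumPy sketch (our addition; grid size, spacing and function names are illustrative) approximates the $L^2$ norms by the rectangle rule and the derivatives by finite differences, on the same computational domain $[-10,10]\times[-10,10]$.

import numpy as np

# uniform grid on the computational domain [-10,10] x [-10,10]
n = 401
x = np.linspace(-10.0, 10.0, n)
y = np.linspace(-10.0, 10.0, n)
hx = x[1] - x[0]
hy = y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")

def f_exact(t):
    """Exact solution quoted above, for mu = 1, v(x) = x, f0 = exp(-x^2-y^2)."""
    num = (3 + 3*t**2 + 4*t**3)*X**2 + 6*t*(1 + 2*t)*X*Y + 3*(1 + 4*t)*Y**2
    den = 3 + 12*t + 4*t**3 + 4*t**4
    return np.exp(-num/den) / np.sqrt(1 + 4*t + (4/3)*t**2 + (4/3)*t**4)

def l2(g):
    """Rectangle-rule approximation of the L^2(Omega) norm."""
    return np.sqrt(np.sum(g**2) * hx * hy)

f0norm = l2(f_exact(0.0))

def D(t):
    """Decay quantity (||f|| + sqrt(t)||f_x|| + t^{3/2}||f_y||) / ||f0||."""
    f = f_exact(t)
    fx = np.gradient(f, hx, axis=0)   # finite-difference approximation of df/dx
    fy = np.gradient(f, hy, axis=1)   # finite-difference approximation of df/dy
    return (l2(f) + np.sqrt(t)*l2(fx) + t**1.5*l2(fy)) / f0norm

for t in [0.1, 1.0, 5.0, 10.0]:
    print(t, D(t))   # by the decay estimate above, this quantity should remain bounded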
Figure 1.a: Numerical solution at time $T=10$ (FreeFem++ solution with $m=100$).
Figure 1.b: Exact solution at time $T=10$.
Figure 2.a: $L_2$ error, $t\mapsto\|f(\cdot,t)-f_{ex}(\cdot,t)\|_2$.
Figure 2.b: $t\mapsto \|\partial_xf(\cdot,t)\|_2$.
Figure 2.c: $t\mapsto \|\partial_yf(\cdot,t)\|_2$.
Figure 2.d: $t\mapsto D(t)$.

To end, we present a last numerical simulation, with rotating and moving initial data.

Movie 2: Numerical simulation on $\Omega = [0,10]\times[0,20]$, $[0,T=0.6]$, with $\mu=10^{-3}$, $v(x)=-x$ and $f_0(x,y) = 20 \exp(-(y-15)^2) \exp(-0.1(x-10)^2)$.

References

[1] K. Beauchard and E. Zuazua, Sharp large time asymptotics for partially dissipative hyperbolic systems, Arch. Ration. Mech. Anal. 199 (2011), 177-227.
[2] A. Carpio, Long-time behavior for solutions of the Vlasov-Poisson-Fokker-Planck equation, Mathematical Methods in the Applied Sciences 21 (1998), 985-1014.
[3] E. L. Foster, J. Lohéac and M.-B. Tran, A Structure Preserving Scheme for the Kolmogorov Equation, preprint 2014 (arXiv:1411.1019v3).
[4] F. Hérau, Short and long time behavior of the Fokker-Planck equation in a confining potential and applications, J. Funct. Anal. 244 (2007), 95-118.
[5] F. Hérau and F. Nier, Isotropic hypoellipticity and trend to equilibrium for the Fokker-Planck equation with a high-degree potential, Arch. Ration. Mech. Anal. 171 (2004), 151-218.
[6] L. Hörmander, Hypoelliptic second order differential equations, Acta Math. 119 (1967), 147-171.
[7] L. Ignat, A. Pozo and E. Zuazua, Large-time asymptotics, vanishing viscosity and numerics for 1-D scalar conservation laws, Math. of Computation, to appear.
[8] A. M. Il'in, On a class of ultraparabolic equations, Soviet Math. Dokl. 5 (1964), 1673-1676.
[9] A. M. Il'in and R. Z. Kasminsky, On the equations of Brownian motion, Theory Probab. Appl. 9 (1964), 421-444.
[10] A. Porretta and E. Zuazua, Numerical hypocoercivity for the Kolmogorov equation, Mathematics of Computation 86 (2017), no. 303, 97-119.
[11] F. Hecht, O. Pironneau, A. Le Hyaric and K. Ohtsuka, FreeFem++ manual, 2005.
[12] A. Kolmogoroff, Zufällige Bewegungen (zur Theorie der Brownschen Bewegung), Annals of Mathematics (1934), 116-117.
[13] C. Villani, Hypocoercivity, Mem. Amer. Math. Soc. 202 (2009).
[14] C. Villani, Hypocoercive diffusion operators, in International Congress of Mathematicians, Vol. III, 473-498, Eur. Math. Soc., Zürich, 2006.
[15] E. Zuazua, Propagation, observation, and control of waves approximated by finite difference methods, SIAM Review 47 (2005), no. 2, 197-243.

Authors: Mehmet Ersoy, Enrique Zuazua
November, 2017
Sunday, November 20, 2011

The Speed of Light

'This is reinforcing the previous finding and ruling out some possible systematic errors which could have in principle been affecting it,' said Antonio Ereditato of the Opera collaboration. 'We didn't think they were, and now we have the proof,' he told BBC News. 'This is reassuring that it's not the end of the story.'"

If this proves to be true, it would have serious consequences. If the speed of light is not the limit, it would shake the principles of modern science. We would probably have to discard the Theory of Relativity. So everything written in Metaphysics Part IV of this blog would be void too. But the consequences could even go farther and affect quantum physics too. In this case we would effectively be thrown back to the 19th century and Newton physics as the best approximation to physical reality. It's still too early for such a radical judgement, but modern science is in serious trouble now. Physics could be far more complex than we assumed. But we still have to wait for further verifications of the Opera measurements, which may take about one year or more. Then we will see if the Theory of Relativity can be adjusted to the new observations or has to be completely discarded. There might be something really big ahead of us and we might just have scratched the surface. Or maybe it was just a systematic error of measurement. We will see.

But one thing is already sure: Modern physics has become too dominated by speculations of theoretical physics without enough experimental proof. Physics needs to rethink its approach. There are too many theories (String Theory, Superstring Theory, Big Bang Theory, Dark Matter, Dark Energy, Black Holes and other singularities) without any experimental or observational proof. We have to go back to what we actually know and can observe instead of making new speculations based on previous speculations. We have for example never observed a Black Hole, but the theory of Black Holes has spawned entire branches of physics. It is highly speculative to assume that the density of mass can be infinite. Maybe there is an upper limit, a so far unknown property of space itself (e.g. the density of Neutron Stars, which are among the most dense objects that have really been observed). We have not understood gravity, but we try to extrapolate by playing with numbers and speculate about singularities inside Black Holes and at the beginning of the universe. This is not how science should work. We have to calm down a little bit. Maybe light speed is not the maximum speed, maybe there is no such thing as an event horizon and the universe is bigger than we think. We will soon know more. Meanwhile we should limit ourselves to what we actually know. Considering the recent discoveries, there will be no further posts in this blog on the topic of Metaphysics until the issue of light speed has been clarified. So let's get back to rather earthly topics.

Friday, November 11, 2011

Metaphysics Part VIII - Reality

What is reality? What are the requirements for something to be called "real"? Let's consider an example. Would we call the tragic events at the end of Shakespeare's 'Romeo and Juliet' real events? - Certainly not, because they were not historical. There were no persons called Romeo and Juliet in 16th century Verona who committed suicide because of their love. However couldn't there be such a couple in a parallel world, which is completely separate from our world and unknown to us?
The events of Shakespeare's play are certainly consistent and theoretically possible. If nobody from this other world of Romeo and Juliet ever visited our world, nobody from our world ever visited the world of Romeo and Juliet, and no event in either world ever affected the events in the other world, what would be the requirement for this other world to be considered 'real'? We could say that it would only be real if there was also some kind of consciousness in this other world that would be able to experience the events in this world. But even this requirement would be questionable. Is experience by a consciousness a requirement of reality? At least it seems to be a reasonable definition. Otherwise we would have to consider everything which is theoretically possible and consistent in itself as real. Limiting reality to things that directly or indirectly affect the experience of a consciousness therefore makes sense. However, we have to ask what makes another consciousness real for us. Would another consciousness in a world whose events never affect us and which is not affected by any events in our world be considered real? Such a consciousness would not even be in any time-related relation to us. It would be neither before, after nor at the same time as us, because the concept of time is based on causality. Earlier events affect later events and can therefore be put in a temporal sequence. Event A is earlier than event B and may somehow affect the later event. Events or objects in two separate worlds that never affect each other cannot be put into a temporal order. There is no earlier and later because the temporal order of both events can never be compared and is therefore meaningless. In the same way two separate consciousnesses in two separate worlds that never affect each other can never be put into a temporal order. None of the experiences of one consciousness can be considered earlier or later than the experience of the other consciousness. How can we therefore call the consciousness in this other separate world real or not real at all? What is the difference between these distinctions? In fact there is no difference at all. The concept of some consciousness in another completely separate world being real or not real is meaningless, because either option would make no difference for our consciousness. And the distinction between two things that are not different is meaningless, just as the distinction between mercury and quicksilver is meaningless. Mercury is quicksilver. Language allows us to call one thing quicksilver and the other one mercury, but in reality it would still be the same. Therefore this linguistic distinction makes no sense. It is the same with reality. We can linguistically distinguish between something in a completely separate world that does not affect our world and that our world does not affect being real or not real. But this distinction makes no sense. It means the same. There is no difference between both statements, making them meaningless. Reality is always a subjective concept. Shakespeare's Romeo and Juliet are not real in our world, but Juliet is real in Romeo's world, although this statement is meaningless for us since Romeo is not real for us either. He might be real in another possible world, which is separate from us, but the distinction of whether such a possible world exists or not is also meaningless.
If such a world is separate from us and there is never any form of interaction with us, then it is always not real and any further statement is meaningless. Even the question if Romeo has a consciousness in such a possible separate world is meaningless, because consciousness is also subjective. One consciousness is completely separate from another one. There is no way how a consciousness will ever affect another one. My pain is not your pain, and your pain is not mine. You cannot feel anybody else’s pain. He can tell you that he feels pain and you might see him crying or reacting to a perceived pain but he might also just pretend to feel pain or be programmed to express pain under particular circumstances. His pain is not real for you. The distinction of him consciously experiencing pain or not experiencing it and just mechanically expressing pain is meaningless, because there is no difference between it. His pain is not part of your world and will never be. This pain is not more real for you than the pain felt or not felt in another possible world. The consciousness of others is not part of your reality. And there is no objective reality that would be universally valid. Reality is only a subjective concept. This concept is far more basic then it looks like because it affects everything, our entire understanding of the world. Commonly we think in objective concepts. But this is only a theoretical concept that has nothing to do with reality. If two people look at a football field, which is in our objective model rectangular with angles of 90°. Both observers sit or stand in different positions around the football field. Neither of them sees a rectangular field but they see the playing field with the corner next to them and the opposite corner each measuring an angle close to 180° and the two other angles being extremely small. Since the perspective of each observer is different, they see all different angles. However both observers are sure that they are looking at a rectangular field, although they can’t observe it. Even if they went down right to one corner of the playing field to measure its angle, they would get this angle right but the other three angles would be distorted and differ from 90°. In fact nobody has ever experienced a football field having exactly 90°. It is distorted from any angle we look at it. The rectangular field is actually not real at all. It is a construction in our brain, a model that allows us to make calculations and estimates easier in order to orientate us in the environment. But this objective world, which has no particular observer and where the observer is only placed into it like a figure on a chessboard does mot exist. It is a model derived from our actual observations, which form the primary reality. So the objectivist or materialist point of view is actually completely unscientific. They believe in the existence of an objective materialist world, where a soul or consciousness is only an illusion resulting from mechanical processes in the objective world denying any validity to subjectivism. However none of them has ever seen this objective world. Their only real observation was from their subjective perspective. So they derived a theoretical objective model in their mind and gave it more validity than their actual observation, which it is derived from. This results in some kind of circular logic that denies the existence of consciousness proving itself from its own conclusions without being based on direct observations. 
Actually, we observe only our mind and conclude that the images in our mind have some exterior reality as a cause. The objectivist assumes the exterior world as a given fact and doubts the subjective observation of the images in his own mind that led to the assumption of an exterior world in the first place. Reality is the subjective distorted football field that each observer can perceive. The objective rectangular field is not real. It can never be observed. It is an abstract model in our mind that helps us to predict how reality will change when we change the relative position in which the observed object is oriented to us. What is real is the way the object is perceived. This is the real world. And our consciousness is the center of the world. What can never be perceived, either directly or through its consequences, is not real.

If we accept this, we suddenly see that there is actually no difference between the different interpretations of quantum physics. The Copenhagen Interpretation means nothing different from the Many Worlds Interpretation. Every other possible world that departs from our world of reality in every moment is not real for us. The distinction between these other worlds existing or not is meaningless. Since they are separate from us and don't affect us, they are not real anymore. They are not real, just as anything else that can't affect our subjective world is not real for us. The other worlds of the Many Worlds Interpretation are not real in any way since there is no common universal reality where they can substantiate their 'realness'. The separate worlds of the Many Worlds Interpretation will never join together in some way. They are separate forever and therefore not real for us. Whatever we can't experience is always not real. Any further distinction between different forms of not being real is meaningless.

Tuesday, November 8, 2011

Metaphysics Part VII - Life and Beyond

Soul and Death

What are we? What is our essence? What is our self? The Copenhagen interpretation of quantum mechanics has already answered these questions for us. We are the center of observation. We are not part of the world, we don't come out of the world, we create the world by our observation. We don't depend on the physical world; the physical world depends on us. Without us as observers, the physical world would not be real. We are located where past and future meet, where the wave function collapses, where possibilities become reality. The past is the reality that cannot be changed; the future is the possibilities that have not become real yet. We are the present. The present is part of our essence. This is why consciousness cannot be imagined without time. This is why we have knowledge of time without the need of any physical senses. It is part of us. We are what turns the possibilities of the wave function into reality. We are what creates past out of future. But when we look back into the physical past, we notice that we have a beginning, that there was a moment, which we call birth, when our existence in the physical world started. So, fearfully, we look into the future, when our physical existence will end, the moment of death. Therefore the question that worries us most is: Will it all end with death? Is there a way of existence beyond death? It seems far more important to us than the equally difficult question: Was there a way of existence before birth? It is hard to investigate such questions, since such strong emotions are involved - the ultimate existential fear.
But let’s try to look at the facts without emotions, without false hopes and self-delusions.  What is death? Death is the coincidence of physical circumstances that make it impossible to maintain our existence in the physical world. It is the destruction of the physical body, which is the center of our observation of the physical world. The destruction of the physical body makes further observation of the physical world impossible for us. But does this include the end of our self, the end of the observer? Physics is actually not able to define the concept of the observer, as it is introduced by the Copenhagen Interpretation of quantum mechanics. This means the physical world is not able to explain, where the observer comes from and what it is. On the other hand we have seen that it is the observer that explains the physical reality. So the physical world somehow depends on the observer. But does this mean that the physical world is really incapable of having any influence on the observer? We know that we can have a strong influence on the observer by destroying the physical senses of the body. Nevertheless the observer persists, even if we destroy his ears and eyes and any other perceptive organ of the body. But we can also have a strong influence of the ability to think by using drugs or destroying particular parts of the brain. So there seems to be some kind of influence from the physical world on the observer. But all these examples were superficial. The process of thinking is a physical process of the body, while perception itself is different from this process. Can we completely destroy the perceptive capability of the observer by physical means? How far does the influence of the physical world on the observer actually go? Let’s look at the most extreme situations that we can imagine, and we will see that there is some very strange mechanism in nature. Whenever we are exposed to some really extreme negative qualia, like extreme pain, shock or horror; something happens. We become unconscious. We become suddenly disconnected from the perception of the physical world. The observer gets unplugged from the physical world in order to spare him an extremely negative experience. There is some kind of safety mechanism that protects our consciousness, when things turn really bad. How can we explain this mechanism? It is an illusion to think that there may be some scientific explanation for this mechanism, because consciousness cannot be fully explained by science, for the simple reason that the observer is not explained by quantum mechanics. So it is impossible to explain, why observation suddenly stops. We simply have to accept the fact that the observer is protected against really unpleasant perceptions.  For the survival in the physical world such an inbuilt safety mechanism doesn’t actually make much sense. In a situation of extreme pain, it would be helpful to be fully awake and alert to be able to do something against the cause of the pain, instead of being unconscious and helpless. So why does such a safety mechanism exist, if it is not helpful? Obviously it is more important to protect the consciousness against the experience of pain than ensuring the survival of the body in the physical world. The convenience of the consciousness is given a mysterious priority over the survival of the body. There is an inbuilt safety mechanism that protects the observer from extremely unpleasant experiences. 
Can we therefore conclude that the survival of the physical body is not vital for the observer himself? But where does the observer go, when he is not connected with the physical world anymore? The answer to this question is not that difficult as it appears. In fact we all know the answer to this question quite well from our own experience. Because the observer disconnects himself from the physical world in regular intervals of more or less 18 hours. It is what happens when we sleep. Every night our consciousness disconnects from the physical world and is not present in the body. And nevertheless the consciousness returns every morning unharmed back into the physical body. We experience it every night that our consciousness is quite well able to exist without being connected with a physical body, although we don’t remember what happens with it in this state, since our physical memory is part of the physical body. Sleep itself is a strange phenomenon. It has no biological explanation. Nevertheless most animals need to sleep. The purpose of sleep is not regeneration or the need to recover energy. In this case we would just have to eat more in order to have enough energy available to stay awake for 24 hours a day. However a human who is deprived of sleep will become more and more psychotic and finally die after about 11 days without sleep, as some Nazi experiments during World War II proved. So obviously sleep is vital for the consciousness, although there has not been found any evident physiological necessity for sleep. Sleep is even dangerous for the organism, since the lack of alertness exposes it to predators and turns it into an easy victim. This contradiction cannot be explained. The mysterious reason for the need to sleep can obviously not be found in the physical world. It must have to do something with the nature of consciousness itself. There is a need for the consciousness to withdraw from the physical world in regular intervals. It cannot stay in the physical world for an extended period of time. Considering all these observations we can conclude that consciousness can indeed exist without a permanent connection to the physical world. We don’t know where the consciousness is and what it does, while it is not present in the physical body, but it is there somewhere, because it is able to return to the body after it has been disconnected from it. But what happens, if there is no functioning body anymore, where it can return to, which is the situation that we call “death”? Since the consciousness can exist in a disconnected state from the body, and seems to be protected by inbuilt safety mechanism against any harm that is done to the body, it is reasonable to assume that consciousness does not cease to exist along with the body, which it is temporarily connected with. We don’t know where it is after death or what it does, but given the analogy to what happens during sleep, it cannot be much different from what it does, every night. Sleep and death both mean the same – the disconnection of consciousness from the physical world. In one case it has the possibility to return to the same physical body, in the other case it has not. This is the only difference between both states. So death does not mean the end of our consciousness, just as sleep does not mean it. There is no reason to fear death more than we fear sleep each night. Neither death nor sleep is a diminished state of reality. The opposite is true. Reality is created by consciousness, not by the physical world. 
Consciousness is real and the physical world is its product. So when our consciousness is connected with the physical world it is actually not awake; it is in a dream that it is permanently creating. Since during death and sleep consciousness is not held captive by the physical world, which it has created itself, these states come closer to the state of being really awake. When we think that we are awake, we are actually dreaming and living in a self-created world with a limited grade of reality, while sleep and death are the exits out of this dream. But whatever happens when we are dead or asleep is unknown to us during our dream in the physical world.

The Meaning of Life

Considering that life in the physical world is not ultimately real, shouldn't we be eager to leave this world as soon as possible, since there is no obvious sense in physical existence? This might be a wrong conclusion. There is absolutely no reason to assume that existence independent from the physical world is in any way more pleasant or worthwhile. The fact is that we return every morning back into the physical world. And there must be a reason for it. If there were a reason to prefer existence disconnected from the physical world to our current existence here, then it would not make sense for our consciousness to return back into this world again and again. It is unlikely that the physical world is some kind of malevolent prison for our consciousness, since there is some kind of safety mechanism that withdraws our consciousness from the physical world when things get too ugly here. Our well-being is obviously taken care of while our consciousness is present in the physical world. So it is plausible to assume that there are benevolent reasons for our presence in the physical world. Being disconnected from the physical world may involve a higher grade of awareness, but either this reality seems to be far worse (boring, miserable, depressing or whatever) than our existence in the physical world, or there are other unknown reasons that require our presence in the physical world. Our consciousness would not return into the physical world every morning if it were not worthwhile. So even if we cannot say what the meaning of our life in the physical world may be, we can be quite sure that there are good reasons for it to be just like it is. It would be a bad idea to aspire to escape from the physical world. Therefore all efforts of Eastern religions to leave the world and enter Nirvana by meditation or other techniques are a wrong approach and work against the actual sense of life, whatever it may be. We should not try to overcome our physical existence, but welcome it. Rejecting life and the physical world occurs out of ignorance.

Monday, November 7, 2011

Metaphysics Part VI - Materialism or Idealism?

Dream and Reality

To understand the concept of creating reality by observation, it is helpful to compare it with the process of dreaming. The physical world has more in common with our dreams than with the materialist concept of objective physical reality. In our dreams we create the world of the dream. We are not aware of it and believe this world to be real. We think we are subjected to it and mostly helpless victims of whatever happens to us in the dream. Only when we wake up do we suddenly become aware that we created the world of our dream ourselves subconsciously. Nothing in our dream was real when we didn't observe it.
All the things and persons that we saw, all the events that occurred were only real as our observation. Our observation intentionally and subconsciously created them. In fact we controlled the dream without knowing it.  The physical world is surprisingly similar to this situation. The difference is that the world of our dreams is created completely subjective, while the physical world is inter-subjective. In our dreams we are actually the only observer, while there are many observers in the physical world. Whatever we observe in the physical world, it must not contradict with the perception of other observers. Therefore there are certain rules to obey in the physical world. These rules are the probabilities of Schrödinger´s wave function. Furthermore we cannot change the state of an object after the wave function has already collapsed by the observation of another observer. The observers limit each other in their control over the physical world. We can intentionally create the outcome of an event in the physical world within the limits of the wave function, but we cannot control the outcome of an event that has already been observed either by us or by another observer. Within these limitations we control the physical world that surrounds us, like we control our dreams. Consciousness and Body The dualism of our self as mind and body was realized by all human civilizations from the earliest days of human history on and even before. It has been a main problem of philosophy. Are we essentially mind or body or both? The belief that we are essentially a body and that the mind can be explained as a body function, is called materialism. The belief that we are essentially mind and that the body is an illusion or an idea of the mind, is called idealism. And the concept that body and mind are from distinct worlds, the physical world and the spiritual world, and have somehow been united to form living beings is called dualism. However it has been unclear how body and mind relate to each other (body-mind-problem) and which of them controls the other one. Is there such a thing as a free will? Or are all our actions subject of our biological needs? From what we have learned now, we can try to answer these questions. Apparently the materialist worldview has been proven wrong by quantum mechanics. The state of a particle depends on the observer. When there is no observer, particles have no particular state and remain as a wave function of probabilities with all possible states in superposition to each other. Therefore matter does not exist by its own virtue and only has a distinct state by observation. And since matter itself is not real, the fundament of the materialist concept has crumbled. However the physical world is not totally an illusion. The mind does not arbitrarily create the physical world as a hallucination. The observer only controls the state of the particles of matter within the limitations of their wave function. This finally gives us an idea how body and mind interact and how our mind exercises control over our body. As far as the extremities of our body are concerned, biology and medicine have discovered how electrical impulses in our nervous system make muscles contract and cause the movement of our extremities. In the same way sensations like pain are conducted by the nervous system from our extremities to the brain. So we have obviously only indirect control over the extremities of our body using biophysical mechanisms. 
Our control can easily be neutralized by external physical means like physical restrictions of our movement or severing nerve fibers so that we lose control of parts of our body. Therefore we can conclude that the interaction between our mind and our body does not take place in the extremities of our body. They are simply part of the physical environment and not actually part of our self. When we examine the structure of our nervous system, we find out that all electrical impulses that it conducts are centralized in our brain. Therefore somewhere within the brain there must be the place where the interaction between mind and physical world takes place. It is inside the brain where the measurements of our perceptive organs take place. It is the place in the physical world where observation occurs. In some way our mind is able to observe the electrical impulses from our nervous system. And by observing them, we control them. If we look at the biochemical processes that take place at the synapses, where the nerves touch each other and where information is transmitted, we get indeed into a range where quantum mechanics becomes relevant. What exactly occurs within the molecules of the biochemical transmitters that travel between the microscopic gaps of the synapses of our nerve cells does not follow macroscopic determinism. Its outcome is non-deterministic and only limited by the wave function of atomic and sub-atomic particles. If we assume that the observer intentionally determines the state of these particles by his observation, we have found the mechanism by which our mind controls our body. Controlling the quantum mechanical processes in the synapses of our brain means controlling our body. And while our control of our physical environment is limited by other observers who already caused the collapse of the wave functions of almost all particles in the world around us, we are the only observer in the small area of the synapses in our brain. Only we control the collapse of the wave function there by our observation, because we are the only observer. This is why we have control over our body, but we don't have full control over our environment in the same way. Only in our brain are we the only observers. Outside our brain, even in our own body, there are other observers. There are blood cells, body cells, and microorganisms. We don't know what they observe. We don't know what all the billions of macrophages in our blood vessels observe, what they feel, what they perceive. This is why we are limited there. We have only limited control there by the nerve impulses that work deterministically on a macroscopic scale.

Macroscopic processes are not that easily controlled by observation, because they have many observers. The state of their particles is to a large extent already determined; their wave function has already collapsed into distinct states. This is why most macroscopic processes seem to be deterministic to us. We are not the first to observe them. They have already been observed, either earlier by us or by others. Their particles already have distinct states. There is not much space for randomness. We have seen that the nature of the universe is not objective; it is subjective, or inter-subjective, if we assume that there is more than one observer. This is a very important fact. Actually it is quite obvious, because the way the universe is perceived is subjective. If the universe were completely objective, then no point of view would stand out.
The universe would be perceived from a third person view. But in reality we perceive the universe in a first person view. This means we don’t know how others perceive the world. We can only see it from our point of view. The concept of "qualia" is a subjective concept. It is limited to our own personal perception.  What are qualia? Qualia are all kind of perceptions, pain, lust, colors etc. We cannot describe them to others. This means we can give names to them, but we can’t know that others perceive them in the same way as we do. Let’s take the color "red" for example. We can tell somebody that cherries are red, and the other will agree with us, but we can not be sure, if the other one has the same impression of red as we have. Maybe he has the impression that we call "blue". But since he calls his impression of blue "red", he will also say that a cherry is red, even if the impression that he has equals our impression of "blue". This is why colors for example are considered to be qualia. In fact we know for certain that not all human beings perceive colors the same way. There are people who are red-green colorblind. And only some sophisticated tests can reveal this abnormal perception. The person is not aware that he perceives red differently than other people. He still calls the color in which cherries appear to him "red". Nevertheless, it is not the same perception that others have. This is the nature of qualia. It is a subjective perception. It cannot be described objectively. This is just one example for the fact that the world is not objective. Quantum mechanics has taught us that the world is in an undefined state of probabilities, when no observer is present. The Schrödinger equation has no single solution. Only an observer can cause the Schrödinger wave function to collapse and an event to become "real". As long as the event is not observed, it is nothing but a possibility.  And the Theory of Relativity has also shown us that the laws of classical physics only apply to the world as it is seen from the point of view of one observer. There is a huge difference in how two observers perceive the world if they are moving with relativistic speed in relation to each other. The world and the experience of time and space of one observer contradicts the world of the other observer, and there is no absolute and objective scale or coordinate system, which would allow us to make absolute and objective measurements. Every observer takes his own coordinate system of time and space with him.  So if we take some space where only two observers are present, and they stand back to back to each other with focus of their attention directed into opposite directions, then this means that the space between them is actually in an undefined, unreal state. Their two subjective realities are separated by an area, which is not real. We can say that they live in two separate and independent realities (left picture). This is why one observer cannot know, what the other observer perceives. This is the reason why our perception of the universe is from a first person point of view, not from a neutral and objective third person point of view. There is no objective point of view because there is no "reality" in the space between the two observers. Only when the observers turn their attention to each other, their perception overlaps and their realities become linked to each other. So we see that the assumptions of one common reality is an illusion. 
There are only separate subjective realities, which sometimes may overlap (right picture). This has far-reaching consequences. It means that the premises of all pantheistic religions are false. There is no such thing as "Oneness". We are not parts of one single Self, as many Eastern philosophies like Hinduism, Buddhism or Jainism assume. There is no such thing as the one common reality, the common self, which is the origin of all of us. In fact we are completely separate. We don't share the same reality. Every individual, supposing there is more than one, lives in its own separate reality. Separateness and subjectivity are the ultimate truth. Objectivity is an illusion. It is an illusion to soothe our fear of ultimate loneliness. So any ideology or religion based on collectivism is based on an illusion. We are not part of something higher, a community or an all-encompassing unity of the universe. We are individual observers. Sometimes our field of observation can be temporarily linked to the observation of other observers, but we will always be separate. Observers cannot merge with each other. They are always separate centers of observation, separate by their very nature.

Sunday, November 6, 2011
Metaphysics Part V - Indeterminism
Random or Intention
Now another question arises: How is it determined into which of all possible states the wave function will collapse? Is the wave function collapse random? Is it intentional? Is it determined by a third person or another unknown instance? We are now getting into an area that is speculation rather than science. Nevertheless we have logic as a tool to investigate the different possibilities, even if there is currently no way to prove our conclusions experimentally.
When the wave function collapses, for example in the decay of atoms of a radioactive isotope, it happens in an unpredictable way. This means we cannot say in advance which atom will decay. All atoms have the same probability of decaying; none of them stands out or is somehow preferred. If the atoms were not completely equal and interchangeable in this respect and it were somehow predictable which atom is the first to decay, we would immediately have our classic deterministic worldview back, which we just got rid of. So this is obviously not the case.
However, an observer will see a particular atom decay. So why is it this atom and not another one? Is the observer intentionally causing this one and not another to decay, as some people claim? So is the observer able to create reality intentionally? Or does it happen randomly, without any causality? Or perhaps is it determined by somebody else, e.g. by a god or some higher instance? According to the Copenhagen Interpretation it is the act of observation that causes the collapse of the wave function. So if it were a third party, another instance, that determined into which state the wave function collapses, what would the observer be needed for? The wave function could collapse into this particular state without the act of observation. But we have seen in the above-mentioned experiments that observation is needed, and that without an observer the wave function does not collapse. Therefore we can exclude the involvement of any third party like a god or any other higher instance.
So we are left with the question whether the decision is random or influenced by the intention of the observer. But what is randomness? Is randomness a thinkable concept?
How can a particular event take place if all possible events are totally equal in all their aspects, and no possible event stands out in any way? This is a philosophical question, and we would have to abandon causality if we accepted the concept of true randomness. And this is a tricky thing. Abandoning causality means giving up the rules. We would even threaten the principles of logic. It is difficult to accept this idea, and one would rather feel inclined to give up the Copenhagen Interpretation entirely in favor of the Many-Worlds Interpretation, which would be far more plausible, since it doesn't require the concepts of randomness and acausality. The assumption that the collapse of the wave function is intentionally caused by the observer is far more elegant and plausible. It would also provide an answer to several other philosophical problems, like free will and the question of how our consciousness and our physical body relate to each other.
But there is another good reason to assume that the collapse of the wave function is caused intentionally by the observer. From the experiments above we have learned that causality does not work as we believed. The concept of cause and effect is actually reversed. It is not the event that causes the effect of observation. It is the observation that causes the event. Without the observation, there is simply no event that could be observed. It is the observation that creates the event. Therefore observing is not a passive process; it is an active process. The observation itself is the cause; the collapse of the wave function, i.e. the particular event that we observe, is only the effect.
If the observer were subjected to the randomness of the collapse of the wave function, he would not play an active role in this process and could not be the cause of anything. Therefore we most likely have to discard the concept of true randomness. But supposing an observer can intentionally cause the collapse of the wave function, does this mean he can control the outcome of any event he observes? The limitation of this control would naturally be the wave function of the environment and the collapses caused by other observers. Furthermore we must not forget the concept of probability. Not all possible outcomes are equal. Some of them are preferred by the wave function. The world of quantum mechanics not only consists of possibilities, it consists of probabilities. Particular outcomes are preferred over others. Certain events are more likely to occur than others. It does not mean that a certain outcome is somehow predetermined; it may happen or it may not happen. It is just more likely to happen than others. In our example of the decay of an atom, it does not mean that an isotope with a half-life of one hour will necessarily decay within two hours; it only means that there is a certain probability for this event to occur. It is still possible that the atom does not decay after two hours or even after a whole day; this outcome just gets increasingly unlikely. So if we assume that every outcome is caused by observation, unlikely outcomes are certainly harder to observe than more likely outcomes. Maybe it takes some more effort by the observer in order to observe an unlikely outcome. If the observer can really intentionally influence a certain outcome, then it would require a stronger intention to observe a rather unlikely event. This raises the question of whether we can strengthen our intention in order to make unlikely things happen.
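As a numerical aside on the probabilities just mentioned, here is a minimal Python sketch (an illustration added for this edit, not part of the original post) that computes how unlikely it is for an atom with a one-hour half-life to survive a given amount of time:

```python
import math

def survival_probability(t_hours, half_life_hours=1.0):
    """Probability that a single atom has NOT decayed after t_hours.

    For a memoryless (exponential) decay process:
        P(survive t) = (1/2) ** (t / half_life) = exp(-lambda * t),
    with lambda = ln(2) / half_life.
    """
    decay_constant = math.log(2) / half_life_hours
    return math.exp(-decay_constant * t_hours)

for t in (1, 2, 24):
    p = survival_probability(t)
    print(f"after {t:>2} h: P(not decayed) = {p:.8f}, P(decayed) = {1 - p:.8f}")
# after  1 h: P(not decayed) = 0.50000000
# after  2 h: P(not decayed) = 0.25000000
# after 24 h: P(not decayed) = 0.00000006  (very unlikely, but not impossible)
```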
Maybe this strength of intention is what some people call "faith": faith that is able to move mountains according to the Bible, faith that manifests itself in the so-called placebo effect in medicine. Since the whole universe is subject to probabilities, hardly anything would be impossible; it may just be very, very unlikely. And it may depend on our faith whether it can become real or not.

Saturday, November 5, 2011
Metaphysics Part IV - Theory of Relativity
The concept of independent fields of reality can also help us to understand the paradox that led to Albert Einstein's Theory of Relativity, which is the other important pillar of modern physics and which replaced the old paradigms of Newton's mechanics. It is the observation that light always has the same velocity, no matter whether the light source is moving or not, that contradicted the traditional worldview of classical physics. Surprisingly, the speed of a light source does not add to the speed of light, although a second observer moving together with the light source measures that the light is emitted with the same speed as if the light source were not moving. Therefore it cannot be determined whether the first observer, who sees the light source moving relative to him, is moving himself or whether the light source is moving. There is no absolute movement. Every movement can only be defined relative to a reference system, i.e. a particular observer.
As a result, moving objects experience an effect called time dilatation. This means the time of a moving object passes more slowly than that of an object which stands still in relation to the observer. For example, the clock in a spaceship that moves with relativistic speed runs slower than a clock on Earth. To illustrate this effect, let's make a simple thought experiment.
Mr. Smith on Earth wants to send a parcel to his friend on a planet in Alpha Centauri, which is 4.37 light-years away from Earth. An interstellar courier service advertises that it can deliver any parcel to Alpha Centauri in less than 2 years and 3 months. So on October 1, 2997 Mr. Smith gives the parcel to the captain of the courier spaceship, expecting it to be delivered before New Year 3000. The captain takes the parcel on board his spaceship and accelerates to what he measures to be twice the speed of light, confident that he is going to deliver the parcel in time. After 6 months, on April 1, 2998 according to the board calendar, the spaceship passes the periphery of the Oort cloud, which is about one light-year away from Earth. So the captain of the courier ship concludes that he has successfully traveled 23% of the trip in just 6 months and will therefore arrive on Alpha Centauri in time. However, an observatory on Earth is watching his trip and calculates that the spaceship is only traveling at 89% of the speed of light.
As expected according to Einstein's Theory of Relativity, nothing can travel faster than light. So the courier spaceship is not able to do so either. But since the courier is traveling with relativistic velocity, time is passing more slowly on board the spaceship. Therefore he measures a far higher speed than the observer on Earth. This speed is called proper velocity and is the distance as it appears to an observer on Earth divided by the time passing on board the spaceship.
The equation for the proper velocity (ω) is:

ω = v / √(1 − v²/c²)

where
ω is the proper velocity, i.e. the distance measured by an observer on Earth divided by the time passed on board the spaceship,
v is the speed of the spacecraft in relation to Earth (distance per time as measured on Earth),
c is the speed of light (~299,792 km/s).

In order to reach a proper velocity of twice the speed of light (ω = 2c), as the captain of the courier ship calculated, he needs an actual velocity of about 89% (√0.8) of the speed of light as measured from Earth, as we can see when we replace the variable v with √0.8 · c.
When the courier spaceship reaches Alpha Centauri, the board calendar shows December 7, 2999, more than 3 weeks before New Year 3000. So the captain concludes that he made the trip from Earth to Alpha Centauri in 2.185 years, as expected. But when he delivers the parcel to Mr. Smith's friend, the calendar on Alpha Centauri as well as the calendar on Earth shows August 21, 3002, more than 2 ½ years after the promised delivery time. Of course Mr. Smith and his friend are upset about the delayed delivery and file a complaint, while the captain of the courier ship refuses any responsibility and insists that the parcel was delivered in time. When the case goes to the interstellar court, the captain of the courier spacecraft proves his point with the spaceship's logbook and the board calendar, which both confirm that the parcel was delivered in time, while Mr. Smith's friend can prove that he did not receive the parcel before August 3002, over 2 ½ years too late. So what decision would an objective and neutral judge make? Is the spaceship captain right? Is Mr. Smith's friend right? Are both of them wrong? Can't there be only one truth?
No, in fact both of them are right. There is no objective truth and no objective reality. There are only subjective realities. In the world of the courier ship only 2.185 years have passed, but on Alpha Centauri and Earth 4.886 years have passed. There is no objective time. There is only subjective time, because there is only subjective reality.
The subjective world of the spaceship has been consistent with itself and has always followed classical Newtonian mechanics. In the same way the world of the two planets Earth and Alpha Centauri has been consistent and has followed classical Newtonian mechanics. But when the courier ship accelerated to relativistic speed, its world disconnected from the other world. And when it connected again with the world of the stationary planets after arriving on Alpha Centauri, the world where it arrived was no longer consistent with the world it had left, because both worlds had been disconnected for too long a time. The two time scales of the two worlds had been distorted and no longer fit together under the Newtonian laws of physics. The relativistic effect of time dilatation has disrupted the common reality between both worlds. So suddenly two contradictory statements about the duration of the trip from Earth to Alpha Centauri can both be true, contradicting the basic laws of logic. Of course the statement of the captain of the courier ship is not true in the reality sphere of Mr. Smith's friend, just as the statement of Mr. Smith's friend is not true in the reality sphere of the spaceship. Within each world taken by itself, the laws of logic have not been violated. The problem results from the fact that we are talking about two different realities, two subjective realities. There is no objective reality. Outside of each observer's reality sphere, there is no reality as it is commonly understood. Every observer, every consciousness is a world in itself. He exists in his own world. He is the center of his world.
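As a quick cross-check of the numbers used in the courier example above, here is a small Python sketch (added for illustration; it is not part of the original post) that computes the Earth time, the on-board time, and the proper velocity for a ship flying at v = √0.8 · c over 4.37 light-years:

```python
import math

DISTANCE_LY = 4.37                 # Earth -> Alpha Centauri, in light-years

def trip_times(beta):
    """Return (earth_years, board_years, proper_velocity_in_c) for a ship moving at v = beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)   # time dilation factor
    earth_years = DISTANCE_LY / beta           # travel time measured on Earth
    board_years = earth_years / gamma          # proper time measured on board
    proper_velocity = beta * gamma             # Earth distance / board time, in units of c
    return earth_years, board_years, proper_velocity

beta = math.sqrt(0.8)                          # ~0.894 c, chosen so that the proper velocity is 2c
earth, board, w = trip_times(beta)
print(f"v = {beta:.3f} c  ->  proper velocity w = {w:.3f} c")
print(f"time on Earth: {earth:.3f} years, time on board: {board:.3f} years")
# Output: v = 0.894 c -> proper velocity w = 2.000 c
#         time on Earth: 4.886 years, time on board: 2.185 years
```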
There is no objective reality between the different worlds of the observers. Only his own world is real for every observer. Sometimes the different worlds touch and overlap each other. Then the observers interact with each other. Then we get the phenomenon as described above. Real is only what an observer observes himself. The world ends, where his observation ends. With the simple assumption that every observer is the center of an independent world, we can simplify our concept of the universe. We can explain all the phenomena that the Theory of Relativity and Quantum Mechanics try to describe under the assumption of a common universe for all observers. But without the concept of a common objective reality, everything becomes suddenly easier. And it changes our view of the world. We are not one out of myriads of beings in a huge common world. We are the center of the world – the center of our world. But our world is everything, which is. There is no reality beyond the limits of our world. There are possibilities, and there may be other worlds, but we are independent from them, and we don’t share a common concept of reality. This view of the world suddenly gives our self a much higher importance. We are not a small object in a vast objective universe, we are the center of the world, and there is no such thing as an objective universe. Our observation is what makes things become real. Without our self as observer, the world cannot be. Without our self as observer, there are only undetermined possibilities as described by the Copenhagen Interpretation of Quantum Mechanics. Friday, November 4, 2011 Metaphysics Part III - Creating Reality We have found out that the physical world itself is not real; it becomes real by observation. This coincides with a very basic phenomenon that is often overlooked. When talking about science, we talk a bout the physical world from an objective point of view, let's say from a neutral third person perspective. However we have never experienced the world in an objective way. We always see the world from a certain point of view. And this is what the world in fact is; it is subjective. The world is no objective reality. It is a subjective reality. We create it originating from our point of view, simply by viewing it. Therefore we are not observers moving around in a real existing objective world; the world is instead the rather undefined field of possibilities between the countless subjective fields of reality that we create around us. The size of this field of reality, which surrounds us, is limited by the range of our perception. The space where these fields of reality of different observers touch each other is the field of intersubjective reality. The blurred space between the different fields of subjective reality created by the observers is in a non-real, undefined state, whose possibilities are determined by the wave function in order to prevent contradictory events in the fields of reality, where the wave function collapses into distinct states. So when subjective fields of reality merge into an intersubjective field of reality, it can never happen that the observers create contradictory events by their observation. As soon as the wave function has collapsed, it has a clearly defined state. The observation by a second observer doesn't have any influence on this state anymore. So both observers always see the same, and every observation they make in their subjective field of reality is consistent with the subjective field of reality of any other observer. 
Observers don't create reality in an arbitrary way; they simply make the wave function collapse into one of the possible states described by the Schrödinger equation.

Thursday, November 3, 2011
Metaphysics Part II - Quantum Physical Explanation
Quantum mechanics explains these phenomena in the following way: The location or the state of the particle is described by a wave function, which was first discovered by the Austrian physicist Erwin Schrödinger in 1926. The wave function evolves according to the Schrödinger equation

iħ ∂Ψ(r,t)/∂t = Ĥ Ψ(r,t)

where
Ψ(r,t) is the wavefunction, which is the probability amplitude for different configurations of the system,
ħ is Planck's constant divided by 2π,
Ĥ is the Hamiltonian operator.

This wave function does not give a particular location for the particle in our experiment. Nevertheless we can detect the particle at a distinct location when it is measured upon arriving at the screen. Schrödinger described this paradox in a thought experiment involving a cat.

Schrödinger's Cat
If we don't attribute to the cat the quality of being an observer in its own right, it would indeed mean that the cat is dead and alive at the same time as long as we don't come back to check its actual state. In the real world we would actually never see a half-dead, half-alive cat. The cat would either be alive or dead when we return to observe the outcome of the experiment. How can this contradiction between the Schrödinger equation and the physical world that we observe be interpreted?
Today's physics currently has several possible explanations for the paradoxes described in the experiments above. The two most common ones are the Copenhagen Interpretation and the Many-Worlds Interpretation. Most physicists are inclined to accept the Copenhagen Interpretation; however, we should have a short look at the alternative explanation, which is also supported by many scientists. According to this interpretation, in each case of a non-deterministic event, that is, whenever the wave function of the Schrödinger equation offers more than one possible outcome, our world splits into distinct parallel worlds. In the case of the double-slit experiment the world would split into one world where the particle passes through the left slit and another one where the particle passes through the right slit. For a short period of time the two worlds are still interconnected and can interact with each other, so that we get the resulting interference pattern on the screen. But then the worlds are irreversibly separated and continue their own path of events. So in one world Schrödinger's cat would die, in the other one it would survive. Depending on which world we are in, we would see the one or the other outcome. Since such non-deterministic events occur in incredible numbers every millisecond, the universe would permanently split into an uncountable number of parallel universes in which every possible outcome of events occurs. Everything that is even remotely possible would therefore happen somewhere in some of the almost infinite number of alternative universes. This may sound like a strange idea from a science fiction novel, but it has a certain advantage over the mainstream Copenhagen Interpretation. It does not need to introduce the only vaguely described concept of measurement or observation and does not need to explain how it stands out from the non-observed world. Nevertheless most physicists today prefer the Copenhagen Interpretation to explain the phenomena of quantum mechanics.
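As a toy numerical illustration of the interference pattern mentioned above (added here for this edit, not part of the original post; the wavelength, slit spacing and screen distance are made-up values), the sketch below adds the two slit amplitudes before squaring, and contrasts this with adding the two probabilities, which is what remains once "which slit" information exists:

```python
import numpy as np

# Toy double-slit model: two point sources (the slits) separated by d,
# emitting waves of wavelength lam; intensity is computed on a screen at distance L.
lam, d, L = 0.5e-6, 5e-6, 1.0          # metres (illustrative values)
k = 2 * np.pi / lam
x = np.linspace(-0.2, 0.2, 2001)       # positions on the screen

r1 = np.sqrt(L**2 + (x - d / 2) ** 2)  # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)  # path length from slit 2

psi1 = np.exp(1j * k * r1)             # amplitude from slit 1
psi2 = np.exp(1j * k * r2)             # amplitude from slit 2

both_open = np.abs(psi1 + psi2) ** 2               # amplitudes add -> interference fringes
which_slit = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # probabilities add -> no fringes

print("fringe contrast, both slits open:", both_open.max() - both_open.min())   # ~4
print("fringe contrast, 'which slit' known:", which_slit.max() - which_slit.min())  # ~0
```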
Copenhagen Interpretation The Copenhagen Interpretation can be summarized as the statement that the process of observation (measurement) causes a collapse of the wave function, which then results in a distinct state of an object. In our example of the double-slit experiment this means, that the particle has no distinct path through the slits, while it is not observed. But as soon as we measure the location of the particle and put a detector at the slit, which tells us, where the particle passes through, the wave function collapses, the particle gets a distinct location and consequently there is no interference pattern created on the screen. While the world is just a wave function of possibilities with different probabilities, this means not actually real, when unobserved; the act of observation makes it suddenly become real and taking a distinct state and a distinct location. The wave function collapse is what makes things become real. Before this collapse the world is just a field of probabilities. In the case of our cat, it would indeed be dead and alive at the same time, as long as it is not observed (supposed we don't count the cat itself as an observer). Then suddenly it takes a distinct state caused by the fact that we observe it after coming back. It means the cat is not really there, when nobody looks. It is only there as a cat in a particular state, when we look at it. Following this interpretation it means that the world is not actually real. It is a huge field of probabilities. It becomes only real due to our observation. We are creating the world by observing it. So it is not, as many materialists believe, that we depend on the physical world, that our consciousness is the result of the physical world. Quantum mechanics has shown that the physical world depends on us. The physical world is the result of our consciousness. We are now entering an interpretation of quantum mechanics, which goes even further than the Copenhagen Interpretation. This interpretation is called "Consciousness causes collapse". It seems to be the logical consequence of the Copenhagen Interpretation. Because the Copenhagen Interpretation falls short of explaining its essential question: What is "observation"? When is something observed, when is it unobserved? The "Consciousness causes collapse" interpretation answers this question by introducing the concept of consciousness. It also solves the philosophical and religious questions, what consciousness actually is. According to this interpretation, consciousness is what causes the wave function collapse.  So finally we can answer the question: Who has a soul, only humans, or also animals and plants? The answer is: Whatever can cause the collapse of the wave function.  The only problem with this answer is that there is no way to find out, if somebody or something can cause the collapse of the wave function by himself alone or if it just collapsed after we looked to find out the result of the experiment. Since we cannot think about a way to prove it experimentally, the "Consciousness causes collapse" interpretation is often considered to be pseudoscientific and rather belonging into the field of philosophy than physics.  In the next part let's therefore forget the concept of "consciousness" for a moment and go back to the Copenhagen Interpretation and its implications. Wednesday, November 2, 2011 Metaphysics Part I - The Nature of Light and Matter Double-Slit Experiment Interferometer Experiment Why does this occur? 
There are four theoretically possible paths the light can take:

Tuesday, November 1, 2011
Religion Part V - Strategies to Overcome Religion
Strategies likely to fail:
The use of force
Examples of these failures are:
• The ill-fated persecution of Christians in the Roman Empire
• The Crusades
• The systematic execution of clerics during the French Revolution
• The suppression of religion in the former Soviet Union and Cuba
Strategies likely to succeed:
Not attacking existing deities
• Moses made the pagan demon Yahweh into his monotheistic god
• Christianity integrated the Jewish god but not the Jewish laws
• Hinduism continued the polytheistic worship of the former folk religion
Integrating former religious rituals, giving them a new meaning
• Islam integrated the pagan pilgrimage to the Kaaba.
• Taoism and Confucianism maintain temples without worshiping any gods.
Shifting the focus from worshiping gods to philosophy
Ignoring formal religious affiliation
Applying successful strategies in today's world
Many-minds interpretation
From Wikipedia, the free encyclopedia
The many-minds interpretation of quantum mechanics extends the many-worlds interpretation by proposing that the distinction between worlds should be made at the level of the mind of an individual observer. The concept was first introduced in 1970 by H. Dieter Zeh as a variant of the Hugh Everett interpretation in connection with quantum decoherence,[1] and later (in 1981) explicitly called a many- or multi-consciousness interpretation. The name many-minds interpretation was first used by David Albert and Barry Loewer in 1988.[2]

Interpretations of quantum mechanics
The various interpretations of quantum mechanics typically involve explaining the mathematical formalism of quantum mechanics or creating a physical picture of the theory. While the mathematical structure has a strong foundation, there is still much debate about the physical and philosophical interpretation of the theory. These interpretations aim to tackle various concepts such as:
1. Evolution of the state of a quantum system (given by the wavefunction), typically through the use of the Schrödinger equation. This concept is almost universally accepted and is rarely put up for debate.
2. The measurement problem, which relates to what we call wavefunction collapse, the collapse of a quantum state into a definite measurement (i.e. a specific eigenstate of the wavefunction). The debate on whether this collapse actually occurs is a central problem in interpreting quantum mechanics.
The standard solution to the measurement problem is the "Orthodox" or "Copenhagen" interpretation, which claims that the wave function collapses as the result of a measurement by an observer or apparatus external to the quantum system. An alternative interpretation, the Many-worlds Interpretation, was first described by Hugh Everett in 1957[3][4] (where it was called the relative state interpretation; the name Many-worlds was coined by Bryce Seligman DeWitt starting in the 1960s and finalized in the 1970s[5]). His formalism of quantum mechanics denied that a measurement requires a wave collapse, instead suggesting that all that is truly necessary of a measurement is that a quantum connection is formed between the particle, the measuring device, and the observer.[4]

The many-worlds interpretation
In the original relative state formulation, Everett proposed that there is one universal wavefunction that describes the objective reality of the whole universe. He stated that when subsystems interact, the total system becomes a superposition of these subsystems. This includes observers and measurement systems, which become part of one universal state (the wavefunction) that is always described via the Schrödinger equation (or its relativistic alternative). That is, the states of the subsystems that interacted become "entangled" in such a way that any definition of one must necessarily involve the other. Thus, each subsystem's state can only be described relative to each subsystem with which it interacts (hence the name relative state).
This has some interesting implications. For starters, Everett suggested that the universe is actually indeterminate as a whole. To see this, consider an observer measuring some particle that starts in an undetermined state, as both spin-up and spin-down, for example, that is, a superposition of both possibilities. When an observer measures that particle's spin, however, it always registers as either up or down.
The problem of how to understand this sudden shift from "both up and down" to "either up or down" is called the measurement problem. According to the many-worlds interpretation, the act of measurement forced a "splitting" of the universe into two states, one spin-up and the other spin-down, and the two branches that extend from those two subsequently independent states. One branch measures up. The other measures down. Looking at the instrument informs the observer which branch she is on, but the system itself is indeterminate at this and, by logical extension, presumably any higher level. The "worlds" in the many-worlds theory are then just the complete measurement history up until and during the measurement in question, where splitting happens. These "worlds" each describe a different state of the universal wave function and cannot communicate. There is no collapse of the wavefunction into one state or another; rather, you just find yourself in the world that leads up to the measurement you have made, and you are unaware of the other possibilities, which are equally real.

The many-minds interpretation
The many-minds interpretation of quantum theory is many-worlds with the distinction between worlds constructed at the level of the individual observer. Rather than the worlds branching, it is the observer's mind that branches.[6] The purpose of this interpretation is to overcome the fundamentally strange concept of observers being in a superposition with themselves. In their 1988 paper, Albert and Loewer argue that it simply makes no sense to think of the mind of an observer as being in an indefinite state. Rather, when someone answers the question about which state of a system they have observed, they must answer with complete certainty. If they are in a superposition of states, then this certainty is not possible and we arrive at a contradiction.[2] To overcome this, they then suggest that it is merely the "bodies" of the minds that are in a superposition, and that the minds must have definite states that are never in superposition.[2]
When an observer measures a quantum system and becomes entangled with it, it now constitutes a larger quantum system. To each possibility within the wave function there corresponds a mental state of the brain. And ultimately, only one mind is experienced, leading the others to branch off and become inaccessible, albeit real.[7] In this way, every sentient being is attributed an infinity of minds, whose prevalence corresponds to the amplitude of the wavefunction. As an observer checks a measurement, the probability of realizing a specific measurement directly correlates with the number of minds they have where they see that measurement. It is in this way that the probabilistic nature of quantum measurements is obtained by the Many-minds Interpretation.

Quantum non-locality in the many-minds interpretation
The body remains in an indeterminate state while the mind picks a stochastic result. Now, consider an experiment where we are measuring the polarization of two photons. When a photon is created it has an indeterminate polarization. If a stream of these photons is passed through a polarization filter, 50% of the light is passed through. This corresponds to each photon having a 50% chance of aligning perfectly with the filter and thus passing, or being misaligned (by 90 degrees relative to the polarization filter) and being absorbed. Quantum mechanically, this means the photon is in a superposition of states where it is either passed or absorbed.
Now, consider the inclusion of another photon and polarization detector. The photons are created in such a way that they are entangled. That is, when one photon takes on a polarization state, the other photon will always behave as if it has the same polarization. For simplicity, take the second filter to be either perfectly aligned with the first, or perfectly misaligned (a 90 degree difference in angle, such that the photon is absorbed). If the detectors are aligned, both photons are passed (i.e. we say they agree). If they are misaligned, only the first passes and the second is absorbed (now they disagree). Thus, the entanglement causes perfect correlations between the two measurements, regardless of separation distance, making the interaction non-local. This sort of experiment is further explained in Tim Maudlin's Quantum Non-Locality and Relativity,[8] and can be related to Bell test experiments. Now, consider the analysis of this experiment from the many-minds point of view:

No sentient observer
Consider the case where there is no sentient observer, i.e. no mind around to observe the experiment. In this case, the detector will be in an indefinite state. The photon is both passed and absorbed, and will remain in this state. The correlations are preserved in that none of the possible "minds", or wave function states, corresponds to non-correlated results.[8]

One sentient observer
Now expand the situation to have one sentient being observing the device. Now, he too enters the indefinite state. His eyes, body, and brain are seeing both results at the same time. The mind, however, stochastically chooses one of the directions, and that is what the mind sees. When this observer goes over to the second detector, his body will see both results. His mind will choose the result that agrees with the first detector, and the observer will see the expected results. However, the observer's mind seeing one result does not directly affect the distant state; there is just no wave function in which the expected correlations do not exist. The true correlation only happens when he actually goes over to the second detector.[8]

Two sentient observers
When two people look at two different detectors that scan entangled particles, both observers will enter an indefinite state, as with one observer. These results need not agree: the second observer's mind does not have to have results that correlate with the first's. When one observer tells the results to the second observer, their two minds cannot communicate and thus will only interact with the other's body, which is still indefinite. When the second observer responds, his body will respond with whatever result agrees with the first observer's mind. This means that both observers' minds will be in a state of the wavefunction that always gets the expected results, but individually their results could be different.[8]

Locality of the many-minds interpretation
As we have thus seen, any correlations seen in the wavefunction of each observer's mind only become concrete after interaction between the different polarizers. The correlations on the level of individual minds correspond to the appearance of non-locality (or, equivalently, violation of Bell's inequality). However, since the interactions only take place in individual minds, they are local: there is no real interaction between space-like separated events that could influence the minds of observers at two distant points.
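As an aside, here is a small Monte Carlo sketch (added for illustration in this edit, not part of the article) of the perfect correlations described in the aligned/misaligned photon experiment above. It is a deliberately simple toy model in which the pair shares one outcome at collapse; it reproduces only the two special filter settings discussed here, not general angles or Bell-inequality violations.

```python
import random

def entangled_pair(second_filter_aligned: bool):
    """Simulate one entangled photon pair hitting two polarization filters.

    Toy model: with probability 1/2 both photons are aligned with the first
    filter; if the second filter is rotated by 90 degrees, its pass/absorb
    result is flipped relative to the first.
    """
    aligned_with_first = random.random() < 0.5
    first_passes = aligned_with_first
    second_passes = aligned_with_first if second_filter_aligned else not aligned_with_first
    return first_passes, second_passes

def agreement_rate(second_filter_aligned, n=100_000):
    same = sum(a == b for a, b in (entangled_pair(second_filter_aligned) for _ in range(n)))
    return same / n

print("aligned filters, agreement rate   :", agreement_rate(True))    # ~1.0, perfect agreement
print("misaligned filters, agreement rate:", agreement_rate(False))   # ~0.0, perfect disagreement
```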
This, like the many worlds theory, makes the many-minds theory completely local.[8] There is currently no empirical evidence for the many-minds interpretation. However, there are theories that do not discredit the many-minds interpretation. In light of Bell’s analysis of the consequences of quantum non-locality, empirical evidence is needed to avoid inventing novel fundamental concepts (hidden variables).[9] Two different solutions of the measurement problem then appear conceivable: von Neumann’s collapse or Everett’s relative state interpretation.[10] In both cases a (suitably modified) psycho-physical parallelism can be re-established. Since conscious awareness has to be coupled with local physical systems, the observer’s physical environment has to interact and can influence the brain. The brain itself must have some physico-chemical processes that affect the states of awareness. If these neural processes can be described and analyzed then some experiments could potentially be created to test whether affecting neural processes can have an effect on a quantum system. Speculation about the details of this awareness-local physical system coupling on a purely theoretical basis could occur, however experimentally searching for them through neurological and psychological studies would be ideal.[11] When considering psycho-physical parallelism, superpositions appear rich enough to represent primitive conscious awareness. It seems that quantum superpositions have never been considered, for example, in neuronal models, since only classical states of definite neuronal excitation are usually taken into account. These quasi-classical states are also measured by external neurobiologists. Quantum theory would admit their superpositions, too, thus giving rise to a far greater variety of physical states which may be experienced by the subjective observer. When used for information processing, such superpositions would now be called “quantumbits” or qubits. As demonstrated by M. Tegmark, they can not be relevant for neuronal and similar processes in the brain.[12] Objections that apply to the Many-worlds Interpretation also apply to the Many-minds Interpretation. On the surface both of these theories arguably violate Occam's Razor; proponents counter that in fact these solutions minimize entities by simplifying the rules that would be required to describe the universe. Nothing within quantum theory itself requires each possibility within a wave function to complement a mental state. As all physical states (i.e. brain states) are quantum states, their associated mental states should be also. Nonetheless, it is not what we experience within physical reality. Albert and Loewer argue that the mind must be intrinsically different than the physical reality as described by quantum theory.[6] Thereby, they reject type-identity physicalism in favour of a non-reductive stance. However, Lockwood saves materialism through the notion of supervenience of the mental on the physical.[7] Nonetheless, the Many-minds Interpretation does not solve the mindless hulks problem as a problem of supervenience. Mental states do not supervene on brain states as a given brain state is compatible with different configurations of mental states.[13] Another serious objection is that workers in No Collapse interpretations have produced no more than elementary models based on the definite existence of specific measuring devices. 
They have assumed, for example, that the Hilbert space of the universe splits naturally into a tensor product structure compatible with the measurement under consideration. They have also assumed, even when describing the behaviour of macroscopic objects, that it is appropriate to employ models in which only a few dimensions of Hilbert space are used to describe all the relevant behaviour. Furthermore, as the Many-minds Interpretation is corroborated by our experience of physical reality, a notion of many unseen worlds and its compatibility with other physical theories (i.e. the principle of the conservation of mass) is difficult to reconcile.[6] According to Schrödinger's equation, the mass-energy of the combined observed system and measurement apparatus is the same before and after. However, with every measurement process (i.e. splitting), the total mass-energy would seemingly increase[14] Peter J. Lewis argues that the Many-minds Interpretation of quantum mechanics has absurd implications for agents facing life-or-death decisions.[15] In general, the Many-minds theory holds that a conscious being who observes the outcome of a random zero-sum experiment will evolve into two successors in different observer states, each of whom observes one of the possible outcomes. Moreover, the theory advises you to favour choices in such situations in proportion to the probability that they will bring good results to your various successors. But in a life-or-death case like getting into the box with Schrödinger's cat, you will only have one successor, since one of the outcomes will ensure your death. So it seems that the Many-minds Interpretation advises you to get in the box with the cat, since it is certain that your only successor will emerge unharmed. See also quantum suicide and immortality. Finally, it supposes that there is some physical distinction between a conscious observer and a non-conscious measuring device, so it seems to require eliminating the strong Church–Turing hypothesis or postulating a physical model for consciousness. See also[edit] 1. ^ Zeh, H. D. (1970-03-01). "On the interpretation of measurement in quantum theory". Foundations of Physics. 1 (1): 69–76. Bibcode:1970FoPh....1...69Z. doi:10.1007/BF00708656. ISSN 0015-9018.  2. ^ a b c Albert, David; Loewer, Barry (1988-01-01). "Interpreting the Many-Worlds Interpretation". Synthese. 77 (November): 195–213.  3. ^ Everett, Hugh (1957-07-01). ""Relative State" Formulation of Quantum Mechanics". Reviews of Modern Physics. 29 (3): 454–462. Bibcode:1957RvMP...29..454E. doi:10.1103/RevModPhys.29.454.  4. ^ a b Everett, Hugh (1973-01-01). DeWitt, B.; Graham, N., eds. The Theory of the Universal Wavefunction. Princeton UP.  5. ^ Dewitt, Bryce S. (1973-01-01). "Quantum Mechanics and Reality": 155. Bibcode:1973mwiq.conf..155D.  6. ^ a b c Wendt, Alexander (2015-04-23). Quantum Mind and Social Science. Cambridge University Press. ISBN 9781107082540.  7. ^ a b Lockwood, Michael (1996-01-01). "Many-Minds Interpretations of Quantum Mechanics". British Journal for the Philosophy of Science. 47 (2): 159–88. doi:10.1093/bjps/47.2.159.  8. ^ a b c d e Maudlin, Tim (2011-05-06). Quantum Non-Locality and Relativity: Metaphysical Intimations of Modern Physics. John Wiley & Sons. ISBN 9781444331264.  9. ^ Bell, John (1964). "On the Einstein Podolsky Rosen Paradox" (PDF). Physics. 1 (3): 195–200.  10. ^ Zeh, H. D. (1999-08-28). "The Problem of Conscious Observation in Quantum Mechanical Description". arXiv:quant-ph/9908084Freely accessible.  11. 
^ Zeh, H. D. (1979). "Quantum Theory and Time Asymmetry". Foundations of Physics. 9 (11–12): 803–818. Bibcode:1979FoPh....9..803Z. doi:10.1007/BF00708694. ISSN 0015-9018.
12. ^ Tegmark, Max (2000-04-01). "The importance of quantum decoherence in brain processes". Physical Review E. 61 (4): 4194–4206. Bibcode:2000PhRvE..61.4194T. doi:10.1103/PhysRevE.61.4194. ISSN 1063-651X.
13. ^ "On Dualistic Interpretations of". Retrieved 2016-03-14.
14. ^ "Locality and Mentality in Everett Interpretations: Albert and Loewer's Many Minds". Retrieved 2016-03-14.
15. ^ Lewis, Peter J. (2000-01-01). "What Is It like to Be Schrödinger's Cat?". Analysis. 60 (1): 22–29. doi:10.1093/analys/60.1.22. JSTOR 3329285.

External links
Atoms of an Isotope Are Identical, Literally Matt Strassler [December 14, 2012] Now here’s a remarkable fact, with enormous implications for biology.  Take any isotope of any chemical element with atomic number Z.  If you take a collection of atoms that are from that isotope — a bunch of atoms that all have Z electrons, Z protons, and N neutrons — you will discover they are literally identical.   [A bit more precisely: they are identical when, after being left alone for a brief moment, each atom settles down into its preferred configuration, called the “ground state.”]   You cannot tell two such atoms apart.   They all have exactly the same mass, the same chemical properties, the same behavior in the presence of electric and magnetic fields; they emit and absorb exactly the same wavelengths of light waves.  This a consequence of the identity of their electrons, of their protons and of their neutrons, which will be discussed later. That all atoms of the same isotope are identical, and that different isotopes of the same element have nearly identical chemistry, is a profound fact of nature!  Among other things, it explains how our bodies can breathe oxygen and drink water and process salt and sugar without having to select which oxygen or water or salt or sugar molecules to consume.  Contrast this with what a construction company has to do when building a house out of bricks, or out of concrete blocks.  Bricks and concrete blocks vary, and are sometimes defective, and so a builder must exercise quality control, to make sure that cracked or over-sized or misshapen bricks and blocks aren’t used in the walls of the house.  No such quality control is generally needed for our bodies when we breathe; any oxygen atom will do as well as any other, because we only need the oxygen to make molecules inside our bodies, and chemically all oxygen atoms are essentially the same.  (This is all the more true since, for most elements, one isotope is much more common than the rest; for example, most hydrogen atoms [one electron and one proton] have no neutrons, and most oxygen atoms [eight electrons and eight protons] have eight neutrons.)   36 responses to “Atoms of an Isotope Are Identical, Literally 1. Would a radioactive atom (Say K-40) be a defective building block? If any two given atoms of the same isotope of an element are completely identical, how does this sit with the Pauli exclusion principle? Surely something must differentiate them, or do they obey the maths of bosons? • I’m probably stepping on the professor’s toes here, but atoms aren’t single particles. In 2 atoms of the same element, each electron is in each atom in its proper orbital that obeys the principle, but the two atoms aren’t linked (unless they are a molecule, in which case each electron is in its orbital and the valence electrons are in their hybrid bonding orbital, all of which obey the exclusion principle). • You are correct in that atoms are composite particles, but so are things like protons. An atom can behave like a single fundamental particle in that I can perform experiments like the double slit experiment on it. (I beleive the largest aggregation this has successfully been performed on is C60 fullerene molecules.) I also know that atomic nuclei can be fermions (Odd number of nucleons) or bosons (even number of nucleons) and that this is directly responsible for say, Helium-4’s ability to become a superfluid at higher temperatures than that of helium-3. • Kudzu is correct, andy; your objection isn’t accurate. 
Because of quantum mechanics, composite objects in their ground states can still be exactly identical, just as Kudzu describes. Kudzu, your point about radioactivity is a good one. Not sure how to bring it in though. Our biology is designed with error-correction mechanisms, to handle the damage from the small amounts of radioactivity that we’re likely to encounter in daily life. As for Pauli’s exclusion principle for identical fermions — I am confused about your point. Electrons are identical, in that you can swap one for another and nothing changes. They are indistinguishable. But that doesn’t mean they are *doing* the same thing; one of them could be here on earth and another on the moon, or one could be in the inner shell of a carbon atom with another in the outer shell. (Think about identical twins; you can’t tell them apart, but they don’t have to do the same thing at the same time.) Since electrons are fermions, and identical, they can’t be in the same location, doing exactly the same thing; that’s Pauli exclusion. Exactly the same logic holds for atoms of the same isotope in their ground state. The statement that they are identical isn’t the statement that they are doing identical things; it is the statement that if you swapped two of them, making the first do what the second was doing and vice versa, you wouldn’t be able to tell you’d made a swap. And if they are fermions, they can’t be doing exactly the same thing (since, for experts, the swap would produce a relative minus sign in the wave function, which would be impossible if they were behaving identically.) • Right. I guess I should have been more specific, and on further thought this question may be better worded not relating primarily to the exclusion principle at all. As you know you cannot force an arbitrary number of identical fermions into the same space, I cannot make a ‘laser’ beam out of fermions. Given that some atoms are fermions, I assume that you cannot place two of them in the same space in their ground state. Does this affect interatomic forces? I have always assumed the primary force keeping atoms from packing closer together than they do was electromagnetic repulsion between electrons in the orbitals of neighboring atoms, though I have recently seen it argued with some force that it is in fact the exclusion principle not allowing an atom’s electrons to occupy the same space. I am very doubtful of this, but would like to know if the exclusion principle has any affect on atoms, does a gas of helium 4 atoms have a higher density than one of helium-3 (In terms of atoms of course) since He-4 atoms are bosons? 2. I’m not sure that sugar is the best example because of the possibility of chirality — the difference between D-glucose and L-glucose is very important for biology! 3. You point out that the chemical activity of different isotopes of the same element is just about identical and so one’s body does not have to pick and choose. However, your choice not to mention basic molecules of an element, oxygen for example, leaves out the possibility of an interesting contrast between different isotopes, which can be processed without problem, versus different allotropes: O3 (ozone) versus O2, which do not affect the body in the same way at all. I realize that this would require a much longer article, but somehow I felt it was missing. • Hmmm. I haven’t thought about where that would fit in my presentation. I’m trying to get to particle physics as quickly as possible and not do too much with molecules. 
Maybe at some point I’ll be able to add that in. 4. check for a typo in the caption to figure 3 5. So are these authors just completely on drugs? “Nonidentical protons” T. Mart, A. Sulaksono (Submitted on 25 Feb 2013) We have calculated the proton charge radius by assuming that the real proton radius is not unique and the radii are randomly distributed in a certain range. 6. Let’s do this mind experiment. We have two radioactive atoms of the same isotope, A and B. Let’s say that A is in your hand and B is in mine (assume that all external forces are exactly the same for each atom). Now, yours (A) decays after 10 seconds and mine (B) does not. Imagine us going back in time 10 seconds and swapping atoms so that A is in my hand and B is in yours. Which one will decay in 10 seconds? I will suggest that the one in my hand will, because the internal workings (arrangements of subparticles in the atom) would result in decay in that particular atom. • Recall however that radioactive decay is a quantum process. Atoms do not have little ‘internal clocks’ that count down to their decay time. As such if we reversed time and swapped the atoms there’s no guarantee that either of them would decay in ten seconds. • The particular experiment you’ve just suggested can’t be done, even logically. [I.e. you can’t go back and do the same experiment with the same atoms after they’d decayed.] You have to use logic if you’re going to do science; once you drop logic you make lots of mistakes. How about suggesting an experiment you could actually do? Let’s throw the two atoms at each other. Suppose one comes back to your hand and the other comes back to mine. Did they miss each other, so that you caught mine and I caught yours? or did one bounce off the other, so you caught yours and I caught mine? Well, according to you, we can tell the difference. But experiment shows we can’t tell, because the scattering probability, which can be calculated, would be larger if we could tell the difference. (And you can test this by scattering non-identical atoms off each other and checking that your calculation always gives the correct scattering rate in that case; only when the atoms are of the same type do you get a different answer.) There are thousands of other checks. • Matt wrote: “Thank you for this comment, which appears wise and reasonable but is hopelessly naive. You seem to think that this is something that theoretical physicists made up out of their heads, and that obviously it can’t be checked because, gosh, how could you possibly go in and compare two protons?” I am not talking about comparing just two atoms of the same isotope or two protons from an external view. I am talking about taking two atoms of the same isotope or protons and looking at the specific details of the inner workings, like the specific configuration of an unstable nucleus at a particular moment or, in the case of a proton, the quarks, gluons, etc. that make up the proton. With protons, are the colors, flavors, orientations, etc exactly identical at any given time? Can gluons split into virtual quarks in one proton while remaining gluons in the other? The idea that every proton is made from two up quarks and one down quark is incorrect. A proton has two more up quarks than up antiquarks, and one more down quark than down antiquarks and they are moving all over the place. I don’t imagine the configurations of two protons to be exactly the same, but instead that the net particle appears to be the same. 
My point is this, just because WE can’t tell the difference with our methods and measurements at this time … does not mean that ultimately two atoms of the same isotope are exactly alike. Of course, I understand that through experiments and things like the diffusion problem, you can get at the movement of nuclei and protons. Yes, I also know that these movements currently may only be well-modeled using quantum mechanics, which assumes random behavior. But I am saying that quantum mechanics may not be the end-all, or pinnacle of our ability to understand how matter behaves. It may be unproductive to assume that randomness is the inherent underpinning of nature. 7. How do we know that? Atoms of the same isotope may APPEAR identical from an external perspective, and the process of decay may APPEAR random. Nevertheless, is it possible that there are subatomic processes in an unstable nucleus that we are unaware of from our limited view. Historically, processes that appear to be random turn out not to be once we have developed more precise instrumentation. Someone on this thread posted a comment linking to research saying that protons themselves may not be exactly identical, as previously thought, which may suggest that the nuclei of unstable atoms may not be identical, not to mention that gluon cloud configurations in quarks may differ just as electron shell configurations may differ from one atom to another. Perhaps inserting randomness in our models in order to predict decay is useful for large numbers of atoms, but to assume it is “random” for each single nuclei may be taking the concept too far. • The problem is the assumption of randomness works so *well* The half life of an isotope can be related to the difference in energy levels between the parent and daughter product plus the mechanism of decay. The standard model predicts the structure we see and fails to predict (as far as I am aware) any mechanism for radioactive decay that would be ‘non-random.’ We have evidence that particles in a nucleus are highly identical; nuclei have specific energy levels that in several cases have been measured quite exactly and any variation would show up as ‘broadening of the bands’ in these measurements. Likewise with hadrons themselves we have measured excitations on things like the proton. The evidence that protons may not be identical is tentative and the effect is hardly a major one.. Then of course there is the question of whether or not simply not being identical would have a measurable or predictable effect on decay. (The chemical environment of electron-capture nuclei affects their decay rate, but not that of other decay modes.) And if the difference itself is random… Certainly if we find even that protons are not identical it will be a monumental discovery most certainly requiring extensive new physics. • It works “well” in the sense that there is a predictable “average” for decay rates when looking at a large quantity of atoms. It does not work well when looking at an individual one. Kudzu wrote <<>> We can speed up decay rates for certain isotopes under certain conditions (like in a nuclear reactor). Solar flares of the sun tend to speed up decay rates for many radioactive isotopes so that even the average mentioned above is not actually constant (as previously thought). These ideas suggest that decay is not entirely “random”. • The weakness when dealing with individual atoms is an inherent weakness of all random processes. It is what we *expect* if the process were indeed random. 
This is the problem with postulating a non-random process. All the facts so far are consistent with a random process. A big problem is that, as you note, radioactive decay isn’t entirely random in that various conditions have to be met in order for it to occur. A fully ionized K40 nucleus is stable; it needs an electron in the vicinity of the nucleus to decay. (This is usually a 1s electron that spends some time there, hence the dependence of the decay rate on chemical environment.) Likewise, changing conditions WILL change the decay rates of various isotopes as conditions are made more or less conducive to decay. But this does not eliminate the ‘core randomness’ of the process. You can double the rate of decay of an isotope, but all that means is that any given atom in it has twice the probability of decaying in any given time. What you propose would be some mechanism where we could, in theory, measure a single atom and know exactly when it would decay, eliminating *all* randomness from the process. This is something that would be relatively easy to prove but very hard to disprove. It is one of those things like the ‘shadow biosphere’ where you can never be totally sure it isn’t there, but we have no good reason to assume it is.

Physicists didn’t make an assumption that protons are identical, that electrons are identical, etc.; they considered this hypothesis with care, and recognized that the question could be tested, using statistical considerations, which underlie thermodynamics. Consider three “A”s. Now if they are not the same, there are 3! = 6 distinct ways to arrange the three A’s: for instance, A₁A₂A₃, A₂A₁A₃, etc. If they are identical, there is only one: AAA. The difference isn’t that big, but once we have a million A’s, the difference between having one arrangement and 1,000,000! arrangements (an astronomically large number) becomes pretty noticeable; there is a short numerical sketch of this counting further down the thread. (Keep in mind that a drop of water has a million million million molecules of water in it.) And it has a huge impact on how a system changes with temperature, and on how energy is distributed into a system, etc. Following along similar (but more sophisticated) lines, the list of tests of the identity of elementary particles and of protons, neutrons, nuclei, isotopes of elements, etc. is very long indeed. Here are three sets:

*** The Pauli exclusion principle, stated crudely in chemistry class, is in fact the statement that the states involving electrons change by a minus sign if you exchange one electron with another. This has a big effect on the scattering of two electrons; if it weren’t true, it would change the scattering rate by a factor of two. Quantum mechanics does give the correct scattering rate (and even its angular dependence) but only if the electrons are identical. That is not all, of course; atomic structure and even the solidity of solid matter depend on this principle for electrons.

*** Without a similar Pauli exclusion principle affecting protons, and a similar one affecting neutrons, nuclear physics would be completely different from what is observed. And there would be no neutron stars in the sky, or even white dwarfs; they would collapse to form black holes, because only the Pauli exclusion principle keeps them from doing so.

*** If atoms of a certain class were not identical, then bosonic atoms could not form Bose-Einstein condensates. This requires all of the atoms to be in lock-step, which is not possible if they are different from one another. But Bose-Einstein condensates do indeed form.
Of course the same is true for photons, which can be made into a laser as a result. Similar statements apply to superfluids like helium-four, and superconducting materials. Basically, your question is a little like asking whether we’re sure the earth orbits the sun due to gravity. Yes, we’re sure, and we have tons of evidence to back up the statement. Not only that, but a lot of modern technology was designed under the assumption that it was true, and that technology works. There are hundreds of experiments (including many going on at the Large Hadron Collider right now) that could have proved the identity of particles was false, but none have done so. So this is just not something open for much debate. But the other point is that back in the 1940s physicists understood why all electrons are identical, etc. In quantum field theory, which is the essential mathematics of the Standard Model that describes all of particle physics in such great detail, particles of a particular type are ripples in a quantum field. Two ripples on a pond have identical properties; and two ripples in an electron field do too.

• Once again I am awed by your knowledge. I knew all of the points to which you refer, yet I had not linked them together and asked myself what conclusion they pointed to. This post has fundamentally changed my view of the universe. If not drastically then at least deeply.

• Let me bring up the uncertainty principle of Quantum Theory for a moment, and let’s see if that takes the conversation somewhere. As you know, we can only know both the position and momentum of a particle within a certain degree of accuracy. The more accurate our measure of the position, the less accurate will be the measure of momentum (and vice versa). We can never know the exact values of both. Would you say that the exact values of both do not exist at all?

• In the case of position-vs-velocity, I would say no, the exact values do NOT exist. Given our current understanding of particles as waves in a field the uncertainty is ‘built in’; you cannot get an infinitely precise value for either property, and measuring one more precisely does not simply ‘hide’ the true value of the other but in fact changes the object being measured so that the other property becomes less exact. It is not simply a matter of not having accurate enough experiments but a fundamental property of the universe.

• Kudzu wrote: “In the case of position-vs-velocity, I would say no, the exact values do NOT exist.” If a particle has a position, does that mean it does not have a velocity? If it has velocity, does it not have a position? All the uncertainty principle says is that only one can be KNOWN. It does not say that the other does not exist. You are confusing the uncertainty principle with the “observer effect”. I don’t believe that it is the same concept.

• That’s all the uncertainty principle “says”, when you write it in words. But when you write it in math, it says much more. You’re incorrect about the math. What the math says is that there’s no meaning to the questions that you are asking. Now maybe the math is incomplete. But again, it’s been shown, using the math and testing it using experiments, that nothing so simple as “hidden variables” (i.e., the idea that you are espousing, that the particle has both a position and velocity, you just can’t measure it) can be consistent with DATA. That’s the Bell inequality and its generalizations. This is not an interpretation issue. It’s a data issue. You’re ignoring data. That’s illegal in physics.
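Since the Bell inequality and its generalizations keep coming up in this exchange, it may help to spell out the most commonly tested version, the CHSH inequality. This is standard textbook material rather than anything specific to the argument above: for two measurement settings a, a′ on one particle and b, b′ on the other, with E denoting the measured correlation,

\[
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \ \text{for any local hidden-variable model},
\]

whereas quantum mechanics allows values of |S| up to 2√2 ≈ 2.83 for suitable settings, and experiments do observe violations of the classical bound of 2. That is the sense in which simple hidden-variable pictures conflict with data rather than merely with an interpretation.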
Why don’t you go learn about this and leave us alone until you understand it?

• The uncertainty principle is a manifestation of the basic nature of particles. They are not ‘small hard balls’ like we tend to imagine them. They are waves like the waves on the ocean (but in three dimensions). This means that any given particle is not a ‘fixed’ object that remains the same whether or not we can measure it. Instead, when we measure a property we arrange that particle in a particular way. Imagine trying to measure the *exact* position of a wave in a pond. A wave has no clearly defined edge; it curves into the flat surface of the pond gradually. So its position is not exact, there is uncertainty in it. (We could try and define the position of the exact center of the wave, but to find *that* exactly we need to measure the edges of the wave exactly.) The only way we can improve the accuracy is to somehow ‘squash’ the wave into a smaller volume of space, make it more like a point particle. There is in fact a way to do this; instead of having a wave with a single momentum, we can make the wave by combining a number of lesser waves. These will interfere in a manner such that the resulting wave will occupy a smaller and smaller volume of space, allowing its position to be known more and more accurately. Shown here:

But what have we done by making the wave out of all these ‘sub waves’? Each wave can be considered as being the same particle but having a different frequency (thus energy, and thus momentum). So in making our wave’s position more accurate we have made its momentum (and by relation, speed) less accurate, not obscured. This can be considered similar to the observer effect, which basically states that with a rough observation what is being observed will be altered. Many people think this explains the uncertainty principle, that somehow all our measurements of, say, speed must disturb the system’s position and that the ‘real’ position was there, just unobservable. But in fact we can see this effect even without measuring a system. We can look for experimental evidence for this, and one of the best is to pass a laser beam through an adjustable slit. As the slit is narrowed, at first the beam that emerges from the slit narrows too; this is logical, less of the beam can get through a narrower slit. But eventually something strange happens: the beam begins to *widen*; actually it begins to disperse, to radiate out in a fan from the slit. This is because to have passed through the slit the photons must have been in a small volume of space, their position must have been quite accurate. Because of the way the world works their momentum becomes less certain. The beam that passes through the slit is now not a laser beam of particles all moving in the same direction but a spreading, radiating beam. But nothing has been measured; the photons that get through the slit get through precisely because they have not hit anything on the way through. There can be no observer effect here because all the photons that are interfered with are absorbed, destroyed. Only unmolested ones pass. See here for video demonstration: This is a neat demonstration because it is not hard to build and test in your own home.

• I think Matt just made a couple interesting comments: “Now maybe the math is incomplete. …. That’s the Bell inequality and its generalizations.” – Matt
Yet you both act as if the math IS complete and act as if these are NOT generalizations.
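Returning to the counting argument about identical A’s made a little earlier in this thread, the collapse in the number of distinguishable arrangements is easy to check numerically. Below is a minimal, purely illustrative sketch in plain Python; the labels A1, A2, A3 are just stand-ins for "particles we could tell apart":

```python
from itertools import permutations
from math import factorial

def distinct_orderings(items):
    # Count only the orderings of `items` that can actually be told apart.
    return len(set(permutations(items)))

print(distinct_orderings(["A1", "A2", "A3"]))  # 6 -- three labelled (distinguishable) objects
print(distinct_orderings(["A", "A", "A"]))     # 1 -- three identical objects: every ordering looks the same

# The gap explodes with the number of objects: n! distinguishable orderings versus 1.
print(factorial(20))  # 2432902008176640000 orderings for just 20 labelled objects
```

It is this factorial-versus-one gap, scaled up to the huge particle numbers in ordinary matter, that feeds into the thermodynamic tests described above.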
8. Some scientists have suggested that solar flares produce increased numbers of particles (perhaps neutrinos) which, when reacting with an unstable nucleus in a certain way, may cause the decay. Granted, this is speculative at this time, but there is some evidence:,-give-advance-warning.html Just because we have a hard time measuring when a reaction from one of these particles will trigger decay in an unstable nucleus does not mean that there is ‘core randomness’. Instead, as previous posters have suggested, it just means that our methods/instruments are not sophisticated enough in order to predict it. “It is random” does not seem like a scientific statement, and I do not even think it is possible to measure a process and establish for a fact that it is “random”. You can only establish that the process obeys a given statistical model. Quantum mechanics yields statistical distributions, which are backed up by experiment. In other words, we insert “randomness” into our models and our equations, and sometimes we may even claim there’s “randomness” yielding the distribution, but that’s only for reasons of convenience; i.e. in order to have practical use for the phenomenon. That is not grounds to claim that there is ultimately no cause for radioactive decay whatsoever.

• “It is random” is as much a scientific statement as “It is not random”; it can be tested and disproved. What is unscientific is dogmatically sticking to an explanation in the face of good evidence. Part of the problem here is, I believe, the different uses of the word ‘random’. Radioactivity has definite ‘causes’ that aren’t random, but the process does not currently appear deterministic. Solar neutrinos would just be another non-random factor. To eliminate randomness entirely we would need to observe a perfectly predictable situation where a particle such as a neutrino caused a decay. If I read your posts right you are suggesting that the universe is a ‘clockwork’ one, where there is no inherent randomness in QM. I myself rather liked this idea in my youth but things like Bell’s theorem have led me to the view that randomness is an inherent part of our universe. I might be wrong but I do not currently see any compelling evidence. Incidentally, the solar neutrino mechanism you link to would cause your original atom-switching problem not to work: the atom in your hand will never decay because it will never be in the right location to be hit by a solar particle in ten seconds.

• Your words make rational sense, but they’re very naive. For one thing, you do need to learn about the Bell inequality and its generalizations. It can be shown — and it has been shown experimentally — that the correlation patterns in quantum mechanics are inconsistent with classical statistics. So what you’re suggesting is inconsistent with experiment. Other, more subtle alternatives to quantum mechanics may be possible, but nothing as simple as what you’re suggesting.

9. <<>> Are you saying that ANYTHING we cannot “perfectly predict” is ultimately and inherently “random”? Anytime you can’t figure something out, just call it “random” and then move on with your life? So much for scientific investigation. Lucky for me, I didn’t make such a statement. Instead, I said: “You can only establish that the process obeys a given statistical model. Quantum mechanics yields statistical distributions, which are backed up by experiment.
In other words, we insert “randomness” into our models and our equations, and sometimes we may even claim there’s “randomness” yielding the distribution, but that’s only for reasons of convenience; i.e. in order to have practical use for the phenomenon.” I am suggesting that our prediction models use randomness because that is the best we can do. We are unable to detect a single neutrino (or similar particle) reacting with an unstable nucleus. So, instead we use averages of large quantities of atoms decaying in order to make predictions. They aren’t sure what the exact mechanism is. They only know that decay increases/decreases along with solar increases/decreases of solar particles. This phenomenon was confirmed by independent labs. Nevertheless, not knowing the precise details of a mechanism does not mean that there ultimately IS no mechanism. It only means that the best we can do is inject “randomness” into the theory in order to overcome our lack of precision/knowledge. • The wonderful thing about science is that it has the ‘last word on nothing’; it is always possible for example that the earth is in fact flat and all our data was a gigantic collective mistake. However if we assume that our experiments to date are true then there is *no way* that the data can be explained by particles with definite positions and velocities even if those positions\velocities can’t be known. (That is they are ‘hidden’) Whatever math there might be it is more complex than that. To stick to the ‘But it could be!’ line is to take a stand with those who think the earth is 6000 years old. It would require a massive and systematic and utterly unlikely failure of basic experiments. 11. Wolfgang Vogelbein Dear Prof your comments about the identity of isotopes is not correct – quite. In making it, you assume, without saying so explicitly, that each atom may assume any position in space, with the probability of being there given by the appropriate choice of statistics (Maxwell, Fermi or Bose-Einstein, as dictated by Z, temperature, crystal lattice{superconductivity!}, etc). This is a valid assumption for situations where the position of one atom does not depend upon the position of another atom. In a chemical compound, where atoms are held in (almost) fixed positions relative to each other, this assumption is no longer true. This is borne out by all the spectroscopic methods used by organic chemists, be it infrared, Raman, visible light, nuclear magnetic resonance . . . Take the hydrogen molecule as an example: If the two atoms are not bonded, the probability of each atom to have an up spin is independent of that of the other atom, it is 50%. Once they bond together as a single molecule, only one can have the up spin, the other must have a down spin. That is a consequence of the Pauli principle. And because their spin is different, they no longer are identical. In general, I find your approach to chemical bonding questionable, if not misleading. For example, I would never call NaCl to be a molecule. To be able to do so, I would demand to be able to determine which Cl atom is bonded to which Na atom, an impossibility since each Cl atom in the NaCl lattice is equally distant to six Na atoms, and vice versa. Please read also what the Wikipedia has to say about Daltonide and Berthollide compounds ( and consider that non-stoichiometry is by no means rare among compounds, but rather mandated by thermodynamics. 
When you discussed the internal structure of the atom, you set yourself up for the long and fruitless discussion with Kudzu. What is missing there is the de Broglie wavelength of an electron at the energy of a chemical bond, i.e. at 1 eV or a fraction thereof. From there, it would have been rather obvious that chemical bonds must be treated within the context of the Schrödinger equation, i.e. as a standing wave of (several) electrons shrouding the nucleus and not as dimensionless charges flitting about the nucleus. Then you could have introduced the quantum numbers n, l and s, which ultimately determine whether a chemical bond will form, the bonding angle and the bonding distance. It’ll be interesting to see how you extricate yourself from the mess you have put yourself into. Wolfgang Vogelbein

• You talk a good game. But you understand neither the science nor the pedagogical issues involved. You haven’t understood the scientific point that the identity of atoms of the same isotope is independent of context, so talking about them as being different in a particular context isn’t relevant. If their identity *did* depend on context, all sorts of aspects of physics would be different. In particular, you wouldn’t be able to swap out one atom and replace it with another of the same isotope. In other words: in the word “pointless”, you can say that one of the “s”s is different from the other because of context — it comes at the end. But when I switch them, I get the word “pointless”, which, according to you, is different from the one I started with, but according to me is the same word. Your example of Hydrogen is exactly of this type. Answering your other question is pointless. I mean, pointless.
If You Can’t Make Predictions, You’re Still In A Crisis A New York Times article by Northeastern University professor Lisa Feldman Barrett claims that Psychology Is Not In Crisis: Is psychology in the midst of a research crisis? An initiative called the Reproducibility Project at the University of Virginia recently reran 100 psychology experiments and found that over 60 percent of them failed to replicate — that is, their findings did not hold up the second time around. The results, published last week in Science, have generated alarm (and in some cases, confirmed suspicions) that the field of psychology is in poor shape. When physicists discovered that subatomic particles didn’t obey Newton’s laws of motion, they didn’t cry out that Newton’s laws had “failed to replicate.” Instead, they realized that Newton’s laws were valid only in certain contexts, rather than being universal, and thus the science of quantum mechanics was born […] Needless to say, I disagree with this rosy assessment. The first concern is that it ignores publication bias. One out of every twenty studies will be positive by pure chance – more if you’re willing to play fast and loose with your methods. Probably quite a lot of the research we see is that 1/20. Then when it gets replicated in a preregistered trial, it fails. This is not because the two studies were applying the same principle to different domains. It’s because the first study posited something that simply wasn’t true, in any domain. This may be the outright majority of replication failures, and you can’t just sweep this under the rug with paeans to the complexity of science. The second concern is experimenter effects. Why do experimenters who believe in and support a phenomenon usually find it occurs, and experimenters who doubt the phenomenon usually find that it doesn’t? That’s easy to explain through publication bias and other forms of bias, but if we’re just positing that there are some conditions where it does work and others where it doesn’t, the ability of experimenters to so often end out in the conditions that flatter their preconceptions is a remarkable coincidence. The third and biggest concern is the phrase “it is more likely”. Read that sentence again: “If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions [than that it is illusory]”. Really? Why? This is exactly the thing that John Ioannidis has spent so long arguing against! Suppose that I throw a dart at the Big Chart O’ Human Metabolic Pathways and when it hits a chemical I say “This! This is the chemical that is the key to curing cancer!”. Then I do a study to check. There’s a 5% chance my study comes back positive by coincidence, an even higher chance that a biased experimenter can hack it into submission, but a much smaller chance that out of the thousands of chemicals I just so happened to pick the one that really does cause cancer. So if my study comes back positive, but another team’s study comes back negative, it’s not “more likely” that my chemical does cure cancer but only under certain circumstances. Given the base rate – that most hypotheses are false – it’s more likely that I accidentally proved a false hypothesis, a very easy thing to do, and now somebody else is correcting me. 
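The dart-throwing argument above is just Bayes' theorem applied with a low base rate, and it can be made concrete in a few lines. The prior, significance threshold, and power below are illustrative round numbers chosen for the sketch, not estimates for any real literature:

```python
# How often is a "significant" result a true finding, when most tested hypotheses are false?
prior = 0.01   # assumed fraction of tested hypotheses that are actually true (illustrative)
alpha = 0.05   # chance that a false hypothesis still produces a positive study
power = 0.80   # chance that a true hypothesis produces a positive study

p_positive = prior * power + (1 - prior) * alpha
p_true_given_positive = (prior * power) / p_positive
print(round(p_true_given_positive, 2))  # ~0.14: under these assumptions most positive studies are false alarms
```

On numbers like these, a positive original study followed by a negative preregistered replication is exactly what you would expect if the original was one of the false alarms, which is why "it is more likely the effect is real but conditional" needs an argument rather than an assumption.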
Given that many of the most famous psychology results are either extremely counterintuitive or highly politically motivated, there is no reason at all to choose a prior probability of correctness such that we should try to reconcile our prior belief in them with a study showing they don’t work. It would be like James Randi finding Uri Geller can’t bend spoons, and saying “Well, he bent spoons other times, but not around Randi, let’s try to figure out what feature of Randi’s shows interferes with the magic spoon-bending rays”. I am not saying that we shouldn’t try to reconcile results and failed replications of those results, but we should do so in an informed Bayesian way instead of automatically assuming it’s “more likely” that they deserve reconciliation. Yet even ignoring the publication bias, and the low base rates, and the statistical malpractice, and the couple of cases of outright falsification, and concentrating on the ones that really are differences in replication conditions, this is still a crisis. A while ago, Dijksterhuis and van Knippenberg published a famous priming study showing that people who spend a few minutes before an exam thinking about brilliant professors will get better grades; conversely, people who spend a few minutes thinking about moronic soccer hooligans will get worse ones. They did four related experiments, and all strongly confirmed their thesis. A few years later, Shanks et al tried to replicate the effect and couldn’t. They did the same four experiments, and none of them replicated at all. What are we to make of this? We could blame differences in the two experiments’ conditions. But the second experiment made every attempt to match the conditions of the first experiment as closely as possible. Certainly they didn’t do anything idiotic, like switch from an all-female sample to an all-male sample. So if we want to explain the difference in results, we have to think on the level of tiny things that the replication team wouldn’t have thought about. The color of the wallpaper in the room where the experiments were taking place. The accents of the scientists involved. The barometric pressure on the day the study was conducted. We could laboriously test the effect of wallpaper color, scientist accent, and barometric pressure on priming effects, but it would be extraordinarily difficult. Remember, we’ve already shown that two well-conducted studies can get diametrically opposite results. Who is to say that if we studied the effect of wallpaper color, the first study wouldn’t find that it made a big difference and the second study find that it made no difference at all? What we’d probably end out with is a big conflicting morass of studies that’s even more confusing than the original smaller conflicting morass. But as far as I know, nobody is doing this. There is not enough psychology to devote time to teasing out the wallpaper-effect from the barometric-pressure effect on social priming. Especially given that maybe at the end of all of these dozens of teasing-apart studies we would learn nothing. And that quite possibly the original study was simply wrong, full stop. Since we have not yet done this, and don’t even know if it would work, we can expect even strong and well-accepted results not to apply in even very slightly different conditions. But that makes claims of scientific understanding very weak. 
When a study shows that Rote Memorization works better than New Math, we hope this means we’ve discovered something about human learning and we can change school curricula to reflect the new finding and help children learn better. But if we fully expect that the next replication attempt will show New Math is better than Rote Memorization, then that plan goes down the toilet and we shouldn’t ask schools to change their curricula at all, let alone claim to have figured out deep truths about the human mind. Barrett states that psychology is not in crisis, because it’s in a position similar to physics, where gravity applies at the macroscopic level but not the microscopic level. But if you ask a physicist to predict whether an apple will fall up or down, she will say “Down, obviously, because we’re talking about the macroscopic level.” If you ask a psychologist to predict whether priming a student with the thought of a brilliant professor will make them do better on an exam or not, the psychologist will have no idea, because she won’t know what factors cause the prime to work sometimes and fail other times, or even whether it really ever works at all. She will be at the level of a physicist who says “Apples sometimes fall down, but equally often they fall up, and we can’t predict which any given apple will do at any given time, and we don’t know why – but our field is not in crisis, because in theory some reason should exist. Maybe.” If by physics you mean “the practice of doing physics experiments”, then perhaps that is justified. If by physics you mean “a collection of results that purport to describe physical reality”, then it’s clear you don’t actually have any. So the Times article is not an argument that psychology is not in crisis. It is, at best, an IOU, saying that we should keep doing psychology because maybe if we work really hard we will reach a point where the crisis is no longer so critical. On the other hand, there’s one part of this I agree with entirely. I don’t think we can do a full post-mortem on every failed replication. But we ought to do them on some failed replications. Right now, failed replications are deeply mysterious. Is it really things like the wallpaper color or barometric pressure? Or is it more sinister things, like failure to double-blind, or massive fraud? How come this keeps happening to us? I don’t know. If we could solve one or two of these, we might at least know what we’re up against. 318 Responses to If You Can’t Make Predictions, You’re Still In A Crisis 1. Ton says: “On the other hand, there’s one part this it I agree with entirely.” 2. Tracy W says: While I agree that there are some serious problems with lack of replication, I also find myself thinking, that, well, we know that radios work but when my engineering class got to build a radio for a lab assignment the only people whose radios worked first off were those built by the people who already had technical certificates. Although on the other hand, a radio that requires precise steps to build is more useful than a psychology discovery that only works under equally precise building for obvious reasons. Which brings me back to your conclusion. • Scott Alexander says: I think your “other hand” paragraph is really important. There are cases where it’s important that a phenomenon exists even if it’s vanishingly rare and precise – for example, that anything can transmit radio waves at all, even if you have to get everything just right. But psychology often tries to generate laws of human behavior. 
They’re usually trying to say that something is relevant to the real world, or explains some particular phenomenon. The fact that it’s very hard to make a radio right is relevant to our observation that things aren’t constantly forming radios and broadcasting everything we say to people thousands of miles away, and that we don’t have to take all these natural radios into account when trying to explain the world. • discursive2 says: It seems to me like the test of a discipline is whether the general principles it proposes about how the world works have corollaries that allow you to accomplish very specific things. The difference between Greek philosophers saying that everything is made of the 5 elements and modern physicists saying everything is made up of elementary particles is that modern physicists figured out how to build radios. If your story about physics doesn’t let you build radios or something equally impressive, the odds of it being grounded in anything remotely true is pretty dubious. Social psychology seems to be at the level of ancient Greek philosophy… there’s a lot of theories, but people can’t do anything with them. Where are the revolutions in education, or running a business? Obligatory XKCD link • PSJ says: This seems to fail on the level of how complex the phenomenon you are studying is. It’s hard to say that genetics is untrue in any real way, yet it hasn’t done astounding things in terms of commercialization (at least nothing compared to the radio, computers, or the bomb). The human brain is significantly more complex, so it wouldn’t seem too surprising that we haven’t found a way to commercialize the huge amount of well-validated research yet. (it has been commercialized, but just not on a grand scale) And to say you can’t do anything with it at the level of Greek philosophy seems absurd. There’s a huge difference between literally not being able to predict anything and not revolutionizing some aspect of commerce yet. • Ptoliporthos says: You’re restricting yourself to human genetics. Think about plant and animal genetics, where companies *are* using it to make a lot of money. • PSJ says: You’re absolutely right, but I think the point still stands about the truth/usability disconnect in that area. • roystgnr says: Even in the case of human genetics, where the manipulation isn’t all there yet, the diagnostic tools are pretty astounding, don’t you think? Give 23andme some of your spit. If your long-lost brother from across the country did too, they can introduce you. If your dad did too, but his spit doesn’t match yours, the spit is more trustworthy than your sainted mother. • Paul Kinsky says: Only if you don’t count machine learning using neural networks, specifically convolutional neural networks which are based on structures found in the visual cortex. • PSJ says: I would absolutely count it, but I wanted to argue from the least convenient world. It can be reasonably argued that a majority of such research lies outside the umbrella of psychology in general and especially outside the study of social psychology and human behavior. Edit:not entirely sure why I changed colors 😛 • kerani says: I think that you’d have to discount the Green Revolution, the Innocence Project (and DNA testing in crime scenes in general), and food supply/food safety issues in general in order to keep this statement accurate. • Earthly Knight says: Not to mention testing for heritable diseases. Our investment in genetics has really paid huge dividends, by any measure. 
• Dave says: The book “Nudge” by Thaler and Sunstein points at some applications. (I am only barely starting it and laid it aside for a while, so I’m not remembering a good example.) Guy named Sutherland has a couple of TED talks with examples.

• WWE says: I really like the radio example here, and I think that is what is going on to a large extent – there’s a complex system that is hard to hit precisely. But I wouldn’t go so far as to say that it couldn’t be useful if you managed to figure it out (imagine a computerized testing format that gives a customized priming to optimize testing results). As I see it, priming is interesting only because the priming activity is supposed to be something subtle and yet still have a large effect on the results (yelling angrily at your participants is almost certainly going to influence their testing results, so that’s not so interesting). Since the priming is somewhat subtle, I think you should replicate exactly that priming and measurement as precisely as possible before you even think of starting to generalize. Instead, the replications that exist might not actually be very good replications. I’m happy to initially ignore some differences like barometric pressure and wall color, but when Shanks et al modified the procedure in multiple ways, first by showing a video to extend the priming session and then by having them take an entirely different type of test to gauge the effect of the priming… well, it’s not so unimaginable that they get different results! (what do studies say about watching TV before taking a test?) The priming example seems like an example of premature generalization rather than replication. I haven’t looked much at the other attempts (and failures) at replication to see if they are at all like this.

• Deiseach says: I think there may be people for whom priming works, and people for whom it does not work, and figuring out which is which is going to take a whole lot more work on a finer level than “Well, if we tell women to think like men…” 🙂

• AJD says: Replicating “exactly” can be troublesome also. I can’t find it again now, but I recently read an article dealing with a failure to replicate the priming study in which subjects read a bunch of words related to old age (“elderly”, “Florida”, “retirement”, etc.) and then ended up walking slower. The article I was reading observed that the replication study had taken place 20 years (or whatever) later than the original study using the same methodology—but in 20 years, the kind of background factors that would cause priming effects have changed. The set of words that are stereotypically associated with aging are different now than they were 20 years ago, the relative frequency with which one encounters those words in speech or text is different, societal attitudes to and stereotypes about the elderly are different, etc. So even if the priming found in the earlier study was a real effect, we need a well-developed theory of whether we would even therefore expect the same effect to be found 20 years later. My field is sociolinguistics. It’s certainly the case that in sociolinguistics, if an experiment conducted on the same population speaking the same language using the same methodology has a different result 20 years later, the smart money is usually on the population and language having changed in the intervening time, rather than the original study having found a false result. (Not to say that false results don’t happen!—but changing populations and languages always happen.)
• HeelBearCub says: I think this is a very good point. I have to imagine, also, that priming needs to have some novelty to it to work well. IOW, if one tries to give a prime that people hear all the time, this will not be nearly as effective.

• Steve Sailer says: Right, the priming college students to walk slightly slower back to the elevator experiment was made famous by Malcolm Gladwell, who made a lot of money blurring the boundaries between marketing research and psychology. I spent a long time in the marketing research business, and one thing we learned was that effective marketing wears off. What was good marketing a few years ago might be boring and trite today, just like fashions from a few years ago don’t seem fresh anymore. You’ll notice that the marketing research industry is in no danger of going out of business from having developed replicable methods to predict the success or failure of future marketing. Instead, marketers continue to have to hire marketing researchers to test whether their new ideas are going to work or not under the latest conditions.

• Deiseach says: When I read the humorous article which mentioned “college students walking slower”, I thought it was all part of the leg-pull. And now you are telling me this was a genuine real serious study. I’m boggling at this. Did anyone check the phrasing? Were the students told (or did they pick up that this is what they were supposed to do) “Imagine you’re old”, and then they walked slower because they thought “If I’m an old person, I’ll walk slowly”? Because yes, all this is sounding much less like psychology and more like “If you have the smell of fresh baked bread wafting through the store when customers walk in, they’re more likely to buy pastries” type of marketing.

• houseboatonstyx says: @ Steve Sailer “Right, the priming college students to walk slightly slower back to the elevator experiment was made famous by Malcolm Gladwell, who made a lot of money blurring the boundaries between marketing research and psychology. …. [the marketing research industry has] developed replicable methods to predict the success or failure of future marketing.” Where were the boundaries supposed to be? From way outside those forests, it sounds like psychology findings that work on one side get claimed by neuroscience, and those that work on the other side get claimed by marketing research. No wonder psychology never gets a break. Psychologists may scorn focus groups, but don’t market researchers crunch a lot of numbers also? The smell of bread in supermarkets would be easy to test: how strong the smell (from zero to X) is and how the cash register receipts add up — from one hour to another, if the researchers like.

• houseboatonstyx says: @ Deiseach But actually, that could make an easy way to scout the territory. Set up a camera to film all the students walking out of all classes, see how many ‘slow walks’ follow what kind of classes, look up the content of each class’s session. If you find a lot of slow walks following other classes that bore students to sleep, or that are so interesting that the students are still absorbed in thinking about the subject, then slow walking is not a good way to test priming.

• Deiseach says: I was nodding along in agreement (because come on, reproducibility is one of the cornerstones of validation: someone else repeats your experiment, they didn’t screw up, it doesn’t come out the same way, you go “Welp, better scrap that idea and start again”).
But then it hit me: this is psychology we’re talking about, and people. This is not like setting up a titration where a set molarity of base is neutralised by a particular volume of a set molarity of acid every time. People do have vagaries. Was the professor/soccer hooligan experiment set up so that for the first lot of experimenters, the students thought “I know the correct answer to this question, but I’m supposed to be a soccer hooligan so I’ll answer it wrongly”? You don’t know the inside of people’s heads and how they think, and maybe the first lot of experimenters got their results by the way they instructed the participants the same way in all four tests, while the second lot phrased or explained it slightly differently. I think you’re right about a crisis, if such a high percentage of results are simply not reproducible, especially as psychology results get used to implement policies on school children or the mentally ill in the community or any other vulnerable group that the government has been pressured into Something Must Be Done. But I also hope that this will knock on the head the notion that there is some Grand Universal Theory of Mind and once we figure that out, plus we have a handle on the genes, then we can deal with humans as if humans are wind-up toys and you feed input/stimulus A in and reliably get output/reaction B all the time, every time. We’re a bundle of contradictions and easily influenced by different moods. I’ve been doing the Mood Monitor exercise for this CBT nonsense and one day I was at 5 on the “How are you feeling?” scale and the next it plunged down to 2, the only reason being a fit of melancholy triggered by overhearing a conversation that had nothing to do with me (a work colleague telling another about a family holiday they’d been on). We are not yet easily reducible to a neat scheme. I think maybe the main problem is that the studies are trying to be too scientific, where you are never going to get that neat parallel with classical laboratory experiments on cell cultures or organic chemistry or grating diffraction. • HeelBearCub says: Where you ended up seems to point at an individual being variable. But where you started seems far more salient. Imagine one was handed a bunch of beakers. They are opaque and filled with unknown substances. You can add liquid, but only through filters, each of individual, unknown properties. You can extract some liquid, but through a different set of filters. You do some experiments, publish some results. Your beakers are taken away and brand new beakers are given to you, with new filters. You repeat the experiment exactly, and get different results. Hardly surprising. • Setsize says: And it’s a whole lot easier if you can keep the same beakers for your second experiment. Collect enough data about the same beakers and you can get a handle on what’s in them and what the filters are, and actually find results that generalize to chemistry. This is why the more replicable subfields of psychology, like cognitive and sensory/motor, rely heavily on within-subject designs instead of between-subject designs. [subvocal grumbling about bloggers who use “psychology” as an anti-synecdoche for “social psychology”] • Earthly Knight says: Synecdoche is a contronym, the opposite of a synecdoche is a synecdoche. • Winter Shaker says: Earthly Knight: the opposite of a synecdoche is a synecdoche. And did you know, you can use the word for part of a synecdoche to refer to the whole synecdoche? 
• Steve Sailer says: If in 1996 an experiment succeeded in priming college students to dance the Macarena and in 2015 a replication experiment fails to prime students into dancing the Macarena, is that a crisis in science? Well, it is if you have marketed psychology as the Science of Getting People to Do Things They Otherwise Wouldn’t Do. But if you assume, more realistically, that much of priming — even when it works — isn’t long-term Science with a capital S but just fads and fashions and marketing, well then maybe people would start to realize that a lot of what is labeled these days as the Science of Psychology is really just the business of marketing research.

• FullMeta_Rationalist says: But psychology often tries to generate laws of human behavior. I wonder… is this even the right way to view psychology? Because the relevant analogy doesn’t compare the science of the psyche to the science of radio waves (a simple physical phenomenon). That would be neuroscience. The relevant analogy compares the science of the psyche to the science of transistor radios (complex, arbitrarily-designed consumer products). In other words, “discovering the laws of transistor radios” sounds awfully silly to me. “Discovering statistical regularities in transistor radios” sounds more apt. The space of transistor radios is wide and deep. Not every possible kind is actually engineered, mass-produced, and marketed. This is probably what Feynman meant when he said “All science is either physics or stamp collecting.” This is important because maybe some aspects of psychology aren’t meant to be universally generalized. I think someone down thread mentioned something about same-subject vs cross-subject studies. If we think of people as stamps or transistor radios, how much variance can we expect from cross-subject studies? Maybe a U.S. stampologist notices that lots of stamps have eagles and then a Russian stampologist tries to replicate and says “are you high? I didn’t find a single eagle!” I don’t know anything about statistics. So I don’t know what the correct solution is. But it probably involves context. E.g. heritability, nature vs nurture, the color of the wallpaper. And then maybe we can predict stuff like “Russian stamp? 1% chance of eagle” Or “INTJ? 5% chance s/he’s a scientist” Or “vacuum-tube radio? It’s over 30 years old, 60% chance”. Why is priming even a psychology thing? Shouldn’t that be like, a linguistics thing?

• FullMeta_Rationalist says: n.b. I have never seen a Russian stamp before. I pulled those numbers out of the air. In other news, I think I now understand why assigning probabilities without models might be counterproductive.

• AJD says: Priming is studied in psycholinguistics, but not all topics on which priming research is done have anything to do with language.

• James Picone says: That one’s Rutherford, after he got the Nobel prize for chemistry.

• Andy says: Since people with technical certificates could replicate working radios, your class can be sure radios work. As a general rule, instructables written by technicians work when reproduced by other technicians. There is no replication crisis in radios; the instructions simply were not detailed enough for a non-technician to follow, that is all. Plus, technicians would be able to tell what was wrong with non-functioning radios. Otherwise said, everything missing from radio instructables is written in some textbook, and all certified technicians know what it is.
Other psychologists should be the equivalent of students with technical certificates, though. E.g. an experiment described by one should be reproducible by another psychologist. What we have here is the equivalent of certified technicians not being able to build machines, not knowing what the problem is, and not knowing whether they are in the radio situation (possible to build) or rather the telepathy-machine situation (impossible to build).

• Tracy W says: Not quite comparable though. You don’t learn electronics purely from a textbook; nearly everyone seems to need the experience of actually putting components together and seeing what happens, and learning that yes, you do need to be careful to avoid dry joints and what-not. These things might even be in the textbook but it takes some practice to be able to remember to do them in the actual real world. I asked one of my classmates who did have the certificate what… So if someone does a replication experiment and fails, it’s possible that they’re just at that early stage of learning to do procedures that do fundamentally work. Or of course that it didn’t work in the first place.

• Alex Godofsky says: Tracy, there is clearly no replication crisis in radios because our civilization successfully produces millions of radios every single year. “Replicates” them, if you will.

• Tracy W says: That’s why I used radios as a counter-example. Replicated millions of times a year, but still people not experienced with electronics can fail to build one.

• Alex Godofsky says: Yes, but no one is consistently replicating these psychology results except, occasionally, the original discoverers.

• Anthony says: One would expect a psychology professor at a research university to be at least as competent at running psychological experiments as a certified radio technician is at assembling radios. And one would expect that the professor has some good amount of hands-on experience running such experiments.

• Tracy W says: If one’s expectations were a reliable guide to reality one would be on the way to receiving one’s Nobel Prize. And, more prosaically, it is a lot easier to get experience building electronic circuits than experimenting on real live people. No one objects if you keep piles of capacitors in a box in the lab and occasionally fry one, but for some reason they get all uptight about the treatment of students, even art students. So I doubt that psychology professors get that much experience, in particular I doubt they get the experience of being up until 1am determinedly struggling to make the damn thing work.

• FeepingCreature says: No relation, I just want to note here that you can turn a Raspberry Pi into a radio transmitter that can hit the FM band by sticking a wire in a specific port and running a certain piece of software. (Though keep in mind that depending where you live, this may be illegal.)

• Who wouldn't want to be anonymous says: For reference: Unlicensed FM transmitters are legal in the US, but are extremely restricted in power. According to the FCC’s FAQ, unlicensed operation (and construction) is limited to a 200ft operating range. If you know enough about the subject to be really, really confident that your raspberry’ll not exceed those limits (or have $75000 to blow on fines and/or a few years to waste in jail)… Have fun?

• Alex Z says: I think the difference is whether or not you have a “known good state”.
If the students try to build a radio and fail, we can say: ” OK, we know experienced technicians can build radios most of the time, but students can’t. Why?” And you can go back and check that experienced technicians can indeed build radios. For some of those experiments, there is no such known good state where you can consistently reproduce the results and compare to other experiments where the results are not found. So it seems weird to say: “There is no known good state, but there is a good state. We just can’t find it anymore.” • Kyle says: Maybe journals should have two stages. The interesting phase 1 journal, and a subsequent replication/pre announced methodology/higher sample size phase 2 journal, which would be done by the original team 1 year later. I think you want to allow some flexible conclusions and adaptation while doing studies. As you observe data hypotheses change, you notice interesting effects, or realize you weren’t thinking about it correctly at the beginning. I think those are completely legitimate things that a truth seeker would do as they investigate a phenomenon, although it might make the p-stats not as accurate, done well and thoughtfully it will minimize that issue. However, having the same team do the replication study has the twin benefits of preventing claims that the replication study didn’t match the original, and limits the amount of scientific bullshitting that just looks for interesting results they can get published because the scientists needs to really believe that their own study will hold up to replication since the original can get pulled a year later. • Matt says: Having the same team do the replication seems to run into many of the same problems Scott discussed at the beginning of the article. • Kyle says: Well, Scott lays out three categories. The first, publication bias + being aggressive around statistics, he describes as “may be the outright majority of replication failures.” This method solves both by forcing replication and by making it costly to submit articles that are “statistically significant,” but the authors aren’t confident actually are statistically significant (publication bullshit). It’s a commitment device by the two entities closest to the data – the scientists and the journal – that they are willing to bet on their results. The second category of experimenter effects it doesn’t solve. Agree there. The third category I would summarize as: two different scientists do roughly the same experiment and get different results. You could say that this is an interesting tension and we should figure out what’s causing the difference. Or you could say that given the huge problems with replication, the first study was probably wrong. The later makes a lot of sense to me along with it seems everyone in this discussion. However, that was why I liked this method of replication. By getting one study with decent sample sizes, consistent methodology, preregistration (because it’s the same study), and the same lab team you can really narrow down the set of differences in the experiment. So, you can’t say “oh well, guess we need to find the differences” if the replication fails. You have to say “I guess we shouldn’t rely on that first study.” 3. Sam says: You mention things like barometric pressure that might have an effect on priming, but isn’t it much more important what subject population you’re testing? 
Priming is exactly the sort of thing I would expect to work better on students who went to a lower-ranked university (like University of Nijmegen, which the original study tested) than students who went to a higher-ranked university (like University College London, one of the places the replication study tested). I would expect that top students are more likely able to focus on the task at hand and ignore previous distracting priming attempts (negative priming) or would already take tests at their peak ability and not need any additional inspiration (positive priming). There’s definitely a difference between “Students can be primed to do better on tests” and “Lower-performing students can be primed to do better on tests” but this distinction feels a lot closer to the physics analogy than throwing exceptions for barometric pressure does. • pneumatik says: It’s certainly possible that certain types of priming work better (or at all) on people with certain mental traits. There definitely seems to be evidence that people can be influenced mentally by their environment. But the specificity of effect should be itself testable. First create two groups of people, one with the trait suggesting sensitivity as the experimental group and one without it as the control group. Then test how much each group is influenced by the same priming experiment. But these sorts of tests can only come about if researchers try to replicate experiments. • Sam says: Yes, this explanation would be easy to test, about as easy as testing barometric pressure or anything else. (You’d need to find a large heterogeneous sample, but that’s not unheard-of.) My point is that while we’re all armchair-theorizing, it’s a much more believable alternative to propose that the intelligence of the students matters. Unfortunately, I don’t see it being offered, and I’m curious if anyone has a good reason why. Hasn’t anyone else here read some psychology study and thought, “I would never fall for that” or “If I were doing that experiment, I would be able to tell exactly what they were studying and be able to tune it out”? Sure, we’re probably overconfident, but I’m guessing that’s what some of the UCL students in the replication studies thought. 4. Sergey Kishchenko says: I believe there is another simple attack on Barrett’s statement. Let’s assume that failed replication means “it is not false, it just requires some unknown conditions to replicate”. It still means that published paper was wrong stating there was a connection between A and B. It still means that experiment wasn’t well designed if it didn’t capture those conditions. So Barrett’s statement can not prove that psychology is ok. In fact the best it can do it is to prove that “psychology is fucked up but not that fucked up as you can think”. • Earthly Knight says: This was my thought as well. The conclusion of the original brilliant professor priming study was that the priming increased student’s tests scores, not that there was a priming X wallpaper-color interaction. This conclusion was wrong. • RCF says: But there is a connection between A and B. Suppose I run a study on priming, and find that in the experimental group, 700 out of 1000 pass the test, and in the control group, 300 out of 1000 pass. That’s a highly significant result. If someone says “Well, there were 300 people in the experimental group that the priming didn’t work on, so there must be some X-factor that differs between the people it worked on, and those it didn’t. 
So you didn’t find a connection between priming and passing the test, you found a connection between priming+X and passing the test.” To say that there’s a connection between priming and passing the test is not to say that priming ensures passing, it means that priming interacts with all the other factors such that, overall, it results in more people passing. If priming makes people sitting in a room with blue wallpaper pass, and doesn’t cause people sitting in a room with red wallpaper to pass, then priming still has a connection with passing. • Earthly Knight says: In the example you give, the presumptive explanation of the variation is differences in the background psychology of subjects in the experimental condition. That’s okay. But if the explanation turns out to be some incidental feature of the experimental set-up, like “a janitor was vacuuming loudly in the hallway for the 300 who failed the test”, you’ve conducted a poor experiment, because you’re not identifying the causal structure which produced or failed to produce the effect. • RCF says: So it’s legitimate to not account for causes that occurred before the experiment, but causes that occur during the experiment must be accounted for? That seems a bit too restrictive to me. • Earthly Knight says: You’re right that it can’t be quite so simple. The claim the experimenters are implicitly making is something like “under such-and-such conditions, if we intervene in such-and-such a fashion, ceteris paribus, undergraduates will behave thus-and-so x% of the time.” What we’re really quibbling about here is the scope of the ceteris paribus clause. If we draw it too broadly, the experiment will have no external validity– we do not want the results to fail to generalize to other wallpaper colors or ambient temperatures. If we draw it too narrowly, we will wind up including causes that are intuitively spurious– all effects in psychology depend on the subjects not being set afire during the experiment, but this is beside the point. That being said, it seems pretty clear to me that if your experimental result depends on the wallpaper color and this doesn’t come out during the study, you’ve bungled it pretty badly. 5. Taradino C. says: Speaking of priming studies, I’ve seen this going around recently: “Picture yourself as a stereotypical male” Summary: When men and women were primed with a story about the day in the life of a stereotypical woman, then given a mental rotation test, the expected gap between men and women was observed. When they were primed to think about a stereotypical man, the gap vanished. This result is quite surprising, since the gap in scores on mental rotation tests seems pretty universal, and the evidence for a strong stereotype threat effect (of which this is given as an example) seems pretty shaky. Any idea if this has been investigated further? • Deiseach says: I am extremely surprised by that result, as I am (a) female (b) spectacularly bad at mental rotation and spatial tests in general. I could imagine I’m Joe (not Josephine) Soap till I’m blue in the face and I don’t see how it would make me do any better. The only thing I can think of to explain that (other than “It’s wrong“) is that they used the same test and by giving people a second chance at it, they changed their minds (because they knew the first answer they’d given before was wrong) and did better that way. EDIT: They used four different groups, instead of the same two groups twice? Okay, I have no idea how they got that result. 
Seeing as how this was conducted in Austria, should we be pondering why Austrians find the idea of Real Manly Men who are tough and do weight training after work so inspirational? (Arnold Schwarzenegger, can you shed any light on your compatriots’ views?) 🙂
• PSJ says: I’m about 80% confident that a study with that problem wouldn’t have been done, let alone passed review. That’s like, first-day psych 101 WhatNotToDo. Edit: I checked the abstract. They didn’t make that mistake. Priming is supposed to be hard to notice consciously. While it’s been taken overboard in terms of power and relevance, the principle itself is pretty hard to deny given the age-old physiological and word-completion data.
• Deiseach says: But does it work on anything more than “If you make people anxious before a test, they don’t do as well”? I think if I was instructed before a test that “This is a big important test telling us all about your own personal intelligence, flaws and weaknesses”, I’d do a lot worse than if I was instructed “This is just a test; if you’re interested in your results, ask the supervisor later” (e.g. those stereotype threat tests the linked article was talking about). I really would like to see that men/women study done elsewhere to see if something is going on.
• PSJ says: Semantic priming is the most obvious example. Say I have a list of words and non-words. Your task is to tell me if each item is a word or not as fast as possible. It turns out that if I show you MAT SNOW and RAIN SNOW, people on average will respond correctly to the second case faster than the first. In other words, seeing RAIN primes your brain to be able to process related terms faster. This much is noncontroversial and is a fairly textbook example of priming. The question “does it work on anything more?” is a very difficult question to answer. On one hand, I am fairly convinced that the mechanisms by which priming occurs are vitally important to the structure and algorithms of thought and emerge almost necessarily from the function of neural networks. On the other hand, it would be academically irresponsible to say so. So all I can say is that priming is a well-documented effect in myriad domains, although a lot of work in priming of social situations has come under heavy suspicion and doesn’t seem to have the same theoretical foundation. (Hence the skepticism towards the current study.)
• Ariel Ben-Yehuda says: The linked article shows the priming lasting for 2.5s in a lab environment. It does not say much about the effect lasting for multiple minutes.
• PSJ says: Did I claim it did? I’ve studied short-term priming more, but if you’re interested, search for long-term priming. Here is an example. I remember some linguistics paper showing an effect lasting a few weeks off of a single presentation. I think it had something to do with choosing which shape best fits a made-up word, but I can’t recall the details. Neural models of short-term and long-term priming are different though, IIRC.
6. Wrong Species says: So how well do the studies supporting The Nurture Assumption hold up when it comes to replication? Is this a problem for all studies in psychology or are there some areas that are able to withstand the challenge?
• Tracy W says: There are two parts to The Nurture Assumption. One part is criticising the idea that parents’ behaviour affects children’s adult outcomes (leaving aside extremely bad parenting); this part Judith Harris supports by attacking existing studies, including on the basis of their failure to replicate.
The second part, the hypothesis that children are socialised by their peers, Judith Harris recognises as being more speculative because scarcely any research has been done on it, although the claim that immigrant children acquire their peers’ accent, not their parents’, is easily observable in everyday life (including in my own child).
• Wrong Species says: She didn’t just criticize existing studies in part one, she pointed to alternative studies that suggested that the effects of shared environments (once accounting for genetics and differentiating shared and non-shared environments) were essentially zero. I’m wondering how well those hold up.
• Tracy W says: I don’t have my copy of the book with me now, but from memory Harris referenced multiple studies on this point.
• Tibor says: Including some metastudies, most of them based on twin studies if I remember correctly. Of course not even metastudies are impenetrable holy truths, but it is definitely more solid. I don’t remember which parts exactly were supported by the metastudies though.
• gwern says: They hold up fine. The replicability of behavioral genetics, by which I mean the twin studies and family designs, has never been an issue. If you run a twin study on criminality or IQ or whatever, then (subject to sampling error) you will get the usual results of the outcome being highly heritable, low shared-environment, and the rest non-shared-environment. There are hundreds or thousands of such studies – see for example the recent meta-analysis by Polderman et al 2015 covering them all. The problem has always been that the people who don’t want to accept the results criticize the interpretation of the results and argue they are being driven by systematic biases/flaws in the study design, such as large real-world violations of the core assumptions. Replication of a biased algorithm may only replicate the bias, and so the 100th twin study showing low shared-environment adds little over & above the 50th twin study showing low shared-environment: the sampling error is already small, and repeating the same design doesn’t resolve the critics’ objections. To make progress, you need to do something like check the assumptions directly or remove the need for them using a different study design. This is why GCTA was so important: by avoiding family-design approaches entirely (and thus all the various arguments against them in general), estimating genetic similarity directly from genomes of unrelated people, and finding lower bounds consistent with all the family designs, it destroys the remaining criticisms.
• PGD says: But GCTA estimates of heritability are significantly lower than twin study estimates of heritability, right? Also, GCTA is still subject to environmental confounding — genetic similarities are associated with environmental similarities, since populations cluster together physically and socially due to migration and evolutionary history, etc.
• Douglas Knight says: Nope, GCTA estimates of heritability are the same as other methods.
• Stezinech says: I hadn’t heard of this point against GCTA before. Google gave me this result: I’m not sure about the credibility of the author or website; perhaps someone can enlighten us?
• Douglas Knight says: If you write out in your own words what you think it says, I’ll comment on that. If you just think it says “I don’t like heritability,” I agree that it says that, but I don’t see why I should care, regardless of the credentials of the author.
As to the precise issue of the heritability computed by GCTA, that link gives 16 citations, 2-17. If you are interested, you might consider looking at the 16 papers and seeing how the heritability found in those papers compares to older measures of heritability. The link only provides 2 examples, one that “callous-unemotional” had a twin estimate of 65% and a GCTA estimate of 7%, which it claims is representative. Yet it is pretty different from the other example, IQ, where it cites a twin estimate of 50% but gives a GCTA estimate of 35%.
• Douglas Knight says: I should add a caveat: using SNPs is not using the whole genome. A GCTA using widely dispersed samples cannot detect the effect of mutational load. But if the samples are taken from a closely related population, it will capture almost all the information in the genome. That is, the common SNPs will be in linkage disequilibrium with mutational load and thus predict its effects, even though the SNPs play no more causal a role than with a more dispersed population.
7. James D. Miller says: “People respond to incentives” is the best one-sentence definition of economics, and I’d bet it explains psychology’s failure-to-replicate problem. If it’s much harder to get the results of a psychology experiment published if you limit yourself to running experiments that produce replicable results, and if there is almost no cost to authoring a study that doesn’t replicate, then you would expect the field to be dominated by science-as-attire non-reproducible studies.
• nope says: The crisis isn’t about people testing hypotheses that turn out not to be replicable later. You’re right about the incentives part, but the incentive problem actually centers around the fact that there aren’t really any positive ones for doing replication studies, while there are a lot of negative ones, such as the fact that a lot of prestigious journals will turn you away at the door for not being “original” or “significant” enough. We need all sorts of ideas tested, even out-there ones. But we also need results verified, and that’s why the replication that isn’t being done right now is so important.
• 27chaos says: Why does getting a bad study into some journal matter in the first place? Because many decisionmakers are stupid, and judge based on broad metrics or how excited they feel about an idea, rather than the actual quality and rigor of someone’s work. I think trying to reform journals is a doomed strategy. Instead, we should make it so that publication count itself matters less. This would require educated administrators who care about the field’s truth-seeking enough that they are willing to insist on quality. If we don’t have such people in charge, any changes made to incentives will only be minor tweaks.
• nope says: This will never happen. Incompetent administration is a self-perpetuating problem, because who makes decisions about administration structure, hiring, etc.? Administrators. Administrators, by and large, are going to be on the less intelligent side of university staff. Publication count is really the only quick and straightforward metric for productivity, and you’re not going to make administrators stop using it. Furthermore, it doesn’t solve the problem that there are basically no incentives for scientists to focus on replication. If you can’t motivate people to do different things, they’re not going to do those things.
• Anthony says: Furthermore, it doesn’t solve the problem that there are basically no incentives for scientists to focus on replication. – right, so what’s needed is to get journals to publish replication studies more. Or to demand that someone try a replication before publishing the initial paper. Or something, but that just pushes the problem somewhere else – how do you incentivize journal editors to reward replication work more?
• Dennis Ochei says: You need to push the problem all the way back to “I need to do x” if you want to solve it.
• baconbacon says: “People make choices” or “people have preferences” would be a far better one-sentence summary of economics.
• Marc Whipple says: I see your point, but I think those are a little too basic. They are pretty trivial observations, which don’t give any predictive power. People could be making random choices, or have irrational preferences. “People respond to incentives,” while it also seems trivial, tells us where to look for something. If economics works, and we see people doing something we don’t expect, we know to look for an incentive we haven’t noticed.
• “People tend to take those actions that best achieve their objectives” aka “rationality.”
• Jeffrey Soreff says: >“People tend to take those actions that best achieve their objectives” Depending on how much of a disclaimer “tend” is in that statement, it isn’t true. People have all sorts of biases (salience, status quo, many many more) which prevent them from picking actions which best achieve their objectives. >“People respond to incentives” is a weaker claim.
• Marc Whipple says: @JS: I respectfully suggest that the original observation is true, and that the point you are trying to make would be better expressed as, “People aren’t very good at figuring out what their objectives are or how to achieve them.”
• Anonymous says: @Jeffrey Soreff Of course people have biases. But I think extrapolating that to suggest that people are mostly irrational is an extraordinary claim. Think for a moment what the world would look like if people really did not manage to find ways to achieve their objectives most of the time. Someone wants to get somewhere a reasonable distance away: do they drive their car, or attempt to drive their lawnmower? Do they look up the route or do they drive to the nearest city and hope that happens to be where their destination is? Do they follow traffic laws and norms or do they just point their car in the right direction and hit the gas? When they want to turn left, do they turn the wheel left or right? Trying to correct your biases of course requires you to have biases in the first place, but does not mean that your actions are mostly irrational rather than mostly rational.
• keranih says: @ Jeffrey Soreff – Perhaps it would help to look at biases/preferences/etc. as “competing objectives”, so that a person going after goal A while trying also to reach goal B doesn’t quite travel in a straight line.
• Dennis Ochei says: Humans only have a bounded form of frictionless, perfect-knowledge rationality. We can’t consider every possible choice, we don’t know perfectly the outcomes of our actions, we can’t perform our actions without mistakes, we have fuzzy or ill-defined objectives, and even when these things aren’t true we still sometimes make choices against our best interest. Furthermore, the execution of our rational faculties is costly in terms of time and utils.
So I can’t actually just consider all possible actions and choose the one that maximizes my utility; that costs like a bazillion utils to do. All that said, the fiction of the rational human is very useful. But it’s still a fiction. I don’t think it’s as trivial as I think you are saying it is, unless by “people behave rationally” you mean the lay meaning of rational, which might as well be replaced with the word “normal”.
• Splodhopper says: Economics suffers from much the same problems as psychology. In fact, one might go so far as to say that it is in even a worse spot, since some people appear to consider it more credible than psychology.
8. merzbot says: So, how could we solve the crisis? Would things like pre-registration be sufficient, or are there just so many variables involved in the study of human behavior that psychology is inherently screwed?
• LTP says: I do think experimental psychology is possibly screwed. It seems like human psychology is just too complicated to study in an experimental way comparable to the hard sciences. This doesn’t mean psychology is worthless. For instance, I think there may be value to therapeutic techniques developed by practitioners over time through experience, or theories developed through extensive examination of case studies by an academic, but this isn’t scientific in the way that experimental psychology aspires to be; it’s more of a humanistic approach to the discipline. Maybe this is irrational, but at this point I don’t trust *any* experimental psychological results, especially those that lead to conclusions that are counterintuitive, politically convenient, or sensationally covered in the popular science press. ETA: I will note, though, that I’m no expert on psychology.
• Douglas Knight says: Barrett is basically saying that’s the situation, yet that psychology is not screwed. Out of the frying pan into the fire, because she hasn’t given any thought to what any of this means, just what she wants her conclusion to be. Whereas, those complaining about a replication crisis think it’s just fraud. People who know what they want their conclusion to be.
• xq says: Medicine doesn’t replicate any better. This isn’t a psych-specific problem, it’s a problem with many fields (roughly, all of them that make heavy use of statistical hypothesis testing).
• James D. Miller says: Genetic data, brain scans, and wearable tracking devices could turn psychology into a real data-driven discipline. But it probably won’t be professors who find the interesting results; rather, it will be business people who figure out how to profit from making us happier, more productive, and more willing to buy targeted products.
• PSJ says: Have you heard of Neuroscience? Psychology and related fields have been a large progenitor of new statistical techniques for a long time – I’m not sure why you think business people would be better at it. Start here if you want some examples for psych in general, here for social psych in particular.
• Steve Johnson says: Because of natural selection. If you make wrong predictions in psychology you get published as long as you make the right wrong predictions. In business, if you make the wrong predictions you have to keep persuading people to give you money, so wrong predictions will keep getting made. On the other hand, if you make correct predictions and they’re revolutionary you make massive amounts of money and get to have sex with beautiful women and live in nice houses and swim in the ocean on weekends.
• PSJ says: I…I’m not sure you have a great picture of what psychology actually looks like from the inside. Since Scott likes to criticize bad psychology, I know you’ve been exposed to that side of things, but we all make fun of it too. If you click through the links in the post you’re replying to, I’d challenge you to find anything that feels even slightly politically motivated. You probably just hear about the most outwardly political parts of it (stereotype threat/social priming). Other parts are inwardly political, but that more has to do with actual scientific disagreements (theory of mind/language acquisition). But it’s nothing like Economics or Sociology where politics plays a central role in the field. I’d also point you to Thinking Fast and Slow. Which is a psychology book about how to make correct predictions in general. And summarized work that won the Nobel (memorial) Prize for Economics. Which seems like the kind of thing you’d respect. • vV_Vv says: Business data analytics may lead to more better products, but it will probably not lead to new scientific discoveries, at least not without substantial involvement of academic researchers. For instance, consider web advertising click-through prediction: from your search history, the content of your gmail mailbox, your browsing history as tracked by cookies, etc., Google can estimate a better-than-chance probability that you will click on any specific ad. Presumably, they do this by pre-processing the raw data into some engineered features and then feeding them to a black box machine learning system like a neural network or a random forest. The recent trend in machine learning seems to be towards using less engineered features and more raw data. Strictly speaking, this is a form of psychological prediction. However, we can’t say that it advances psychology as a science: All the details about the system are closed-source trade secrets while science is normally done in the public domain. But even if the system was released to open source, it still wouldn’t change much, because these models are task-specific and opaque: we wouldn’t be able to peek inside a trained neural network or big random forest and gain useful and generalizable knowledge about how the human mind works. • nope says: Pre-registration doesn’t do anything to solve the replication crisis. Our federal-level science funding organizations need to start mandating that the people who receive public funding take part in the replication process. I can’t think of any possible way that this problem could be solved from the bottom up, but a top-down solution would actually be quite easy, and it baffles me that this isn’t being done. • Deiseach says: I think it’s valuable in a very broad brush way, but where the trouble comes is when those results are then taken and bruited abroad as “Aha! This proves all X are Y and we should do Z to inhibit/encourage them!”, which then gets translated into policies, as in education, then every so often the whole system gets turned upside-down again when a new study with a new result which results in a new fad comes along. It’s like all those management guru books. 
I’m sure everyone has experience of working somewhere where every so often a manager gets a burst of enthusiasm and everyone has to stop doing things the old way and do them the New Fancy More Productive way, until just as the disruption has settled down, a different manager gets promoted and then decides “No, I prefer this colour of umbrella for my cheese” and it’s all change again 🙂
9. Is a failure to replicate more common in some fields of psychology than others?
• nope says: I’m pretty sure social psychology is at the bottom of the barrel in terms of being wrong about things. That’s what you get when the only people who go into your field are stupid and can’t math.
• Anon says: I would expect the causality to be the other way around; the more rigorous fields that signal lots of intelligence attract the more capable people, and then the less capable ones end up in social science (on average).
• Psych says: The “soft sciences” like psychology are easier to do poorly than the hard sciences, but also harder to do well. So yeah, there may be fewer capable people in these disciplines, but there are also some incredibly brilliant people who are not flattered by the higher status of the hard sciences and not afraid of how hard the work is going to be.
• Douglas Knight says: There are ways in which the soft sciences are inherently more difficult, such as it being easier to fool yourself, especially because you have lots of intuitions about psychology. But how much are these the problems and how much is the problem p-hacking, which is no easier in one field than another?
• Sylocat says: Since when does a field “signal(ing) lots of intelligence” mean it attracts more capable people?
• Jacob says: We really don’t have enough data to say for sure, but I can tell you in biology replication is the exception, not the norm. See for instance and Which is pretty appalling; if we get priming wrong I don’t know how much it matters, but cancer research?
• Setsize says: Well, if you want to get depressed about the state of replicability in cancer research, watch this lecture. Summary: A study can have a good idea and a good method, and get screwed by making one sign error and accidentally copying the header row alongside your data. Then someone else can come along and heroically debug the study, and be pointedly dismissed. All up until one fudged line on someone’s CV is discovered, but that doesn’t lead to more truthful outcomes either.
• Vilgot Huhn says: In the study in question they found that cognitive psychology replicated more than social psychology (50% vs 25%). It’s open access so you can read for yourself if you want to.
10. John Sidles says: There is a saying among chemists, physicists and engineers that “When you’re confused, there’s likely more than one thing wrong [with your apparatus/protocol/software].” The following passage points to just one thing (among many) that can go wrong with STEAM communities in general, and with psychology and psychiatry in particular. For “mathematics” read “psychology and psychiatry” … The work of Nicholas Bourbaki by Jean Dieudonné [*] (1970): Here is my [Bourbaki’s] picture of mathematics now. It is a ball of wool, a tangled hank where all mathematics reacts upon one another in an almost unpredictable way. Unpredictable, because a year almost never passes without our finding new reactions of this kind. And then, in this ball of wool, there are a certain number of threads coming out in all directions and not connecting up with anything else.
Well, the Bourbaki method is very simple—we cut the threads. […] There I wish to explain myself a little. I absolutely do not mean that in making this distinction Bourbaki makes the slightest evaluation on the ingeniousness and strength of theories catalogued in this way. […] If I had to make an evaluation I should probably say that the most ingenious mathematics is excluded from Bourbaki, the results most admired because they display the ingenuity and penetration of the discoverer. We are not talking about classification then, the good on my right, the bad on my left — we are not playing God. I just mean that if we want to be able to give an account of modern mathematics which satisfies this idea of establishing a center from which all the rest unfolds, it is necessary to eliminate many things. […] Bourbaki can only, and only wants to, set forth theories which are rationally organized, where the methods follow naturally from the premises, and where there is hardly any room for ingenious stratagems.
Open questions  Are we approaching an epoch in which — to borrow phrases from Bourbaki — the “craftsmanlike ingenuity” that is associated with 20th-century psychology and psychiatry can be distilled to a “coherent theory, logically arranged, easily set forth and easily used”? And in this epoch, will the too-numerous “loose threads” of present-day psychological and psychiatric research — be they ever so ingenious — nonetheless be cut and discarded, without being sorely missed?
The real optimists  People who believe in the STEM-feasibility and enlightened desirability of a “thread-cutting” Bourbakian medical synthesis are (as it seems to me) the real optimists of 21st-century medical practice.
Conclusion  There are many urgent and difficult problems with modern psychological and psychiatric practice, yet for so long as the hopes that real optimists cherish for a Bourbakian medical synthesis remain unfulfilled, it is scarcely likely that much progress can be made overall.
Remark  These optimistic medical hopes and sobering medical concerns are relevant to quantum information theorists too … the respective hopes and concerns of these two disciplines being (from a Bourbakian perspective) naturally entangled.
[*] Historical note  In 1971, Dieudonné’s often-hilarious article received the Paul R. Halmos — Lester R. Ford Awards, given for “articles of expository excellence,” from the Mathematical Association of America.
• Deiseach says: I think the problem is that, instead of getting into a thread-cutting STEM-style synthesis, as we discover more about human physiology and psychology we are discovering more individual differences, not fewer. For example, that men and women experience the symptoms of heart attacks differently. Even my female GP, when investigating my chest pains, asked me did I have the “shooting or tingling pains down the left arm”. I didn’t (and luckily, whatever the trouble was, at least it turned out not to be my heart), but that does not mean I would not have been having cardiac trouble. So instead of being able to simplify to “Diagnose a heart attack by shooting pains down left arm”, we’ve gone further in knowledge and found out that it’s more complicated and unique. More threads are sticking out of the hank of wool, and snipping them off may mean (for example) treating women as if they’re men, missing when they’re having a heart attack, and so more deaths, not fewer.
I think the 21st-century-and-beyond optimists may be those who decide “Damn it, it’s going to get to the point where individual patients need an individually designed slate of medicines, because one size does not fit all for the same condition”.
• John Sidles says: Jack Vance’s oft-reprinted short story The Men Return (1957) is an account of the world you describe, namely, a world whose phenomena are unique and irreproducible.
• Deiseach says: Well, it’s like Scott has described about anti-depressants; you try the most common/popular one first, because generally it works for most patients, you twiddle around adjusting the dose, and if it doesn’t work or has undesirable side-effects you switch to another one and keep going till you hit one that works for this particular patient 🙂 Or painkillers: there’s aspirin, paracetamol, and ibuprofen, which work for me in that order; aspirin will take down any pain but kills my stomach, paracetamol is next, and ibuprofen does nothing at all. Someone else might find it better than aspirin for them. On the broad level, “Pain-Go” will work for 80% of people with no ill effects; a further 10% won’t find it as effective as “Stop-Ache”, and for 2% it makes their eyebrows turn orange. Future medicine may be more “Let’s check you’re not one of the 2% before we prescribe this” rather than “Oh yeah, “Pain-Go” will fix you right up!” I think the heyday of Big Psychological Explain-All Theories was, as in SF, the Golden Age of the 40s/50s. You certainly see it in things like Asimov’s psychohistory or van Vogt’s adoption of non-Aristotelian logic; this notion that with increasing scientific knowledge, we could work out the psychological drives and the neurological areas of behaviour, and then it would only be a matter of inputting the correct stimulus to get the desired reaction when dealing with people en masse as a society. That we could plan and construct a world of progress and order, and we’d understand all our impulses and evolutionary holdovers and could prune and govern them as desired. I think that attitude now survives mainly in sociology, which has never quite gotten over the 70s 🙂
11. PSJ says: The original paper had a very good addition to this discussion, so I’ll just copy it here: After this intensive effort to reproduce a sample of published psychological findings, how many of the effects have we established are true? Zero. And how many of the effects have we established are false? Zero. Is this a limitation of the project design? No. It is the reality of doing science, even if it is not appreciated in daily practice. Humans desire certainty, and science infrequently provides it. As much as we might wish it to be otherwise, a single study almost never provides definitive resolution for or against an effect and its explanation. The original studies examined here offered tentative evidence; the replications we conducted offered additional, confirmatory evidence. In some cases, the replications increase confidence in the reliability of the original results; in other cases, the replications suggest that more investigation is needed to establish the validity of the original findings. Scientific progress is a cumulative process of uncertainty reduction that can only succeed if science itself remains the greatest skeptic of its explanatory claims. The present results suggest that there is room to improve reproducibility in psychology.
Any temptation to interpret these results as a defeat for psychology, or science more generally, must contend with the fact that this project demonstrates science behaving as it should. Hypotheses abound that the present culture in science may be negatively affecting the reproducibility of findings. An ideological response would discount the arguments, discredit the sources, and proceed merrily along. The scientific process is not ideological. Science does not always provide comfort for what we wish to be; it confronts us with what is. Moreover, as illustrated by the Transparency and Openness Promotion (TOP) Guidelines ( (37), the research community is taking action already to improve the quality and credibility of the scientific literature. We conducted this project because we care deeply about the health of our discipline and believe in its promise for accumulating knowledge about human behavior that can advance the quality of the human condition. Reproducibility is central to that aim. Accumulating evidence is the scientific community’s method of self-correction and is the best available option for achieving that ultimate goal: truth. • PSJ says: I also want to mention that most of your response to John Ioannidis is valid for psychology as well. Most academic psychologists are very well aware that a large portion of published studies are flawed and spend a good amount of time trying to falsify them. The psychologists that are read by the population at large, however, tend to be the starry-eyed, politically driven idealists, so people get the impression that the whole field should be discarded and replaced with whatever they think is true about human nature. (and this coming from someone who hates most social psych research with a burning passion) • Douglas Knight says: There’s a big gap between are aware of problems and try to falsify them. Of course, we’re discussing this because of a big project to thoroughly replicate a bunch of studies, but that is exceptional. The normal behavior is to avoid the bad neighborhoods and try to make a positive contribution in unrelated areas. Some of the territory that has been abandoned is inherently worthless, but some would admit useful experiments if they weren’t overshadowed by exaggerations of plausible programs. In medicine there is more replication and correction of errors because there is more agreement on what types of questions are useful in the first place. • PSJ says: I’m not sure I understand you well enough to give a proper response. What do you mean by “The normal behavior is to avoid the bad neighborhoods and try to make a positive contribution in unrelated areas?” Going by my best guess, if that were the case, then we wouldn’t have a problem as bad results wouldn’t be accepted by the mainstream community anyway as they are known as “bad neighborhoods” and thus wouldn’t sully the larger theoretical frameworks. And, if the people who are willing to stay come up with convincing enough results, they can always be accepted later. In my experience, this isn’t true of psychology (but I tend to stay on the cog/neuro side). Most “bad” theory is disputed rather than ignored. See theory of mind, songbird song acquisition, modular vs non-modular systems, statistical vs logical/theory-based learning of language, and mirror neurons as good examples of the phenomenon. These are all highly controversial fields, but tend to do so through new design of experiments rather than direct replication. 
Replication would be a good tool, but it is not the only one that can correct errors. And even then, large replication projects are not a new thing in psychology. See: Many Labs project papers like this.
12. You mentioned a number of incidental factors that may affect the results of a study, such as barometric pressure and the colour of the wallpaper of the room in which the study was conducted. Would a ‘protocol’ that rigorously specifies each and every incidental down to the last relevant detail be sufficient to remove this problem (except perhaps in exceptional cases) – akin to a ‘clean room’ for psychology experiments? If such a protocol did come about, should greater importance be placed on studies which conform to it, and replications/non-replications of such studies be taken more seriously? (I vaguely imagine something which specifies the colour of the walls, the desks, their exact specifications and measurements, their layout(s) within the rooms, the size(s) of the rooms, the exact appearance of the computer(s) to be used, a specification of the interface as well, down to the widgets and colours, how human contact can itself be eliminated or standardised (if it’s a cause of possible experimental bias), and so on.)
• Tracy W says: How could any protocol do this? You’d not only need to specify what happened within the room, but also what happened outside it, and everything that happened to all of your experimental subjects (e.g. rain, traffic problems, exam deadlines) leading up to the study, including throughout their lives (where were you when you heard of September 11?)
• I’m not 100% sold on it, but I got the impression that Scott’s argument was basically saying that if you had to choose between blaming it on the wallpaper/barometric pressure etc., or just assuming the study itself was flawed, it’s probably more reasonable to do the latter. Otherwise you’re going to risk chasing limitless non-obvious but possible confounding factors.
• Deiseach says: It could come down to something as subtle as “The desks provided were not the kind I’m used to sitting at, so I found it tougher to settle down to write comfortably and this distracted me during the test”. I definitely think variables such as style of desks, setting, etc. should be taken into account: a dimly-lit, cold test hall might make a difference compared to somewhere clean and bright. If these were specified as part of the environment and all was as standardised as possible (where “standardised” means “don’t ask non-Americans to use American desks/numbering etc., and vice versa”), it might at least cut out some of the “Was it the fact that this study was done at 9 a.m. on Monday when the students had been out on the beer the night before that meant they didn’t test as well? We just don’t know” confusion 🙂
• Scott Alexander says: For each unit of effort you invest in this, you should expect discrepancies to decrease, but you’ll never get it so perfect that nobody will be able to concern-troll you. Are the experimenters in the two settings the same height? Was the research done on a Monday or a Tuesday? Full moon or new moon? Etc.
• Deiseach says: Monday or Tuesday could actually make a difference, as well as morning or afternoon (the post-midday-meal slump). As for full moon versus new moon – well, if you get me on the full moon, I am more likely to be aggressive (for reasons of female physiology, say no more say no more) 🙂 That’s the problem; people are not bottles of chemicals or capacitors.
But if you’re going to make Big Sweeping Statements based on “Our study shows that…”, then the experiments need to be reproducible in some way, shape or form.
• Jiro says: For the phases of the moon to repeat requires 29 1/2 days. 29 1/2 days is within the possible lengths of a woman’s period, but it’s not the average, and even being a couple of hours different would make it not match after a few years. Even if the study were to catch you, by chance, when your period matched the phases of the moon, on average the periods of the women in the study would be distributed randomly with respect to moon phases and the overall study should find no effect.
• Good Burning Plastic says: Another possible mechanism is full moon → brighter nights → harder to sleep well.
• Deiseach says: Jiro, that individual quirk is exactly the point. If every female went with the moon, it would be an environmental given that full moon = increased levels of aggression in test subjects, and you could take it into account when doing a study on “Chocolate: soothing the savage breast or not?” But when everybody has unique characteristics, then doing a study on Monday may indeed get you different results than if you ran the same study on the same group on Tuesday. People are not extruded plastic products, is the point. We can probably trust broad general statements from psychology because it’s dealing with the mass, but when it comes down to “Women score higher on maths tests if you remind them they’re Asian but they do worse if you remind them they’re women”, then we need a bit of confirmation by running a few replication tests.
• Jiro says: Deiseach: Taking into account the fact that someone has a period on the day of the full moon by coincidence is exactly like taking into account that someone had a big argument on the day of the full moon by coincidence – you shouldn’t be taking it into account at all, since over a large group of subjects both the arguments and the periods will be randomly distributed. You wouldn’t say “doing a study on Monday would get you different results than on Tuesday if someone happened to have a big argument on Monday”. Same with periods.
• Deiseach says: Jiro, to establish a baseline for comparison (e.g. in the “priming students to walk slower” study), I think you’d want to make sure beforehand that your control group weren’t wearing tight shoes, or had sprained ankles, or were all 85-year-old arthritis sufferers when comparing them with your test subjects. So things like arguments, or hormone levels, or the like would affect baseline moods in the control group/test subjects, and not taking them into consideration might mean results in either the original or the replication like “Our test group exhibited elevated aggression after being punched in the face, but our control group also exhibited elevated aggression even when not punched in the face, so the results are inconclusive”.
• Jiro says: If the things in question are randomly distributed, no, you don’t take them into consideration when establishing a baseline for comparison. You use your general knowledge that levels of aggression fluctuate randomly and you require that your experiment produce differences greater than is likely from random fluctuation. We have statistics for that. The fact that a particular individual in your experimental group happened to have had an argument (or a period) on Monday is just a piece of the random fluctuation and does not need to be considered separately from it.
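A quick aside on the arithmetic behind these two threads (an illustrative sketch only, using RCF’s made-up numbers from earlier, not data from any real study): Jiro’s point that randomly distributed nuisance factors just become part of the noise, and that a real effect has to beat that noise, is exactly what a standard significance test checks. For RCF’s hypothetical 700/1000 passes with priming versus 300/1000 without, a pooled two-proportion z-test in Python looks like this:

    import math

    def two_proportion_z(pass_a, n_a, pass_b, n_b):
        """Pooled two-sample z-test for a difference in proportions."""
        p_a, p_b = pass_a / n_a, pass_b / n_b
        pooled = (pass_a + pass_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, standard normal
        return z, p_value

    # RCF's hypothetical split: 700/1000 pass with priming, 300/1000 without.
    z, p = two_proportion_z(700, 1000, 300, 1000)
    print(f"z = {z:.1f}, two-sided p = {p:.1e}")  # z is roughly 18

A z of roughly 18 is far beyond anything that randomly distributed arguments, periods, or bad days could plausibly produce, which is why those factors do not need to be accounted for separately: they only widen the fluctuation the test already measures against.

13.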
Seth says: Many of those psychology experiments strike me less like “dropping an apple” and more like “blowing a soap bubble”. The soap bubble doesn’t drop straight down. Depending on the air currents, it may go up, or down, or drift sideways for a while. Any particular bubble may not behave exactly the same as the previous ones. How big are they? Exactly what kind of soap are they made of? (differences in chemical composition can affect how heavy they are, and how long they hold together). Someone walking by might stir the air enough to affect things. Or maybe the room heating system cycles on. Or off. Maybe you shouldn’t try to understand gravity using soap bubbles. 14. suntzuanime says: My barometer for how screwed experimental psychology is is how employed Dr. Jason Mitchell still is. It seems like he’s taken down his article about how replication is mean and evil and unfair to respected scientists like Dr. Jason Mitchell, but he still seems to be in charge of an experimental psychology lab at Harvard, so further work is needed. • Addict says: I don’t suppose you have an archive of that page? It sounds like an excellent opportunity for me to get in my daily dose of outrage. • Marc Whipple says: Oh, wow. The self-righteous, self-satisfied narcissistic illogic, it burns. It burns us, Precious! Although the fact that a highly-placed psychology professor at one of the most elite universities in the world can publish that with a straight face, and not be immediately laughed out of town, does illustrate part of the problem better than a thousand failed replication experiments. • 27chaos says: What should one do if they are face to face with people like this while they spew bad arguments? I’ve tried speaking up before, but it didn’t go well at all, since they were Authority. Yet I hate the idea of sitting silently while someone does low quality or unethical work. • Earthly Knight says: If it makes you feel better, that piece provoked dozens and dozens of critical responses. He basically was laughed out of town. • Steve Johnson says: Earthly Knight- He’s still a Harvard professor. He wasn’t laughed out of anywhere. The actual message here is that you can always try to push fraud as long as it’s progressive fraud and the worst thing that can happen is that people will disagree. The upside is that maybe you become the next superstar for inventing the next stage in progressivism. It’s a free bet, might as well take it. Of course, this also explains how the field is so contaminated by fraud. • PSJ says: I’m not sure I see how being an arrogant prick about your achievements is a progressive move? Simply because he stayed in his position (likely tenured), does not mean he has retained the respect of the community. • Earthly Knight says: He has tenure. For someone with tenure, the worst thing you can do to them is have no one else take their research seriously anymore. That’s all the comeuppance he’s going to get, which is okay, all he did was voice some foolish opinions on this one topic. I don’t know why you’re making this political, I can pretty much guarantee that both Mitchell and everyone dogpiling on him are leftists. If leftist academia is full of agitprop junk science, the best thing that can happen is leftist academics noticing this and trying to correct it, no? • nyccine says: @PSJ: You’re missing some context. 
Social Priming here is the supporting theory of how Stereotype Threat works, you know, how the underperformance of women/non-Asian minorities in comparison to straight white men in certain fields is caused by internalization of negative stereotypes. There’s a depressingly large section of social science pretty much devoted to explaining away achievement gaps (and some hard science as well; epigenetics is being pushed in some corners as a biological cause). So, when Steve Johnson says “The actual message here is that you can always try to push fraud as long as it’s progressive fraud and the worst thing that can happen is that people will disagree,” he is emphatically *not* saying that being a dick is what makes one progressive; he’s saying that as long as you’re a dick in support of progressive causes, you’re much less likely to suffer for it. I can’t imagine, for example, people like Cochran and Harpending writing something like “On the Emptiness Of Failed Replications” in defense of some errors in their work, and there not being demands in academia that they be fired, as fast and as soon as possible. I think you also overstate just how much criticism Mitchell has gotten, compared to what he would have gotten if he were writing it in support of theories supporting right-wing causes.
• Deiseach says: “On one side are those who believe more strongly in their own infallibility than in the existence of reported effects; on the other are those who continue to believe that positive findings have infinitely greater evidentiary value than negative ones and that one cannot prove the null.” I needed to look at that sentence a couple of times before I could understand it, but it seems to be saying that “If I perform an experiment and get a stated result, and you replicate it and don’t get the same result, then you’re wrong because you are denying real results happened”, which is a bit jaw-dropping. The whole point of replication experiments is to see if the claimed results really happened; otherwise, we’re on the level of psychic experimentation where the medium cannot produce the same effects because of the negative thought waves of sceptics in the audience (maybe Mesdames Putt and Whitton should contact Dr Mitchell about how to be a gracious loser when your results can’t be reproduced independently?) His white swans/black swans example is topsy-turvy; his (or anybody else’s) experiment is the white swan. He says “Effect A happens”. Ten other people copy his experiment and say “No, it doesn’t.” The negative results are the black swans in this instance. His cook-book recipe example isn’t great either; sure, you won’t turn out a dish exactly the same as the illustration, but if you follow the recipe, you should end up with “Here, try my Jamie Oliver’s caramelised onions”, not “Hang on, Jason, these are glazed carrots”. “No, they’re onions! I followed the recipe exactly!” “Um – they’re orange and cylindrical and they taste like carrots”. “Well, you’re just mean-spirited and have a chilling effect on cookery!”
• James D. Miller says: “Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators’ extraordinary claims.” Very consistent with academic thinking on the importance of self-esteem.
• Who wouldn't want to be Anonymous says: Wow. And here I thought Dr. Oz had filled the quack quota for all of the Ivies.
• Zykrom says: That “this guy’s opinion was so bad that his continued employment reflects badly on everyone in his entire field” feeling is (probably) what hardcore SJWism feels like from the inside.
• suntzuanime says: This guy’s opinion directly reflects on his ability to do his job. If he worked at Foot Locker I wouldn’t care.
• Nita says: Hey, at least he’s merely incompetent and wasting money, rather than putting people at risk by insisting that learning about their existence is dangerous to children.
• suntzuanime says: I’m not at all sure that undermining the reliability of scientific results does not put people at risk, especially if someone foolishly teaches them that it’s only the bad people who don’t trust in Science and we mustn’t be like them so let’s Fucking Love Science with all our hearts. To be fair, there’s not much you can do that doesn’t put people at risk. It’s a shitty heuristic overall.
• Deiseach says: “That mean ol’ experimenter says they couldn’t get the same results when they reproduced your study? Well, never you mind them, lil’ professor! You know you got a result, and that’s all that matters! Believe in yourself and keep your heart light shining! That’s what real science is all about!”
• Zykrom says: that part too
• suntzuanime says: That’s where you’re wrong; the Foot Locker example was chosen specifically because Ben Kuchera tried to get somebody fired from Foot Locker for supporting GamerGate. Or to take a more recent case, I don’t think anybody is under any illusions that shooting a lion makes you bad at dentistry. There’s something different at work.
• Zykrom says: Fair enough.
• suntzuanime says: Correction: it was a Dick’s Sporting Goods, not a Foot Locker. I apologize for the error.
15. I thought this was an interesting post that makes a number of good points. One nitpick: in the Dijksterhuis and van Knippenberg paper, the dependent variable in the four experiments was the number of correct answers on tests of general knowledge, rather than exam grades, i.e. the results of formal university exams. Although, if it really is a true effect, it might apply in real exams. I am not saying that I think that it is a true effect, though; I am remaining neutral for now. Sanjay Srivastava has written a thoughtful article in which he also disagrees with Lisa Feldman Barrett’s interpretation of failures to replicate. In brief, he argues that an explicit design goal of replication studies is to eliminate extraneous factors that could produce different results from the original experiment. Therefore, it is highly unlikely that most of the failures to replicate occurred due to such factors. Furthermore, the studies were pre-registered and the original experimenters were consulted in advance and thus had the opportunity to make predictions about anything that might cause the replication results to diverge from their own results. In another article that I can’t remember the link to, Rolf Zwaan argued that trying to explain away replication failures with post hoc explanations about minor differences between experimental conditions (e.g. Scott’s example of wallpaper color) is like arguing that the effects in question have very low generalizability, in that they can be expected to occur only under very highly circumscribed circumstances and therefore are probably not all that interesting.
Personally, I am impressed by the work of the Many Labs project which involved multiple simultaneous replication attempts in different countries with great care to make the experiments as uniform as possible. This produces many data-points, so it is not just a matter of “do we believe study A or study B.” If an effect fails to replicate in multiple instances like this, I think it is much harder to argue credibly that it is because of wallpaper effects. On the other hand, effects that are replicated in multiple instances have more robust support. 16. Daniel Armak says: Re macro vs. micro: perhaps the psychological studies are at the micro level when they work with individual people. How fare our studies of large groups of people, i.e. psychohistory? 17. Steve Johnson says: There’s an elephant in the living room of course. As Greg Cochran succinctly put it: Back in 1940, the Soviet powers that be wanted more wheat (and more dead kulaks, of course) . Today, our most desired product is excuses. He said this when discussing epigenetics. It applies equally well to priming and stereotype threat. • This is bizarre to me. I think you are massively overestimating the politicisation and leftism of psychology. Maybe its because most members of the far-left spout psychology, but that doesn’t mean most psychologists are members of the far-left. • Zebram says: I don’t think the overall set of social psychologists is politicized to the degree that conservatives generally believe, but there is some. I think the main issue is that studies which support progressive causes are touted in the media, whether mainstream or science media, while those supporting other causes are not. For example, I’ve heard repeatedly that studies show liberals are more intelligent than conservatives from many different sources. I haven’t looked at all the studies, but generally speaking, there does seem to be a small difference. However, other studies seem to relatively consistently show that libertarians are more intelligent than liberals. We don’t hear about those. • PSJ says: If you are talking about this, these are all sociologists. Psychometrics was developed in psychology, but using it to talk about culture and politics as a whole is more of a sociology thing. Psychologists tend to associate it with other biological and psychological features, how smaller interventions affect measurement, or psychopathologies. • Zebram says: Ah, I see. I suppose this confirms what others have been saying above. To those of us in the more ‘hard sciences’, such as physics and chemistry, we see fields like sociology and psychology as lumped together into one incoherent mess. I didn’t even think to differentiate between sociology and social psychology. • Sociology is highly politicised, especially since the 70s. It’s a shame, there is some interesting content hidden behind the (mostly left wing) politics. • PSJ says: What would your reaction be if somebody said that about chemistry and biology 🙂 • Zebram says: @ PSJ: • Not Usually Anonymous says: Sociology and social psych: my naive outside view gives me a lot more respect for the latter than the former. Some people have mentioned the “cargo cult” concept, and the current replication crisis does make me a bit twitch about this, but in terms of the metaphor, large parts of sociology don’t even look like an airport. 
To put it another way, some people have a two-part distinction between the sciences and humanities, and some have a three-part distinction between natural sciences, social sciences and humanities (we’ll forget about maths and allied fields for now). IMO the border between social sciences and humanities runs roughly through the middle of sociology, with most of the “exciting” lefty stuff being on the humanities, and there being a much more dull-but-worthy numerical side that you rarely get to hear about. I’d go as far as to say that humanities-sociology has more in common with field such as continental philosophy than it does with the other sort of sociology – of course I’m getting deep into the realm of vague impressions and well out of my expertise here. It’s all a lot further from chemistry than, for example, linguistics is. • Yes, but what has that got to do with the far-left and the USSR? Conflating liberals and communists is silly. It’s like lefties that go around calling anything they don’t agree with “fascist”. • Wrong Species says: I love the hypocrisy in that. Let’s make sure we hype the studies showing conservatives are dumb but let’s ignore the studies showing differences in IQ between races. • Fairhaven says: they are contradictory studies, since most liberal voters are black and hispanic • Deiseach says: government bureaucrats (average to low) As a low-level public service minion, I should probably resent that – just as soon as my feeble intellect can work out whether I’ve been insulted or not 🙂 The majority of liberals are black and Hispanic and white poor(not proven to have superior intelligence). Be careful if you’re conflating “vote Democrat” and “liberal”. African-American churches, for instance, are very conservative on “gay rights” and have strongly resisted attempts to identify activism with the Civil Rights movement of the 60s or “Gay rights are the Civil Rights of our day”. Their congregants may vote Democrat, but a lot of that would be in the same vein as old-style Irish Catholic voting Democrat; when the Democrats moved strongly socially liberal, a lot of those were the ones who then went Republican reluctantly. • Donttellmewhattothink says: A bit of common sense: liberals are over represented in academia (smart), Hollywood ( not smart), teachers (average), lawyers(average to smart), government bureaucrats (average to low), . The majority of liberals are black and Hispanic and white poor(not proven to have superior intelligence). Consevatives are over represented in the middle and working class, farmers, businessmen, doctors. Millionaires are evenly divided. It doesn’t seem plausible that these studies are accurately measuring intelligence, ony that they are designed by academics. • Fairhaven says: Zebram says: Does anyone have links to those studies or study? I have a vague memory of looking at one that was cited and dropping it when I saw the study subjects were all college students, as if that would be a representative sample of conservatives. My neice is going to the University of Wisconsin. Like so many kids of her generation, raised in creches and afterschool programs and on social media, she looks left and right at what everyone her age is thinking before she dares form a thought. To be a conservative in that milieu is tying a sign around your neck saying: “I am a social pariah. Give me failing grades. I’m okay with no sex and no friends for the next four years.” More seriously, no conservative would think up a study to prove conservatives are smarter. 
That’s a liberal trope. Since the parties are so polarized these days (see last post), is it legit to use Democrat and Republican for a rough rule of thumb for liberal and conservative? Ppolitical science actually has some hard data would give you more meaningful results than an academic study on liberalism and IQ. For example, lawyers are the most consistently Democrat voting block. There are four times more Democrat lawyers in Congress than doctors, and five times more Republican doctors than Democrat doctors. No one would set out to prove lawyers are smarter than doctors. The reason they sort that way is that lawyers’ economic self-interest leads them to want big government and no medical tort reform. Many doctors work for themselves, and across the board, most people who work for themselves are Republican. In 2012 doctors voted Republican by a 19 point margin, which I could guess is more than usual, and is presumably because they don’t believe Obamacare is good for them or their patients. It turns out, self-interest and life experience account very well for predicting how you vote. Within an economic class, political sorting correlates with other factors. In 2008 Obama carried the majority of the richer rich, those making $200,000 or more per year. These Democrat super-rich live on the two coasts, and have a different value system. They are not religious. They are not family oriented. In fact, they are rather bigoted and feel superior to people who are religious or family oriented, and don’t want to be in a group with them. Maggie Gallagher in Human Events : ” A 2009 Quinnipiac poll notes that socially liberal values rise with income – “support for same-sex marriage also rises with income, as those making less than $50,000 per year oppose it 54 to 39 percent, while voters making more than $100,000 per year support it 58 to 36 percent.” The very rich are disproportionately strong social liberals….” The very richest are Democrats: “…there seems to be a tipping point where the ultra-wealthy begin leaning Democratic. The most famous example would be the entertainment industry, where star-studded events have become a significant part of Democratic culture. …. A review of the 20 richest Americans… found that 60 percent affiliate with the Democratic Party…Among the richest families, the Democratic advantage rises even higher, to 75 percent.” Peter Schweizer at National Review explained in 2006 that Democrat millionaires and billionaires earn their money differently than rich Republicans. … the answer may lie in the way much of this wealth was accumulated. Some of these individuals (Kerry, Dayton, Rockefeller, etc.) inherited their wealth … they haven’t spent time building a business or even holding down a demanding job in corporate America. Others, particularly in the high-tech sector and Hollywood, amassed their wealth quickly and faced fewer challenges in dealing with invasive government and regulations. In Hollywood and high tech, there is a sudden jump into wealth. It can seem unearned and unfair. Taxes touch them less than the Republican two-income “rich” family making one hundred thousand. Über-rich Democrats often see good fortune as a lottery: The Silicon Valley 30-year-old worth $200 million on a stock IPO after six years in the business is likely to have a different view of wealth accumulation than the industrialist who amassed a similar fortune over the course of a lifetime. 
It might be true that liberals include America’s highest IQ cohort – that is, people with advanced degrees, but they are not the majority of liberals. Voting in real life sorts by class,race and economic self-interest, not IQ, although education and region have a role to play. Democrats are the super-rich, half of the rich, the upper middle class and the poor. Most of the middle class Democrat voters are government employees (including teachers). Republicans are half the rich, the middle class and working class. Most liberals are non-white, most conservatives are white. If ‘the liberals have a higher IQ’ study were true, it would also be true that poor people of color are overall higher IQ than working and middle class white people who don’t on the west coast or the northeast. don’t “studies” have to pass any kind of reality testing (apples do fall down when dropped from a tower) before getting taken seriously in academia? Nor do they necessarily have good brains, or even a good education, when it comes to public policy, morals, people or political issues.- the things that inform party affiliation. • Cauê says: • Sastan says: No, most are not members of the hard left. All are members of the hard left. To be fair, they’re very nice about it, and most are perfectly decent people. And they do show the almost-universal tendency to get very libertarian about the stuff they really care about. But seriously, the farthest right politically you can be known as being and hold a job in academic psychology is somewhere between Trotsky and Castro. It’s so bad that professors will straight up tell you that they would never hire or vote to hire someone who didn’t share their hard-left views. In an age where discrimination has such a strong adverse reaction, the fact that strong majorities will say things like this is extremely indicative. If you own your own practice, or are in industry, I have no idea, but I assume politics don’t matter nearly as much. • Peter says: “All are members of the hard left.”: Citation needed. For example, explain to me how Jonathan Haidt could be described as “hard left”. This article shows Haidt showing a bias in psychology, but he doesn’t show anything as ridiculously one-sided as you state. • Sastan says: Thank you for bringing up Haidt, he’s a perfect exemplar! If you read his books, you see he is quite open about being very liberal. In fact, his whole moral psychology of political groupings began as opposition research specifically to help Democrats win office. He was so disheartened after Kerry’s loss in 2004 he dedicated his whole research to cracking the moral code of conservatives, the better to lobby them with keywords that played to their prejudices! However, once he dug into the material, and found it more nuanced than that. And once he developed a baseline respect for the ideals of non-liberals, he was finished. He’s never worked in academic psychology since. He was hired at the NYU School of Business, as there isn’t a psychology program in the nation that will touch him now that he showed them all how biased they all are. And he’s still quite a liberal! He just doesn’t hate conservatives and libertarians enough to be kosher. Edit: You should read your own links, they make my point very well! • Earthly Knight says: This is bollocks. Haidt is a celebrity, he could get hired just about anywhere. Presumably he went to a business school because they pay better. • Sastan says: I dispute your assertion. Perhaps you have evidence? 
I have the fact that he used to work in a psych department, and now doesn’t. And that anywhere between forty and eighty percent of psych professors will admit in a survey they discriminate in hiring decisions. • Earthly Knight says: But this is not really a question where evidence is needed, anyone with a passing familiarity with academia could tell you the same (among other things, Haidt was tenured at Virginia). The conservatives who are victims of liberal bias in the academy will be graduate students, adjuncts, and junior hires, not the Haidts of the world, who can fend just fine for themselves. • Marc Whipple says: EK: I don’t see any evidence. I see speculation from someone who has no obvious connection to the matter. I think “he used to be in a psych department and now he isn’t” is still the only hard fact we have at this point. Also, “this is not a question where evidence is needed” is a huge, screaming, flashing, red light with a siren on it. • Peter says: Oh boy, where do I start? a) Haidt identifies as moderate these days. I have read some of his books – one of the episodes that stuck in my mind was him going to India, doing his best to fit into the culture like a good little liberal and thereby absorbing a whole bunch of conservative ideas. b) “very liberal” is not “hard left”. Haidt is no Trotsky or Castro, he’s hardly even Jeremy Corbyn. I take it you’re not familiar with those parts of the radical left who use “liberal” as a pejorative… much like as many parts of the American right do. I suspect you’re suffering from a serious case of outgroup homogeneity. c) You should read your own links, they make my point very well. There’s a comment from someone griping about being a member of a “closeted conservative minority”. “All are members of the hard left” denies that there is any such minority. • Earthly Knight says: “Hard facts” are great. Understanding how institutions work is sometimes more valuable. If you poke around the internet you can find Haidt getting a target article published in BBS, Haidt getting marquee billing at various university-run conferences and speaking engagements (including at Virginia), and interviews with Haidt where he talks about how excited he is to move to NYU but somehow forgets to mention that he now faces a coordinated blacklist at every psychology department in the country. You will also notice a distinct absence of articles with alarming headlines like “superstar Virginia professor has tenure revoked” or “Haidt forced out at Virginia” which would have cropped up everywhere if the scenario Sastan envisions had any basis in reality. If you look especially hard, you can even find some data on comparative salary ranges of professors in psych departments versus professors in business schools. But this is only evidence given a background of knowledge which I cannot concisely transmit to you. Edit: For reference, here is what it looks like when a university tries to fire a tenured professor for his political views: • Sastan says: So allow me to grant for the sake of argument your theses. 1: There are a vanishing few non-liberals in academia, underground, closeted, fearful for their careers so they dare not say so publicly, but there’s a couple! 2: If you’re well known enough and a “superstar”, the university probably can’t directly fire you, since you were so liberal when they gave you tenure. Anyone else see how the best face you all are able to put on things is kind of……….horrible? 
And as to the quibbling over terminology of "hard left", there's really no objective measure, but in my experience any professor who calls himself a raving right-wing lunatic is more liberal than 90% of the American public. Anyone who calls himself a "moderate" generally means moderate between the Dems and the Communist party. And the self-styled "liberals" are farther to the left than any group in the US except journalists and actors. This especially holds for social issues, less so for financial.
• Peter says: So if my theses go through, then you've been caught exaggerating, and we can't trust that the other things you say aren't hyperbole either. There is a problem. Ridiculous overblown hyperbole won't help with it.
• John Schilling says: I don't think you have accurately described his theses, which almost certainly allow for a greater range of political opinion in academia than your capsule description, and also a broader range of status than just a handful of superstars and everyone else cowering in fear. But we're also talking about your thesis, which was simply "All are members of the hard left." And you made that in specific disagreement with the alternate thesis that "most [psychologists] are members of the far left", so you clearly weren't using "all" as a colloquialism for "most". Most psychologists being members of the far left is a reasonable thesis. It may even be true if you squint and tilt your head just right when looking for far-leftness. But literally all of them? That thesis is disproven by a single example of a not-far-left psychologist, even if it is a moderate-leftist with superstar status. Again, I'm not a fan of argument by single anecdote, but you raised an argument that called for a single anecdote in rebuttal. You score no points now by saying, "Aha, but aside from your single anecdote I'm right about everything else!" Try not to do that next time.
• Jiro says: This is wrong, because you failed to consider that "most" was also interpreted as a colloquialism. If "all" is a colloquialism for "most", "most" can be taken as a colloquialism for "most, but not so much that the weaker side has no practical influence".
• Scott Alexander says: Most of the studies in the replication project were not political by any stretch of the imagination. They were things like "do complicated combinations of stimuli have longer sensory processing times than a different complicated combination of stimuli?" I agree that problems with science feed into problems with politics, but the problems with science got there independently.
18. I broadly agree that psychology at the moment is in the grips of a crisis, for basically the reasons presented in the article and comments – publication bias, lack of talent amongst many, many of its participants, etc. I'd disagree only with this: "Given that many of the most famous psychology results are either extremely counterintuitive or highly politically motivated" Huh? This is not obviously more true of psychology than of economics, sociology or any other related field. In my opinion economics is the one where people's theories line up most directly with their politics. Also, the Milgram experiment has been replicated in a large number of different settings in multiple different countries/cultures/demographics. Afaik, Zimbardo's prisoner experiment, the obvious famous one that is a bit wtf, is not widely regarded as true or useful within the field, apart from a teaching tool for thinking about methodology.
Conclusion does not obviously follow from premises? This seems like a massive leap. The general thesis of this article is true I think though. Publication bias in particular seems to damage psychology worse than many other fields because there's so much scope for subtle fiddling with the setup, measurement etc. I think the danger however is to react in the way I think many STEM people do – by saying "psychology is pseudoscience, stay away from it", which makes the problem much, much worse. I think the more appropriate reaction is "psychology is in a crisis, we need to invest more talent in the field to get better results". The reasons why I think this more measured response is correct are:
-There's faulty expectations/conceptions of what psychology is. Psychology isn't like physics, it's like meteorology or climatology – massive complex systems are at work. There's often sensitive dependence on initial conditions, and a massive multicausal clust**** of factors at play most of the time. If you expect something like the laws of physics, you're going to be disappointed. Or, comparing psychiatry, while many of the "mechanical" principles of neurobiology are totally solid, actually applying it in real circumstances has quite variable results, even though the psychiatrist has extraordinary legal powers to directly intervene. That's because stuff to do with humans is really, really hard.
-Like PSJ said, "most academic psychologists are very well aware that a large portion of published studies are flawed and spend a good amount of time trying to falsify them." In my experience this is true. There's a lot of idiots that seem to be doing their best to make the field look rubbish.
-Chances are, you probably underestimate the achievements and usefulness of psychology, because to pinch Scott's phrase, there's psychology in the water supply. A lot of theories have sort of disseminated into the general populace, often through the hit and miss pathway of pop-psychology, and so now the bar has been raised for what can be considered novel and what gets attention. Hence the psychology you probably need to know is hidden amongst a huge load of awful rubbish. You might be tempted to decide the field is no longer worth investing in, except…
-Psychology is very useful because it applies to more situations than most other fields: it covers not only most domains of work but also remains useful in social and private life, as well as in macro level political, economic and cultural situations.
-STEM people tend to accept theories that are essentially psychology, provided they don't intuitively sound or feel too much like psychology. The main examples of this, neither of which I dislike, are memes and signalling. But it's worth noting that theories in social science/sociology/psychology have existed for a long time that are remarkably similar – conspicuous consumption is the obvious one that comes to mind for signalling status. They also have an imbalance in skepticism for "sciency" feeling versions, such as evolutionary psychology, which amongst the social sciences is mostly regarded as untestable and extremely vague in its actual predictions of human behaviour. I'm not saying I think it's worthless either, but I think "sciency" feeling theories tend to get a bit of a free pass.
-If we don't consider at least some empirical psychology, people often end up with whatever flimsy alternatives suit their politics.
Using mostly philosophy for the prediction of human behaviour is probably the worst but most common way to screw up in this regard. I totally agree with the idea that psychology is in a crisis. But I think it’s worthwhile clarifying the appropriate reaction to that – it’s not about problematising it so we can legitimise any old conception of behaviour, but methodically working the through the flaws and trying to fix them. • suntzuanime says: I would say that if you’re defending the robustness of scientific findings in your field by comparing it to economics and friggin’ sociology, that’s a sign your field is in fact in serious crisis. Can you imagine what, say, climatologists would say if you said “well, the state of climate science isn’t really any worse than sociology”? • I agree that the field has a crisis, but the comparison point isn’t amongst the reasons. The point is that its a complex system you’re studying, and studying complex systems doesn’t look like studying isolated systems like in physics or chem. I don’t think its sensible to expect it to. • HeelBearCub says: It’s a bit like saying physics has a problem because it can’t predict the weather. It’s also a bit like trying to develop physics when you can only study the weather, • Odoacer says: But we can predict the weather with decent accuracy for the next several days. • PSJ says: This is my new favorite analogy. But we’re also pretty good at predicting behavior for the next few seconds-minutes. • HeelBearCub says: But you won’t do it using physics. Not directly. High praise! • Odoacer says: You’re really downplaying the achievement of accurately predicting the weather (see the article I linked). I’d wager the average person would be very good at predicting behavior for the next seconds-minutes, w/o the use/foreknowledge of any psych studies. Whereas • PSJ says: I don’t believe I am. I’m simply considering the achievements of psychology to be greater than you believe. Here is just one example. • vV_Vv says: that’s a neuroscience finding, not a psychology finding. And anyway, I think that the philosophical implications about “free will” that the authors engage in are overly speculative. • PSJ says: No. There is a largely false separation between cognitive psychology and cognitive neuroscience. My university doesn’t even have separate departments and most people in each field consider themselves as both. • Richard says: I honestly believe that psychology is conflated with neuroscience the same way astrology used to be conflated with astronomy and I predict that in 50 years or so, this will be obvious. Any evidence to the contrary appreciated. • PSJ says: Frank Tong, one of the authors in the paper that sparked this discussion is in the psychology department of Vanderbilt. I am a psychology student at my university, but most of my current work would be considered neuroscience, but most of the work that inspired it is in Psychology. I have not to this date heard a coherent definition that clearly separates cognitive neuroscience and cognitive psychology • Odoacer says: I’m having difficulty pinning you down here. First you say psychology has achievements similar to weather forecasting, but you use a neuroscience example. Then you say that there’s no difference between cognitive psychology and cognitive neuroscience. You’ve gone from psychology -> neuroscience -> cog neuroscience -> cog psych. What exactly are you defending? Psychology as a field or only subsections like cognitive neuroscience? 
Regardless, the original argument is against psychological studies like those by Diederik Stapel, priming effects, etc. This is about the reproducibility crisis in psychology, as identified by people like Brian Nosek as written here: Abstract: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects (Mr = .197, SD = .257) were half the magnitude of original effects (Mr = .403, SD = .188), representing a substantial decline. Ninety-seven percent of original studies had significant results (p < .05). Thirty-six percent of replications had significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and, if no bias in original results is assumed, combining original and replication results left 68% with significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams. • PSJ says: Apologies. Frank Tong was an author of the paper that vV_Vv claimed to be a neuroscience study in order to not allow it to count towards psychological success. That is the line of argument my most recent comment was addressing So, from my point of view I went Psychology->psychology sub discipline->same sub discipline->same sub discipline I wasn’t aiming to only defend that sub discipline, but to give examples of psychological successes, I have to give an example that falls under some sub discipline of psychology. None of my comments not directly below yours were responding to your arguments directly. • Setsize says: “Psychology” and “neuroscience” are two umbrella terms, each covering several subfields with disparate lineages, whose Venn diagram consists largely of overlap. There is not a meaningful boundary between them. The above referenced study (Haynes et al) might be called “neuroscience” because it uses functional imaging in addition to behavioral report. But techniques do not establish disciplinary boundaries — psychologists have always used neurophysiological measures when they are available and applicable. In fact that study is a followup to studies that used EEG rather than fMRI, which in turn were a followup to studies that merely asked observers to note the time on the clock when they “chose” to press the button. So this is a line of research that originates in psychology. It is a pattern that psychology first notices things that are later investigated using neurophysiological techniques. There are psychology journals and neuroscience journals; journals are better differentiated by technique than scientists are. If a psychologist does a study on visual perception that uses behavioral measures, they might publish in Journal of Vision or Perception or JOSA:A. If the study also includes neurophysiological measures they might publish in Journal of Neuroscience or Neuron or Visual Neuroscience. 
If you try to say that people who publish on one group of journals are “psychologists” and people who publish in the other group are “neuroscientists” you will find that all the neuroscientists cite psychology journals and all the psychologists cite neuroscience journals and most of them publish in both; this demarcation fails to carve the disciplines at their joints. One example of a historical success of psychology is the identification of trichromacy in color vision, and the identification (up to a linear transform) of the basis vectors that map electromagnetic radiation to color sensation (Young-Helmholz theory), which led to the standardized CIE coordinate spaces that we use to calibrate our monitors. Another historical success is the development of signal detection theory and its application to perceptual judgements, which took place simultaneously in psychology and electrical engineering (The historical motivation was radar; the performance of 1950s radar systems being a combination of the radar apparatus and the trained operator who interprets it). This leads to too many developments to list; for instance the models underlying lossy media compression like mpeg/mp3/jpeg. We’re hearing about a reproducibility project in psychology because (a) a subset of psychologists noticed the issue and started investigating it (somewhat later than people in biostatistics; see link to a cancer biology talk I posted upthread) and (b) replication of behavioral studies is cheap. I would wager the replicability of neuroimaging studies will turn out just as bad, but scanner time is more expensive. • PSJ says: Thank you for saying all this much better than I did! 🙂 I also want to add reinforcement learning and certain planning methods in AI as successes of psychology • @HeelBearCub But are we claiming actual (modern) psychologists think they’re developing the physics of people? Wouldn’t that be a massive straw man? I think everyone knows psychology is kinda like studying the weather. It’s just an attempt to correlate behaviours, thoughts and circumstances (sometimes with neuroscience thrown in) into a complex model, which is, well, obviously quite hard. It’s still worth it because psychology is much more useful than knowing tomorrow’s weather (unless you’re a farmer). Yet people, especially STEM people, seem beat up on psychology its pseudoscience etc etc. in ways they’d never say about meteorology, which gets it wrong all the time. It’s weird, and I think probably a mistake. I agree there’s a massive crisis, but imo that’s different from there being something wrong with the field itself. • vV_Vv says: I guess that experimental psychology can be described as the study of human behavior at the level of external stimuli and externally observable actions, while neuroscience studies how these stimuli and actions correlate with brain activity. Obviously, the two fields have an overlap, since you can design a study that includes all these elements, and ultimately a theory of human behavior should be consistent with both neurological and behavioral evidence. But the study you cited is clearly in the neuroscience camp, since it investigates the correlation between neural activity and actions, not how external stimuli influence actions, while priming studies seem mostly in the psychology camp, since they investigate the correlation between external stimuli and actions, without measuring neural activity. 
Your response typical of how people try to defend “soft” sciences like psychology or economy when they are attacked for their lack of empirical rigor: “We are studying complex systems, you can’t expect the deterministic precision of classical physics, our field is more like meteorology/seismology/quantum mechanics.” I find this kind of response flawed. Meteorology/seismology/quantum mechanics don’t make deterministic predictions, but they do make lots of stochastic predictions which are statistically falsifiable, and in practice they are constantly tested and verified to a high accuracy. “Soft” sciences, on the other hand, often make vague predictions. When they make falsifiable predictions they are generally difficult to test, and when somebody bothers to test them, we see disastrous results such as the one discussed in this thread. Therefore, the analogy is invalid. • Urstoff says: @vV_Vv: I think that last bit is right as long as it is qualified to encompass only certain parts of economics and psychology. Microeconomics does make some pretty solid predictions, and there are even a (very) few solid predictions in macroeconomics (printing more money leads to inflation). In psychology, there are lots of solid predictions made in cognitive psychology; there is no crisis of confidence in areas researching memory, perception, etc. When you get to social psychology, the fuzzier areas of cognitive psychology (e.g., social priming, if that even counts as cognitive psychology), etc., the predictions start to fall apart. That isn’t to say that the theoretical work done in macroeconomics, social psychology, etc., isn’t important. It is, as it provides us with ways to think about these things, but confidence in predictions should be pretty low. Of course, what this doesn’t mean is that you then should have more confidence in “common sense” or your gut or some other heterodox theory just because the mainstream social science doesn’t make solid predictions. Just because macro (for example) doesn’t really know what’s going on doesn’t mean we can resort to economic predictions rooted in Marxism, ideology, etc. Macro isn’t a great science, but it’s the only one we’ve got. • vV_Vv says: Isn’t this a prediction of macroeconomics in the same sense that “a dropped apple will fall down” is a prediction of physics? • Urstoff says: No, because it was not self-evident (even though it may seem self-evident to us now). • 27chaos says: Are you saying that psychology is in a crisis because psychology is complex and difficult to understand, or are you saying that psychology is in a crisis, and also psychology happens to be complex and difficult to understand? In other words, are you trying to say complexity exonerates the non-replications, or no? Because I can’t tell. • I’m saying the second one – the two facts are unrelated. IMO psychology isn’t fundamentally flawed if you don’t expect it to be the physics of people, and it separately is in crisis at the moment because of other reasons like publication bias etc listed in my OP and the article. • Zebram says: I’d agree with your statement regarding economics. When it comes to empirical evidence, you can find loads of evidence for almost any theoretical position. When I read Austrian and Keynesian economists citing empirical evidence, they both seem to make sense to me. You can make strong cases for either side with empirical evidence. It’s confirmation bias all over the place if you ask me. 
• Yeah a lot of stuff in the humanities and social sciences seem to be like – they sound fine when you’re reading them in isolation but they contradict eachother and can’t all be true. I find sometimes having intuitive reactions like that to stuff I’m reading can actually be harmful, because you assume you have some idea of the truth and get lazy about checking stuff, unlike with physics, where you know for a fact you have no real idea about what a particle does and why. • Peter says: Austrian economists cite empirical evidence? I thought they were all “economics is a priori and if the evidence disagrees with theory then the evidence is wrong”. • Zebram says: I wrote that incorrectly. They don’t believe in empirical data as evidence, but they start from theory and use it to explain empirical data. They do look at the empirical data, just not for proof of their theory’s validity. Probably because most other economists look at empirical data as evidence, and so they try to demonstrate how that same data could could be used as evidence for Austrian theory if one so desired. • Wrong Species says: The nice thing about Austrians is that the evidence is so widely against what they suggest that’s it’s easy to stop taking them seriously. I thought what they were saying had some validity until I realized that the hyperinflation wasn’t coming. With everyone else, the evidence is more mixed and their pronouncements aren’t as strong. • MichaelM says: Wrong Species: It’s really, deeply important to distinguish between three things: 1. People who read a little bit of Austrian economics from and think that makes them Austrian economists. 2. Some Austrian economists who fall on one side of a theoretical debate with respect to the money supply. 3. Some Austrian economists who fall on the other side of this theoretical debate. Austrian theory can indeed understand the possibility of a flight to liquidity (ie. cash hording, which is why we HAVEN’T had serious inflation since the monetary base tripled seven years ago). Some modern Austrians (and a metric ton of internet commentator ‘Austrians’) disagree with what some others say about this possibility and so they think runaway inflation is right around the corner. You can find yourself disagreeing with the one side and find other Austrians agreeing with you on that. It’s just unfortunate that the ones you disagree with are numerous and loud and the ones who agree with you are not quite so noisy. Austrian economics has a great deal to offer a modern student of the subject without that student having to become a raving style loony. • I expect the Austrians and Keynesians are disagreeing about macroeconomics, that being the field for which “Keynesian” is a meaningful label. Macro is the part of economics that economists don’t understand. I like to describe a course in Macro as a tour of either a cemetery or a construction site. Economists do just fine at predicting the effects of price control, or tariffs, or … . • Nathan says: While I’m not about to disagree with you on this point, I’m a little surprised to hear you making it. As I recall your father had rather a lot to say about macro, at least as far as monetary policy went. Did he just not know what he was talking about? • Zebram says: I don’t think that’s the same David Friedman. • It is the same David Friedman. My father is one of the reasons that 60’s Keynesianism got relegated to the cemetery, although it has a tendency to rise from the dead when politically convenient. 
We know more about macro as a result of his work, but not nearly enough to make it a solved problem in even the limited sense in which price theory is (for the limits of the price theory case, read Coase). After I concluded that I was better at economics than at physics, which is what my doctorate is in, I decided that one way of slightly reducing the degree to which I spent my life categorized as my father’s son would be to stay out of the area where he had made his major contributions. I ended up in a field of economics in part invented by my uncle—but, fortunately for me, he is much less famous. • Steve Sailer says: Thank you, Dr. Friedman, for the charming personal note. I presume Dr. Friedman is referring to his uncle Aaron Director, 1901-2004(!): • Peter says: A psychologist of my aquantance said, “it’s a way of teaching you about research ethics – it’s a good exercise to try to spot all of the unethical things in the experiment”. • Setsize says: Another phenomenon is that results in psychology which are robust, replicable, and support reduction to underlying mechanisms, have been reclassified (in the popular imagination) as being results in neuroscience, because psychology can’t get a break. 19. Peter says: It’s interesting that the term “conceptual replication” hasn’t come up yet. With organic chemistry, often you need to adapt some reaction to your needs. You find some reaction from a paper from the 1970s, say, “well, one of my starting materials has a couple of extra methyl groups in it but hopefully they won’t get in the way, and they use benzene as their solvent but I’ll use toluene because it’s safer and thus the paperwork is easier, and I’m going to scale it up by a few hundred percent because I need lots of the product” but apart from that you follow the description. This wouldn’t count as a replication attempt to a chemist. It would only count if you used the original starting material and solvent, and tried to follow the recipe as faithfully as possible. If we talked like psychologists, we would call this a “conceptual replication attempt”. The Barrett article appears to be saying “reproduction failures aren’t a problem because you don’t expect conceptual replications to work all the time”, whereas as far as I can tell the Reproducibility Project isn’t doing conceptual replications – it’s attempting to do good-old-fashioned proper plain old replications. When most of those come back with different results, you have a problem. • Alexander Stanislaw says: How often do organic chemistry replications fail? I couldn’t find any data on that (replication is apparently not a good search term in this context). I’ve heard mentions of troubleshooting limbo in organic chemistry. I’d imagine the first attempt success rate is quite low, and the eventual success rate depends on the researchers patience. Of course the problem with psychology is that there is no way to check if you screwed up, whereas chemists have all sorts of tools. Could this be the main problem with psychology as a field? Its not necessarily that its practitioners are any less competent (I have no opinion on whether this is true, I don’t know much about psychology). • Peter says: Oooh, erm. You’ll no doubt be unsurprised to learn that there’s no major effort to ensure reproducibility of academic results. 
There are some areas of analytical chemistry in industry where they consider reproducibility – I did a summer job in an industrial analytic lab, they had a distinction between “repeatability” (same experimenter, same lab) and “reproducibility” (different experimenter, different lab) and if I recall right the error bars on the second tended to be twice the size as on the first. Chemistry is a big field, and I expect that things differ by subfield. In organic synthesis, an amount of repetition often happens because you often want to make something in order to use it in some later experiment; the general feeling seems to be that the reactions usually work, but the yield you get is often smaller than reported, especially if you’re a not-very-skilled grad student. Also the yields reported from certain countries which will remain un-named have a reputation for being inflated. There’s this journal, Organic Syntheses, which collects and curates synthesis procedures, and they actually repeat the syntheses in-house before they publish anything. Apparently their rejection rate due to non-reproducibility is about 3-5% – but in the period of 1982-2005 it was 12%. • Ant says: In the field of materials, Experiments are often partially replicated between studies, in order to check the validity of the new content. If you want to test a new measurement system, you will use the result of previous studies to calibrate it. Idem if you want to check the properties of new material, you will perform the same tests as some other team to show that you are correctly measuring what you are suppose to measure. Maybe this sort of calibration could work for psychology. Never do a study with completely new data, always have something to compare your data with • Anthony says: How often do organic chemistry replications fail? More often than you’d think. Derek Lowe and his commenters make comments about the irreproducibility of academic results quite often, and the problem is so bad they had a symposium about it. Retraction Watch has 251 posts on physical science retractions, though not all are for reproducibility. (The top one is for plagiarism, for example.) 20. T. says: You can make anything statistically significant in psychology with enough “tricks” ( Some people still think it’s perfectly fine to use those tricks. Unfortunately, all these little tricks are allowed and only a few journals actually try and do something about them (like insisting on providing the reader with details). I don’t think that will help in preventing false-positives: just because you report what tricks you used doesn’t make a subsequent false-positive less likely. I fear the only thing that will help is setting new standards, which everyone will have to adhere to (like pre-registration), otherwise nothing will change. Imagine how many resources are wasted with how things are going now. • 27chaos says: but meta analyses mean it’s okay if there are a lot of fraudulent papers so really there’s no problem here at all /s 21. piero says: It seems to me that trying to replicate a result in psychology is like trying to repilicate a weather forecast: you can never be absolutely certain that all relevant factors have been considered, and how much each factor contributes to the outcome. That short-term weather forecasts are pretty accurate only goes to show how much simpler fluid dynamics is compared to psychology. 22. 
Zakharov says: Incidentally, are successful replications always published, or are they put in the file drawer because failed replications are much more interesting?
• PSJ says: Depending on your definition of replication, they are often not. Many labs will do "pilot" studies before engaging in a research project to test viability. These often constitute a partial replication of the main thrust of the foundation papers (they are generally small scale). They are almost never published (usually for good reasons, but nonetheless). This is one way researchers build up an awareness of which new (or old) ideas are most promising as they, or other labs, may have tried a few out already without publication.
23. Alex Zavoluk says: "One out of every twenty studies will be positive by pure chance" What? I think you're alluding to the .05 critical p-value threshold, but that's not what that tells you. That p-value tells you that of tests of nonexistent phenomena, 1 in 20 will appear true by chance – but what fraction of published studies fall into that category depends on quite a lot of other factors.
• Adam says: I believe what he means is any individual researcher can select 20 hypotheses at random and get at least 1 significant result just by chance. (With 20 independent tests of true nulls at α = .05, the chance of at least one "significant" result is 1 − .95^20 ≈ 64%.) Since this will be the one they publish, the prevalence of hypotheses significant only because of chance in the published literature will be much greater than 1 in 20.
24. Typo: In the paragraph starting "The third and biggest concern", you wrote "I just so happened to pick the one that really does cause cancer", when you meant to write "… cure cancer".
25. terdragon says: Oh gosh, you're right, it's the conjunction fallacy: "Chemical X cures cancer, and only under certain very finicky conditions!" trying to be a more likely statement than "Chemical X cures cancer!"
• PSJ says: This doesn't seem quite right. It wouldn't be surprising to find that the probability of (x plus any one of unnamed factors) is more likely to be true than (x on its own) to cure cancer. Not a real conjunction fallacy.
• HeelBearCub says: Necessary, but not sufficient. Like saying it should be more likely that turning the handle would open a locked door than: inserting key, turning key, then turning handle.
• RCF says: The term "conjunction fallacy" refers to only top-level conjunctions. Believing that "If (A and B) then C" is more likely than "If A then C" is not an example of the conjunction fallacy, because the "and" resides within an if-then statement. It is quite possible for the first statement to be true but the second false.
• HeelBearCub says: Isn't conjunction fallacy believing simply that A and B is more likely than A? Remove the "and then C".
• AJD says: The conjunction fallacy is believing that A and B is more likely than A, and it's a fallacy because A and B is in fact less likely than A. ((A & B) –> C) actually is logically more likely than (A –> C), so believing it isn't a fallacy.
26. Q says: Is psychology any worse than biosciences regarding replication? I dimly recollect they are in the same or similar situation. You would, however, still consider biosciences a promising field.
27. Professor Frink says: I don't think psychology's replication crisis is worse than any other field's. The issue is that some weird, counterintuitive result pops up and suddenly the media is running with it and reporting breathlessly. Meanwhile, the sober psychologists actually are attempting the replications, trying to figure out how the phenomena extend, etc.
Psychology would have a replication crisis if none of these things are being replicated. But they are! Statistics isn’t a perfect tool. With garden of forking path problems and mild, unintentional p-hacking you can easily end up expecting lots of published results to fail replication (especially if you replicate the most counterintuitive, which are the most likely to fail). As long as replication and extension keeps fixing these problems we have nothing to worry about it. • Sastan says: I think it is, merely because humans are infinitely more complex than pretty much anything else we study. It’s not that psychological methods are terrible, it’s that the subjects are so sensitive. The problem Psychology has is fifty years of overconfidence and media coverage of that overconfidence. We’ve wrecked whole areas of society by broadcasting unproven new ideas (suppressed memories!) to a gullible public. What Psychology needs is the humility to tell everyone we’re a very young science, we barely know anything. 28. Fairhaven says: After 15 years of private practice doing long term psychoanalytic psychotherapy, I concluded that the only solid thing I was taught was the benefit of the limits of the 50 minute hour and not having a personal relationship, and some very basic things about listening skills. other than that, I think the main thing any therapist has to offer (outside of drugs, which I didn’t prescribe) is your own personal wisdom and kindness, and the relief of a place to express yourself. for some people, sadly, the encouragement to express grievances and dwell on sorrow, and blame parents for everything, is so seductive, it is actually counterproductive. plus you’re getting all that lovely attention for remaining whiney and miserable. this sounds nasty, and i don’t really mean it that way – it’s just that therapy can reward being negative and stuck, and your heart breaks for the people who get no relief and no help despite your best and sometimes heroic efforts to help them. I finally stopped being a therapist and became a novelist. • LTP says: Might that be an artifact of psychoanalytic style therapy? I’ve been in therapy a lot, and honestly therapists have often actively discouraged me from wallowing and blaming my parents and so on if I did it too much. • Scott Alexander says: Can you explain the “not having a personal relationship?” I don’t feel like I really understand this well enough yet. I’m obviously not talking about “go on dates with them”, but things like “ask them how their kids are doing and show some sympathy and say you’ve been there too”. 29. Steven says: The fact that the planes don’t land doesn’t prove there’s a crisis in our cargo cult! 30. 27chaos says: I hate how the authors of that paper keep insisting that maybe other fields of science do just as bad, so there’s no justification to be irritated with psychologists or more skeptical of their results than of the results of someone else in another field. Like, it’s true that fraud happens everywhere, but that doesn’t lessen the importance of fraud anywhere. Psychology should get less credit after this result, even though it’s true that other fields should also get less credit. Plus, it’s entirely possible psychology is indeed worse than most other fields, this study constitutes decent evidence for that possibility. I suppose what I’m saying is that they’re refusing to update incrementally, and instead just denying that the results have any practical importance at all. 
But jumping to the idea that everyone is unfairly bullying psychologists seems very rich, after finding such a major flaw in the field's research. The fact that you're criticizing others in your field doesn't mean your field gets a gold star sticker that magically immunizes it to generalized criticism. It should be the default, not something to be extremely proud about.
31. RCF says: Are experimenter effects found by people who don't believe in them? If so, would that confirm or refute the concept?
32. pf says: Only about 40% of psychology experiments replicate: have there been other systematic attempts at replicating many studies that found a similar proportion? Are there any planned? I'm not sure I trust such a dramatic outcome as that in a field where only 40% of studies have reproducible results…. (…and if you systematically go through modern psychology textbooks and remove all conclusions drawn from experiments that haven't been reproduced half a dozen times, what changes would that produce? How much of what is treated as "accepted knowledge" in psychological fields, on a par with Newtonian mechanics in physics prior to quantum theory and relativity, is built on results that haven't been sufficiently replicated? I've been wondering about that since high school, but I've never known how to find out.)
33. Rick G says: Read enough Andrew Gelman, and you'll become aware of a bunch of examples of and terms for statistical problems (e.g. Garden of Forking Paths) that are probably already floating around in your head. Then you can come to terms with the fact that the overwhelming cause of failed replications is that the original result was the product of noise-mining, i.e. wasn't true to begin with.
34. (Most of this is lifted from a comment I wrote at Andrew Gelman's blog, on the same topic.) I agree with your issues with this weak "defense" of replication failures, except perhaps "Right now, failed replications are deeply mysterious." Reading a lot of papers with poor statistics, and following the literature on the crisis of non-reproducibility and false results, it seems that in most of these cases, people are fitting models to very noisy data with very few data points, which is always a dangerous thing to do. Doesn't everyone realize that noise exists? After asking myself this a lot, I've concluded that the answer is no, at least at the intuitive level that is necessary to do meaningful science. This points to a failure in how we train students in the sciences. (Or at least, the not-very-quantitative sciences, which actually are quantitative, though students don't want to hear that.) If I measured the angle that ten twigs on the sidewalk make with North, plotted this versus the length of the twigs, and fit a line to it, I wouldn't get a slope of zero. This is obvious, but I increasingly suspect that it isn't obvious to many people. What's worse, if I have some "theory" of twig orientation versus length, and some freedom to pick how many twigs I examine, and some more freedom to prune (sorry) outliers, I'm pretty sure I can show that this slope is "significantly different" from zero. I suspect that most of the people [that perform irreproducible studies] have never done an exercise like this, and have also never done the sort of quantitative lab exercises that one does repeatedly in the "hard" sciences, and hence they never absorb an intuition for noise, sample sizes, etc.
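To make the twig exercise concrete, here is a toy simulation (my own sketch with made-up numbers, not part of the original comment; it assumes numpy and scipy are available). It just counts how often a pure-noise "twig study" comes out significant at p < .05, first with an honest straight-line fit and then after a little data-dependent pruning of the worst-fitting points:

```python
# Toy version of the twig exercise: the true slope of angle vs. length is zero,
# because the angles are pure noise. We check how often a small study still
# reports a "significant" slope, with and without pruning the worst-fitting points.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

def one_null_study(n=10, n_prune=2):
    length = rng.uniform(2, 20, n)      # twig lengths (cm), arbitrary range
    angle = rng.uniform(0, 180, n)      # orientation vs. North (degrees), pure noise
    honest = linregress(length, angle)
    # "Researcher degree of freedom": drop the n_prune points that fit worst, refit.
    resid = np.abs(angle - (honest.intercept + honest.slope * length))
    keep = np.argsort(resid)[: n - n_prune]
    pruned = linregress(length[keep], angle[keep])
    return honest.pvalue, pruned.pvalue

pvals = np.array([one_null_study() for _ in range(10_000)])
print(f"significant (p < .05), honest fit:     {np.mean(pvals[:, 0] < 0.05):.1%}")
print(f"significant (p < .05), after pruning:  {np.mean(pvals[:, 1] < 0.05):.1%}")
```

The honest fits should flag roughly one study in twenty, which is what the p < .05 threshold is designed to do; the point of the sketch is to see what even a mild, plausible-sounding pruning rule does to that number.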
This "sense" should be a pre-requisite for adopting any statistical toolkit. If it isn't, delusion and nonsense are the result.
35. gwern says: "I am not saying that we shouldn't try to reconcile results and failed replications of those results, but we should do so in an informed Bayesian way." [The linked analysis] is a Bayesian interpretation of the results: he treats it as a Bayes factor problem, and calculates the BF of each replication, finding that a lot increase the posterior probability of a non-zero effect but a lot also decrease it (sometimes by an enormous amount). Unfortunately, I don't think he takes it a step further by calculating the original BF of the studies and seeing what the net posterior probabilities look like. (I think a good definition of a failure to replicate is if the replication's BF is so small it completely neutralizes the original.)
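For a rough sense of what that kind of calculation looks like, here is a toy sketch (my own illustration, not the analysis gwern refers to; it uses a crude normal approximation on Fisher-z transformed correlations, and every number in it is invented). It compares "the true effect is zero" against "the true effect is whatever the original study estimated," given what the replication found:

```python
# Toy "replication Bayes factor" via a normal approximation on the Fisher-z scale.
# H0: true correlation is 0.  H1: true correlation is what the original study
# estimated (with that study's uncertainty folded in). BF > 1 favors the original
# effect, BF < 1 favors the null. All numbers below are invented for illustration.
import numpy as np
from scipy.stats import norm

def replication_bf(r_orig, n_orig, r_rep, n_rep):
    z_orig, z_rep = np.arctanh(r_orig), np.arctanh(r_rep)
    se_orig = 1.0 / np.sqrt(n_orig - 3)   # approx. std. error of Fisher-z
    se_rep = 1.0 / np.sqrt(n_rep - 3)
    # Under H1 the replication's z is centered on the original estimate, with the
    # original's uncertainty added in quadrature; under H0 it is centered on 0.
    like_h1 = norm.pdf(z_rep, loc=z_orig, scale=np.hypot(se_orig, se_rep))
    like_h0 = norm.pdf(z_rep, loc=0.0, scale=se_rep)
    return like_h1 / like_h0

# e.g. a hypothetical original r = .40 with n = 40; replication r = .10 with n = 120
print(f"BF = {replication_bf(0.40, 40, 0.10, 120):.2f}")
```

With these made-up numbers the BF comes out well below 1, i.e. the replication favors "no effect" over the original estimate; a fuller treatment would also fold in the original study's own evidence, which is the "net posterior probabilities" point above.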
From what I understand about the academic labor market, graduate schools in biology and psychology produce a lot more PhDs than the set of science-doing institutions actually needs, so you could probably pick up some people who failed to get academic or research-institute or science-industry jobs but who still want to work as scientists fairly cheaply. (Medicine would be a bit harder for this) You could also create an endowment to pay a bunch of eminent people in the field who are concerned about replication a bunch of money to edit a peer-reviewed journal that publishes nothing but replication attempts. 40. Ustun Sunay says: As a practicing chemist with a degree in psychology, it has come to my attention that a non-reproducible result in chemistry leads to more questioning than one in psychology by its practitioners. This may have something to do with the research culture in each field. And then maybe not… • Tom Womack says: In chemistry you’re much more confident about the consistency of the things you’re working with, and you’ve got a big set of anecdotal evidence about the ways that things that go wrong … if the reaction works with one bottle of samarium triflate and doesn’t work with another, you can buy a new bottle much more easily than you can round up another thirty undergrads; if it doesn’t work with the new bottle, then unless it was an exceptionally fascinating reaction you mutter ‘unknown contaminants’ and go on to do something else. • Peter says: Alternatively you can try to find out what it was was in the old bottle. There was an episode in my PhD lab where someone did some experiments using chloroform as the solvent, got some results, spent a few months doing something else, went back to those experiments and found she couldn’t repeat the results… until she realised that previously she’d been using “technical grade” chloroform and now she was using the purer “analytical grade” chloroform. The former contains traces of methanol. So she added traces of methanol to the analytical grade chloroform and then she was able to reproduce her old results. There seem to be some areas of inorganic chemistry, where leaving a half-empty half-sealed bottle of something in a cupboard for a few years is a good way to let just enough moisture and oxygen at things to make something with interesting, exciting and hard-to-reproduce activity… • Deiseach says: As with “Doctor Jekyll and Mr Hyde”: My provision of the salt, which had never been renewed since the date of the first experiment, began to run low. I sent out for a fresh supply, and mixed the draught; the ebullition followed, and the first change of colour, not the second; I drank it and it was without efficiency. You will learn from Poole how I have had London ransacked; it was in vain; and I am now persuaded that my first supply was impure, and that it was that unknown impurity which lent efficacy to the draught. • Deiseach says: Well, that’s your problem right there. The same way that specific lines of lab rats and mice have been bred to obtain consistent experimental results, somebody should look into breeding lines of undergrad psychology test subjects. More consistency, plus making it like Real Science running tests on rats! 🙂 41. Doctor Mist says: Well, probably everybody has seen this, but just in case. Today’s xkcd. 42. 
Dan Simon says: Your last paragraph, Scott, hints obliquely at what I think is the real core question here: why is there so much research on cases where priming works, and so little research on actually understanding priming–exploring possible mechanisms at work, for instance? My hypothesis: priming research was never intended to give insight into human psychology at all. Rather, it was developed as a magnificently productive source of relatively inexpensive, easy-to-perform experiments that could, with a minimal sprinkling of creativity, produce “surprising” results suitable for publication. And since the modern academic research career is focused entirely on generating publishable results rather than gaining scientific insight, priming was thus a psych researcher’s ideal research topic. The obvious follow-up question: what fraction of popular scientific research topics these days are popular for a similar reason? • HeelBearCub says: This is annoying. It basically says “psychology is replete with charlatans and liars. I would be happy to call anyone who does priming research a liar to their face. Actually, come to think of it, I’ll happily accuse anyone who is a professor a fraud until they provide evidence otherwise.” Am I putting words in your mouth? Or are you ascribing motivations to a whole cohort, absent evidence? • Douglas Knight says: Dan did not call them liars. Really, he didn’t. Nor will I call you a liar for this false accusation. He said that the experiments were intended to be cheap, not that they were intended to produce false positives. I suppose that some definitions of “charlatan” cover that. But the weak definition of people who fool themselves. But I did use the word “fraud.” Unlike Dan, I do not use the word “intend.” I make no claims about people’s thought processes. What is important is the selection pressure from the system. The system selects for people who produce a large number of published papers. It selects for people who get news coverage. It selects for people who are not held back from their experiments and their publication by their understanding of statistics. Publication selects for positive results. • HeelBearCub says: @Douglas Knight: “never intended to give insight into human psychology at all” and “a minimal sprinkling of creativity” both strongly imply an attempt to deceive. In additon, you don’t get psych research grants if you have no intention of providing insight into psychology. “I never said the word liar” is the kind of defense a 12 year old offers. I think there are problems with the publishing, grant and tenure process. When the model is now “publish or perish” and replication and negative results aren’t publishable and grant making bodies only fund things that seem the most sure to generate results, it will push the science in a certain direction. But that is really different than saying that all of academia is entirely unfocused on gaining scientific insight. There is a very, very important baby in that bath water. In fact, this idea that academics are bullshitters is, in no small part, at the heart of the current crisis. What attitudes and pressures do you think caused the current “publish or perish” paradigm? • Douglas Knight says: Grant committees aren’t mind-reading machines. Here is another phrasing: professors have been taught a procedure and have been promised that it will produce insights into psychology. They honestly pursue cargo cult science. 
They must have some interest in psychological insights, or they would be in another field. But there is a very strong selection pressure against people who give much thought to this procedure or what constitute psychological insight. They generally design research programs with the goal of being publishable. Or they inherit a research program from their advisor — another form of selection. I don’t uniformly condemn all academic experimental science. At the very least, some parts teach a more complicated procedures with lots of checks and replication. But there are a lot of areas without a baby under the bathwater. • HeelBearCub says: OP didn’t say that there were some specific areas that had issues, he said “the modern academic research career is focused entirely on generating publishable results rather than gaining scientific insight”. That means the entirety of academic research, which is flat out BS. Why are you defending that? • Douglas Knight says: If I were defending that statement, you could look at my defense of it and see why. In fact, I do think it is quite defensible statement, and completely compatible with everything I said. But I did not bother to defend it, because I don’t care what Dan thinks. It is much better to make my own statements, seeking to learn from Dan’s errors of obscurity. Not to mention your reaction to my previous interpretation. But, if you really care what Dan thinks, note that his last paragraph explicitly asks about diversity. • Dan Simon says: I wholeheartedly endorse Douglas Knight’s rephrasing of my point. My goal was not to impute deliberate fraud and deceit to the research community, the vast majority of whom conduct their actual investigations honestly and to the best of their ability. But in my experience, most researchers are also well aware that when it comes to *what* they investigate, they’re essentially participants in a massive game of “survivor”, and accept with varying degrees of regret that they must play along to avoid getting voted off the island. Many make the best compromises they can between their scientific idealism and their careerist realism, choosing areas of investigation that they can at least rationalize to themselves aren’t entirely useless charades (and bitterly reproaching, among like-minded friends, any colleague who is more compromised in this respect than they consider themselves to be). But those who are actually willing to sacrifice career goals to pursue what they believe is a far more important and productive research area for the betterment of the human condition are pretty few and far between. • 27chaos says: How dare someone doubt expert opinion! • Tom Scharf says: “absent evidence”? There is plenty of evidence to support exactly that conclusion. It is discussed in this post. The fact that you respond emotionally to the accusation and present no counter evidence is a foundation to the very problem itself, an appeal to self authority coupled with no apparent desire to hold the science to a higher standard. • Jiro says: Somebody mentioned this earlier, but priming is researched so much because priming is a social justice topic; priming is related to stereotype threat, which is used to explain minority underachievement. I suppose that is a subset of “never intended to give insight into human psychology” and of “getting publishable results”, though. • Scott Alexander says: I don’t think most priming research is explicitly related to this. 
• Sastan says: Actually, I think it is that priming is a very sensitive phenomenon, which means it can change in weird and interesting ways. More cynically, if you run enough priming experiments, you’ll get the result you want pretty soon, because anything and everything affects priming. 43. LosLorenzo says: “Apples sometimes fall down, but equally often they fall up” I tried to replicate this, but failed. The apple just hovered. 44. njnnja says: Isn’t the fundamental problem that humans have spent the last 50,000 years of societal evolution trying to figure out what makes us tick? So we understand pretty well the basic stuff about human motivation. So the only thing that modern psychology can add is counterintuitive results, which are likely to get published precisely because they are “adding to the sum of knowledge” while at the same time, most likely to be incorrect (given the huge prior against the result) If you want to understand human nature, it’s tough to beat cultural advances such as great literature, philosophy, and legal and ethical systems. As an aside, I think that the comparisons to things like physics are interesting because we tend to forget how advanced humans became in physics and engineering in ancient civilizations (pyramids, anyone?). So modern science gave us advances in physics in things like EM and thermodynamcs but at the end of the day it is likely that human behavior is far more complex than Maxwell’s equations, and the entire science paradigm might not ever work well in understanding human behavior. 45. alaska3636 says: von Mises explained the difficulty facing the social sciences many years ago, first in Human Action and then again in Theory and History. Whereas the components of physical science exhibit a regular relationship between constants and variables, the actions of humans exhibit no such mathematical constancy. Psychology has always been a politically motivated creature and so it is not unusual that its proponents would like to see its results treated on the same level as those of the physical sciences. 46. emblem14 says: I think it’s amusing when people get all verklempt about “soft sciences” trying to be real science and failing. Psychology, sociology, economics, political “science” etc. The disturbing part is when otherwise smart people start taking a field of study which on overview cannot provide us meaningful knowledge with any degree of confidence or predictability, and imposing someone’s pet theories du jour as public policy. The timeless Tom Leherer: 47. Richard Kennaway says: Coincidentally or otherwise, an article in a similar vein recently appeared here. (HT Irina Margold.) “Science Isn’t Broken,” it is titled, then lists a catalogue of woes (failure to reproduce, p-hacking, post hoc hypothesizing, unconscious bias, deliberate fraud, vanity journals, corrupt refereeing, bad refereeing, and more). All that it then says to justify the proposition of the title is that science is difficult, all of those defects are actually signs of it getting better, and every result is a “temporary truth”. • Douglas Knight says: Scott linked to that in the previous link post. Most people have extremely false beliefs about how science actually works. A lot of things people complain about are very old and science seemed to work back then, anyhow. But new problems are probably a sign of things getting worse. 48. Tom Scharf says: If tests were done honestly, then replication efforts would find STRONGER results 50% of the time, right? 
Anyone care to apply the “disparate impact” legal theory to this? There is a lot of fertile ground to plow in a self examination of how the social sciences reaches its conclusions. Anyone who has been around academia or the sciences for decades knows how much an experimenter’s desire to prove a conclusion NOT very mysteriously affects results. Confirmation bias and data mining being two leading contenders. Professional courtesy means looking the other way and not calling colleagues out. Add in that once someone makes their strong viewpoint public and retracting it becomes professionally embarrassing, you have a toxic mixture that self policing simply will not solve. I look at social science conclusions the same way I scan The National Enquirer at the grocery store. The result might actually be true, but there is no easy way to know if it is, and the source simply cannot be trusted. Any result that touches on the Red/Blue culture wars is to be assumed invalid until proven true beyond a clear and convincing threshold in my opinion. The preponderance of the evidence threshold is not adequate here. Too much self imposed political correctness and a profound political tilt in the social sciences taints results. Social sciences, investigate yourself. We don’t trust you. That’s the real crisis. • Anonymous says: No. For example, suppose all experiments are conducted 100% honestly but there are actually no interesting effects to discover. Some experiments will still find interesting effects because of statistical flukes, but these effects will almost always fail to replicate. 49. Steve Sailer says: I’d like to emphasize the distinction between short-term and long-term predictions by pointing out two different fields that use scientific methods but come up with very different types of results. At one end of the continuum are Physics and astronomy. They tend to be useful at making very long term predictions: we know to the minute when the sun will come up tomorrow and when it will come up in a million years. The predictions of physics tend to work over very large spatial ranges, as well. As our astronomical instruments improve, we’ll be able to make similarly long term sunrise forecasts for other planetary systems. At the other end of the continuum is the marketing research industry, which uses scientific methods to make short-term, localized predictions. For example, “Dear Jello Brand Pudding, Your new TV commercials nostalgically bringing back Bill Cosby to endorse your product again have tested very poorly in our test market experiment, with the test group who saw the new commercials going on to buy far less Jello Pudding over the subsequent six months than the control group that didn’t see Mr. Cosby endorsing your product. We recommend against rolling these new spots out in the U.S. However, they tested remarkably well in China, where there has been coverage of Mr. Cosby’s recent public relations travails.” I ran these kind of huge laboratory-quality test markets over 30 years ago in places like Eau Claire, Wisconsin and Pittsfield, MA. (We didn’t have Chinese test markets, of course.) The scientific accuracy was amazing, even way back then. But while our marketing research test market laboratories were run on highly scientific lines, that didn’t make our results Science, at least not in the sense of discovering Permanent Laws of the Entire Universe. 
I vaguely recall that our company did a highly scientific test involving Bill Cosby’s pudding ads, and I believe Cosby’s ads tested well in the early 1980s. That doesn’t mean we discovered a permanent law of the universe: Have Bill Cosby Endorse Your Product. In fact, most people wouldn’t call marketing research a science, although it employs many people who studied sciences in college and more than a few who have graduate degrees in science, especially in psychology. Marketing Research doesn’t have a Replication Crisis. Clients don’t expect marketing research experiments from the 1990s to replicate with the same results in the 2010s. Where does psychology fall along this continuum between physics and marketing research? Most would agree it falls in the middle somewhere. My impression is that economic incentives push academic psychologists more toward interfacing closely with marketing research, which is corporate funded. Malcolm Gladwell discovered a goldmine in recounting to corporate audiences findings from social sciences. People in the marketing world like the prestige of Science and the assumption that Scientists are coming up with Permanent Laws of the Universe that will make their jobs easier because once they learn these secret laws, they won’t have to work so hard coming up with new stuff as customers get bored with old marketing campaigns. That kind of marketing money pushes psychologists toward experiments in how to manipulate behavior, making them more like marketing researchers. But everybody still expects psychological scientists to come up with Permanent Laws of the Universe even though marketing researchers seldom do. • Who wouldn't want to be Anonymous says: I am not sure this is technically true. The n-body problem is really hard. Over the long term, perturbations between the planets are… unpredictable. If you add a zero or two, we don’t even know what order the planets are going to be in. Differences of a few meters in the starting position of Mercury in simulations, for example, make the difference between it crashing into the Sun, Venus, or Earth. Or differences as small as 15 meters in the position of the Earth make it impossible to predict the season on Earth. If we can’t tell what season it is going to be in 100 million years, I have a hard time believing we know the exact minute of sunrise in one million years. • Eric says: Also how exactly would we test those predictions, wait a million years to see if you’re right? • According to Wikipedia: The planets’ orbits are chaotic over longer timescales, such that the whole Solar System possesses a Lyapunov time in the range of 2–230 million years. This suggests that it should be possible to predict when the sun will rise in a million years, but not in any longer timescale. • Who wouldn't want to be Anonymous says: Okay, I’m not going to lie, I was being a little smarmy. Nevertheless, my point was that if we can’t even predict the location of the Earth on time scales only an order of magnitude or two larger (or three, for catastrophic collisions between the planets), predicting the rotation of the Earth in a million years is a fool’s errand. It relies on processes that are much more chaotic. Like climate change (and tectonic activity, and convection in the mantle, and… stuff we may not even know about). But if we’re going to play the Wikipedia game, try this one: But the principal effect is over the long term: over many centuries tidal friction inexorably slows Earth’s rate of rotation by about +2.3 ms/day/cy.
However, there are other forces changing the rotation rate of the Earth. The most important one is believed to be a result of the melting of continental ice sheets at the end of the last glacial period. This removed their tremendous weight, allowing the land under them to begin to rebound upward in the polar regions, which has been continuing and will continue until isostatic equilibrium is reached. This “post-glacial rebound” brings mass closer to the rotation axis of the Earth, which makes the Earth spin faster (law of conservation of angular momentum): the rate derived from models is about −0.6 ms/day/cy. So the net acceleration (actually a deceleration) of the rotation of the Earth, or the change in the length of the mean solar day (LOD), is +1.7 ms/day/cy. This is indeed the average rate as observed over the past 27 centuries. We can’t get scientists to agree on what the glaciation is going to look like in 100 years, much less a million. More importantly, in order to make the prediction about sunrise, you would need to know the extent of glaciation for the entire duration between now and then. • Peter says: And given that a part of the uncertainty over glaciation is to do with uncertainty over what people will do to combat climate change, and the uncertainty about that is due to uncertainty about politics and thus people persuading each other… astronomy[1] is influenced by social psychology. [1] As in, “astronomical phenomena” not “astronomy, the field of study. Likewise for social psychology. Although social-psychological phenomena may be influenced by social psychology, the field of study. • AJD says: Although I understand the distinction you’re drawing between physics and market research, I think you’re ignoring or glossing over the fact that, since any individual piece of data can be evidence for any of a number of competing hypotheses, and things like background priors and relative explanatory power are involved in choosing which hypotheses to entertain. Have A Beloved Celebrity Whom No One Believes Anything Bad About Endorse Your Product isn’t exactly a Permanent Law of the Universe either, but it’s a lot closer to being one than Have Bill Cosby Endorse Your Product; and it’s no less supported by the ’80s research you refer to. • Deiseach says: So don’t bet your bottom dollar that tomorrow (in one million years’ time) there’ll be sun? 🙂 50. Santoculto says: Many psychological studies can also come with the label ” education ”, ” Freudian studies ”, etc … In all, the political subjectivity or cultural Marxism, which took possession of the ‘human sciences’, is having a strong role. Many studies in the humanities are not based on theory and practice, but only in the analysis of certain possibilities. Almost all ” human sciences’ are taken by ideological and abstract pollution. This lack of contact with the literal reality, not to mention the politically correct hysteria, may be having a great effect on the sharp drop of the credibility of human sciences. Many studies have analyzed ” student groups ” and extrapolate the behavior found as ” human behavior ”. Other factors in this same line of methodological flaw. The human being is a combination of biological variables from different natures, for each study in psychology, there would be the need to create reports, similar to population censuses, but with the addendum of biological, psychological, physiological, etc. 51. aviti says: Wow, this is so true. 
It reflects how much more importance is attached to publications than to the conduct of the research itself. Now who to blame? Of course the consumers of the research. No matter what, you have to publish something. Usually you will not want to publish something that shows what an idiot you are. The result is that even the scientific method has been hijacked by people of a shaman’s character, so that now it is difficult to differentiate the two. This also reminds me of the fiasco of the STAP cells discovery. It led to shaming very important professors; some committed suicide, others were forced into retirement, and others continue to suffer. 52. FWC says: Here’s one idea that might help with some of the issues brought up in this post. The bottom line is to encourage researchers to reproduce their own work. 53. Anonymous says: I find this paragraph – I’m sorry – absolutely idiotic: By “replicating an experiment”, we of course want to replicate all of it as closely as possible. If the original study is flawless and a replicate of it finds something different from the original study, then it can’t be a replicate of the original study – something was clearly changed. There is absolutely no experiment in physics which sometimes yields results that follow Newton’s laws and sometimes results that follow quantum mechanics. It requires a very different experiment to verify Newton’s second law and to verify that small particles under non-relativistic conditions follow the Schrödinger equation. If the message here is that in psychology things are so chaotic that they can depend on arbitrarily small differences between experiments (like the wallpaper color), then perhaps the results from these experiments are so weak that they are hardly worth studying in the first place. Ernest Rutherford famously said: “If your result needs a statistician then you should design a better experiment.” If you do a psychology experiment and your conclusion doesn’t outright jump out from the data, I would seriously reconsider how meaningful this result is.
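A small simulation makes concrete the statistical point raised in several comments above (the twig-fitting thought experiment in comment 34 and the "statistical flukes" reply under comment 48). This is a minimal illustrative sketch, not taken from any study discussed in the post; the sample size, significance threshold, and number of simulated studies are arbitrary assumptions.

```python
# A minimal, purely illustrative simulation (not from any study discussed in
# the post): with no real effect at all, honest small studies still produce
# "significant" slopes by chance, and those findings rarely replicate.
# Sample size, alpha, and study count below are arbitrary assumptions.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

def run_study(n=10):
    """Fit a line to pure noise, e.g. twig length vs. twig angle."""
    length = rng.uniform(5.0, 30.0, size=n)      # cm, arbitrary
    angle = rng.uniform(0.0, 180.0, size=n)      # degrees, independent of length
    return linregress(length, angle)

n_studies, alpha = 2000, 0.05
originals = [run_study() for _ in range(n_studies)]
significant = [s for s in originals if s.pvalue < alpha]

# "Replication": rerun each significant study once with fresh data and ask
# whether it is again significant with the same sign of slope.
replications = [run_study() for _ in significant]
survived = sum(
    1 for orig, rep in zip(significant, replications)
    if rep.pvalue < alpha and np.sign(rep.slope) == np.sign(orig.slope)
)

print(f"'significant' original studies: {len(significant)}/{n_studies}")
print(f"of those, successful replications: {survived}/{len(significant)}")
```

With no real effect anywhere, roughly 5% of the simulated studies come out "significant", and almost none of those survive a single same-sized replication with the same sign of slope.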
Principle of least action From Scholarpedia Chris G. Gray (2009), Scholarpedia, 4(12):8291. doi:10.4249/scholarpedia.8291 revision #150617 The principle of least action is the basic variational principle of particle and continuum systems. In Hamilton's formulation, a true dynamical trajectory of a system between an initial and final configuration in a specified time is found by imagining all possible trajectories that the system could conceivably take, computing the action (a functional of the trajectory) for each of these trajectories, and selecting one that makes the action locally stationary (traditionally called "least"). True trajectories are those that make the action stationary (and, for sufficiently short trajectories, least). Statements of Hamilton and Maupertuis Principles There are two major versions of the action, due to Hamilton and Maupertuis, and two corresponding action principles. The Hamilton principle is nowadays the most used. The Hamilton action \(S\) is defined as an integral along any actual or virtual (conceivable or trial) space-time trajectory \(q(t)\) connecting two specified space-time events, initial event \(A \equiv(q_A,t_A=0)\) and final event \(B \equiv (q_B,t_B=T)\ ,\) \[\tag{1} S\; =\; \int _{0}^{T}L\, \left(q\; ,\; \dot{q}\right) \; d t\quad , \] where \(L\, \left(q\; ,\; \dot{q}\right)\) is the Lagrangian, and \(\dot{q}\; =\; dq/d t\ .\) In the integrand of (1), the Lagrangian function becomes time-dependent when \(q\) assumes the values describing a particular trajectory \(q(t)\). For most of what follows we will assume the simplest case where \(L = K - V\ ,\) where \(K\) and \(V\) are the kinetic and potential energies, respectively; see Section 4 for discussion of the freedom of choice for \(L\), and the relativistic sections 9 and 11 for cases for which \(L \) is not equal to \(K - V\). In general, \(q\) stands for the complete set of independent generalized coordinates, \(q_1, q_2, \ldots\ ,\) \(q_f\ ,\) where \(f\) is the number of degrees of freedom (see Section 4). Hamilton's principle states that among all conceivable trajectories \(q(t)\) that could connect the given end points \(q_A\) and \(q_B\) in the given time \(T\ ,\) the true trajectories are those that make \(S\) stationary. As we shall see in Section 5, if the trajectory is sufficiently short, the action \(S \) is a local minimum for a true trajectory, i.e., "least". In general, for long trajectories \(S\) is a saddle point for a true trajectory (and is never a maximum). In Hamilton's principle the conceivable or trial trajectories are not constrained to satisfy energy conservation, unlike the case for Maupertuis' principle discussed later in this section (see also Section 7). Energy conservation results as a consequence of the Hamilton principle for time-invariant systems (Section 12), for which the Lagrangian \( L(q,\dot{q}) \) does not depend on \(t\) explicitly, but only implicitly when \(q\) takes values \(q(t)\) describing a trajectory. More than one true trajectory may satisfy the given constraints of fixed end-positions and travel time (see Section 3). To emphasize a particular constraint on the varied trajectories, we write Hamilton's principle as \[\tag{2} \left(\delta S\right)_{T} \; =\; 0\quad , \] where the constraint of fixed travel time \(T\) is written explicitly, and the constraint of fixed end-positions \(q_A\) and \(q_B\) is left implicit.
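The content of (1) and (2) can be checked numerically. The following is a minimal sketch, not part of the original article, assuming the one-dimensional harmonic oscillator with \(m = k = 1\): the Hamilton action of a short true trajectory is compared with the action of nearby trial trajectories that keep the end positions and the travel time fixed.

```python
# Minimal numerical sketch of Hamilton's principle (2) for the one-dimensional
# harmonic oscillator with m = k = 1 (so L = (1/2) qdot^2 - (1/2) q^2).
# The end positions, travel time and perturbations are illustrative choices.
import numpy as np

m = k = 1.0
w = np.sqrt(k / m)
T = 0.25 * (2 * np.pi / w)            # shorter than the kinetic focus (half period)
t = np.linspace(0.0, T, 4001)
dt = t[1] - t[0]
qA, qB = 0.0, 1.0

# True trajectory q = C1 sin(w t) + C2 cos(w t) meeting q(0) = qA, q(T) = qB.
C2 = qA
C1 = (qB - C2 * np.cos(w * T)) / np.sin(w * T)
q_true = C1 * np.sin(w * t) + C2 * np.cos(w * t)

def action(q):
    """Trapezoid-rule approximation of S = int_0^T (K - V) dt."""
    qdot = np.gradient(q, t)
    L = 0.5 * m * qdot**2 - 0.5 * k * q**2
    return float(np.sum(0.5 * (L[1:] + L[:-1])) * dt)

S_true = action(q_true)
rng = np.random.default_rng(1)
for _ in range(3):
    # Perturbations built from sin(n pi t / T) vanish at both end points,
    # so the end positions and the travel time stay fixed.
    amps = 0.02 * rng.standard_normal(3)
    bump = sum(a * np.sin((n + 1) * np.pi * t / T) for n, a in enumerate(amps))
    print(f"S(perturbed) - S(true) = {action(q_true + bump) - S_true:+.2e}")
# Every difference is positive: this short true trajectory is a local minimum of S.
```

Because this trajectory is shorter than its kinetic focus (Section 5), every such perturbation raises the action, so the true path is a local minimum of \(S\).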
We will consider other variational principles below, but all will have fixed \(q_A\) and \(q_B\) (quantities other than \(T\) will also be constrained) so we will always leave the constraint of fixed \(q_A\) and \(q_B\) implicit. (In Section 7 we mention generalized action principles with relaxed end-position constraints.) Some smoothness restrictions are also often imposed on the trial trajectories. It is clear from (1) that \(S\) is a functional of the trial trajectory \(q(t)\), often denoted \(S[q(t)]\), and in (2) \(\delta S\) denotes the first-order variation in \(S\) corresponding to the small variation \(\delta q(t)\) in the trial trajectory: i.e., \( \ S[q(t) + \delta q(t)] - S[q(t)] = \delta S + \delta^2 S + ... \), where \( \delta S \) is first-order in \( \delta q(t), \delta^2 S \) is second-order in \( \delta q(t) \), etc. Explicit expressions for these variations of \(S\) in terms of \(\delta q(t)\) are not needed here, but are given in calculus of variations and advanced mechanics texts, and in Gray and Taylor (2007), for example. The Hamilton principle means that the first-order variation of the action \( \delta S \) vanishes for any small trajectory variation \( \delta q(t) \) around a true trajectory consistent with the given constraints. The quantities \( \delta S \) and \( \delta^2 S \) are usually referred to simply as the first and second variations of \(S\), respectively. The action \(S\) is stationary for a true trajectory (first variation vanishes for all \(\delta q(t)\)), and whether \(S\) is a minimum depends on whether the second variation is positive definite for all \( \delta q(t) \) (see Section 5). The second major version of the action is Maupertuis' action \(W\ ,\) where \[\tag{3} W\; =\; \int _{q_{A} }^{q_{B} }pdq\; =\; \int _{0}^{T}2\, K\, d t\quad , \] where the first (time-independent) form is the general definition, with \(p\; =\; \partial L/\partial \dot{q}\) the canonical momentum (equal to the ordinary momentum in many cases of interest), and \(pdq\) stands for \(p_1dq_1 + p_2dq_2 + \ldots + p_fdq_f\) in general. The second (time-dependent) form for \(W\) in (3) is valid for normal systems in which the kinetic energy \(K\) is quadratic in the velocity components \(\dot{q}_{1} \; ,\; \dot{q}_{2} \; ,\; \cdots \; ,\; \dot{q}_{f} \ .\) The Maupertuis principle states that for true trajectories \(W\) is stationary on trial trajectories with fixed end positions \(q_A\) and \(q_B\) and fixed energy \(E = K+V\ .\) Following our earlier conventions, we write this principle as \[\tag{4} \left(\delta W\right)_{E} \; =\; 0\quad . \] Note that \(E\) is fixed but \(T\) is not in Maupertuis' principle (4), the reverse of the conditions in Hamilton's principle (2). Solution of the variational problem posed by Hamilton's principle (2) yields the true trajectories \(q(t)\ .\) Solution of Maupertuis' variational equation (4) using the time-dependent (second) form of \(W\) in (3) also yields the true trajectories, whereas using the time-independent (first) form of \(W\) in (3) yields (in multidimensions) true orbits, i.e. spatial shape of the true paths. In the latter case, in two and more dimensions, the action \(W\) can be rewritten as an integral involving the arc length along the orbit (Jacobi's form), and the problem then resembles a geodesic or reciprocal isoperimetric problem (Lanczos 1970). 
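The equality of the two forms of \(W\) in (3) along a true trajectory can also be verified numerically. A minimal sketch, assuming the one-dimensional harmonic oscillator with \(m = k = 1\) and an illustrative amplitude: the first form uses only the path and the conserved energy, the second uses the time-dependent solution.

```python
# Numerical check of the two forms of Maupertuis' action W in (3), for the
# one-dimensional harmonic oscillator with m = k = 1 and an illustrative
# amplitude A, along the true trajectory from x = 0 to the turning point x = A.
import numpy as np

m = k = 1.0
A = 2.0
E = 0.5 * k * A**2                 # energy fixes the turning point
w = np.sqrt(k / m)

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Time-independent form: W = int p dq, with p(q) = sqrt(2 m (E - V(q)))
# -- only the path and the energy are needed.
x = np.linspace(0.0, A, 20001)
p = np.sqrt(np.maximum(2.0 * m * (E - 0.5 * k * x**2), 0.0))
W_path = trapezoid(p, x)

# Time-dependent form: W = int 2 K dt along x(t) = A sin(w t), one quarter period.
t = np.linspace(0.0, 0.5 * np.pi / w, 20001)
v = A * w * np.cos(w * t)
W_time = trapezoid(m * v**2, t)

print(W_path, W_time)              # both ~ pi A^2 sqrt(m k) / 4 ~ 3.1416
```

Both numbers agree with the closed-form value \(\pi A^{2}\sqrt{mk}/4\) for this quarter oscillation.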
In one dimension, for simplicity assume \(q\) is an inertial Cartesian coordinate and note that since the momentum \(p\) is a simple function of \(q\) in the first form for \(W\) due to the constraint of fixed energy \( E = p^2/2m + V(q) \), the only freedom for variation is instantaneous momentum reversals, so that the principle is essentially uninformative, i.e., provides essentially no information beyond what is contained in the assumed conservation of energy. In one dimension, one can find the one or more true trajectories \(q(t)\) from energy conservation and the two end-position values. (The generalization of Maupertuis' principle discussed in Section 7 does not have the defect of being uninformative for one-dimensional systems, and energy conservation is not assumed but derived from the generalized principle for all time-invariant systems, just as it is for Hamilton's principle.) In all cases the solutions for the true trajectories and orbits can be obtained directly from the Hamilton and Maupertuis variational principles (see Section 8), or from the solution of the corresponding Euler-Lagrange differential equations (see Section 3) which are equivalent to the variational principles. Hamilton's principle is applicable to both conservative systems and nonconservative systems where the Lagrangian \(L\, \left(q\; ,\; \dot{q}\; ,\; t\right)\) is explicitly time-dependent (e.g. due to a time-dependent potential \(V(q,t)\)), whereas the form (4) of Maupertuis' principle is restricted to conservative systems (it can be generalized – see Gray et al. 2004). Systems with velocity-dependent forces require special treatment. Dissipative nonconservative systems are discussed in Section 4. Magnetic and relativistic systems are discussed by Jackson (1999), and in Section 9 below. For conservative systems the two principles (2) and (4) are related by a Legendre transformation, as discussed in Section 6. An appealing feature of the action principles is their brevity and elegance in expressing the laws of motion. They are valid for any choice of coordinates (i.e., they are covariant), and readily yield conservation laws from symmetries of the system (Section 12). They generate covariant equations of motion (Section 3), but they also supply an alternative and direct route to finding true trajectories which bypasses equations of motion; this route can be implemented analytically as an approximation scheme (Section 8), or numerically to give essentially exact trajectories (Beck et al. 1989, Basile and Gray 1992, Marsden and West 2001). Action principles transcend classical particle and rigid body mechanics and extend naturally to other branches of physics such as continuum mechanics (Section 11), relativistic mechanics (Section 9), quantum mechanics (Section 10), and field theory (Section 11), and thus play a unifying role. Unifying the various laws of physics with the help of action principles has been an ongoing activity for centuries, not always successful, e.g., successful for particle mechanics and geometric optics by Maupertuis and Hamilton (Yourgrau and Mandelstam 1968), and not completely successful for particle mechanics and thermodynamics by Helmholtz, Boltzmann and others (Gray et al. 2004), and continues to this day with some of the modern quantum field theories. The action principles have occasionally assisted in developing new laws of physics (see comments at the end of Section 9). 
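The "direct route" just mentioned can be sketched in a few lines: discretize a trial trajectory, evaluate a discretized Hamilton action, and make it stationary over the interior grid points. The sketch below only illustrates the idea and is not the algorithm of the papers cited; it again uses the one-dimensional harmonic oscillator with \(m = k = 1\) and an illustrative travel time short enough that the action is a minimum.

```python
# A minimal sketch of the "direct route": find a short true trajectory by
# making a discretized Hamilton action stationary (here, minimal) over the
# interior grid points, bypassing the equation of motion.  This is only an
# illustration of the idea, not the algorithm of the papers cited above.
import numpy as np
from scipy.optimize import minimize

m = k = 1.0
T = 1.0                          # shorter than the kinetic focus, so S is a minimum
N = 80
t = np.linspace(0.0, T, N + 1)
dt = T / N
qA, qB = 0.0, 1.0

def S(interior):
    q = np.concatenate(([qA], interior, [qB]))
    qdot = np.diff(q) / dt                   # velocity on each sub-interval
    qmid = 0.5 * (q[1:] + q[:-1])            # midpoint value for the potential
    return np.sum(0.5 * m * qdot**2 - 0.5 * k * qmid**2) * dt

guess = np.linspace(qA, qB, N + 1)[1:-1]     # straight-line initial guess
res = minimize(S, guess, method="BFGS")
q_num = np.concatenate(([qA], res.x, [qB]))

w = np.sqrt(k / m)
q_exact = (qB / np.sin(w * T)) * np.sin(w * t)     # true trajectory with q(0) = 0
print("max |error| =", np.max(np.abs(q_num - q_exact)))   # small, of order 1e-4 here
```

Refining the grid reduces the error roughly as \(dt^2\), which is typical of this midpoint-style discretization of the action.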
The Hamilton and Maupertuis principles are not applicable, however, if the system is nonholonomic, and usually not if the system is dissipative (Section 4). Various aspects of the extensive history of action principles and variational principles in general are discussed in the historical references at the end of this article. Maupertuis' principle is older than Hamilton's principle by about a century (1744 vs 1834). The original formulation of Maupertuis was vague and it is the reformulation due to Euler and Lagrange that is described above. Maupertuis' motivation was to rationalize the laws of both ray optics (Fermat's principle of least time (1662)) and mechanics with a metaphysical teleological argument (using design or purpose to explain natural phenomena) that "nature acts as simply as possible". Today we recognize that a principle of least action is partly conventional (changing the sign in the definition of the action leaves the equations of motion intact but changes the principle to one of greatest action), that in general action is least only for sufficiently short trajectories (Section 5), that the principle is not valid for all force laws (Section 4), and, when valid, it is a mathematical consequence of the equations of motion. No a priori physical argument requires a principle of least or stationary action in classical mechanics, but the classical principle is a consequence of quantum mechanical variational principles in the classical limit (Section 10). Quantum mechanics itself can be based on postulates of action principles (Section 10), and in new fields one often simply postulates an action principle. At the end of Section 9 we briefly discuss the history of the role of action principles in establishing new laws of physics, and at the end of the preceding section we mention the long history of using action principles in attempting to unify the various laws of physics. As with Maupertuis, unifying the treatments of geometric optics and mechanics motivated Hamilton. He used both actions, \(W\) and \(S\), to find paths of rays in optics and paths of particles in mechanics. Hamilton introduced the action \(S\) and its variational principle described above, and an extension he called the law of varying action, which is closely related to a generalization of Hamilton's principle which we call the unconstrained Hamilton principle in Section 7. Just as the true paths satisfy the Euler-Lagrange differential equation discussed in the next section, from his law of varying action Hamilton showed that the action \(S\) for true paths, when considered as a function of the final end-point variables \( q_B \) and \( T \), satisfies a partial differential equation, nowadays called the time-dependent Hamilton-Jacobi equation. He found the corresponding time-independent Hamilton-Jacobi equation for action \(W\) for true paths, and the optical analogues of both Hamilton-Jacobi equations. He also reformulated the second-order Euler-Lagrange equation of motion for coordinate \(q(t)\) as a pair of first-order differential equations for coordinate \(q(t)\) and momentum \(p(t)\), with the Hamiltonian \(H(q,p)\) replacing the Lagrangian \(L(q,\dot{q})\) (via equation (6) below), giving what are called the Hamilton or canonical equations of motion, i.e., \( \dot{q} = \partial H/\partial p \) and \( \dot{p} = -\partial H/\partial q \). These form the basis of modern Hamiltonian mechanics (Goldstein et al. 
2002), with its large array of useful concepts and techniques such as canonical transformations, action-angle variables, integrable vs nonintegrable systems (Section 10), Poisson brackets, canonical perturbation theory, canonical or symplectic invariants, and flow in phase space and Liouville's theorem. (In general there is one pair of Hamilton canonical equations for each pair of canonical variables \(q_{\alpha},p_{\alpha}\), with \( \alpha = 1,2,...,f \). The set of canonical variables \(q_\alpha,p_\alpha \) defines the \(2f\)-dimensional phase space of the system. Because of the uniqueness of the solution of the first-order Hamilton equations for the trajectory starting from any point \( (q_{\alpha}(0), p_{\alpha}(0)) \), a set of trajectories in phase space \((q_{\alpha}(t), p_{\alpha}(t))\) flows without crossing, thus behaving like the flow lines of an incompressible fluid, which is a simple version of Liouville's theorem.) Using (6) we can express the Lagrangian in terms of \(q\) and \(p\) and the Hamiltonian, instead of \(q\) and \(\dot{q}\), and the Hamilton action principle then yields directly the Hamilton equations of motion as its Euler-Lagrange equations (Goldstein et al. 2002). This last result was given implicitly by Hamilton in his papers and somewhat more explicitly by Jacobi in his lectures (Clebsch 1866). To distinguish this form of the Hamilton principle from the usual one involving the Lagrangian \( L(q,\dot{q}) \), it is sometimes called the phase space form of Hamilton's principle. The various versions of quantum mechanics all developed from corresponding classical mechanics results of Hamilton, i.e., wave mechanics from the Hamilton-Jacobi equation (Schrödinger), matrix and operator mechanics from the Hamilton equations of motion and Poisson Brackets (Heisenberg and Dirac), and the path integral from Hamilton's action (Dirac and Feynman). Over the years, and even recently, a number of reformulations and generalizations of the basic Maupertuis and Hamilton action principles have been given (see Gray et al. 1996a, 2004 for extensive discussions and references). In Section 7 we discuss several of the most recent generalizations. Euler-Lagrange Equations Using standard calculus of variations techniques one can carry out the first-order variation of the action, set the result to zero as in (2) or (4), and thereby derive differential equations for the true trajectory, called the Euler-Lagrange equations, which are equivalent to the variational principles. For Hamilton's principle, the corresponding Euler-Lagrange equation of motion (often called simply Lagrange's equation) is (see, e.g., Brizard 2008, Goldstein et al. 2002) \[\tag{5} \frac{d}{d t} \; \left(\frac{\partial L}{\partial \dot{q}_{\alpha}} \right)\; -\; \frac{\partial L}{\partial q_{\alpha}} \; =\; 0\quad , \] where \(\alpha = 1,2,...,f\ .\) As with the action principles, eqs.(5) are covariant (i.e. valid for any choice of the coordinates \( q_\alpha \)), and can be written out explicitly as coupled second-order differential equations for the \( q_\alpha \)'s. For particle systems these equations reduce to the standard Newton equations of motion if one chooses Cartesian coordinates in an inertial frame. The time-dependent version of Maupertuis' principle yields the same equation of motion for the space-time trajectories \(q(t)\ .\) The time-independent version of Maupertuis' principle yields (Lanczos 1970, Landau and Lifshitz 1969) corresponding differential equations for the true spatial paths (orbits). 
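Equation (5) can be generated symbolically once a Lagrangian is written down. A minimal sketch, assuming the quartic oscillator \(V(x) = (1/4)Cx^4\) that appears in Figure 1 below; the symbol names are illustrative choices.

```python
# Symbolic construction of the Euler-Lagrange equation (5) for one coordinate,
# here for the quartic oscillator V(x) = (1/4) C x^4 used in Figure 1 below.
# A minimal sketch; the symbol names are illustrative.
import sympy as sp

t, m, C = sp.symbols("t m C", positive=True)
x = sp.Function("x")(t)
xdot = sp.Derivative(x, t)

L = sp.Rational(1, 2) * m * xdot**2 - sp.Rational(1, 4) * C * x**4

dL_dxdot = sp.diff(L, xdot)                    # dL/d(xdot) = m*xdot
dL_dx = sp.diff(L, x)                          # dL/dx = -C*x**3
euler_lagrange = sp.diff(dL_dxdot, t) - dL_dx  # d/dt(dL/dxdot) - dL/dx

print(sp.Eq(euler_lagrange, 0))                # m*x'' + C*x**3 = 0
```

Setting the printed expression to zero gives \(m\ddot{x} + Cx^3 = 0\), the equation of motion obeyed by the trajectories shown in Figure 1.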
As a simple example, consider the Hamilton principle for the one-dimensional harmonic oscillator with the usual inertial frame Cartesian coordinate \(x\). The Lagrangian is \(L \; = \; K \;- \; V \; = \; (1/2)m \dot{x}^2 \, - \; (1/2)k x^2 \ ,\) where m is the mass and k is the force constant. The partial derivatives of \( L \) are \(\partial L/ \partial \dot{x} \; = \; m \dot{x} \) and \( \partial L/ \partial x \, = \; -kx \) so that the Euler-Lagrange equation (5) gives \( m \ddot{x} \; + \; kx \; = \; 0 \ ,\) which is Newton's equation of motion for this system. The well known general solution is \( x(t) \; = \; C_1 sin \omega t \; + \; C_2 cos \omega t \ ,\) where \( \omega \; = \; (k/m)^{1/2} \) is the frequency. The constants \( C_1 \) and \( C_2 \) are chosen to satisfy the constraints \( x \; = \; x_A \) at \( t \; = \; 0 \) and \( x \; = \; x_B \) at \( t \; = \; T -\) see next paragraph. Strictly speaking, because the action principles are formulated as boundary value problems (\( q \) is specified at two points \(q_A\) and \(q_B\)) and not as initial value problems (\( q \) and \( \dot{q} \) are specified at one point \( q_A \)), there may be more than one solution: there can in fact be zero, one, two, ..., up to an infinite number of solutions in particular problems. For example, applying the Hamilton principle to the one dimensional harmonic oscillator with coordinate \(x\) (see preceding paragraph) and specifying \(x = 0\) at \(t = 0\) and \(x = 0\) at \(t = T\) (one period \(2\pi/\omega\)) gives an infinite number of solutions, i.e. \(x(t) = A sin \omega t\) with one solution for each value of the amplitude \(A\), which is arbitrary. The same system with the constraints \(x = 0\) at \(t = 0\) and \(x = A\) at \(t = T/4\) has the unique solution \(x(t) = A sin \omega t\ ,\) and for the constraints \(x = 0\) at \(t = 0\) and \(x = C\) at \(t = T/2\) no solution exists for nonzero \(C\). In practice, one usually has initial conditions in mind, where the solution is unique, and selects the appropriate solution of the corresponding boundary value problem, or imposes the initial conditions directly on the solution of the Euler-Lagrange equation of motion. Thus for the harmonic oscillator example with specified initial conditions, say \(x=0\) and \( \dot{x}=v_0\) at \(t=0\), we simply choose \(C_1 = v_0/\omega\) and \(C_2=0\) in the general solution given in the paragraph above. Another system exhibiting multiple solutions under space-time boundary condition constraints is the quartic oscillator, discussed in Sections 5 and 8. In Fig.1 note that two true trajectories (labelled \(1\) and \(0\)) are shown connecting the initial space-time event P at the origin and the final space-time event denoted by a square symbol where the two trajectories intersect. The true trajectories \(1\) and \(0\) shown on the figure are the two of lowest energies having P and the square symbol as space-time end-events. Additional true trajectories with higher energies also satisfy the boundary conditions. As an example giving multiple solutions with Maupertuis principle constraints (specified initial and final positions, and specified energy), consider throwing a ball in a uniform gravitational field from a specified position P and with specified energy \(E\) (which corresponds to a specific initial speed). Ignore air friction. 
If we throw the ball twice, in the same vertical plane, with two different angles of elevation of the initial velocity, say one with 45 degrees and the other with 75 degrees, but the same initial speed, the two parabolic spatial paths will recross at some point in the plane, call it R. Thus specifying P and R and \(E\) does not in general determine a unique true trajectory, as we have found two true trajectories here with the same values of P and R and \(E\). We see in these examples of multiple solutions the roles of the differing constraints in the Hamilton and Maupertuis principles. In the Hamilton principle examples the multiple solutions have the same prescribed travel time \(T\), and differ in energy \(E\) which is not prescribed. In the Maupertuis principle example, the opposite is true: the multiple solutions have the same prescribed \(E\), and differ in \(T\) which is not prescribed. The complementarity between prescribed \(T\) and prescribed \(E\) is discussed in Section 6. Restrictions to Holonomic and Nondissipative Systems The action principles (2) and (4) are restricted to holonomic systems, i.e. systems whose geometrical constraints (if any) involve only the coordinates and not the velocities. Simple examples of holonomic and nonholonomic systems are a particle confined to a spherical surface, and a wheel confined to rolling without slipping on a horizontal plane, respectively. Attempts to extend the usual action principles to nonholonomic systems have been controversial and ultimately unsuccessful (Papastavridis 2002). Hamilton's principle in its standard form (2) is not valid, but a more general and correct Galerkin-d'Alembert form has been derived. For a holonomic system with \(n \) coordinates and \(c\) constraints, the number of independent coordinates (degrees of freedom) is \( f = n-c \). Thus for the example of the particle confined to a spherical surface we have \(n = 3 \) coordinates, \(c = 1 \) constraint, and hence \(f = 2 \) independent coordinates. These can be chosen as any two of the particle's three Cartesian coordinates with respect to axes with origin at the center of the sphere, or as latitude and longitude coordinates on the sphere surface, etc. One can implement holonomic constraints as in Sections 1 and 3 by using a Lagrangian \(L \) with any set of \( f \) independent coordinates \(q \), or one can treat the \(n \) coordinates symmetrically by expressing \(L \) as a function of all of them and using the method of Lagrange multipliers (Lanczos 1970, Morse and Feshbach 1953, Fox 1950) to take account of the constraints. In essence, the Lagrange multipliers relax the constraints, with one multiplier for each constraint relaxed. In the literature (e.g., Dirac 1964) a second type of velocity-dependent constraint, nongeometic and called "kinematic" in Gray et al. (2004), has been discussed. The usual action principles are valid for this type of velocity-dependent constraint. As simple examples, for conservative systems one could impose the additional constraint of fixed energy \( K(\dot{q}) \; + \; V(q) \) on the trial trajectories in the Hamilton principle, and the fundamental constraints in the Hamilton and Maupertuis principles involve the velocities. The Dirac-type constraints are implemented by the method of Lagrange multipliers. In Section 7 we use Lagrange multipliers to relax the fundamental constraints of the Hamilton and Maupertuis principles. In general, the action principles do not apply to dissipative systems, i.e. 
systems with frictional forces. However, for some dissipative systems, including all one-dimensional ones, Lagrangians have been shown to exist, and Hamilton's principle then applies (see Gray et al. 2004 for a brief review, and Chandrasekhar et al. 2007 for more recent developments). More generally, the question of whether a Lagrangian and corresponding action principle exist for a particular dynamical system, given the equations of motion and the nature of the forces acting on the system, is referred to as the "inverse problem of the calculus of variations" (Santilli 1978). If a Lagrangian \( L \) does exist, it will not be unique. For example, it is obvious from (1) and (2), or (5), that \( c_1L \) and \( L + c_2 \) are equally good Lagrangians for any constants \( c_1 \) and \( c_2 \). It is also clear from (1) and (2) that we can add to \( L \) a total time-derivative of any function of the coordinates and time (i.e. \( dF/dt \) for arbitrary \( F(q,t)\)) to obtain another valid Lagrangian; the action integral \(S\) will only change by the addition of constant boundary value terms \( F(q_B,T) - F(q_A,0)\), so that the variation of the action will be unchanged. Additional freedom of choice will also often exist. For example, for a free particle in one dimension with \( q \) the Cartesian coordinate in an inertial frame, it is easy to check that, in addition to \( L = K \) (the traditional choice), choosing \(L \) equal to the square of the kinetic energy \( K = \frac{1}{2}m \dot{q}^2 \) also gives the correct equation of motion \( \ddot{q} = 0 \). By putting additional conditions on the Lagrangian we can narrow down the choice. Thus for the free particle in one dimension, using an inertial frame of reference and by requiring the Lagrangian function to be invariant under Galilean transformations, i.e. have the same functional form in all inertial frames, we can rule out all but the kinetic energy, up to the free multiplicative and additive constants \( c \) discussed above (Landau and Lifschitz 1969). Requiring that \(S \) be a minimum for short true trajectories and not a maximum (see next section) will fix the sign of the multiplicative free constant \( c \) in \( cL \), requiring that the Lagrangian have the dimension of energy will fix the multiplicative and additive free constants in the Lagrangian up to numerical factors, and requiring that the Lagrangian approach zero for zero velocity and a particular value of the coordinate \(q \) (such as infinity or zero) will fix the additive free constant \(c \) in \( L + c \). In this review we restrict ourselves to the most common case where the Lagrangian depends on the coordinates and their first derivatives, but when higher derivatives occur in the Lagrangian the Euler-Lagrange equation generalizes in a natural way (Fox, 1950). As an example, in considering the vibrational motion of elastic continuum systems (Section 11) such as beams and plates, the standard Lagrangian contains spatial second derivatives, and the corresponding Euler-Lagrange equation of motion contains spatial fourth derivatives (Reddy, 2002). Figure 1: Space-time diagram for a family of true trajectories \(x(t)\) for the quartic oscillator \([V(x) = (1/4)Cx^4]\) starting at \(P(0,0)\) with \(v_0 > 0\ .\) Kinetic foci (\( Q_i \)) of the trajectories are denoted by open circles. 
For this particular oscillator the kinetic focus occurs approximately at a fraction 0.646 of the half-period \(T_0/2\ ,\) illustrated here for trajectory \(0\ .\) The kinetic foci of all true trajectories of this family lie along the heavy gray line, the caustic, which is approximately a hyperbolic curve for this oscillator. Squares indicate recrossing events of true trajectory \(0\) with the other two true trajectories. (From Gray and Taylor 2007.) When Action is a Minimum The action \(S\) (or \(W\)) is stationary for true trajectories, i.e., the first variation \( \delta S \) vanishes for all small trajectory variations consistent with the given constraints. If the second variation is positive definite \(( \delta^2 S > 0 )\) for all such trajectory variations, then \(S\) is a local minimum; otherwise it is a saddle point, i.e., at second order the action is larger for some nearby trial trajectories and smaller for others, compared to the true trajectory action. As defined in Section 1, action is never a local maximum, as we shall discuss. (In relativistic mechanics (see Section 9) two sign conventions for the action have been employed, and whether the action is never a maximum or never a minimum depends on which convention is used. In our convention it is never a minimum.) We discuss here the case of the Hamilton action \(S\) for one-dimensional (\(1\)D) systems, and refer to Gray and Taylor (2007) for discussions of Maupertuis' action \(W\ ,\) and \(2\)D etc. systems. For some \(1\)D potentials \(V(x)\) (those with \( \partial^2V/ \partial x^2 \leq 0\) everywhere), e.g. \(V(x) = 0\ ,\) \(V(x) = mg x\ ,\) and \(V(x) = -Cx^2\ ,\) all true trajectories have minimum \(S\ .\) For most potentials, however, only sufficiently short true trajectories have minimum action; the others have an action saddle point. "Sufficiently short" means that the final space-time event occurs before the so-called kinetic focus event of the trajectory. The latter is defined as the earliest event along the trajectory, following the initial event, where the second variation \( \delta^2 S \) ceases to be positive definite for all trajectory variations, i.e., where \(\delta^2S = 0\ \) for some trajectory variation. Establishing the existence of a kinetic focus using this criterion is discussed by Fox (1950). An equivalent and more intuitive definition of a kinetic focus can be given. As an example, consider a family of true trajectories \( x(t,v_0) \) for the quartic oscillator with \(V(x) = (1/4) Cx^4\ ,\) all starting at \(P (x = 0 \) at \( t = 0)\ ,\) and with various initial velocities \(v_0 > 0\ .\) Three trajectories of the family, denoted \(0\ ,\) \(1\ ,\) and \( 2\ ,\) are shown in Figure 1. These true trajectories intersect each other – note the open squares in Figure 1 showing intersections of trajectories \(1\) and \( 2\) with trajectory \(0\ .\) The kinetic focus event \(Q_0\) of the true trajectory \(0\ ,\) with starting event \(P\ ,\) is the event closest to \(P\) at which a second true trajectory, with slightly different initial velocity at \(P\ ,\) intersects trajectory \(0\ ,\) in the limit for which the two trajectories coalesce as their initial velocities at \(P\) are made equal. 
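This intersection-based definition can be implemented directly. A minimal numerical sketch, assuming the quartic oscillator of Figure 1 with \(m = C = 1\) and an illustrative initial velocity: two true trajectories leave \(P(0,0)\) with nearly equal initial velocities, and their first recrossing time approximates the kinetic focus time.

```python
# Numerical illustration of the kinetic focus just defined, for the quartic
# oscillator of Figure 1 with m = C = 1 (illustrative values).  Two true
# trajectories leave P(0,0) with nearly equal initial velocities; their first
# recrossing approximates the kinetic focus of the slower trajectory.
import numpy as np
from scipy.integrate import solve_ivp

m = C = 1.0

def rhs(t, y):                       # y = (x, v);  m x'' = -C x^3
    return [y[1], -C * y[0] ** 3 / m]

def x_of_t(v0, t):
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, v0], t_eval=t,
                    rtol=1e-10, atol=1e-12)
    return sol.y[0]

t = np.linspace(0.0, 6.0, 20001)
v0, eps = 1.0, 1e-4
x0 = x_of_t(v0, t)
x1 = x_of_t(v0 + eps, t)

half_period = t[np.argmax((x0 <= 0.0) & (t > 0.1))]      # first return to x = 0
t_Q = t[np.argmax((x1 - x0 <= 0.0) & (t > 0.1))]         # first recrossing

print("t_Q / (T/2) =", t_Q / half_period)                # ~ 0.646 (see caption)
```

The printed ratio comes out close to the value 0.646 quoted in the caption of Figure 1.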
Based on this definition a simple prescription for finding the kinetic focus can be derived (Gray and Taylor 2007), i.e., \(\partial x(t,v_0)/ \partial v_0 = 0\ ,\) and for a quartic oscillator trajectory starting at \(P(0,0)\) the kinetic focus \(Q\) occurs at time \( t_Q \) given approximately by \(t_Q = 0.646(T/2)\ ,\) where \(T\) is the period, as shown in Figure 1 for trajectory \(0\). This is the first kinetic focus, usually called simply the kinetic focus. Subsequent kinetic foci may exist but we will not be concerned with them. The other trajectories shown in Figure 1 have their own kinetic foci, i.e. \(Q_1\) for trajectory \(1\) and \(Q_2\) for trajectory \( 2\ .\) The locus of all the kinetic foci of the family is called the caustic (it is an envelope), and is shown as the heavy gray line in Figure 1. Thus, for trajectory \(0\) in Figure 1, if the trajectory terminates before kinetic focus \(Q_0\ ,\) the action \(S\) is a minimum; if the trajectory terminates beyond \(Q_0\ ,\) the action is a saddle point. By an argument due originally to Jacobi, it is easy to see intuitively that action \(S\) can never be a local maximum (Morin 2008, Gray and Taylor 2007). Note that for any true trajectory the action \(S\) in (1) can be increased by considering a varied trajectory with wiggles added somewhere in the middle. The wiggles are to be of very high frequency and very small amplitude so that there is increased kinetic energy \(K\) compared to the original trajectory but only a small change in potential energy \(V\). (We also ensure the overall travel time \(T\) is kept fixed.) The Lagrangian \(L = K - V\) in the region of the wiggles is then larger for the varied trajectory and so is the action integral \(S\) over the time interval \(T\). Thus \(S\) cannot be a maximum for the original true trajectory. A similar intuitive argument due originally to Routh shows that action \(W\) also cannot be a local maximum for true trajectories (Gray and Taylor 2007). For the purpose of determining the true trajectories, the nature of the stationary action (minimum or saddle point) is usually not of interest. However, there are situations where this is of interest, such as investigating whether a trajectory is stable or unstable (Papastavridis 1986), and in semiclassical mechanics where the phase of the propagator (Section 10) depends on the true classical trajectory action and its stationary nature; the latter dependence is expressed in terms of the number of kinetic foci occurring between the end-points of the true trajectory (Schulman 1981). In general relativity kinetic foci play a key role in establishing the Hawking-Penrose singularity theorems for the gravitational field (Wald 1984). Kinetic foci are also of importance in electron and particle beam optics. Finally, in seeking stationary action trajectories numerically (Basile and Gray 1992, Beck et al. 1989, Marsden and West 2001), it is useful to know whether one is seeking a minimum or a saddle point, since the choice of algorithm often depends on the nature of the stationary point. If a minimum is being sought, comparison of the action at successive stages of the calculation gives an indication of the error in the trajectory at a given stage since the action should approach the minimum value monotonically from above as the trajectory is refined. The error sensitivity is, unfortunately, not particularly good, as, due to the stationarity of the action, the error in the action is of second order in the error of the trajectory. 
Thus a relatively large error in the trajectory can produce a small error in the action. Relation of Hamilton and Maupertuis Principles For conservative (time-invariant) systems the Hamilton and Maupertuis principles are related by a Legendre transformation (Gray et al. 1996a, 2004). Recall first that the Lagrangian \(L \left(q\; ,\; \dot{q}\right)\) and Hamiltonian \(H(q, p)\) are so-related, i.e. \[\tag{6} H \left(q\; ,\; p\right)\; =\; p \dot{q}\; -\; L \left(q\; ,\; \dot{q}\right)\quad , \] where in general \( p \dot{q} \) stands for \( p_1 \dot{q_1} + p_2 \dot{q_2} + \; ... + \; p_f \dot{q_f}\). If we integrate (6) with respect to \(t\) along an arbitrary virtual or trial trajectory between two points \(q_A\) and \(q_B\ ,\) and use the definitions (1) and (3) of \(S\) and \(W\) we get \(\bar{E}T = W - S\ ,\) or \[\tag{7} S\; =\; W\; -\; \bar{E}\; T\quad , \] where \(\bar{E}\; \equiv \; \int _{0}^{T}d t\; H/T \) is the mean energy along the trial trajectory. (Along a true trajectory of a conservative system, with \(\bar{E}= E =\) const, (7) reduces to the well-known relation (Goldstein et al. 2002) \(S=W-ET\ .\)) From the Legendre transformation relation (7) between \(S\) and \(W\ ,\) for conservative systems one can derive Hamilton's principle from Maupertuis' principle, and vice-versa (Gray et al., 1996a, 2004). The two action principles are thus equivalent for conservative systems, and related by a Legendre transformation whereby one changes between energy and time as independent constraint parameters. The existence in mechanics of two actions and two corresponding variational principles which determine the true trajectories, with a Legendre transformation between them, is analogous to the situation in thermodynamics (Gray et al. 2004). There, as established by Gibbs, one introduces two free energies related by a Legendre transformation, i.e. the Helmholtz and Gibbs free energies, with each free energy satisfying a variational principle which determines the thermal equilibrium state of the system. We again restrict the discussion to time-invariant (conservative) systems. If we vary the trial trajectory \(q(t)\) in (7), with no variation in end positions \(q_A\) and \(q_B\) but allowing a variation in end-time \(T\), the corresponding variations \(\delta S\ ,\) \(\delta W\ ,\) \(\delta \bar{E}\) and \(\delta T\) for an arbitrary trial trajectory are seen to be related by \[\tag{8} \delta S\; +\; \bar{E}\; \delta \; T\; =\; \delta \; W\; -\; T \; \delta \; \bar{E} \; \; . \] Next one can show (Gray et al. 1996a) that the two sides of (8) separately vanish for variations around a true trajectory. The left side of (8) then gives \(\delta S + E \delta T = 0\ ,\) since \(\bar{E} = E\) (a constant) on a true trajectory for conservative systems, which is called the unconstrained Hamiltonian principle. This can be written in the standard form for a variational relation with a relaxed constraint\[\delta S = \lambda \delta T\ ,\] where \(\lambda\) is a constant Lagrange multiplier, here determined as \(\lambda = -E\) (negative of energy of the true trajectory). If we constrain \(T\) to be fixed for all trial trajectories, then \(\delta T = 0\) and we have (\(\delta S)_T = 0\ ,\) the usual Hamilton principle. If instead we constrain \(S\) to be fixed we get (\(\delta T)_S = 0\ ,\) the so-called reciprocal Hamilton principle. 
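Since (7) follows from integrating the identity (6) along an arbitrary trial trajectory, it can be checked numerically for any virtual path, not just a true one. The following minimal sketch is illustrative only: it uses a 1D harmonic oscillator with m = k = 1 and an arbitrary trial path that is not a solution of the equations of motion, computes \(S\ ,\) \(W\) and \(\bar{E}\) by quadrature, and confirms \(S = W - \bar{E}T\ .\)

```python
import numpy as np

# Check S = W - Ebar*T (eq. 7) on an arbitrary virtual trajectory of a 1D
# harmonic oscillator with m = k = 1 (illustrative values); the relation is an
# identity for any trial path, true or not, because L = p*qdot - H pointwise.
m, k, T = 1.0, 1.0, 3.0
t = np.linspace(0.0, T, 200001)
q = 0.7*np.sin(1.3*t) + 0.2*t**2      # arbitrary trial path, not a true trajectory
qdot = np.gradient(q, t)

K = 0.5*m*qdot**2                     # kinetic energy along the path
V = 0.5*k*q**2                        # potential energy along the path

def integral(f):                      # simple trapezoidal quadrature
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(t))

S    = integral(K - V)                # Hamilton action, eq. (1)
W    = integral(m*qdot**2)            # Maupertuis action, integral of p*qdot
Ebar = integral(K + V)/T              # mean energy along the path

print(S, W - Ebar*T)                  # the two numbers agree
```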
The right side of (8) gives \(\delta W - T \delta \bar{E} = 0\ ,\) which is called the unconstrained Maupertuis principle, which can also be written in the standard form of a variational principle with a relaxed constraint, i.e. \(\delta W = \lambda \delta \bar{E}\) where \(\lambda = T\) (duration of true trajectory) is a constant Lagrange multiplier. If we constrain \(\bar{E}\) to be fixed for the trial trajectories, we get (\(\delta W)_\bar{E} = 0\ ,\) which is a generalization of Maupertuis' principle (4); we see that the constraint of fixed energy in (4) can be relaxed to one of fixed mean energy. If instead we constrain \(W\) to be fixed, we get \[(\delta \bar{E})_W = 0\ ,\] which is called the reciprocal Maupertuis principle. In these various generalizations of Maupertuis' principle, conservation of energy is a consequence of the principle for time-invariant systems (just as it is for Hamilton's principle), whereas conservation of energy is an assumption of the original Maupertuis principle. In all the variational principles discussed here, we have held the end-positions \(q_A\) and \(q_B\) fixed. It is possible to derive additional generalized principles (Gray et al. 2004) which allow variations in the end-positions. A word on notation may be appropriate in this regard: the quantities \( \delta S \ ,\) \( \delta W \ ,\) \( \delta T \) and \( \delta \bar{E} \) denote unambiguously the differences in the values of \(S\) etc. between the original and varied trajectories, and \( q(t) \) and \( q(t) + \delta q(t) \) denote the original and varied trajectory positions at time \(t\). In considering a generalized principle involving a trajectory variation which includes an end-position variation of, say, \( q_B \ ,\) one needs a more elaborate notation (Whittaker 1937, Papastavridis 2002) in order to distinguish between the variation in position at the end-time \( t_B \) of the original trajectory, i.e. \( \delta q_B \equiv \delta q(t = t_B = T) \ ,\) and the total variation in end-position \( \Delta q_B \) which includes the contribution due to the end-time variation \( \delta t_B \equiv \delta T \) if it is nonzero, i.e. \( \Delta q_B = \delta q_B + \dot{q}_B \delta T \ .\) Since we consider only variational principles with fixed end-positions in this review (i.e.\( \Delta q_B = 0 \)), we do not need to pursue this issue here. As we shall see in the next section and in Section 10, the alternative formulations of the action principles we have considered, particularly the reciprocal Maupertuis principle, have advantages when using action principles to solve practical problems, and also in making the connection to quantum variational principles. We note that reciprocal variational principles are common in geometry and in thermodynamics (see Gray et al. 2004 for discussion and references), but their use in mechanics is relatively recent. Practical Use of Action Principles Just as in quantum mechanics, variational principles can be used directly to solve a dynamics problem, without employing the equations of motion. This is termed the direct variational or Rayleigh-Ritz method. The solution may be exact (in simple cases) or essentially exact (using numerical methods), or approximate and analytic (using a restricted and simple set of trial trajectories). We illustrate the approximation method with a simple example and refer the reader elsewhere for other pedagogical examples and more complicated examples dealing with research problems (Gray et al. 1996a, 1996b, 2004). 
Consider a one-dimensional quartic oscillator, with Hamiltonian \[\tag{9} H\; =\; \frac{p^{2} }{2 m} \; +\; \frac{1}{4} \; C\; x^{4} \quad . \] Unlike a harmonic oscillator, the frequency \(\omega\) will depend on the amplitude or energy of motion, as is evident in Fig.1. We wish to estimate this dependence. We consider a one-cycle trajectory and for simplicity we choose \(x = 0\) at \(t = 0\) and at \(t = T\) (the period \( 2 \pi / \omega \)). As a trial trajectory we take \[\tag{10} x(t) = A \sin \omega t\ ,\] where the amplitude \(A\) is regarded as known and where we treat \(\omega\) as a variational parameter; we will vary \(\omega\) such that an action principle is satisfied. For illustration, we use the reciprocal Maupertuis principle \((\delta \bar{E})_W = 0\) discussed in the preceding section, but the other action principles can be employed similarly. From the definitions, we find the mean energy \(\bar{E}\) and action \(W\) over a cycle of the trial trajectory (10) to be \[\tag{11} \bar{E}\; =\; \frac{\omega }{4 \pi} \; W\; +\; C\; \frac{3\; W^{2} }{32 \pi ^{2} m^2 \omega ^{2} } \quad , \] \[\tag{12} W\; =\; \pi \; \omega \; m\; A^{2} \quad . \] Treating \(\omega\) as a variational parameter in (11) and applying \(\left(\partial \bar{E}/\partial \omega \right)_{W} \; =\; 0\) gives \[\tag{13} \omega \; =\; \left(\frac{3\; C\; W}{4\; \pi \; m^{2} } \right)^{1/3} \quad . \] Substituting (13) in (11) gives for \(\bar{E}\) \[\tag{14} \bar{E}\; =\; \frac{1}{2} \; \left(\frac{C}{m^{2} } \right)^{1/3} \left(\frac{3\; W}{4 \pi } \right)^{4/3} \quad . \] Eq. (13) can be combined with (12) or (14) to give \[\tag{15} \omega = \; \left(\frac{3\; C\; }{4m } \right)^{1/2} A = \; \left(\frac{2\; C\; \bar{E}}{m^{2} } \right)^{1/4} \quad, \] i.e. a variational estimate of the frequency as a function of the amplitude or energy. The frequency increases with amplitude, confirming what is seen in Fig.1. This problem is simple enough that the exact solution can be found in terms of an elliptic integral (Gray et al. 1996b), with the result \( \omega_{exact}/ \omega_{approx} = 2^{3/4} \pi \Gamma(3/4)/ \Gamma(1/2) \Gamma(1/4) = 1.0075\ .\) Thus the approximation (15) is accurate to 0.75%, and can be improved systematically by including terms \(B \sin{3\omega t}\ ,\) \(D \sin{5\omega t}\ ,\) etc., in the trial trajectory \(x(t)\ .\) Direct variational methods have been used relatively infrequently in classical mechanics (Gray et al. 2004) and in quantum field theory (Polley and Pottinger 1988). These methods are widely used in quantum mechanics (Epstein 1974, Adhikari 1998), classical continuum mechanics (Reddy 2002), and classical field theory (Milton and Schwinger 2006). They are also used in mathematics to prove the existence of solutions of differential (Euler-Lagrange) equations (Dacorogna 2008). Relativistic Systems The Hamilton and Maupertuis principles, and the generalizations discussed above in Section 7, can be made relativistic and put in either Lorentz covariant or noncovariant forms (Gray et al. 2004). As an example of the relativistic Hamilton principle treated covariantly, consider a particle of mass \(m\) and charge \(e\) in an external electromagnetic field with a four-potential having contravariant components \(A^\alpha = (A_0, A_i) \equiv (\phi, A_i)\ ,\) and covariant components \(A_\alpha = \left(A_{0} ,\; -\; A_{i} \right) \equiv (\phi, - A_i)\ ,\) where \(\phi(x)\) and \(A_i(x)\) (for \(i = 1, 2, 3\)) are the usual scalar and vector potentials respectively. 
Here \( x = (x^0, x^1, x^2, x^3) \) denotes a point in space-time. A Lorentz invariant form for the Hamilton action for this system is (Jackson 1999, Landau and Lifshitz 1962, Lanczos 1970) \[\tag{16} S\; =\; m\; \int d s\; +\; e\; \int A_{\alpha } \; d x^{\alpha } \quad . \] The sign of the Lagrangian and corresponding action can be chosen arbitrarily since the action principle and equations of motion do not depend on this sign; here we choose the sign of Lanczos (1970) in (16), opposite to that of Jackson (1999). An advantage of the choice of sign of Lagrangian \(L\) implied by (16), as discussed briefly by Gray et al. (2004) and in detail by Brizard (2009) who relates this advantage to the consistent choice of sign of the metric (given just below), is that the standard definitions of the canonical momentum and Hamiltonian can be employed - with the other choice unorthodox minus signs are required in these definitions (Jackson 1999). A disadvantage of our choice of sign is that our action is a maximum for short true trajectories, rather than the traditional minimum, and correspondingly our \(L\) approaches the negative of the standard nonrelativistic Lagrangian in the nonrelativistic limit (Brizard 2009). The four-dimensional path in (16) runs from the initial space-time point \(x_{A} \) to the final space-time point \( x_{B} \ ,\) with corresponding proper times \(s_A\) and \(s_B\ .\) Here \(ds\) is the infinitesimal interval of the path (or of the proper time), \(ds^2 = dx_\alpha dx^\alpha = g_{\alpha \beta} dx^\alpha dx^\beta = dx_{0}^{2} \; -\; dx_{i}^{2} \ ,\) the metric has signature (\(+\ ,\) \(-\ ,\) \(-\ ,\) \(-\)), and we use the summation convention and take \(c\) (speed of light ) \(= 1\ .\) \(S\) itself is not gauge invariant, but a gauge transformation \(A_\alpha \rightarrow A_\alpha + \partial_{\alpha} f \) (for arbitrary \(f(x)\)), where \( \partial_{\alpha} = \partial / \partial x^{\alpha} \ ,\) adds only constant boundary points terms to \(S\ ,\) so that \(\delta S\) is unchanged. The Hamilton principle is thus gauge invariant. If we introduce a parameter \(\tau\) along the four-dimensional path (a valid choice is proper time \(s\) along the true or any virtual path), we can write \(S\) in standard form, \(S = \int L d \tau\ ,\) where \(L = m [v_\alpha v^\alpha ]^{1/2} + e A_\alpha v^\alpha \) is the Lagrangian and \(v^\alpha = dx^\alpha /d \tau \ .\) The Euler-Lagrange equation yields the covariant Lorentz equation of motion \[\tag{17} m\; \frac{d\, v_{\alpha } }{d\, s} \; =\; e\; F_{\alpha \beta } \; v^{\beta } \quad , \] where \(F_{\alpha \beta} = \partial_\alpha A_\beta - \partial_\beta A_\alpha \) is the electromagnetic field tensor, and we have chosen the parameter \(\tau = s\ ,\) the true path proper time. Specific examples, such as an electron in a uniform magnetic field, are discussed in the references (Gray et al. 2004, Jackson 1999). As discussed below, the equations for the field (Maxwell equations) can also be derived from an action principle. Action principles are important also in general relativity. First note from (16) that for a special relativistic free particle the action principle \(\delta S = \delta \int ds = 0 \) can be interpreted as a "principle of stationary proper time" (Rohrlich 1965), or more colloquially as a "principle of maximal aging" (Taylor and Wheeler 1992). The proper time is stationary, here a maximum, for the true trajectory (which is straight in a Lorentz frame) compared to the proper time for all virtual trajectories. 
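The maximal-aging statement for a free particle is easy to check numerically: with \(c = 1\ ,\) the proper time \(\int ds = \int \sqrt{1 - v^2}\, dt\) along any virtual worldline between two fixed events is smaller than along the straight worldline. The short sketch below uses illustrative values only; the end events and the sinusoidal wiggle are arbitrary choices made for the test.

```python
import numpy as np

# Proper time (c = 1) along worldlines between the fixed events (t=0, x=0) and
# (t=T, x=X); the straight worldline should give the largest value.
T, X = 1.0, 0.5                       # illustrative end event, inside the light cone
t = np.linspace(0.0, T, 20001)

def proper_time(x):
    dt = np.diff(t)
    v  = np.diff(x)/dt                # coordinate velocity on each step
    return np.sum(np.sqrt(1.0 - v**2)*dt)

x_true = X*t/T                        # straight (true) worldline
for amp in (0.0, 0.02, 0.05):
    x_trial = x_true + amp*np.sin(3*np.pi*t/T)   # wiggled virtual path, same end events
    print(f"wiggle amplitude {amp:.2f}:  proper time = {proper_time(x_trial):.6f}")
```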
The principle of stationary proper time, or maximal aging, is also valid in general relativity for the motion of a test particle in a gravitational field (Taylor and Wheeler 2000); for "short" true trajectories the proper time is a maximum, and for "long" true trajectories ("long" and "short" trajectories are defined in Section 5) the proper time is a saddle point (Misner et al. 1973, Wald 1984, Gray and Poisson 2011). The corresponding Euler-Lagrange equation of motion is the relativistic geodesic equation. In general relativity the Einstein gravitational field equations can also be derived from an action principle, using the so-called Einstein-Hilbert action (Landau and Lifshitz 1962, Misner et al. 1973). General relativity is perhaps the first, and still the best, example of a field where new laws of physics were derived heuristically from action principles, since Einstein and Hilbert were both motivated by action principles, at least partly, in establishing the field equations, and the principle of stationary proper time was used to obtain the equation of motion of a test particle in a gravitational field. A second example is modern (Yang-Mills type) gauge field theory. Some of the pioneers (e.g., Weyl, Klein, Utiyama) explicitly used action principles to implement their ideas, and others, including Yang and Mills, used them implicitly by working with the Lagrangian (O'Raifeartaigh and Straumann, 2000). Some of the early gauge theories were unified field theories of gravitational and electromagnetic fields interacting with matter, and other early unified field theories developed by Einstein, Hilbert and others were also based on action principles (Vizgin 1994). Modern quantum field theories under development, for gravity alone (Rovelli 2004) or unified theories (Freedman and Van Proeyen 2012, Zwiebach 2009, Weinberg 2000), are usually based on action principles. The earliest general quantum field theory (Heisenberg and Pauli 1929-30), essentially the theory used in the 1930s for quantum electrodynamics, strong, and weak interactions (Wentzel 1949), and the basis of one of the modern methods (Weinberg 1995), derives from action principles; commutation relations (or anticommutation relations for fermion fields) are applied to the field components and their conjugate momenta, with the latter being determined from the Hamilton principle and Lagrangian density for the classical fields (Section 11). As for the role of action principles in the creation of quantum mechanics in 1925-26, in the case of wave mechanics, following hints given in de Broglie's Ph.D. thesis (Yourgrau and Mandelstam 1968), there was a near miss by Schrödinger using the Maupertuis principle, as described in the next section. Heisenberg did not use action principles in creating matrix mechanics, but his close collaborators (Born and Jordan 1925) immediately showed that the equations of motion in matrix mechanics can be derived from a matrix mechanics version of Hamilton's principle. Later, following a hint from Dirac in 1933, in his Ph.D. thesis in 1942 Feynman formulated the path integral version of quantum mechanics using the classical Hamilton action, which we discuss briefly at the end of the next section (Brown 2005, Feynman and Hibbs 1965). A very general quantum operator version of Hamilton's principle was devised by Schwinger in 1951 (Schwinger 2001, Toms 2007). 
Relation to Quantum Variational Principles We discuss here only the Schrödinger time-independent quantum variational principle; apart from a few remarks at the end of this section, for discussion and references to the various quantum time-dependent principles, we refer to Gray et al. (2004), Feynman and Hibbs (1965),Yourgrau and Mandelstam (1968), Schwinger (2001), and Toms (2007). As is well known (e.g. Merzbacher 1998), the time-independent Schrödinger equation \[\tag{18} \hat{H}\; \psi _{n} \; =\; E_{n} \; \psi _{n} \] for the stationary states \(\psi_n\ ,\) with energies \(E_n\ ,\) is equivalent to the variational principle of stationary mean energy \[\tag{19} \left(\delta \; \frac{\left\langle \psi \; \left|\, \hat{H}\, \right|\; \psi \right\rangle }{\left\langle \psi \; |\; \psi \right\rangle } \right)_{n} \; =\; 0\quad , \] where \(\hat{H}\) is the Hamiltonian operator corresponding to the classical Hamiltonian \(H(q, p)\), \(\left\langle \psi_1 \vert \psi_2 \right\rangle \) denotes the scalar product of two states, and trial state \(\psi\) in (19) has no constraint on its normalization. (The word stationary is used in this section with two different meanings.) Equation (18) is the Euler-Lagrange equation for (19). The subscript in (19), quantum number \(n\ ,\) indicates a constrained variation of \(\psi\) such that \(\psi_n\) is the particular stationary solution selected; for example, to obtain the ground state, for which (19) is a minimum mean energy principle, one could restrict the search to nodeless trial functions \(\psi\ .\) As mentioned earlier, (19) is the basis of a very useful approximation scheme in quantum mechanics (Epstein 1974, Drake 2005), analogous to the direct use of classical action principles to solve approximately classical dynamics problems (see Section 8 above). The reader will notice the striking similarity of (19) to one of the classical variational principles discussed above in Section 7, i.e. the reciprocal Maupertuis principle applied to the case of stationary (steady-state) motions: \[\tag{20} \left(\delta \bar{E}\right)_{W} \; =\; 0\quad . \] Here the time average \( \bar{E} \; \equiv \; \int _{0}^{T}dt \; H/T \) is over a period for periodic motions, and is over an infinite time interval for other stationary motions, i.e., quasiperiodic and chaotic. The classical mean energy \(\bar{E}\; \equiv \; \int _{0}^{T}d t\; H/T \) in (20) is clearly analogous to the quantum mean energy \(\left\langle \psi \; \left|\, \hat{H}\, \right|\; \psi \right\rangle /\left\langle \psi \; |\; \psi \right\rangle \) in (19). The constraints (\(W\) in (20), n in (19)) are also analogous because at large quantum numbers we have for stationary bound motions \(W_n \sim nh\) (Bohr-Sommerfeld), where \(h\) is Planck's constant. Thus fixed \(n\) and fixed \(W\) are equivalent, at least for large quantum numbers. The above heuristic arguments can be tightened up. First, (20) can be derived (in simple cases) in the classical limit \((h \to 0)\) from (19) (Gray et al. 1996a). Conversely, one can "derive" quantum mechanics (i.e. (19)) by applying quantization rules to (20) (Gray et al. 1999). Schrödinger, in his first paper on wave mechanics (Schrödinger 1926a), tried to derive the quantum variational principle from a classical variational principle. Unfortunately he did not have available the formulation (20) of the classical action principle, and, in his second paper (Schrödinger 1926b), abandoned this route to quantum mechanics. 
In his second paper, instead of using the Maupertuis action principle directly he used the Hamilton-Jacobi equation for the action \(W\), which is a consequence of a generalized action principle due to Hamilton, as described briefly in Section 2. This enabled him to exploit the analogy between ray and wave optics, on the one hand, and particle and wave mechanics, on the other. He showed that just as in optics, where the short-wavelength or geometric optics equation for families of rays (the optical Hamilton-Jacobi or eikonal equation) generalizes to the standard wave equation when the wavelength is not short, in mechanics the Hamilton-Jacobi equation for families of particle trajectories can be regarded as a short-wavelength wave equation and generalized to a wave equation describing particles with nonzero de Broglie wavelength (the Schrödinger equation). Schrödinger worked with the time-independent versions of the equations and thus first found the time-independent Schrödinger equation (18), from which he later found (in part IV of his series in 1926) the time-dependent Schrödinger equation. It is a bit simpler to work with the time-dependent versions of the equations, which first gives the time-dependent Schrödinger equation, from which one can then find the time-independent one in the now standard way. A semiclassical variational principle can be based on the reciprocal Maupertuis principle (20) (Gray et al. 2004). For simplicity, consider first one-dimensional systems. Thus, for bound states, one first determines the classical energy of a periodic orbit as a function of the one-cycle action \(W\) by solving (20) as described earlier (e.g. see eq.(14) for the quartic oscillator), and then imposes the Bohr-Sommerfeld quantization condition (or one of its refinements) on action \(W\). This gives the allowed energies semiclassically as a function of the quantum number. Thus, from (14) and the Bohr-Sommerfeld quantization condition \(W_n=nh\), for a quartic oscillator we find the semiclassical estimate \(E_n=(1/2)(C/m^2)^{1/3} (3n \hbar /2)^{4/3}\), where \( \hbar = h/2 \pi \) and \(n= 0, 1, 2, ...~\) . A simple refinement is obtained by replacing the Bohr-Sommerfeld quantization rule \( W_n = nh\) by the modified old quantum theory rule due to Einstein, Brillouin, and Keller (EBK), \( W_n = (n + \alpha)h \), where \( \alpha \) is the so-called Morse-Maslov index. The latter is most easily derived in the Wentzel-Kramers-Brillouin or WKB-like semiclassical approximations in wave mechanics or path integrals, and accounts approximately for some of the quantum effects missing in Bohr-Sommerfeld theory, such as zero-point energy, the uncertainty principle, wave function penetration beyond classical turning points and tunnelling. For example, for a harmonic oscillator we have \( \alpha = 1/2 \), and using the harmonic oscillator energy-action relation \( E = W \omega/2\pi \) (the result corresponding to (14) for a quartic oscillator) we find \( E_n = (n + 1/2) \hbar \omega \), which happens to be the exact quantum result for a harmonic oscillator. The effect of the Morse-Maslov index is more noticeable at smaller quantum numbers. The EBK quantization rule was introduced originally to handle nonseparable, but integrable, multidimensional systems (Brack and Bhaduri 1997). An integrable system has at least \( f \) independent constants of the motion, or "good" actions \(W_i\), where \( f \) is the number of degrees of freedom. 
The classical bound motions are all periodic or quasiperiodic, i.e., nonchaotic. The total Maupertuis action \(W\) over a long true trajectory is a linear combination of \(f\) partial or good actions \(W_i\), i.e., \( W = \sum{_i} N_i W_i \), where \( N_i \) is the number of complete cycles with partial action \( W_i \) in the total trajectory. For integrable systems, the energy can be expressed as a function of \( f \) good actions, \(E = E(W_1,W_2,...,W_f)\). Examples of applying the reciprocal Maupertuis principle to multidimensional systems to find \(E(W_1,W_2,...,W_f)\) approximately, and then quantizing semiclassically using EBK quantization \(W_i = (n_i + \alpha_i)h\) are reviewed in Gray et al. (2004). This semiclassical approximation method has been applied to estimate energy levels \(E_{n_1,n_2,...,n_f}\) even for some nonintegrable systems, where strictly speaking, \(f\) good actions \(W_i\) and \(f\) corresponding good quantum numbers \(n_i\) do not exist. For example, consider the two-dimensional quartic oscillator with mass \(m\) and potential energy \(V(x,y) = Cx^2y^2\), a nonintegrable system with just one exact constant of the motion (the energy) and having mostly chaotic classical trajectories. As in Section 8 for the one-dimensional quartic oscillator, we start with the simplest trial trajectory \( x(t) = A_x \cos(\omega_x t), \; y(t) = A_y \cos(\omega_y t)\). With this trial trajectory the reciprocal Maupertuis principle gives the classical energy approximately as a function of two actions, \( E(W_x,W_y) = (3/4 \pi)(C/\pi m^2)^{1/3}[W_xW_y]^{2/3}\). EBK quantization (with \( \alpha = 1/2\)) then gives the energy levels semiclassically as \(E_{n_x,n_y} = (3/2)(2C \hbar^4/m^2)^{1/3}[(n_x + 1/2)(n_y + 1/2)]^{2/3}\), which is found to be accurate to within 5% for the 50 lowest levels in comparison to a numerical calculation. The energy level degeneracies are not given correctly by this first approximation, and simple variational and perturbational improvements are also discussed in the review cited. As discussed in Section 2, in classical mechanics per se there is no particular physical reason for the existence of a principle of stationary action. However, as first discussed by Dirac and Feynman, Hamilton's principle can be derived in the classical limit of the path integral formulation of quantum mechanics (Feynman and Hibbs 1965, Schulman 1981). In quantum mechanics the propagator \(G(q_A,0\; ;q_B,T)\) gives the probability amplitude for the system to be found with configuration \(q_B\) at time \( t = T\), given that it starts with configuration \(q_A\) at \( t = 0\). Feynman's path integral expression for the propagator is \( G(q_A,0\; ;q_B,T) = \int d[q]\; exp(i S[q]/ \hbar) \), where \( S[q]\) is the classical Hamilton action functional defined by equation (1) for virtual path \( q(t)\) which starts at \(A(q_A,0)\) and ends at \(B(q_B,T)\), and the functional or path integral \( \int d[q]... \) (defined precisely in the above references) is over all such virtual paths between the fixed events \(A\) and \(B\). In the limit \( \hbar \to 0 \) the phase factors \( exp(i S[q] / \hbar) \) contributed by all the virtual paths \( q(t)\) to the propagator cancel by destructive interference, with the exception of the contributions of the one or more stationary phase paths satisfying \( \delta S = 0 \ ;\) the latter are the classical paths. 
Thus, in the classical limit, the classical Hamilton principle of stationary action is a consequence of the quantum stationary phase condition for constructive interference. There is an extensive literature on a variety of systems (particles and fields) studied semiclassically via the Feynman path integral expression for the propagator discussed in the preceding paragraph (Feynman and Hibbs 1965, Schulman 1981, Brack and Bhaduri 1997). A celebrated result of this approach is the Gutzwiller trace formula, which relates the distribution function for the quantized energy levels of the system to the complete set of the system's classical periodic orbits. Continuum Mechanics and Field Theory Action principles can be applied to field-like quantities \(\phi (x, t)\ ,\) both classically (Goldstein et al. 2002, Landau and Lifshitz 1962, Soper 1976, Burgess 2002, Jackson 1999, Melia 2001, Morse and Feshbach 1953, Brizard 2008) and quantum-mechanically (Dyson 2007, Toms 2007). The systems can be nonrelativistic or relativistic. We have already mentioned above the application of action principles to the electromagnetic and gravitational fields, and to the Schrödinger wave function. These methods are also widely applied in classical continuum mechanics, e.g., to strings, membranes, elastic solids and fluids (Yourgrau and Mandelstam 1968, Lanczos 1970, Reddy 2002). As our first example, we consider the classical nonrelativistic one-dimensional vibrating string with fixed ends, following Brizard (2008). Assuming small displacements from equilibrium, we find the equation of motion for the transverse displacement \(\phi(x, t)\) is \[\tag{21} \rho \; \frac{\partial ^{2} \; \phi }{\partial \; t^{2} } \; -\; \tau \; \frac{\partial ^{2} \; \phi }{\partial \; x^{2} } \; =\; 0\quad , \] where \(\rho\) is the density and \(\tau\) the tension. Eq. (21) is the well known classical linear wave equation. It is assumed that \(\phi (x, t)\) is zero at all times at the two ends, \(x = 0 \) and \( x = X\ ,\) and that \(\phi (x, t)\) is given for all positions at two times, \(t = 0\) and \(t = T\ .\) One easily verifies that the equation of motion (21) follows from the action principle \(\delta S = 0\ ,\) with the given constraints, where \[\tag{22} S\; =\; \int _{0}^{T}d t\; \int _{0}^{X}d x\; \mathcal{L}\; \left(\phi \; ,\; \partial _{t} \phi \; ,\; \partial _{x} \phi \right)\quad , \] with \[\tag{23} \mathcal{L}\; \left(\phi \; ,\; \partial _{t} \phi \; ,\; \partial _{x} \phi \right) = \frac{1}{2} \; \rho \left(\frac{\partial \phi }{\partial t} \right)^{2} \; -\; \frac{1}{2} \; \tau \left(\frac{\partial \phi }{\partial x} \right)^{2} \] the Lagrangian density \(\left(\int_0^{X} dx\; \mathcal{L}\; =\; L \right) , ~ \partial _{t} \phi = \partial \phi / \partial t ~~\text{and}~~\partial _{x} \phi = \partial \phi / \partial x. \) Because of the simple quadratic Lagrangian density (23), the variation of (22) can readily be done directly; alternatively, we can use the Euler-Lagrange equation for 1D fields \(\phi(x, t)\ ,\) a natural generalization of (5), \[\tag{24} \frac{\partial }{\partial t} \; \left(\frac{\partial \mathcal{L}} {\partial \left(\partial _{t} \phi \right)} \right)\; +\; \frac{\partial }{\partial x} \; \left(\frac{\partial \mathcal{L}}{\partial \left(\partial _{x} \phi \right)} \right)\; -\; \frac{\partial \mathcal{L}}{\partial \phi } \; =\; 0\quad , \] which also gives (21) for the Lagrangian density (23). 
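The step from the Lagrangian density (23) to the wave equation (21) via the field Euler-Lagrange equation (24) can be reproduced with a computer algebra system. A minimal sketch using sympy's euler_equations helper is given below; sympy returns the equation up to an overall sign, which does not affect its content.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, t = sp.symbols('x t')
rho, tau = sp.symbols('rho tau', positive=True)
phi = sp.Function('phi')(x, t)

# Lagrangian density (23) for small transverse displacements of the string
L = sp.Rational(1, 2)*rho*sp.diff(phi, t)**2 - sp.Rational(1, 2)*tau*sp.diff(phi, x)**2

# field Euler-Lagrange equation (24); prints the wave equation (21) up to an overall sign
print(euler_equations(L, phi, [x, t])[0])
```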
As a second example we consider the classical relativistic description of a source-free electromagnetic field \( F_{\alpha \beta}(x) \) enclosed in a volume V, where \( x \) denotes a space-time point and we use covariant notation (see Section 9 above). Because of the structure of the two Maxwell equations which never have source terms (due to the absence of magnetic monopoles) the field \( F_{\alpha \beta}(x) \) can be represented in terms of the four-potential \( A_{\alpha}(x) \) by \( F_{\alpha \beta} = \partial_{\alpha} A_{\beta} - \partial_{\beta} A_{\alpha} \) (Jackson 1999). As in (23) we assume that in general the Lagrangian density \( \mathcal{L}(A_{\alpha},\partial_{\beta} A_{\alpha}) \ ,\) here a Lorentz scalar, depends at most on the potential and its first derivatives so that the Lorentz invariant action \(S\) is given by \[\tag{25} S \; = \; \int d^4 x \; \mathcal{L}(A_{\alpha},\; \partial_{\beta} A_{\alpha}) \quad, \] where the space-time integration is over the spatial volume \(V\) and time interval \(T\). Assuming \( A_{\alpha}(x) \) is fixed on the boundary of \((V,T)\) and setting \( \delta S = 0 \) gives the Lorentz covariant Euler-Lagrange equations \[\tag{26} \partial_{\beta} \; \left( \frac{\partial{} \mathcal{L}} {\partial{} \left( \partial _{\beta} A_{\alpha} \right)} \right) \; - \; \frac{\partial \mathcal{L}}{\partial A_{\alpha}} \; = \; 0 \quad , \; \; \alpha \; = \; 0,1,2,3 \quad . \] For a source-free field the Lagrangian density is given by (Jackson 1999, Melia 2001) \[\tag{27} \mathcal{L}(A_{\alpha}, \; \partial_{\beta} A_{\alpha}) \; = \; \frac{g_{\mu \mu'} g_{\nu \nu'}}{16 \pi} (\partial_{\mu} A_{\nu} \; - \; \partial_{\nu} A_{\mu}) (\partial_{\mu'} A_{\nu'} \; - \; \partial_{\nu'}A_{\mu'}) \quad , \] where \( g_{\mu \mu'} \) is the Lorentz metric tensor defined earlier (Section 9). Again we have a choice of sign in (27) and have chosen that of Melia (2001), opposite to that of Jackson (1999). \( \mathcal{L} \) defined by (27) is proportional to \( F_{\mu \nu}F^{\mu \nu} \) and is therefore gauge invariant. From (27) and (26) we find the field equations \[\tag{28} \partial_{\beta} \partial^{\beta} A_{\alpha} \; - \; \partial_{\alpha} \partial^{\beta} A_{\beta} \; = \; 0 \quad , \] which represent the source-free version of the two Maxwell equations which in general contain source terms. As mentioned above, the other two Maxwell equations are satisfied identically by the representation of the field in terms of the four-potential, i.e. \( F_{\alpha \beta} \; = \; \partial_{\alpha} A_{\beta} \; - \; \partial_{\beta} A_{\alpha} \ .\) Eq.(28) is valid for any choice of gauge; in the Lorenz gauge (\( \partial^{\beta} A_{\beta} \; = \; 0 \)) (28) reduces to the simpler form \( \partial_{\beta} \partial^{\beta} A_{\alpha} \; = \; 0 \ ,\) which is the 3D homogeneous wave equation of type (21). (Note that \( \partial_{\beta} \partial^{\beta} = \partial^2/\partial t^2 - \nabla^2 \ ,\) where \( \nabla^2 \) is the Laplacian.) So far we have assumed a source-free field and the Lagrangian density \( \mathcal{L}(A_{\alpha}, \; \partial_{\beta} A_{\alpha}) \) given by (27) is actually independent of \( A_{\alpha} \ .\) If a prescribed source four-current density \( J_{\alpha}(x)\;=\; (\rho(x),\; -J_i(x)) \) is present, where \( \rho \) and \( J_i \) are the charge and three-current densities, respectively, one adds to (27) a term (assuming \( c \; = \; 1 \)) \( J^{\mu} A_{\mu} \) (Melia 2001). 
The Euler-Lagrange equation (26) now gives the inhomogeneous wave equation \( \partial^{\beta} \partial_{\beta} A_{\alpha} \; = \; 4 \pi J_{\alpha} \ ,\) where we have again assumed the Lorenz gauge. Conservation Laws Conservation laws are a consequence of symmetries of the Lagrangian or action. For example, conservation of energy, momentum, and angular momentum follow from invariance under time translations, space translations, and rotations, respectively. The link between symmetries and conservation laws holds for particle and continuum systems (Noether's theorem (1918)). The conservation laws can be derived either from the Lagrangian and equations of motion (Goldstein et al. 2002), or directly from the action and the variational principle (Brizard 2008, Goldstein et al. 2002, Melia 2001, Lanczos 1970, Oliver 1994, Schwinger et al. 1998). A treatment which is introductory yet reaches applications to gauge field theory, and includes historical background on Emmy Noether's work and career, is given by Neuenschwander (2011). Noether's theorem is to be discussed elsewhere in Scholarpedia, and we do not go into detail here. References (historical) • Born, M. and P.Jordan (1925). "Zur Quantenmechanik", Zeit. f. Phys.34, 858-888. (English translation available in Sources of Quantum Mechanics, edited by B.L. van der Waerden, Dover, New York, 1968) • Brown, L. M. (2005), editor. Feynman's Thesis: The Principle of Least Action in Quantum Mechanics, World Scientific, Singapore. • Clebsch, A. (1866, 2009), editor. Jacobi's Lectures on Dynamics, Hindustan Book Agency, New Delhi. (English translation in 2009 from 1866 German edition of Jacobi's Konigsberg lectures from winter semester 1842-3) • Goldstine, H.H. (1980). A History of the Calculus of Variations from the 17th Through the 19th Century, Springer, New York. • Hankins, T.L. (1980). Sir William Rowan Hamilton, Johns Hopkins U.P., Baltimore. • Heisenberg, W. and W.Pauli (1929). "Zur Quantendynamik der Wellenfelder", Zeit. f. Phys. 56,1-61; (1930) "Zur Quantentheorie der Wellenfelder II", Zeit. f. Phys. 59,168-190. • Lanczos, C. (1970). The Variational Principles of Mechanics, 4th edition, University of Toronto Press, Toronto. • Lützen, J. (2005). Mechanistic Images in Geometric Form: Heinrich Hertz's Principles of Mechanics, Oxford U.P., Oxford. • O'Raifeartaigh, L. and N. Straumann (2000). "Gauge Theory: Historical Origins and Some Modern Developments", Rev. Mod. Phys. 72, 1-23. • Schrödinger, E. (1926a). "Quantisierung als eigenwert problem I", Ann. Phys. 79, 361-376; (1926b). "Quantisierung als eigenwert problem II", Ann. Phys. 79, 489-527. (English translations available in E. Schrödinger, Collected Papers on Wave Mechanics, Chelsea, New York, 1982) • Terrall, M. (2002). The Man who Flattened the Earth, University of Chicago Press, Chicago. (biography of Maupertuis) • Todhunter, I. (1861). A History of the Progress of the Calculus of Variations During the Nineteenth Century, Cambridge U.P., Cambridge. • Yourgrau, W. and S. Mandelstam (1968). Variational Principles in Dynamics and Quantum Theory, 3rd edition, Saunders, Philadelphia. • Vizgin, V. P. (1994). Unified Field Theories in the First Third of the 20th Century, Birkhauser, Basel. • Wentzel, G. (1949). Quantum Theory of Fields, Interscience, New York. (translation of 1942 German edition) • Adhikari, S.K. (1998). Variational Principles and the Numerical Solution of Scattering Problems, Wiley, New York. • Basile, A.G. and C.G.Gray (1992). 
"A Relaxation Algorithm for Classical Paths as a Function of Endpoints", J. Comp. Phys. 101, 80-93. • Beck, T.L., J.D.Doll and D.L.Freeman (1989). "Locating Stationary Paths in Functional Integrals", J. Chem. Phys. 90, 3181-3191. • Brack, M and R.K. Bhaduri (1997). Semiclassical Physics, Addison-Wesley, Reading. • Brizard, A.J. (2008). An Introduction to Lagrangian Mechanics, World Scientific, Singapore. • Brizard, A.J. (2009). "On the Proper Choice of a Lorentz Covariant Relativistic Lagrangian", Physics Arxiv, arXiv:0912.0655 • Burgess, M. (2002). Classical Covariant Fields, Cambridge U.P., Cambridge. • Chandrasekhar, V.K., M. Senthilvelan and M. Lakshmanan (2007). "On the Lagrangian and Hamiltonian Description of the Damped Linear Harmonic Oscillator", J. Math. Phys. 48, 032701-1-12. • Dacorogna, B. (2008). Direct Methods in the Calculus of Variations, second edition, Springer, Berlin. • Dirac, P. A. M. (1964). Lectures on Quantum Mechanics, Yeshiva University, New York. • Drake, G. W. F. (2005). "Variational Methods", in Mathematical Tools for Physicists, G. L. Trigg, editor, pp.619-656, Wiley-VCH, Weinheim • Dyson, F. (2007). Advanced Quantum Mechanics, World Scientific, Singapore. • Epstein, S.T. (1974). The Variation Method in Quantum Chemistry, Academic, New York. • Feynman, R.P. and Hibbs, A. R. (1965). Quantum Mechanics and Path Integrals, McGraw Hill, New York. • Fox, C. (1950). An Introduction to the Calculus of Variations, Oxford U.P., Oxford. • Freedman, D. Z. and A. Van Proeyen (2012). Supergravity, Cambridge U.P., Cambridge. • Goldstein, H., C. Poole and I. Safko (2002). Classical Mechanics, 3rd edition, Addison-Wesley, New York. • Gray, C.G., G. Karl and V.A. Novikov (1996a). "The Four Variational Principles of Mechanics", Ann. Phys. 251, 1-25. • Gray, C.G., G. Karl and V.A. Novikov (1996b). "Direct Use of Variational Principles as an Approximation Technique in Classical Mechanics", Am. J. Phys. 64, 1177-1184. • Gray, C.G., G. Karl and V.A. Novikov (1999). "From Maupertuis to Schrödinger. Quantization of Classical Variational Principles", Am. J. Phys. 67, 959-961. • Gray, C.G., G. Karl and V.A. Novikov (2004). "Progress in Classical and Quantum Variational Principles", Rep. Prog. Phys. 67, 159-208. • Gray, C.G. and E. Poisson (2011). "When Action is Not Least for Orbits in General Relativity", Am. J. Phys. 79, 43-56. • Gray, C.G. and E.F. Taylor (2007). "When Action is Not Least", Am. J. Phys. 75, 434-458. • Jackson, J.D. (1999). Classical Electrodynamics, 3rd edition, Wiley, New York. • Landau, L.D. and E.M. Lifshitz (1962). The Classical Theory of Fields, 2nd edition, Pergamon, New York. • Landau, L.D. and E.M. Lifshitz (1969). Mechanics, 2nd edition, Pergamon, Oxford. • Marsden J.E. and M. West (2001). "Discrete Mechanics and Variational Integrators", Acta Numerica 10, 357-514. • Melia, F. (2001). Electrodynamics, University of Chicago Press, Chicago. • Merzbacher, E. (1998). Quantum Mechanics, 3rd edition, Wiley, New York. • Milton, K.A. and J. Schwinger (2006). Electromagnetic Radiation: Variational Methods, Waveguides and Accelerators, Springer, Berlin. • Misner, C. W., K. S. Thorne and J. A. Wheeler (1973). Gravitation, Freeman, San Francisco. • Morin, D. (2008). Introduction to Classical Mechanics, Cambridge U.P., Cambridge. • Morse, P.M. and H. Feshbach (1953). Methods of Theoretical Physics, Vol.1, McGraw Hill, New York. • Neuenschwander, D. E. (2011). Emmy Noether's Wonderful Theorem, Johns Hopkins U.P., Baltimore. • Oliver, D. (1994). 
The Shaggy Steed of Physics, Springer, New York. • Papastavridis, J.G. (1986). "On a Lagrangean Action Based Kinetic Instability Theorem of Kelvin and Tait", Int. J. Eng. Sci. 24, 1-17. • Papastavridis, J.G. (2002). Analytical Mechanics, Oxford U.P., New York. • Polley, L. and D. E. L. Pottinger (1988). Variational Calculations in Quantum Field Theory, World Scientific, Singapore. • Reddy, J.N. (2002). Energy Principles and Variational Methods in Applied Mechanics, Wiley, New York. • Rohrlich, F. (1965). Classical Charged Particles, Addison-Wesley, Reading. • Rovelli, C. (2004). Quantum Gravity, Cambridge U.P., Cambridge. • Santilli, R. M. (1978). Foundations of Theoretical Mechanics I, Springer, New York. • Schulman, L. S. (1981). Techniques and Applications of Path Integration, Wiley, New York. • Schwinger, J., L.L. DeRaad Jr, K.A. Milton and W-Y Tsai (1998). Classical Electrodynamics, Perseus Books, Reading. • Schwinger, J. (2001). Quantum Mechanics, Springer, Berlin. • Soper, D.E. (1976). Classical Field Theory, Wiley, New York. • Taylor, E.F. and J.A. Wheeler (1992). Spacetime Physics, 2nd edition, Freeman, New York. • Taylor, E.F. and J.A. Wheeler (2000). Exploring Black Holes: Introduction to General Relativity, Addison-Wesley Longman, San Francisco. • Toms, D. J. (2007). The Schwinger Action Principle, Cambridge U.P., Cambridge. • Wald, R.M. (1984). General Relativity, University of Chicago Press, Chicago. • Weinberg, S. (1995). The Quantum Theory of Fields, Volume I, Foundations, Cambridge U.P., Cambridge. • Weinberg, S. (2000). The Quantum Theory of Fields, Volume III, Supersymmetry, Cambridge U.P., Cambridge. • Whittaker, E. T. (1937). A Treatise on the Analytical Dynamics of Particles and Rigid Bodies, 4th edition, Cambridge U.P., Cambridge. • Zwiebach, B. (2009). A First Course in String Theory, Cambridge U.P., Cambridge. Internal references • Jean Zinn-Justin and Riccardo Guida (2008) Gauge invariance. Scholarpedia, 3(12):8287. • Jean Zinn-Justin (2009) Path integral. Scholarpedia, 4(2):8674. Further reading • Doughty, N. A. (1990). Lagrangian Interaction, Addison-Wesley, Reading. • Feynman, R.P., R.B. Leighton and M. Sands (1963). The Feynman Lectures on Physics, Vol.II, Ch.19, Addison-Wesley, Reading. • Gerjouy, E., A. R. P. Rau and L. Spruch (1983). "A Unified Formulation of the Construction of Variational Principles", Rev. Mod. Phys. 55, 725-774. • Greiner, W. and J. Reinhardt (1996). Field Quantization, Springer, Berlin. • Henneaux, M. and C. T. Teitelboim (1992). Quantization of Gauge Systems, Princeton U.P., Princeton. • Hildebrandt, S. and A. Tromba (1996). The Parsimonious Universe, Springer, New York. • Moiseiwitsch, B.L. (1966). Variational Principles, Interscience, New York. • Nesbet, R.K. (2003). Variational Principles and Methods in Theoretical Physics and Chemistry, Cambridge U.P., Cambridge. • Tabarrok, B. and F. P. J. Rimrott (1994). Variational Methods and Complementary Formulations in Dynamics, Kluwer, Dordrecht. See Also Dynamical systems, Gauge invariance, Hamilton-Jacobi equation, Hamiltonian systems, Path integral, Quasiperiodicity, Chaos, Lagrangian mechanics, General relativity, Noether's Theorem.
Measurement in quantum mechanics
From Wikipedia, the free encyclopedia

A measurement always causes the system to jump into an eigenstate of the dynamical variable that is being measured, the eigenvalue this eigenstate belongs to being equal to the result of the measurement.
P.A.M. Dirac (1958) in "The Principles of Quantum Mechanics" p. 36

The framework of quantum mechanics requires a careful definition of measurement. The issue of measurement lies at the heart of the problem of the interpretation of quantum mechanics, for which there is currently no consensus.

Measurement from a practical point of view
Measurement plays an important role in quantum mechanics, and it is viewed in different ways among various interpretations of quantum mechanics. In spite of considerable philosophical differences, different views of measurement almost universally agree on the practical question of what results from a routine quantum-physics laboratory measurement. To understand this, the Copenhagen interpretation, which has been commonly used,[1] is employed in this article.

Qualitative overview
In classical mechanics, a simple system consisting of only one single particle is fully described by the position \(\vec{x}(t)\) and momentum \(\vec{p}(t)\) of the particle. As an analogue, in quantum mechanics a system is described by its quantum state, which contains the probabilities of possible positions and momenta. In mathematical language, all possible pure states of a system form an abstract vector space called Hilbert space, which is typically infinite-dimensional. A pure state is represented by a state vector in the Hilbert space.
Once a quantum system has been prepared in the laboratory, some measurable quantity such as position or energy is measured. For pedagogic reasons, the measurement is usually assumed to be ideally accurate. The state of a system after measurement is assumed to "collapse" into an eigenstate of the operator corresponding to the measurement. Repeating the same measurement without any evolution of the quantum state will lead to the same result. If the preparation is repeated, subsequent measurements will likely lead to different results.
The predicted values of the measurement are described by a probability distribution, or an "average" (or "expectation") of the measurement operator based on the quantum state of the prepared system.[2] The probability distribution is either continuous (such as position and momentum) or discrete (such as spin), depending on the quantity being measured.
The measurement process is often considered as random and indeterministic. Nonetheless, there is considerable dispute over this issue. In some interpretations of quantum mechanics, the result merely appears random and indeterministic, whereas in other interpretations the indeterminism is core and irreducible. A significant element in this disagreement is the issue of "collapse of the wavefunction" associated with the change in state following measurement. There are many philosophical issues and stances (and some mathematical variations) taken—and near universal agreement that we do not yet fully understand quantum reality. In any case, our descriptions of dynamics involve probabilities, not certainties.

Quantitative details
The mathematical relationship between the quantum state and the probability distribution is, again, widely accepted among physicists, and has been experimentally confirmed countless times.

Measurable quantities ("observables") as operators
Main article: Observable
Important examples of observables are:
• The Hamiltonian operator \(\hat{H}\), which represents the total energy of the system. In nonrelativistic quantum mechanics the nonrelativistic Hamiltonian operator is given by \(\hat{H} = \frac{\hat{p}^2}{2m} + V(\hat{x})\).
• The momentum operator is given by \(\hat{p} = -i\hbar\,\partial/\partial x\) (in the position basis), or \(\hat{p} = p\) (in the momentum basis).
• The position operator is given by \(\hat{x} = x\) (in the position basis), or \(\hat{x} = i\hbar\,\partial/\partial p\) (in the momentum basis).
Operators can be noncommuting. Two Hermitian operators commute if (and only if) there is at least one basis of vectors, each of which is an eigenvector of both operators (this is sometimes called a simultaneous eigenbasis). Noncommuting observables are said to be incompatible and cannot in general be measured simultaneously. In fact, they are related by an uncertainty principle as discovered by Werner Heisenberg.

Measurement probabilities and wavefunction collapse
Discrete, nondegenerate spectrum
Let \(\hat{O}\) be an observable. By assumption, \(\hat{O}\) has discrete eigenstates \(|\phi_n\rangle\) with corresponding distinct eigenvalues \(O_n\). That is, the states are nondegenerate. Consider a system prepared in state \(|\psi\rangle\). Since the eigenstates of the observable form a complete basis called eigenbasis, the state vector \(|\psi\rangle\) can be written in terms of the eigenstates as
\[ |\psi\rangle = \sum_n c_n |\phi_n\rangle , \]
where \(c_n = \langle \phi_n | \psi \rangle\) are complex numbers in general. The eigenvalues \(O_n\) are all possible values of the measurement. The corresponding probabilities are given by
\[ P(O_n) = \frac{|c_n|^2}{\sum_k |c_k|^2} = \frac{|\langle \phi_n | \psi \rangle|^2}{\langle \psi | \psi \rangle} . \]
Usually \(|\psi\rangle\) is assumed to be normalized, i.e. \(\langle \psi | \psi \rangle = 1\). Therefore, the expression above is reduced to
\[ P(O_n) = |c_n|^2 = |\langle \phi_n | \psi \rangle|^2 . \]
If the result of the measurement is \(O_n\), then the system (after measurement) is in pure state \(|\phi_n\rangle\). That is,
\[ |\psi\rangle \rightarrow |\phi_n\rangle , \]
so any repeated measurement of \(\hat{O}\) will yield the same result \(O_n\). When there is a discontinuous change in state due to a measurement that involves discrete eigenvalues, that is called wavefunction collapse. For some, this is simply a description of a reasonably accurate discontinuous change in a mathematical representation of physical reality; for others, depending on philosophical orientation, this is a fundamentally serious problem with quantum theory; others see this as statistically-justified approximation resulting from the fact that the entity performing this measurement has been excluded from the state-representation. In particular, multiple measurements of certain physically extended systems demonstrate predicted statistical correlations which would not be possible under classical assumptions.
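The probabilities \(|c_n|^2\) and the collapse onto an eigenstate described above can be illustrated numerically. The following is a minimal sketch, not part of the article; the 3×3 Hermitian matrix standing in for the observable and the prepared state are arbitrary illustrative choices.

```python
import numpy as np

# Born-rule probabilities and collapse for a discrete, nondegenerate observable.
# The matrix A and the prepared state are arbitrary illustrative choices.
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.3],
              [0.0, 0.3, 3.0]])
evals, evecs = np.linalg.eigh(A)           # eigenvalues O_n and eigenvectors |phi_n> (columns)

psi = np.array([1.0, 1.0j, 0.5])
psi = psi/np.linalg.norm(psi)              # normalized prepared state

c = evecs.conj().T @ psi                   # expansion coefficients c_n = <phi_n|psi>
probs = np.abs(c)**2                       # measurement probabilities P(O_n) = |c_n|^2
print("eigenvalues  :", evals)
print("probabilities:", probs, "(sum =", probs.sum(), ")")
print("<A>          :", np.real(psi.conj() @ A @ psi))   # equals sum_n O_n |c_n|^2

n = np.random.choice(len(evals), p=probs)  # simulate one measurement outcome
psi_after = evecs[:, n]                    # collapsed state |phi_n>
print("result O_n =", evals[n], "; an immediate repeat now yields this value with probability 1")
```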
Continuous, nondegenerate spectrum
Let \(\hat{O}\) be an observable. By assumption, \(\hat{O}\) has continuous eigenstates \(|O\rangle\), with corresponding distinct eigenvalues \(O\). The eigenvalues form a continuous spectrum filling the interval (a,b). The state vector can be written in terms of the eigenstates as
\[ |\psi\rangle = \int_a^b c(O)\, |O\rangle \, dO , \]
where \(c(O) = \langle O | \psi \rangle\) is a complex-valued function. The eigenvalue that fills up the interval is the possible value of measurement. The corresponding probability is described by a probability density function given by
\[ P(O) = \frac{|c(O)|^2}{\int_a^b |c(O')|^2 \, dO'} , \]
where \(P(O)\,dO\) is the probability of a result between \(O\) and \(O + dO\). Usually \(|\psi\rangle\) is assumed to be normalized, i.e. \(\int_a^b |c(O)|^2 \, dO = 1\). Therefore, the expression above is reduced to
\[ P(O) = |c(O)|^2 . \]

Degenerate spectra

Density matrix formulation
Main article: Density matrix
Let \(\hat{O}\) be an observable, and suppose that it has discrete eigenvalues \(O_1, O_2, \ldots\), associated with eigenspaces \(V_1, V_2, \ldots\) respectively. Let \(P_n\) be the projection operator into the space \(V_n\). Assume the system is prepared in the state described by the density matrix ρ. Then measuring \(\hat{O}\) can yield any of the results \(O_n\), with corresponding probabilities given by
\[ \mathrm{Prob}(O_n) = \mathrm{Tr}(P_n \rho) . \]
If the result of the measurement is \(O_n\), the new density matrix is
\[ \rho_n = \frac{P_n \rho P_n}{\mathrm{Tr}(P_n \rho)} , \]
where the difference is that \(\rho\) is the density matrix describing the entire ensemble, whereas \(\rho_n\) is the density matrix describing the sub-ensemble whose measurement result was \(O_n\).

Statistics of measurement
Suppose we take a measurement corresponding to observable \(\hat{O}\), on a state whose quantum state is \(|\psi\rangle\).
• The mean (average) value of the measurement is \(\langle \hat{O} \rangle = \langle \psi | \hat{O} | \psi \rangle\) (see expectation value).
These are direct consequences of the above formulas for measurement probabilities.

Suppose that we have a particle in a 1-dimensional box, set up initially in the ground state \(\psi_1\). As can be computed from the time-independent Schrödinger equation, the energy of this state is \(E_1 = \pi^2 \hbar^2 / (2mL^2)\) (where m is the particle's mass and L is the box length), and the spatial wavefunction is \(\psi_1(x) = \sqrt{2/L}\, \sin(\pi x / L)\). If the energy is now measured, the result will certainly be \(E_1\), and this measurement will not affect the wavefunction.
Next suppose that the particle's position is measured. If the measurement result was \(x = S\), then the wavefunction after measurement will be the position eigenstate \(\delta(x - S)\). If the particle's position is immediately measured again, the same position will be obtained. The new wavefunction can, like any wavefunction, be written as a superposition of eigenstates of any observable. In particular, using energy eigenstates \(\psi_n(x) = \sqrt{2/L}\, \sin(n\pi x / L)\), we have
\[ \delta(x - S) = \sum_n c_n \psi_n(x) , \qquad c_n = \psi_n(S) . \]
If we now leave this state alone, it will smoothly evolve in time according to the Schrödinger equation. But suppose instead that an energy measurement is immediately taken. Then the possible energy values \(E_n\) will be measured with relative probabilities:
\[ |c_n|^2 = \frac{2}{L} \sin^2\!\left(\frac{n\pi S}{L}\right) , \]
and moreover if the measurement result is \(E_n\), then the new state will be the energy eigenstate \(\psi_n\).

Wavefunction collapse

von Neumann measurement scheme
Let the quantum state be in the superposition \(|\psi\rangle = \sum_n c_n |a_n\rangle\), where \(|a_n\rangle\) are eigenstates of the operator for the so-called "measurement" prior to von Neumann's second apparatus. In order to make the "measurement", the system described by \(|\psi\rangle\) needs to interact with the measuring apparatus described by the quantum state \(|\phi\rangle\), so that the total wave function before the measurement and interaction with the second apparatus is \(|\psi\rangle |\phi\rangle\). During the interaction of object and measuring instrument the unitary evolution is supposed to realize the following transition from the initial to the final total wave function:
\[ |\psi\rangle |\phi\rangle = \sum_n c_n |a_n\rangle |\phi\rangle \;\rightarrow\; \sum_n c_n |a_n\rangle |\phi_n\rangle , \]
where \(|\phi_n\rangle\) are orthonormal states of the measuring apparatus. The unitary evolution above is referred to as premeasurement. The relation with wave function collapse is established by calculating the final density operator of the object,
\[ \rho = \sum_n |c_n|^2\, |a_n\rangle \langle a_n| , \]
from the final total wave function. This density operator is interpreted by von Neumann as describing an ensemble of objects being after the measurement with probability \(|c_n|^2\) in the state \(|a_n\rangle\). The transition in which the vectors for fixed n are the degenerate eigenvectors of the measured observable. 
For an arbitrary state described by a density operator \(\rho\), the Lüders projection corresponding to the measurement result \(O_n\) is given by
\[ \rho \;\rightarrow\; \rho_n = \frac{P_n \rho P_n}{\mathrm{Tr}(P_n \rho)} . \]

Measurement of the second kind — with irreversible detection

Decoherence in quantum measurement
One can also introduce the interaction with the environment \(|e\rangle\), so that, in a measurement of the first kind, after the interaction the total wave function takes a form
\[ \sum_n c_n |a_n\rangle |\phi_n\rangle |e_n\rangle , \]
which is related to the phenomenon of decoherence. The above is completely described by the Schrödinger equation and there are not any interpretational problems with this. Now the problematic wavefunction collapse does not need to be understood as a process on the level of the measured system, but can also be understood as a process on the level of the measuring apparatus, or as a process on the level of the environment. Studying these processes provides considerable insight into the measurement problem by avoiding the arbitrary boundary between the quantum and classical worlds, though it does not explain the presence of randomness in the choice of final eigenstate. If the set of states \(\{|a_n\rangle\}\), \(\{|\phi_n\rangle\}\), or \(\{|e_n\rangle\}\)

Interaction without interaction in quantum measurement
Interaction without interaction is a new quantum mechanical measurement effect, which states that when one motion A does not interact with another motion B in a system, and we measure the physical quantity associated with the motion A, the other motion B will nevertheless have an effect on the measured physical quantity. The coinage follows John Wheeler's style; for example, Wheeler's coinages include 'mass without mass', 'charge without charge' and 'law without law'.[6] The motion B is usually a classical harmonic vibration; two measured quantities associated with the motion A have been proposed, i.e. the quantum entanglement of the two two-level atoms in a single-mode polarized cavity field and the number of atoms in an atomic beam reaching the atomic detector. Because of the non-unity trace over the state of a classical harmonic oscillator, during a shorter time interval than its period T, the measured entanglement concurrence between the two atoms is modified by the vibrant factor ,[7] and the registered number of atoms of the translational motion should be multiplied by another vibrant factor.[8][9] Actually if the Hamiltonian of the system is given by \(H = H_A + H_B\), in which the coupling term between the motion A and the motion B is absent, then the state of the system reads . The measured quantity belonging to the motion A should be , and usually the trace over the motion B is unity, so our conventional intuition holds. However, in some conditions the trace over the motion B, for instance a classical harmonic vibration during a shorter time interval than its period, should be less than unity, and then the measurement effect of interaction without interaction appears. Surprisingly and interestingly the measurement effect for an atomic beam is also a macroscopic quantum phenomenon, because the classical harmonic vibration and the process of registering the number of atoms by the atomic detector are both regarded as macroscopic events. The measurement effect for an atomic beam is potentially important in the detection of gravitational waves, because the vibrant factor is independent of the amplitude and the initial phase, which implies that an atomic beam can be used to detect the extremely weak classical harmonic vibrations induced by gravitational waves. 
See also[edit] 3. ^ George S. Greenstein & Arthur G. Zajonc (2006). The Quantum Challenge: Modern Research On The Foundations Of Quantum Mechanics (2nd ed.). ISBN 076372470X. 5. ^ M.O. Scully; W.E. Lamb; A. Barut (1987). "On the theory of the Stern–Gerlach apparatus" (PDF). Foundations of Physics. 17: 575–583. Bibcode:1987FoPh...17..575S. doi:10.1007/BF01882788. Retrieved 9 November 2012. 6. ^ C. Misner; K. Thorne; W. Zurek (2009). "John Wheeler, relativity and quantum information" (PDF). Physics Today. 64: 40. 7. ^ Y.Y. Huang (2016). "Classical harmonic vibrations with micro amplitudes and low frequencies monitored by quantum entanglement". Optical Review. 23: 92. doi:10.1007/s10043-015-0151-0. 8. ^ Y.Y. Huang (2014). "One atomic beam as a detector of classical harmonic vibrations with micro amplitudes and low frequencies". Journal of the Korean Physical Society. 64: 775. doi:10.3938/jkps.64.775. 9. ^ Y.Y. Huang (2014). "Detecting the classical harmonic vibrations of micro amplitudes and low frequencies with an atomic Mach–Zehnder interferometer". General Relativity and Gravitation. 46: 1614. doi:10.1007/s10714-013-1614-x. 10. ^ a b Hrvoje Nikolić (2007). "Quantum mechanics: Myths and facts". Foundations of Physics. 37: 1563–1611. arXiv:quant-ph/0609163. Bibcode:2007FoPh...37.1563N. doi:10.1007/s10701-007-9176-y. 11. ^ S. Gröblacher; et al. (2007). "An experimental test of non-local realism". Nature. 446 (871): 871–5. arXiv:0704.2529. Bibcode:2007Natur.446..871G. doi:10.1038/nature05677. PMID 17443179.
Measurement in quantum mechanics From Wikipedia, the free encyclopedia   (Redirected from Quantum measurement) Jump to: navigation, search A measurement always causes the system to jump into an eigenstate of the dynamical variable that is being measured, the eigenvalue this eigenstate belongs to being equal to the result of the measurement. P.A.M. Dirac (1958) in "The Principles of Quantum Mechanics" p. 36 The framework of quantum mechanics requires a careful definition of measurement. The issue of measurement lies at the heart of the problem of the interpretation of quantum mechanics, for which there is currently no consensus. Measurement from a practical point of view[edit] Measurement plays an important role in quantum mechanics, and it is viewed in different ways among various interpretations of quantum mechanics. In spite of considerable philosophical differences, different views of measurement almost universally agree on the practical question of what results from a routine quantum-physics laboratory measurement. To understand this, the Copenhagen interpretation, which has been commonly used,[1] is employed in this article. Qualitative overview[edit] In classical mechanics, a simple system consisting of only one single particle is fully described by the position \vec{x} (t) and momentum \vec{p} (t) of the particle. As an analogue, in quantum mechanics a system is described by its quantum state, which contains the probabilities of possible positions and momenta. In mathematical language, all possible pure states of a system form an abstract vector space called Hilbert space, which is typically infinite-dimensional. A pure state is represented by a state vector in the Hilbert space. Once a quantum system has been prepared in laboratory, some measurable quantity such as position or energy is measured. For pedagogic reasons, the measurement is usually assumed to be ideally accurate. The state of a system after measurement is assumed to "collapse" into an eigenstate of the operator corresponding to the measurement. Repeating the same measurement without any evolution of the quantum state will lead to the same result. If the preparation is repeated, subsequent measurements will likely lead to different results. The predicted values of the measurement are described by a probability distribution, or an "average" (or "expectation") of the measurement operator based on the quantum state of the prepared system.[2] The probability distribution is either continuous (such as position and momentum) or discrete (such as spin), depending on the quantity being measured. The measurement process is often considered as random and indeterministic. Nonetheless, there is considerable dispute over this issue. In some interpretations of quantum mechanics, the result merely appears random and indeterministic, whereas in other interpretations the indeterminism is core and irreducible. A significant element in this disagreement is the issue of "collapse of the wavefunction" associated with the change in state following measurement. There are many philosophical issues and stances (and some mathematical variations) taken—and near universal agreement that we do not yet fully understand quantum reality. In any case, our descriptions of dynamics involve probabilities, not certainties. Quantitative details[edit] The mathematical relationship between the quantum state and the probability distribution is, again, widely accepted among physicists, and has been experimentally confirmed countless times. 
This section summarizes this relationship, which is stated in terms of the mathematical formulation of quantum mechanics. Measurable quantities ("observables") as operators[edit] Main article: Observable It is a postulate of quantum mechanics that all measurements have an associated operator (called an observable operator, or just an observable), with the following properties: 1. The observable is a self-adjoint operator mapping a Hilbert space (namely, the state space, which consists of all possible quantum states) into itself. 2. Thus, the observable's eigenvectors (called an eigenbasis) form an orthonormal basis that span the state space in which that observable exists. Any quantum state can be represented as a superposition of the eigenstates of an observable. 3. Hermitian operators' eigenvalues are real. The possible outcomes of a measurement are precisely the eigenvalues of the given observable. 4. For each eigenvalue there are one or more corresponding eigenvectors (eigenstates). A measurement results in the system being in the eigenstate corresponding to the eigenvalue result of the measurement. If the eigenvalue determined from the measurement corresponds to more than one eigenstate ("degeneracy"), instead of being in a definite state, the system is in a sub-space of the measurement operator corresponding to all the states having that eigenvalue. Important examples of observables are: Operators can be noncommuting. Two Hermitian operators commute if (and only if) there is at least one basis of vectors, each of which is an eigenvector of both operators (this is sometimes called a simultaneous eigenbasis). Noncommuting observables are said to be incompatible and cannot in general be measured simultaneously. In fact, they are related by an uncertainty principle as discovered by Werner Heisenberg. Measurement probabilities and wavefunction collapse[edit] There are a few possible ways to mathematically describe the measurement process (both the probability distribution and the collapsed wavefunction). The most convenient description depends on the spectrum (i.e., set of eigenvalues) of the observable. Discrete, nondegenerate spectrum[edit] Let \hat{O} be an observable. By assumption, \hat{O} has discrete eigenstates |1 \rang, |2 \rang, |3 \rang,... with corresponding distinct eigenvalues O_1, O_2, O_3,.... That is, the states are nondegenerate. Consider a system prepared in state |\psi \rang. Since the eigenstates of the observable \hat{O} form a complete basis called eigenbasis, the state vector |\psi \rang can be written in terms of the eigenstates as |\psi\rang = c_1 | 1 \rang + c_2 | 2 \rang + c_3 | 3 \rang + \cdots, where c_1,c_2,\ldots are complex numbers in general. The eigenvalues O_1, O_2, O_3,... are all possible values of the measurement. The corresponding probabilities are given by \Pr( O_n ) = \frac{ | \lang n | \psi \rang |^2 }{ |\lang \psi | \psi\rang |} = \frac{ | c_n |^2 }{\sum_k | c_k |^2} Usually |\psi\rang is assumed to be normalized, i.e. \lang \psi | \psi\rang=1. Therefore, the expression above is reduced to \Pr( O_n ) = | \lang n | \psi \rang |^2 = | c_n |^2 If the result of the measurement is O_n, then the system (after measurement) is in pure state |n\rang. That is, | \psi' \rang = | n \rang so any repeated measurement of {\hat O} will yield the same result O_n. When there is a discontinuous change in state due to a measurement that involves discrete eigenvalues, that is called wavefunction collapse. 
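A minimal numerical sketch of the discrete, nondegenerate rule just stated; the three eigenvalues and the amplitudes below are made-up illustrative values, and the final lines simulate a single measurement together with the associated collapse onto the obtained eigenstate.

```python
import numpy as np

# Toy observable with three nondegenerate eigenvalues O_1, O_2, O_3 (made-up values),
# written in its own eigenbasis |1>, |2>, |3>.
eigenvalues = np.array([1.0, 2.0, 3.0])

# A state |psi> = c_1|1> + c_2|2> + c_3|3> with arbitrary, not yet normalized amplitudes.
c = np.array([1.0 + 1.0j, 0.5 + 0.0j, 2.0 - 1.0j])

# Born rule: Pr(O_n) = |c_n|^2 / sum_k |c_k|^2.
probabilities = np.abs(c) ** 2 / np.sum(np.abs(c) ** 2)
print("Pr(O_n):", probabilities, " sum =", probabilities.sum())

# Expectation value <O> = sum_n O_n Pr(O_n).
print("<O> =", np.sum(eigenvalues * probabilities))

# One simulated measurement: draw an outcome, then "collapse" onto the eigenstate |n>.
rng = np.random.default_rng(0)
n = rng.choice(len(eigenvalues), p=probabilities)
psi_after = np.zeros_like(c)
psi_after[n] = 1.0
print("outcome:", eigenvalues[n], " post-measurement state:", psi_after)
```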
For some, this is simply a description of a reasonably accurate discontinuous change in a mathematical representation of physical reality; for others, depending on philosophical orientation, this is a fundamentally serious problem with quantum theory. Continuous, nondegenerate spectrum[edit] Let \hat{O} be an observable. By assumption, \hat{O} has continuous eigenstate |x\rang, with corresponding distinct eigenvalue x. The eigenvalue forms a continuous spectrum filling the interval (a,b). |\psi\rang = \int_a^b c(x) | x \rang \, dx, where c(x) is a complex-valued function. The eigenvalue that fills up the interval (a,b) is the possible value of measurement. The corresponding probability is described by a probability function given by \Pr( d<x<e ) = \frac{\int_d^e|\lang x|\psi\rang|^2\, dx}{\int_a^b\lang\psi|\psi\rang\, dx} = \frac{ \int_d^e | c(x) |^2 \, dx }{\int_a^b | c(x) |^2 \, dx} where (d,e)\subseteq(a,b). Usually |\psi\rang is assumed to be normalized, i.e. \int_a^b\lang\psi|\psi\rang\, dx=1. Therefore, the expression above is reduced to \Pr( d<x<e ) = \int_d^e | c(x) |^2 \, dx If the result of the measurement is x, then the system (after measurement) is in pure state |x\rang. That is, |\psi'\rang = |x\rang. Alternatively, it is often possible and convenient to analyze a continuous-spectrum measurement by taking it to be the limit of a different measurement with a discrete spectrum. For example, an analysis of scattering involves a continuous spectrum of energies, but by adding a "box" potential (which bounds the volume in which the particle can be found), the spectrum becomes discrete. By considering larger and larger boxes, this approach need not involve any approximation, but rather can be regarded as an equally valid formalism in which this problem can be analyzed. Degenerate spectra[edit] If there are multiple eigenstates with the same eigenvalue (called degeneracies), the analysis is a bit less simple to state, but not essentially different. In the discrete case, for example, instead of finding a complete eigenbasis, it is a bit more convenient to write the Hilbert space as a direct sum of eigenspaces. The probability of measuring a particular eigenvalue is the squared component of the state vector in the corresponding eigenspace, and the new state after measurement is the projection of the original state vector into the appropriate eigenspace. Density matrix formulation[edit] Main article: Density matrix Instead of performing quantum-mechanics computations in terms of wavefunctions (kets), it is sometimes necessary to describe a quantum-mechanical system in terms of a density matrix. The analysis in this case is formally slightly different, but the physical content is the same, and indeed this case can be derived from the wavefunction formulation above. The result for the discrete, degenerate case, for example, is as follows: Let {\hat O} be an observable, and suppose that it has discrete eigenvalues O_1,O_2,O_3,\ldots, associated with eigenspaces V_1,V_2,\ldots respectively. Let P_n be the projection operator into the space V_n. Assume the system is prepared in the state described by the density matrix ρ. Then measuring {\hat O} can yield any of the results O_1, O_2, O_3, \ldots, with corresponding probabilities given by \Pr( O_n ) = \mathrm{Tr}(P_n \rho) where \mathrm{Tr} denotes trace. 
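A minimal sketch of the density-matrix rule Pr(O_n) = Tr(P_n ρ); the three-level system, the degenerate eigenvalue, and the 50/50 ensemble are invented for illustration, and the last two lines preview the post-measurement (Lüders) update that is stated in general just below.

```python
import numpy as np

# Observable on C^3 written in its eigenbasis: eigenvalue 1 with a two-dimensional
# eigenspace V_1, eigenvalue 2 with a one-dimensional eigenspace V_2 (made-up example).
P1 = np.diag([1.0, 1.0, 0.0])   # projector onto V_1
P2 = np.diag([0.0, 0.0, 1.0])   # projector onto V_2

# A mixed state: half the ensemble is |a> = (1,0,0), half is |b> = (0,1,1)/sqrt(2).
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(a, a.conj()) + 0.5 * np.outer(b, b.conj())

# Measurement probabilities Pr(O_n) = Tr(P_n rho).
p1 = np.trace(P1 @ rho).real
p2 = np.trace(P2 @ rho).real
print("Pr(O=1) =", p1, " Pr(O=2) =", p2, " total =", p1 + p2)

# Density matrix of the sub-ensemble that gave result O=1.
rho_1 = P1 @ rho @ P1 / p1
print(rho_1)
```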
If the result of the measurement is n, then the new density matrix will be \rho' = \frac{P_n \rho P_n}{\mathrm{Tr}(P_n \rho)} Alternatively, one can say that the measurement process results in the new density matrix \rho'' = \sum_n P_n \rho P_n where the difference is that \rho'' is the density matrix describing the entire ensemble, whereas \rho' is the density matrix describing the sub-ensemble whose measurement result was n. Statistics of measurement[edit] As detailed above, the result of measuring a quantum-mechanical system is described by a probability distribution. Some properties of this distribution are as follows. Suppose we take a measurement corresponding to observable \hat O, on a system whose quantum state is |\psi\rang. • The mean (average) value of the measurement is \lang \psi | \hat O | \psi \rang (see expectation value). • The variance of the measurement is \lang \psi | \hat O^2 | \psi \rang - (\lang \psi | \hat O | \psi \rang)^2. These are direct consequences of the above formulas for measurement probabilities. Suppose that we have a particle in a 1-dimensional box, set up initially in the ground state |\psi_1\rang. As can be computed from the time-independent Schrödinger equation, the energy of this state is E_1=\frac{\pi^2\hbar^2}{2mL^2} (where m is the particle's mass and L is the box length), and the spatial wavefunction is \lang x|\psi_1\rang = \sqrt{ \frac{2}{L} }~{\rm sin}\left(\frac{\pi x}{L}\right). If the energy is now measured, the result will certainly be E_1, and this measurement will not affect the wavefunction. Next suppose that the particle's position is measured. The position x will be measured with probability density \Pr(S<x<S+dS) = \frac{2}{L}~{\rm sin}^2\left(\frac{\pi S}{L}\right)dS. If the measurement result was x=S, then the wavefunction after measurement will be the position eigenstate |x=S\rang. If the particle's position is immediately measured again, the same position will be obtained. The new wavefunction |x=S\rang can, like any wavefunction, be written as a superposition of eigenstates of any observable. In particular, using energy eigenstates, | \psi_n\rang, we have |x=S\rang = \sum_n | \psi_n \rangle \left\langle \psi_n | x=S \right\rangle = \sum_n | \psi_n \rangle \sqrt{ \frac{2}{L} }~{\rm sin}\left(\frac{n \pi S}{L}\right) If we now leave this state alone, it will smoothly evolve in time according to the Schrödinger equation. But suppose instead that an energy measurement is immediately taken. Then the possible energy values E_n will be measured with relative probabilities: \Pr(E_n) = |\lang \psi_n | S \rang|^2 = \frac{2}{L}~{\rm sin}^2\left(\frac{n \pi S}{L}\right) and moreover if the measurement result is E_n, then the new state will be the energy eigenstate |\psi_n\rang. So in this example, due to the process of wavefunction collapse, a particle initially in the ground state can end up in any energy level, after just two subsequent non-commuting measurements are made. Wavefunction collapse[edit] The process in which a quantum state becomes one of the eigenstates of the operator corresponding to the measured observable is called "collapse", or "wavefunction collapse". The final eigenstate appears randomly with a probability equal to the square of its overlap with the original state.[2] The process of collapse has been studied in many experiments, most famously in the double-slit experiment. The wavefunction collapse raises serious questions regarding "the measurement problem",[3] as well as questions of determinism and locality, as demonstrated in the EPR paradox and later in GHZ entanglement. (See below.)
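The relative probabilities in the box example are easy to tabulate. The following is a small illustrative sketch, assuming an arbitrary box length, an arbitrary measured position S, and keeping only the first ten energy levels.

```python
import numpy as np

L = 1.0        # box length (arbitrary units)
S = 0.3 * L    # assume the position measurement returned x = S

# Relative probabilities Pr(E_n) proportional to (2/L) sin^2(n pi S / L).
n = np.arange(1, 11)
relative = (2.0 / L) * np.sin(n * np.pi * S / L) ** 2

# |x=S> is not normalizable, so only relative weights are meaningful; normalizing
# over the retained levels just makes them easier to compare.
weights = relative / relative.sum()
for level, w in zip(n, weights):
    print(f"n = {level:2d}   relative weight = {w:.3f}")
```

Starting from the ground state, a position measurement followed by an energy measurement can therefore return any level whose weight above is nonzero, which is exactly the point made in the example.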
In the last few decades, major advances have been made toward a theoretical understanding of the collapse process. This new theoretical framework, called quantum decoherence, supersedes previous notions of instantaneous collapse and provides an explanation for the absence of quantum coherence after measurement. Decoherence correctly predicts the form and probability distribution of the final eigenstates, and explains the apparent randomness of the choice of final state in terms of einselection.[4] von Neumann measurement scheme[edit] The von Neumann measurement scheme, the ancestor of quantum decoherence theory, describes measurements by taking into account the measuring apparatus which is also treated as a quantum object. "Measurement" of the first kind — premeasurement without detection[edit] Let the quantum state be in the superposition \scriptstyle |\psi\rang = \sum_n c_n |\psi_n\rang , where \scriptstyle |\psi_n\rang are eigenstates of the operator for the so-called "measurement" prior to von Neumann's second apparatus. In order to make the "measurement", the system described by \scriptstyle |\psi\rang needs to interact with the measuring apparatus described by the quantum state \scriptstyle |\phi\rang , so that the total wave function before the measurement and interaction with the second apparatus is \scriptstyle |\psi\rang |\phi\rang . During the interaction of object and measuring instrument the unitary evolution is supposed to realize the following transition from the initial to the final total wave function: |\psi\rang |\phi\rang \rightarrow \sum_n c_n |\psi_n\rang |\phi_n\rang \quad \text{(measurement of the first kind),} where \scriptstyle |\phi_n\rang are orthonormal states of the measuring apparatus. The unitary evolution above is referred to as premeasurement. The relation with wave function collapse is established by calculating the final density operator of the object \scriptstyle \sum_n |c_n|^2 |\psi_n\rang\lang \psi_n| from the final total wave function. This density operator is interpreted by von Neumann as describing an ensemble of objects being after the measurement with probability \scriptstyle |c_n|^2 in the state \scriptstyle |\psi_n\rang. The transition |\psi\rang \rightarrow \sum_n |c_n|^2 |\psi_n\rang \lang \psi_n| is often referred to as weak von Neumann projection, the wave function collapse or strong von Neumann projection |\psi\rang \rightarrow \sum_n |c_n|^2 |\psi_n\rang \lang \psi_n| \rightarrow |\psi_n\rang being thought to correspond to an additional selection of a subensemble by means of observation. In case the measured observable has a degenerate spectrum, weak von Neumann projection is generalized to Lüders projection |\psi\rang \rightarrow \sum_n |c_n|^2 P_n,\; P_n = \sum_i |\psi_{ni}\rang \lang \psi_{ni}|, in which the vectors \scriptstyle |\psi_{ni}\rang for fixed n are the degenerate eigenvectors of the measured observable. For an arbitrary state described by a density operator \scriptstyle \rho Lüders projection is given by \rho \rightarrow \sum_n P_n \rho P_n. Measurement of the second kind — with irreversible detection[edit] In a measurement of the second kind the unitary evolution during the interaction of object and measuring instrument is supposed to be given by |\psi\rang |\phi\rang \rightarrow \sum_n c_n |\chi_n\rang |\phi_n\rang, in which the states \scriptstyle |\chi_n\rang of the object are determined by specific properties of the interaction between object and measuring instrument. 
They are normalized but not necessarily mutually orthogonal. The relation with wave function collapse is analogous to that obtained for measurements of the first kind, the final state of the object now being \scriptstyle |\chi_n\rang with probability \scriptstyle |c_n|^2. Note that many present-day measurement procedures are measurements of the second kind, some even functioning correctly only as a consequence of being of the second kind. For instance, a photon counter, detecting a photon by absorbing and hence annihilating it, thus ideally leaving the electromagnetic field in the vacuum state rather than in the state corresponding to the number of detected photons; also the Stern–Gerlach experiment would not function at all if it really were a measurement of the first kind.[5] Decoherence in quantum measurement[edit] One can also introduce the interaction with the environment \scriptstyle |e\rang , so that, in a measurement of the first kind, after the interaction the total wave function takes a form \sum_n c_n |\psi_n\rang |\phi_n\rang |e_n \rang, which is related to the phenomenon of decoherence. The above is completely described by the Schrödinger equation and there are not any interpretational problems with this. Now the problematic wavefunction collapse does not need to be understood as a process \scriptstyle |\psi\rangle \rightarrow |\psi_n\rang on the level of the measured system, but can also be understood as a process \scriptstyle |\phi\rangle \rightarrow |\phi_n\rang on the level of the measuring apparatus, or as a process \scriptstyle |e\rangle \rightarrow |e_n\rang on the level of the environment. Studying these processes provides considerable insight into the measurement problem by avoiding the arbitrary boundary between the quantum and classical worlds, though it does not explain the presence of randomness in the choice of final eigenstate. If the set of states \{ |\psi_n\rang\} , \{ |\phi_n\rang\} , or \{ |e_n\rang\} represents a set of states that do not overlap in space, the appearance of collapse can be generated by either the Bohm interpretation or the Everett interpretation which both deny the reality of wavefunction collapse. Both of these are stated to predict the same probabilities for collapses to various states as the conventional interpretation by their supporters. The Bohm interpretation is held to be correct only by a small minority of physicists, since there are difficulties with the generalization for use with relativistic quantum field theory. However, there is no proof that the Bohm interpretation is inconsistent with quantum field theory, and work to reconcile the two is ongoing. The Everett interpretation easily accommodates relativistic quantum field theory. Philosophical problems of quantum measurements[edit] What physical interaction constitutes a measurement?[edit] Until the advent of quantum decoherence theory in the late 20th century, a major conceptual problem of quantum mechanics and especially the Copenhagen interpretation was the lack of a distinctive criterion for a given physical interaction to qualify as "a measurement" and cause a wavefunction to collapse. This is best illustrated by the Schrödinger's cat paradox. Certain aspects of this question are now well understood in the framework of quantum decoherence theory, such as an understanding of weak measurements, and quantifying what measurements or interactions are sufficient to destroy quantum coherence. 
Nevertheless, there remains less than universal agreement among physicists on some aspects of the question of what constitutes a measurement. Does measurement actually determine the state?[edit] The question of whether (and in what sense) a measurement actually determines the state is one which differs among the different interpretations of quantum mechanics. (It is also closely related to the understanding of wavefunction collapse.) For example, in most versions of the Copenhagen interpretation, the measurement determines the state, and after measurement the state is definitely what was measured. But according to the many-worlds interpretation, measurement determines the state in a more restricted sense: In other "worlds", other measurement results were obtained, and the other possible states still exist. Is the measurement process random or deterministic?[edit] As described above, there is universal agreement that quantum mechanics appears random, in the sense that all experimental results yet uncovered can be predicted and understood in the framework of quantum mechanics measurements being fundamentally random. Nevertheless, it is not settled[6] whether this is true, fundamental randomness, or merely "emergent" randomness resulting from underlying hidden variables which deterministically cause measurement results to happen a certain way each time. This continues to be an area of active research.[7] If there are hidden variables, they would have to be "nonlocal". Does the measurement process violate locality?[edit] In physics, the Principle of locality is the concept that information cannot travel faster than the speed of light (also see special relativity). It is known experimentally (see Bell's theorem, which is related to the EPR paradox) that if quantum mechanics is deterministic (due to hidden variables, as described above), then it is nonlocal (i.e. violates the principle of locality). Nevertheless, there is not universal agreement among physicists on whether quantum mechanics is nondeterministic, nonlocal, or both.[6] See also[edit] 1. ^ Hermann Wimmel (1992). Quantum physics & observed reality: a critical interpretation of quantum mechanics. World Scientific. p. 2. ISBN 978-981-02-1010-6. Retrieved 9 May 2011. 2. ^ a b J. J. Sakurai (1994). Modern Quantum Mechanics (2nd ed.). ISBN 0201539292.  3. ^ George S. Greenstein and Arthur G. Zajonc (2006). The Quantum Challenge: Modern Research On The Foundations Of Quantum Mechanics (2nd ed.). ISBN 076372470X.  4. ^ Wojciech H. Zurek, Decoherence, einselection, and the quantum origins of the classical,Reviews of Modern Physics 2003, 75, 715 or 5. ^ M.O. Scully, W.E. Lamb, A. Barut (1987). "On the theory of the Stern–Gerlach apparatus" (PDF). Foundations of Physics 17: 575–583. Bibcode:1987FoPh...17..575S. doi:10.1007/BF01882788. Retrieved 9 November 2012.  6. ^ a b Hrvoje Nikolić (2007). "Quantum mechanics: Myths and facts". Foundation of Physics 37: 1563–1611. arXiv:quant-ph/0609163. Bibcode:2007FoPh...37.1563N. doi:10.1007/s10701-007-9176-y.  7. ^ S. Gröblacher et al. (2007). "An experimental test of non-local realism". Nature 446 (871): 871–5. arXiv:0704.2529. Bibcode:2007Natur.446..871G. doi:10.1038/nature05677. PMID 17443179.  Further reading[edit] • John A. Wheeler and Wojciech Hubert Zurek, eds. (1983). Quantum Theory and Measurement. Princeton University Press. ISBN 0-691-08316-9.  • Vladimir B. Braginsky and Farid Ya. Khalili (1992). Quantum Measurement. Cambridge University Press. ISBN 0-521-41928-X.  External links[edit]
Chemistry 251 » Fall » Full Semester 4 Credits Physical Chemistry I Instructor(s): James M. Farrar Prerequisites: Physics 113-114 or 121-122 and Math 163 or 165. Crosslisting: CHM 441 Course Summary: This course is an introduction to the quantum theory of matter, with particular applications to problems of chemical interest. Our discussion of the subject of quantum chemistry will be based on the Schrödinger equation, the wave equation for matter waves. We will discuss the solutions to the Schrödinger equation for a number of important model systems, including piecewise constant potentials, the simple harmonic oscillator, the rigid rotor, and the Coulomb potential. We will apply these results to chemical bonding and atomic and molecular structure. Chemistry 251 is for undergraduates. There are weekly problem sets. Students also participate in workshops each week. Chemistry 441 is for graduate students who have not had previous coursework in quantum chemistry. Chemistry 441 students will have additional homework assignments. This course uses the Tuesday/Thursday 8:00 - 9:30 am Common Exam time. Course Topics: 1. Introduction, Planck distribution, necessity for quantum hypothesis. 2. Photoelectric effect, heat capacity of solids, line spectra of atoms, Bohr theory of the atom. 3. deBroglie waves, Davisson-Germer experiment, Heisenberg Uncertainty Principle, two-slit diffraction experiment and wave-particle duality. 4. Mathematics of waves, wave equations, separation of variables, solving linear second-order differential equations with constant coefficients. 5. Harmonic oscillator differential equation, clamped string: spatial, temporal solutions, normal modes. 6. Standing waves as superposition of travelling waves, Schrödinger equation for free particle, particle in 1-D infinite square well. 7. Quantization in the 1-D infinite square well, spectra of conjugated molecules, Born interpretation of wavefunctions, linear operators. 9. Postulates of quantum mechanics: maximum information in wavefunction, expectation values, observation of eigenvalues, zero variance of eigenfunctions, operators of quantum mechanics. 10. Wavefunction not an eigenstate of 1-D square well, time-independent Schrödinger equation, stationary states, superposition states. 11. Hermitian operators: eigenvalues real, eigenfunctions are orthogonal, complete. Projections of wavefunction onto basis functions. 12. Completeness, orthogonal expansions, Fourier series: resolution into components; probability of measuring an eigenvalue in terms of Fourier coefficients. 13. Commuting observables, simultaneous eigenfunctions, Schwartz inequality and Uncertainty Principle. 14. Relationships with commutators, time dependence of expectation values, Ehrenfest's Theorem, classical harmonic oscillator. 15. Relative coordinates, Taylor's series expansion of real potentials. Schrödinger equation for harmonic oscillator in reduced coordinates. 16. Asymptotic form for harmonic oscillator wavefunctions. Power series solution to Hermite differential equation. 17. Two-term recursion relations and termination of power series, quantized energy levels. 18. Hermite polynomials, parity, comparison with 1-D particle in a box wavefunctions. 19. Classically forbidden motion, 3-D systems, separability of Hamiltonian, wavefunction, energy. 20. Spherical polar coordinates, rigid rotor, molecular bond lengths. 21. Legendre polynomials, associated Legendre functions, angular momentum commutation relations, eigenfunctions of z-component of angular momentum. 22. 
Physical significance of m-quantum number. Vector model, space quantization, introduction to the hydrogen atom. 23. Radial equation for the hydrogen atom, Laguerre, associated Laguerre polynomials, radial wavefunctions. 24. Radial functions: functional forms and graphs. Angular functions for p-, d- orbitals. Hydrogen atom in a magnetic field. 25. Approximate methods: first order perturbation theory, corrections to the energy. Introduction to the Variation Theorem. 26. Proof of Variation Theorem; Gaussian approximation to the hydrogen atom ground state. 27. Linear variation method. secular determinant and secular equation. 28. Atoms: atomic units. Perturbation approach to the helium atom. Variation theorem and effective nuclear charge. Slater-type orbitals. Self-Consistent Field Method. 29. Hartree, Hartree-Fock method. Electron correlation. Electron spin, Pauli Exclusion Principle, Slater determinant applied to helium atom. 30. Slater determinants for N-electron systems. Coulomb, exchange integrals, Koopman's theorem. Term symbols. 31. Examples of term symbols. Spin-orbit coupling, atomic spectroscopy. Born-Oppenheimer approximation. 32. Heitler-London (Valence Bond) method. Chemical bond arising from exchange integral. 33. Electron spin and the hydrogen molecule. Introduction to the LCAO-MO method. 34. MO theory for second row homonuclear diatomics; molecular term symbols. 35. Semiclassical radiation theory: time-dependent perturbation theory, transition dipole. 36. The electromagnetic spectrum: pure vibrational and rotational spectroscopy. Boltzmann distribution for initial state population. 37. Vibrational-rotational spectroscopy: centrifugal distortion and vibration-rotation interaction. 38. Polyatomic vibrations: degrees of freedom and normal coordinates. 39. Electronic transitions. Franck-Condon Principle. Required Text: Donald A. McQuarrie, Quantum Chemistry, Second Edition, University Science Books, 2008. ISBN 978-1-891389-50-4.
Vector space In mathematics, a vector space (or linear space) is a collection of objects (called vectors) that, informally speaking, may be scaled and added. More formally, a vector space is a set on which two operations, called (vector) addition and (scalar) multiplication, are defined and satisfy certain natural axioms which are listed below. Vector spaces are the basic objects of study in linear algebra, and are used throughout mathematics, science, and engineering. The most familiar vector spaces are two- and three-dimensional Euclidean spaces. Vectors in these spaces can be represented by ordered pairs or triples of real numbers, and are isomorphic to geometric vectors—quantities with a magnitude and a direction, usually depicted as arrows. These vectors may be added together using the parallelogram rule (vector addition) or multiplied by real numbers (scalar multiplication). The behavior of geometric vectors under these operations provides a good intuitive model for the behavior of vectors in more abstract vector spaces, which need not have a geometric interpretation. For example, the set of (real) polynomials forms a vector space. A much more extensive idea of what constitutes a vector space is found in the See also subsection for this article, which provides links to more abstract examples of this term. Motivation and definition The space R2 consisting of pairs of real numbers, (x, y), is a common example for a vector space. It is one because any pair (here a vector) can be added: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2), and any vector (x, y) can be multiplied by a real number s to yield another vector (sx, sy). The general vector space notion is a generalization of this idea. It is more general in several ways: • other fields instead of the real numbers, such as complex numbers or finite fields, are allowed. • the dimension, which is two above, is arbitrary. • most importantly, elements of vector spaces are not usually expressed as linear combinations of a particular set of vectors, i.e. there is no preference of representing the vector (x, y) as (x, y) = x · (1, 0) + y · (0, 1) over, say, (x, y) = (−1/3·x + 2/3·y) · (−1, 1) + (1/3·x + 1/3·y) · (2, 1) The pairs of vectors (1, 0) and (0, 1), or (−1, 1) with (2, 1), are called bases of R2 (see below). Let F be a field (such as the rationals, reals or complex numbers), whose elements will be called scalars. A vector space over the field F is a set V together with two binary operations, satisfying the axioms below. Let u, v, w be arbitrary elements of V, and a, b be elements of F. Associativity of addition u + (v + w) = (u + v) + w Commutativity of addition v + w = w + v Identity element of addition There exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V. Inverse elements of addition For all v ∈ V, there exists an element w ∈ V, called the additive inverse of v, such that v + w = 0.
Distributivity of scalar multiplication with respect to vector addition a (v + w) = a v + a w Distributivity of scalar multiplication with respect to field addition (a + b) v = a v + b v Compatibility of scalar multiplication with field multiplication a (b v) = (ab) v Identity element of scalar multiplication 1 v = v, where 1 denotes the multiplicative identity in F Elementary remarks The first four axioms can be subsumed by requiring the set of vectors to be an abelian group under addition, and the rest are equivalent to a ring homomorphism f from the field into the endomorphism ring of the group of vectors. Then scalar multiplication a v is defined as (f(a))(v). This can be seen as the starting point of defining vector spaces without referring to a field. Some sources choose to also include two axioms of closure u + vV and a vV for all a, u, and v. When the operations are interpreted as maps with codomain V, these closure axioms hold by definition, and do not need to be stated independently. Closure, however, must be checked to determine whether a subset of a vector space is a subspace. Expressions of the form “v a”, where vV and aF, are, strictly speaking, not defined. Because of the commutativity of the underlying field, however, “a v” and “v a” are often treated synonymously. Additionally, if vV, wV, and aF where vector space V is additionally an algebra over the field F then a v w = v a w, which makes it convenient to consider “a v” and “v a” to represent the same vector. There are a number of properties that follow easily from the vector space axioms. Some of them derive from elementary group theory, applied to the (additive) group of vectors: for example the zero vector 0V and the additive inverse −v of a vector v are unique. Other properties can be derived from the distributive law, for example scalar multiplication by zero yields the zero vector and no other scalar multiplication yields the zero vector. The notion of a vector space stems conceptually from affine geometry, via the introduction of coordinates in the plane or usual three-dimensional space. Around 1636, French mathematicians Descartes and Fermat found the bases of analytic geometry by tying the solutions of an equation with two variables to the determination of a plane curve. To achieve a geometric solutions without using coordinates, Bernhard Bolzano introduced in 1804 certain operations on points, lines and planes, which are predecessors of vectors. This work was considered in the concept of barycentric coordinates of August Ferdinand Möbius in 1827. The founding leg of the definition of vectors was the Bellavitis' definition of the bipoint, which is an oriented segment, one of whose ends is the origin and the other one a target. The notion of vector was reconsidered with the presentation of complex numbers by Jean-Robert Argand and William Rowan Hamilton and the inception of quaternions by the latter mathematician, being elements in R2 and R4, respectively. Treating them using linear combinations goes back to Laguerre in 1867, who defined systems of linear equations. In 1857, Cayley introduced the matrix notation which allows one to harmonize and simplify the writing of linear maps between vector spaces. At the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations. His work exceeds the framework of vector spaces, since his introduction of multiplication led him to the concept of algebras. 
Nonetheless, the concepts of dimension and linear independence are present, as well as the scalar product (1844). The primacy of these discoveries was disputed with Cauchy's publication Sur les clefs algébriques. Italian mathematician Peano, one of whose important contributions was the rigorous axiomatisation of extant concepts, in particular the construction of sets, was one of the first to give the modern definition of vector spaces around the end of 19th century. An important development of this concept is due to the construction of function spaces by Henri Lebesgue. This was later formalized by David Hilbert and Stefan Banach, in his 1920 PhD thesis. At this time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces. Also at this time, the first studies concerning infinite dimensional vector spaces were done. Linear maps and matrices Two given vector spaces V and W (over the same field F) can be related by linear maps (also called linear transformations) from V to W. These are functions that are compatible with the relevant structure—i.e., they preserve sums and scalar products: f(v + w) = f(v) + f(w) and f(a · v) = a · f(v). An isomorphism is a linear map such that there exists an inverse map such that the two possible compositions and are identity maps. Equivalently, f is both one-to-one (injective) and onto (surjective). If there exists an isomorphism between V and W, the two spaces are said to be isomorphic; they are then essentially identical as vector spaces, since all identities holding in V are, via f, transported to similar ones in W, and vice versa via g. Given any two vector spaces V and W, the set of linear maps VW forms a vector space HomF(V, W) (also denoted L(V, W): two such maps f and g are added by adding them pointwise, i.e. and scalar multiplication is given by (a·f)(v) = a·f(v). The case of W = F, the base field, is of particular interest. The space of linear maps from V to F is called the dual vector space, denoted V. Matrices are a useful notion to encode linear maps. They are written as a rectangular array of scalars, i.e. elements of some field F. Any m-by-n matrix A gives rise to a linear map from Fn, the vector space consisting of n-tuples x = (x1, ..., xn) to Fm, by the following (x_1, x_2, ..., x_n) mapsto left(sum_{i=1}^m x_i a_{i1}, sum_{i=1}^m x_i a_{i2}, ..., sum_{i=1}^m x_i a_{in} right), or, using the matrix multiplication of the matrix A with the coordinate vector x: mathbf x mapsto A mathbf x. Moreover, after choosing bases of V and W (see below), any linear map is uniquely represented by a matrix via this assignment. The determinant of a square matrix tells whether the associated map is an isomorphism or not: to be so it is sufficient and necessary that the determinant is nonzero. Eigenvalues and eigenvectors A particularly important case are endomorphisms, i.e. maps . In this case, vectors v can be compared to their image under f, f(v). Any vector v satisfying λ · v = f(v), where λ is a scalar, is called an eigenvector, with eigenvalue λ. Rephrased, this means that v is an element of kernel of the difference (the identity map In the finite-dimensional case, this can be rephrased using determinants: f having eigenvalue λ is equivalent to det (fλ · Id) = 0. Spelling out the definition of the determinant, the left hand side turns out to be polynomial function in λ, called the characteristic polynomial of f. 
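As a concrete illustration of the two preceding ideas (a matrix acting as a linear map, and eigenvectors as solutions of det(f − λ · Id) = 0), here is a minimal numerical sketch; the 2-by-2 matrix and the test vectors are arbitrary choices.

```python
import numpy as np

# A made-up 2x2 matrix A, viewed as the linear map x |-> A x on R^2.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

v = np.array([1.0, -3.0])
w = np.array([0.5, 2.0])
a = 4.0

# Linearity: f(v + w) = f(v) + f(w) and f(a v) = a f(v).
assert np.allclose(A @ (v + w), A @ v + A @ w)
assert np.allclose(A @ (a * v), a * (A @ v))

# The map is an isomorphism exactly when det(A) is nonzero.
print("det(A) =", np.linalg.det(A))

# Eigenpairs: lambda and v with A v = lambda v, i.e. roots of the characteristic polynomial.
eigenvalues, eigenvectors = np.linalg.eig(A)
for lam, vec in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ vec, lam * vec)
    print("eigenvalue", lam, "with eigenvector", vec)
```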
If the field F is large enough to contain a zero of this polynomial (which automatically happens for F algebraically closed, such as F = C) any linear map has at least one eigenvector. The vector space V may or may not be spanned by eigenvectors, a phenomenon governed by Jordan–Chevalley decomposition. Subspaces and quotient spaces In general, a nonempty subset W of a vector space V that is closed under addition and scalar multiplication is called a subspace of V. Subspaces of V are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set of vectors is called its span. Expressed in terms of elements, the span is the subspace consisting of finite sums (called linear combinations) a1v1 + a2v2 + ... + anvn, where the ai and vi (i = 1, ..., n) are scalars and vectors, respectively. The counterpart to subspaces are quotient vector spaces. Given any subspace WV, the quotient space V/W ("V modulo W") is defined as follows: as a set, it consists of v + W = {v + w, wW}, where v is an arbitrary vector in V. The sum of two such elements v1 + W and v2 + W is (v1 + v2) + W, and scalar multiplication is given by a · (v + W) = (a · v) + W. The key point in this definition is that v1 + W = v2 + W if and only if the difference of v1 and v2 lies in W. This way, the quotient space "forgets" information that is contained in the subspace W. For any linear map f: VW, the kernel ker(f) consists of elements v that are mapped to 0 in W. It, as well as the image im(f) = {f(v), vV}, are linear subspaces of V and W, respectively. There is a fundamental isomorphism V / ker(f) ≅ im(f). The existence of kernels and images as above is part of the statement that the category of vector spaces (over a fixed field F) is an abelian category. Examples of vector spaces Coordinate spaces and function spaces The first example of a vector space over a field F is the field itself, equipped with its standard addition and multiplication. This is the particular case n = 1 in the vector space usually denoted Fn, known as the coordinate space where n is an integer. Its elements are n-tuples (f1, f2, ..., fn), where the fi are elements of F. Infinite coordinate sequences, and more generally functions from any fixed set Ω to a field F also form vector spaces. The latter applies in particular to common geometric situations, such as Ω being the real line or an interval, open subsets of Rn etc. The vector spaces stemming of this type are called function spaces. Many notions in topology and analysis, such as continuity, integrability or differentiability are well-behaved with respect to linearity, i.e. sums and scalar multiples of functions possessing such a property will still have that property. Hence, the set of such functions are vector spaces. The methods of functional analysis provide finer information about these spaces, see below. The vector space F[x] is given by polynomial functions, i.e. f (x) = rnxn + rn−1xn−1 + ... + r1x + r0, where the coefficients r0, ..., rn are in F, or power series, which are similar, except that infinitely many terms are allowed. Systems of linear equations Systems of linear equations also lead to vector spaces. Indeed this source may be seen as one of the historical reasons for developing this notion. For example, the solutions of a + 3b + c = 0 4a + 2b + 2c = 0 given by triples with arbitrary a, b = a/2, and c = −5a/2 form a vector space. 
In matrix notation, this can be interpreted as the solution of the equation Ax = 0, where x is the vector (a, b, c) and A is the matrix A = \begin{bmatrix} 1 & 3 & 1 \\ 4 & 2 & 2 \end{bmatrix}. Equivalently, this solution space is the kernel of the linear map attached to A (see above). In a similar vein, the solutions of homogeneous linear differential equations, for example f ''(x) + 2f '(x) + f (x) = 0 also form vector spaces: since the derivatives of the sought function f appear linearly (as opposed to f ''(x)^2, for example) and (f + g)' = f ' + g ', any linear combination of solutions is still a solution. In this particular case the solutions are given by f(x) = a e^{−x} + b x e^{−x}, where a and b are arbitrary constants, and e = 2.718.... Algebraic number theory A common situation in algebraic number theory is a field F containing a smaller field E. Then, by the given multiplication and addition operations of F, F becomes an E-vector space. F is also called a field extension. As such, C, the complex numbers, are a vector space over R. Another example is Q(z), the smallest field containing the rationals and some complex number z. The dimension of this vector space (see below) is closely tied to z being algebraic or transcendental. Basic constructions In addition to the above concrete examples, there are a number of standard linear algebraic constructions that yield vector spaces related to given ones. In addition to the concrete definitions given below, they are also characterized by universal properties, which determine an object X by specifying the linear maps from X to any other vector space. Direct product and direct sum The direct product \prod_{i \in I} V_i of a family of vector spaces Vi, where i runs through some index set I, consists of tuples (vi)i∈I, i.e. for any index i, one element vi of Vi is given. Addition and scalar multiplication are performed componentwise: (vi) + (wi) = (vi + wi), a · (vi) = (a · vi). A variant of this construction is the direct sum \bigoplus_{i \in I} V_i (also called coproduct and denoted \coprod_{i \in I} V_i), where only tuples with finitely many nonzero vectors are allowed. If the index set I is finite, the two constructions agree, but differ otherwise. Tensor product The tensor product V ⊗F W, or simply V ⊗ W, is a vector space consisting of finite (formal) sums of symbols v1 ⊗ w1 + v2 ⊗ w2 + ... + vn ⊗ wn, subject to certain rules mimicking bilinearity, such as a · (v ⊗ w) = (a · v) ⊗ w = v ⊗ (a · w). The tensor product—one of the central notions of multilinear algebra—can be seen as the extension of the hierarchy of scalars, vectors and matrices. Via the fundamental isomorphism HomF(V, W) ≅ V∗ ⊗F W, matrices, which are essentially the same as linear maps, i.e. contained in the left hand side, translate into an element of the tensor product of the dual of V with W. In the important case V ⊗ V, the tensor product can be loosely thought of as adding formal "products" of vectors (which, ad hoc, don't exist in vector spaces). In general, there are no relations between the two tensors v1 ⊗ v2 and v2 ⊗ v1. Forcing two such elements to be equal leads to the symmetric algebra, whereas forcing v1 ⊗ v2 = − v2 ⊗ v1 yields the exterior algebra. The latter is the linear algebraic fundament of differential forms: they are elements of the exterior algebra of the cotangent space to manifolds.
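The solution space of the system above is exactly the kernel of A. The following sketch checks this numerically; the use of the SVD to extract a null-space basis and the value a = 2 are illustrative choices, not part of the article.

```python
import numpy as np

# The homogeneous system from the text: a + 3b + c = 0, 4a + 2b + 2c = 0.
A = np.array([[1.0, 3.0, 1.0],
              [4.0, 2.0, 2.0]])

# ker(A) via the SVD: right-singular vectors belonging to (numerically) zero
# singular values span the null space.
_, s, Vt = np.linalg.svd(A)
singular = np.concatenate([s, np.zeros(A.shape[1] - len(s))])
kernel_basis = Vt[singular < 1e-10]
print("basis of ker(A):\n", kernel_basis)

# The parametric solution quoted above, (a, a/2, -5a/2), checked for a = 2.
a = 2.0
x = np.array([a, a / 2, -5 * a / 2])
assert np.allclose(A @ x, 0)
print("A x =", A @ x)
```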
Tensors, i.e. elements of some tensor product, have various applications; for example the Riemann curvature tensor encodes all curvatures of a manifold at a time, which finds applications in general relativity, where the Einstein curvature tensor describes the curvature of space-time. Bases and dimension If, in a (finite or infinite) set {vi}i∈I, no vector can be removed without changing the span, the set is said to be linearly independent. Equivalently, an equation 0 = a1vi1 + a2vi2 + ... + anvin can only hold if all scalars a1, ..., an equal zero. A linearly independent set whose span is V is called a basis for V. Hence, every element can be expressed as a finite sum of basis elements, and any such representation is unique (once a basis is chosen). Vector spaces are sometimes introduced from this coordinatised viewpoint. Using Zorn's Lemma (which is equivalent to the axiom of choice), it can be proven that every vector space has a basis. It follows from the ultrafilter lemma, which is weaker than the axiom of choice, that all bases of a given vector space have the same cardinality. This cardinality is called the dimension of the vector space. Historically, the existence of bases was first shown by Felix Hausdorff. It is known that, given the rest of the axioms, this statement is in fact equivalent to the axiom of choice. For example, the dimension of the coordinate space Fn is n, since any element in this space (x1, x2, ..., xn) can be uniquely expressed as a linear combination of the n vectors e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), to en = (0, 0, ..., 0, 1), namely the sum \sum_{i=1}^n x_i \mathbf{e}_i. By the unicity of the decomposition of any element into a linear combination of chosen basis elements vi, linear maps are completely determined by specifying f(vi). Given two vector spaces, V and W, of the same dimension, a choice of bases of V and W and a bijection between the sets of bases gives rise to the map that maps any basis element of V to the corresponding basis element of W. This map is, by its very definition, an isomorphism. Therefore, vector spaces over a given field are fixed up to isomorphism by the dimension. Thus any n-dimensional vector space over F is isomorphic to Fn.
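The coordinates of a vector with respect to a chosen basis can be computed by solving one linear system. The sketch below is an added illustration using the basis {(−1, 1), (2, 1)} of R2 mentioned near the beginning of the article; the vector x is arbitrary.

```python
import numpy as np

# The alternative basis {(-1, 1), (2, 1)} of R^2, written as the columns of a matrix.
B = np.array([[-1.0, 2.0],
              [ 1.0, 1.0]])

x = np.array([3.0, 5.0])

# Linear independence <=> full rank, so B is indeed a basis.
assert np.linalg.matrix_rank(B) == 2

# Coordinates of x in this basis: the unique solution of B @ coords = x.
coords = np.linalg.solve(B, x)
print("coordinates of x in {(-1,1),(2,1)}:", coords)

# Uniqueness of the representation: rebuilding x from the coordinates gives x back.
assert np.allclose(B @ coords, x)

# Compare with the closed form quoted earlier: (-x/3 + 2y/3, x/3 + y/3).
assert np.allclose(coords, [-x[0] / 3 + 2 * x[1] / 3, x[0] / 3 + x[1] / 3])
```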
Vector spaces with additional structures From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space is characterized, up to isomorphism, by its dimension. The needs of functional analysis require considering additional structures, especially with respect to convergence of infinite series. On the other hand, the notion of bases as explained above can be difficult to apply to infinite-dimensional spaces, thus also calling for an adapted approach. Therefore, it is common to study vector spaces with certain additional structures. This is often necessary to recover ordinary notions from geometry or analysis. Topological vector spaces Convergence issues are addressed by considering vector spaces V which also carry a compatible topology, i.e. a structure that allows one to talk about elements being close to each other. Compatible here means that addition and scalar multiplication should be continuous maps, i.e. if x and y in V, and a in F, vary by a bounded amount (the field also has to carry a topology in this setting), then so do x + y and ax. Only in such topological vector spaces can one consider infinite sums of vectors, i.e. series, through the notion of convergence. For example, the infinite sum \sum_{i=0}^{\infty} f_i, where the fi are some elements of a given vector space of real or complex functions, means the limit of the corresponding finite sums of functions. A way of ensuring the existence of limits of infinite series as above is to restrict attention to complete vector spaces, i.e. ones in which any Cauchy sequence (which can be thought of as a sequence that "should" possess a limit) does have a limit. Roughly, completeness means the absence of holes. E.g. the rationals are not complete, since there are series of rational numbers converging to irrational numbers such as \sqrt{2}. A less immediate example is provided by functions equipped with the Riemann integral. In the realm of topological vector spaces, such as Banach and Hilbert spaces, all notions should be coherent with the topology. For example, instead of considering all linear maps (also called functionals) V → W, it is useful to require maps to be continuous. For example, the dual space V∗ consists of continuous functionals V → R (or C). If V is some vector space of (well-behaved) functions, this dual space, called the space of distributions, consists of what can be thought of as generalized functions, and finds applications in solving differential equations. Applying the dual construction twice yields the bidual V∗∗. There is always a natural, injective map V → V∗∗. This map may or may not be an isomorphism. If so, V is called reflexive. Banach spaces Banach spaces, in honor of Stefan Banach, are complete normed vector spaces, i.e. the topology comes from a norm, a datum that allows one to measure lengths of vectors. A common example is the vector space lp consisting of infinite vectors with real entries x = (x1, x2, ...) whose p-norm (1 ≤ p ≤ ∞), given by \|\mathbf{x}\|_p := \left(\sum_i |x_i|^p\right)^{1/p} for p < ∞ and \|\mathbf{x}\|_\infty := \sup_i |x_i|, is finite. In the case of finitely many entries, i.e. Rn, the topology does not yield additional insight—in fact, all topologies on finite-dimensional topological vector spaces are equivalent, i.e. give rise to the same notion of convergence. In the infinite-dimensional situation, however, the topologies for different p are inequivalent. E.g. consider the sequence xn of vectors xn = (2^{−n}, 2^{−n}, ..., 2^{−n}, 0, 0, ...), whose first 2^n components equal 2^{−n} and whose remaining components are 0. Then \|x_n\|_1 = \sum_{i=1}^{2^n} 2^{-n} = 1 and \|x_n\|_\infty = 2^{-n}, i.e. the sequence xn, with n tending to ∞, converges to the zero vector for p = ∞, but does not for p = 1. This is an example of the remark that the study of topological vector spaces is richer than that of vector spaces without additional data. More generally, it is possible to consider functions endowed with a norm that replaces the sum in the above p-norm by an integral, specifically the Lebesgue integral \|f\|_p := \left(\int |f(x)|^p \, dx\right)^{1/p}. The set of integrable functions on a given domain Ω (for example an interval) satisfying \|f\|_p < ∞, and equipped with this norm, is denoted Lp(Ω). Since the above uses the Lebesgue integral (as opposed to the Riemann integral), these spaces are complete. Concretely this means that for any sequence of functions satisfying the condition \lim_{k, n \to \infty} \int_\Omega |f_k(x) - f_n(x)|^p \, dx = 0, there exists a function f(x) belonging to the vector space Lp(Ω) such that \lim_{k \to \infty} \int_\Omega |f(x) - f_k(x)|^p \, dx = 0. Imposing boundedness conditions not only on the function, but also on its derivatives, leads to Sobolev spaces.
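The inequivalence of the p-norms can be seen directly on the sequence x_n just described. The short computation below is a small illustrative sketch; the range of n is arbitrary.

```python
import numpy as np

def x(n):
    """The vector from the text: 2**n entries equal to 2**(-n) (trailing zeros omitted)."""
    return np.full(2 ** n, 2.0 ** (-n))

for n in range(1, 8):
    v = x(n)
    norm_1 = np.sum(np.abs(v))       # p = 1
    norm_inf = np.max(np.abs(v))     # p = infinity
    print(f"n = {n}:  ||x_n||_1 = {norm_1:.3f}   ||x_n||_inf = {norm_inf:.5f}")

# ||x_n||_1 stays equal to 1 while ||x_n||_inf = 2**(-n) tends to 0, so the sequence
# converges to the zero vector in the sup-norm but not in the 1-norm.
```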
Hilbert spaces Slightly more special, but equally crucial to functional analysis, is the case where the topology is induced by an inner product, which allows one to measure angles between vectors. This entails that lengths of vectors can be defined too, namely by \|\mathbf{v}\| = \sqrt{\langle v, v \rangle}. If such a space is complete, it is called a Hilbert space, in honor of David Hilbert. A key case is the Hilbert space L2(Ω), whose inner product is given by \langle f \mid g \rangle = \int_\Omega \overline{f(x)}\, g(x)\, dx, with \overline{f(x)} being the complex conjugate of f(x). Reversing this direction of thought, i.e. finding a sequence of functions fn that approximate a given function, is equally crucial. Early analysis, for example in the guise of the Taylor approximation, established an approximation of differentiable functions f by polynomials. Ad hoc, this technique is local, i.e. approximating f closely at some point x may not approximate the function globally. The Stone–Weierstrass theorem, however, states that every continuous function on [a, b] can be approximated as closely as desired by a polynomial. More generally, and more conceptually, the theorem yields a simple description of what "basic functions" suffice to generate a Hilbert space, in the sense that the closure of their span (i.e. finite sums and limits of those) is the whole space. For distinction, a basis in the linear algebraic sense as above is then called a Hamel basis. Not only does the theorem exhibit polynomials as sufficient for approximation purposes; together with the Gram–Schmidt process, it also allows the construction of a basis of orthogonal polynomials. Orthogonality means that \langle p \mid q \rangle = 0, i.e. the polynomials obtained do not interfere. Similar statements hold for Legendre polynomials, Bessel functions and hypergeometric functions. Resolving general functions into sums of trigonometric functions is known as the Fourier expansion, a technique much used in engineering. It is possible to describe any function f(x) on a bounded, closed interval (or equivalently, any periodic function) as the limit of the sum f_N(x) = \frac{a_0}{2} + \sum_{m=1}^{N} \left[ a_m \cos(mx) + b_m \sin(mx) \right] as N → ∞, with suitable coefficients am and bm, called Fourier coefficients. This expansion is surprising insofar as countably many functions, namely the rational multiples of sin(mx) and cos(mx), where m takes values in the integers, are enough to express any other function, of which there are uncountably many. The solutions to various important differential equations can be interpreted in terms of Hilbert spaces. For example, a great many fields in physics and engineering lead to such equations, and frequently solutions with particular physical properties are used as basis functions, often orthogonal, that serve as the axes in a corresponding Hilbert space. As an example from physics, the time-dependent Schrödinger equation in quantum mechanics describes the change of physical properties in time, by means of a partial differential equation determining a wavefunction. Definite values for physical properties such as energy or momentum correspond to eigenvalues of an associated (linear) differential operator, and the associated wavefunctions are called eigenstates. The spectral theorem describes the representation of linear operators that act upon functions in terms of these eigenfunctions and their eigenvalues.
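The Fourier partial sums f_N can be computed numerically for any concrete periodic function. The sketch below is an added illustration: the square wave, the sampling grid, and the values of N are arbitrary choices, and the coefficients a_m, b_m are obtained by a simple Riemann sum over one period.

```python
import numpy as np

# A 2*pi-periodic square wave sampled on a uniform grid over one period (made-up example).
x = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
dx = x[1] - x[0]
f = np.sign(np.sin(x))

def coeff(g):
    """(1/pi) times the integral of g over one period, by a simple Riemann sum."""
    return np.sum(g) * dx / np.pi

def partial_sum(N):
    """f_N(x) = a_0/2 + sum_{m=1}^N (a_m cos(m x) + b_m sin(m x))."""
    s = np.full_like(x, coeff(f) / 2)
    for m in range(1, N + 1):
        s += coeff(f * np.cos(m * x)) * np.cos(m * x)
        s += coeff(f * np.sin(m * x)) * np.sin(m * x)
    return s

for N in (1, 5, 25, 125):
    l2_error = np.sqrt(np.sum((partial_sum(N) - f) ** 2) * dx)
    print(f"N = {N:3d}   L2 error over one period = {l2_error:.3f}")
```

The L2 error shrinks as N grows, illustrating convergence of the expansion in the Hilbert-space sense (pointwise behaviour near the jumps of the square wave is a separate matter).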
Distributions (or "generalized functions") are a powerful instrument for solving differential equations, and they go beyond the framework of Hilbert spaces. Concretely, a distribution is a continuous linear map assigning a number to each function in a given vector space of test functions. A standard example is given by integrating the test function over some domain Ω: $f \mapsto \int_\Omega f(x)\, dx$. The great use of distributions stems from the fact that standard analytic notions such as derivatives can be generalized to distributions. Thus differential equations can be solved in the distributional sense first. This can be accomplished using Green's functions. Then, the solution found can in some cases be proven (e.g. using the Riesz representation theorem) to actually be a true function.

Algebras over fields

In general, vector spaces do not possess a multiplication operation. (An exceptional case is that of finite-dimensional vector spaces over finite fields, which can be turned into (finite) fields as well.) A vector space equipped with an additional bilinear operator defining the multiplication of two vectors is an algebra over a field. An important example is the ring of polynomials F[x] in one variable, x, with coefficients in a field F, or similarly with several variables. In this case the multiplication is both commutative and associative. These rings, and their quotient rings, form the basis of algebraic geometry, because they are the rings of functions of algebraic geometric objects. Another crucial example is that of Lie algebras, which are neither commutative nor associative; the failure to be so is measured by the constraints ([x, y] denotes the multiplication of x and y): [x, y] = −[y, x] and the Jacobi identity [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0. The standard example is the vector space of n-by-n matrices, with [x, y] taken to be the commutator xy − yx. Lie algebras are intimately connected to Lie groups. Closely related are Poisson algebras, which combine a Lie bracket with a compatible commutative multiplication.

Ordered vector spaces

An ordered vector space is a vector space equipped with an order ≤, i.e. vectors can be compared. Rn can be ordered, for example, by comparing the coordinates of the vectors. Riesz spaces present further key cases.

Modules

Modules are to rings what vector spaces are to fields, i.e. the very same axioms, applied to a ring R instead of a field F, yield modules. In contrast to the good understanding of vector spaces offered by linear algebra, the theory of modules is in general much more complicated. This is due to the presence of elements r ∈ R that do not possess multiplicative inverses. For example, modules need not have bases, as the Z-module (i.e. abelian group) Z/2 shows; those modules that do (including all vector spaces) are known as free modules.

Vector bundles

A family of vector spaces, parametrised continuously by some topological space X, is a vector bundle. More precisely, a vector bundle E over X is given by a continuous map π : E → X which is locally a product of X with some (fixed) vector space V, i.e. such that for every point x in X there is a neighborhood U of x such that the restriction of π to π−1(U) equals the projection V × U → U. The case dim V = 1 is called a line bundle. The interest in this notion comes from the fact that while the situation is simple to oversee locally, there may be global twisting phenomena. For example, the Möbius strip can be seen as a line bundle over the circle S1 (at least if one extends the bounded interval to infinity). The (non-)existence of vector bundles with certain properties can tell something about the underlying space X.
For example, over the 2-sphere S2 there is no tangent vector field which is everywhere nonzero, as opposed to the circle S1. The study of all vector bundles over some topological space is known as K-theory. An algebraic counterpart to vector bundles is given by locally free modules, which—in the guise of projective modules—are important in homological algebra and algebraic K-theory.

Affine and projective spaces

Affine spaces can be thought of as vector spaces whose origin is not specified. Formally, an affine space is a set with a free and transitive vector space action. In particular, a vector space is an affine space over itself, by the structure map V² → V, (a, b) ↦ a − b. Sets of the form x + Rm (viewed as a subset of some bigger Rn), i.e. translates of a linear subspace by a fixed vector x, yield affine spaces, too. The set of one-dimensional subspaces of a fixed vector space V is known as projective space, an important geometric object formalizing the idea of parallel lines intersecting at infinity. More generally, the Grassmann manifold consists of linear subspaces of higher (fixed) dimension n. Finally, flag manifolds parametrize flags, i.e. chains of subspaces (with fixed dimensions) 0 = V₀ ⊂ V₁ ⊂ ... ⊂ Vₙ = V.
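Returning to the Lie-algebra bracket from the "Algebras over fields" section above: the following is a minimal sketch (assuming NumPy; the matrix size and random seed are arbitrary choices) checking, for random n-by-n matrices, that the commutator [x, y] = xy − yx is antisymmetric and satisfies the Jacobi identity up to floating-point round-off.

```python
# Minimal sketch (assuming NumPy): numerically verify the Lie-algebra axioms
# for the commutator bracket on n-by-n matrices.
import numpy as np

rng = np.random.default_rng(0)

def bracket(x, y):
    """Commutator [x, y] = xy - yx."""
    return x @ y - y @ x

n = 4
x, y, z = (rng.standard_normal((n, n)) for _ in range(3))

antisym = np.max(np.abs(bracket(x, y) + bracket(y, x)))
jacobi = np.max(np.abs(bracket(x, bracket(y, z))
                       + bracket(y, bracket(z, x))
                       + bracket(z, bracket(x, y))))
print("antisymmetry defect:", antisym)   # ~1e-15, i.e. zero up to round-off
print("Jacobi defect:      ", jacobi)    # ~1e-14, i.e. zero up to round-off
```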
A question about tunneling

1. Apr 19, 2008 #1
This isn't actually a homework question/problem, but a conceptual problem that I've been having regarding tunneling. Can someone please tell me what will happen to a particle's energy if the particle tunnels through some potential barrier, given that the particle's energy is less than that of the potential barrier? I would assume that the particle's kinetic energy would definitely decrease, but how could you find by how much it decreases? It's not simply (V − Uo), is it? (where V is the particle's KE and Uo is the potential of the barrier). Also, if the potential barrier has lower energy than the energy of the particle, how does the energy of the particle change once it crosses the barrier? If someone could please help me out with understanding this I would appreciate it greatly.

2. Apr 19, 2008 #2
Never mind, found an explanation online: the energy of the particle doesn't change after tunneling -- but the wavefunction amplitude decreases, which makes sense.

3. Apr 19, 2008 #3
Yes, energy is conserved even in quantum mechanics. You're right, the wavefunction amplitude should decrease because the particle is less likely to be found in the region after the tunneling barrier. What about the wavelength of the particle?

4. Apr 19, 2008 #4
The wavelength of the particle should be the same on either side of the barrier, since lambda is fixed by the energy, which is the same on both sides. However, if the particle has more initial energy V than the potential barrier Uo, its kinetic energy in the region of the barrier will be (V − Uo), right? Then the wavelength will increase because the particle's kinetic energy is decreased, but only in that region. Is that last part right? Thanks for the help!

5. Apr 19, 2008 #5
Right, for the particular case of tunneling the wavelength stays the same because the kinetic energy is the same. However, think about a potential step (where V < E). Since E = K + V, then K = E − V, and so if a particle sees a step up or down then the wavelength will change. Are you more likely to see a particle in a certain region if it moves faster through the region or slower? Slower, obviously, and so we can see that the kinetic energy is related to the probability, which then relates back to the wavefunction. A big kinetic energy means a small wavelength, and also a small probability. These same arguments extend into all kinds of situations, which is why I brought it up, so that you can tell what the solution to the Schrödinger equation will look like even before you solve it. For example, you could figure out the stationary states of the harmonic oscillator (a U-shaped potential) just by looking at the energies and wavelengths, and how they relate to the stationary state's wavefunction. Knowing how to see the solutions before you start in will either save you calculations altogether, or it will help you check what calculations you do make. Hope I made things clearer rather than muckier.

6. Apr 19, 2008 #6
Thank you so much for helping me out with my question and going beyond. You definitely cleared things up.
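A minimal numerical illustration of the step-potential point made in post #5 (this sketch assumes NumPy and uses illustrative energy values that are not from the thread): with V < E, the kinetic energy over the step is K = E − V, so the de Broglie wavelength λ = h/√(2mK) gets longer where the particle moves more slowly.

```python
# Minimal sketch (assuming NumPy, SI units): de Broglie wavelength of an
# electron before and after a potential step with V < E. The energies below
# are illustrative values only.
import numpy as np

h = 6.626e-34          # Planck constant, J s
m_e = 9.109e-31        # electron mass, kg
eV = 1.602e-19         # one electron volt in joules

def de_broglie_wavelength(kinetic_energy_eV):
    K = kinetic_energy_eV * eV
    return h / np.sqrt(2 * m_e * K)

E = 10.0               # total energy of the electron, eV (illustrative)
V_step = 6.0           # height of the potential step, eV (illustrative)

print(f"wavelength before the step: {de_broglie_wavelength(E) * 1e9:.3f} nm")
print(f"wavelength over the step:   {de_broglie_wavelength(E - V_step) * 1e9:.3f} nm")
```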
Hydrogen atoms under the magnifying glass

May 22, 2013

Figure: (left) two-dimensional projection of electrons resulting from excitation of hydrogen atoms to four electronic states labeled with a set of quantum numbers (n1,n2,m) and having (from top to bottom) 0, 1, 2 and 3 nodes in the wave function for the ξ = r+z parabolic coordinate; (right) comparison of the experimentally measured radial distributions (solid lines) with results from quantum mechanical calculations (dashed lines), illustrating that the experiment has measured the nodal structure of the quantum mechanical wave function.

To describe the microscopic properties of matter and its interaction with the external world, quantum mechanics uses wave functions, whose structure and time dependence are governed by the Schrödinger equation. In atoms, electronic wave functions describe - among other things - charge distributions existing on length-scales that are many orders of magnitude removed from our daily experience. In physics laboratories, experimental observations of charge distributions are usually precluded by the fact that the process of taking a measurement changes a wave function and selects one of its many possible realizations. For this reason, physicists usually know the shape of charge distributions through calculations that are shown in textbooks. That is to say, until now. An international team coordinated by researchers from the Max Born Institute has succeeded in building a microscope that makes it possible to magnify the wave function of excited electronic states of the hydrogen atom by a factor of more than twenty-thousand, leading to a situation where the nodal structure of these electronic states can be visualized on a two-dimensional detector. The results were published in Physical Review Letters and provide the realization of an idea proposed approximately three decades ago. The development of quantum mechanics in the early part of the last century has had a profound influence on the way that scientists understand the world. Quantum mechanics extended the existing worldview based on classical, Newtonian mechanics by providing an alternative description of the micro-scale world, containing numerous elements that cannot be classically intuited, such as wave-particle duality and the importance of interference and entanglement. Central to quantum mechanics is the concept of a wave function that satisfies the time-dependent Schrödinger equation. According to the Copenhagen interpretation, the wave function describes the probability of observing the outcome of measurements that are performed on a quantum mechanical system, such as measurements of the energy of the system or the position or momenta of its constituents. This allows reconciling the occurrence of non-classical phenomena on the micro-scale with manifestations and observations made on the macro-scale, which correspond to viewing one or more of countless realizations allowed for by the wave function. Despite the overwhelming impact on modern electronics and photonics, grasping quantum mechanics and the many possibilities that it describes continues to be intellectually challenging, and has over the years motivated numerous experiments illustrating the intriguing predictions contained in the theory.
For example, the 2012 Nobel Prize in Physics was awarded to Haroche and Wineland for their work on the measurement and control of individual quantum systems in quantum non-demolition experiments, paving the way to more accurate optical clocks and, potentially, the future realization of quantum computers. Using short laser pulses, experiments have been performed illustrating how coherent superpositions of quantum mechanical stationary states describe electrons that move on periodic orbits around nuclei. The wave function of each of these electronic stationary states is a standing wave, with a nodal pattern that reflects the quantum numbers of the state. The observation of such nodal patterns has included the use of scanning tunneling methods on surfaces and recent laser ionization experiments, where electrons were pulled out of and driven back towards their parent atoms and molecules by using an intense laser field, leading to the production of light in the extreme ultra-violet wavelength region that encoded the initial wave function of the atom or molecule at rest. About thirty years ago, Russian theoreticians proposed an alternative experimental method for measuring properties of wave functions. They suggested that experiments ought to be performed studying laser ionization of atomic hydrogen in a static electric field. They predicted that projecting the electrons onto a two-dimensional detector placed perpendicularly to the static electric field would allow the experimental measurement of interference patterns directly reflecting the nodal structure of the wave function. The fact that this is so is due to the special status of hydrogen as nature's only single-electron atom. Due to this circumstance, the hydrogen wave functions can be written as the product of two wave functions that describe how the wave function changes as a function of two so-called "parabolic coordinates", which are linear combinations of the distance of the electron from the H+ nucleus "r", and the displacement of the electron along the electric field axis "z". Importantly, the shape of the two parabolic wave functions is independent of the strength of the static electric field, and therefore stays the same as the electron travels (over a distance of about half a meter, in our experimental realization!) from the place where the ionization takes place to the two-dimensional detector. To turn this appealing idea into experimental reality was by no means simple. Since hydrogen atoms do not exist as a chemically stable species, they first had to be produced by laser dissociation of a suitable precursor molecule (hydrogen di-sulfide). Next, the hydrogen atoms had to be optically excited to the electronic states of interest, requiring another two precisely tunable laser sources. Finally, once this optical excitation had launched the electrons, a delicate electrostatic lens was needed to magnify the physical dimensions of the wave function to millimeter-scale dimensions where they could be observed with the naked eye on a two-dimensional image intensifier and recorded with a camera system. The main result is shown in the figure below. This figure shows raw camera data for four measurements, where the hydrogen atoms were excited to states with 0, 1, 2 and 3 nodes in the wave function for the ξ = r+z parabolic coordinate. As the experimentally measured projections on the two-dimensional detector show, the nodes can be easily recognized in the measurement.
At this point, the experimental arrangement served as a microscope, allowing us to look deep inside the hydrogen atom, with a magnification of approximately a factor of twenty-thousand. Besides validating an idea that was theoretically proposed more than 30 years ago, our experiment provides a beautiful demonstration of the intricacies of quantum mechanics, as well as a fruitful playground for further research, where fundamental implications of quantum mechanics can be further explored, including for example situations where the hydrogen atoms are exposed at the same time to both electric and magnetic fields. The simplest atom in nature still has a lot of exciting physics to offer!

More information: physics.aps.org/articles/v6/58
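For readers who want to reproduce the nodal counting numerically, the following is a minimal sketch (assuming NumPy and SciPy; the quantum numbers and the scaling of the coordinate are illustrative, and normalization is ignored). For the field-free hydrogen atom, the ξ-dependent factor of the wave function in parabolic coordinates is, up to normalization and a rescaling of ξ, proportional to exp(−ξ/2) ξ^(|m|/2) L_{n1}^{|m|}(ξ), where L is a generalized Laguerre polynomial, so the state labelled (n1, n2, m) has exactly n1 nodes in ξ — matching the 0, 1, 2 and 3 nodes in the figure.

```python
# Minimal sketch (assuming NumPy/SciPy): count the nodes in xi of the
# parabolic-coordinate factor of the hydrogen wave function, using the
# generalized Laguerre polynomial L_{n1}^{|m|}.
import numpy as np
from scipy.special import genlaguerre

def count_xi_nodes(n1, m, xi_max=60.0, n_grid=20000):
    xi = np.linspace(1e-6, xi_max, n_grid)
    psi_xi = np.exp(-xi / 2) * xi ** (abs(m) / 2) * genlaguerre(n1, abs(m))(xi)
    return int(np.sum(np.diff(np.sign(psi_xi)) != 0))

for n1 in range(4):      # compare with the 0, 1, 2 and 3 nodes in the figure
    print(f"n1 = {n1}: {count_xi_nodes(n1, m=0)} node(s) in xi")
```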
The hallmark of a calculus course is epsilon-delta proofs. As one moves closer and closer to a point of interest (reducing δ, the distance from the point of interest), the phenomenon's measure is bounded by something times ε, a linear error term. The bound comes from the continuity of the function, also defined in epsilon–delta terms. So everything moves gradually, in a sense. There are no sudden jumps. But human affairs are characterized by jumping, leaping, gapping, sparking, snapping, exploding processes. One studied example is stock prices. If terrible news hits about a public company, you won't be able to sell your shares for the previous price minus epsilon. You'll have to unload a gap or a yawn lower. Not that it was ever possible to trade in arbitrarily small δ intervals anyway. The smallest increment on the NYSE is $.01 (it used to be a fraction of a dollar), which by infinitesimal standards is huge.

The Continuum

Speaking of infinitesimal standards, I need to digress for a few paragraphs so my point will make sense to all readers. Real numbers ℝ — any number you can construct with infinitely many decimal places, so essentially any number that most people consider a number at all — are thick, dense, an uncountable thicket. They are complete. "The Reals" ℝ are made up of rational ℚ and irrational ℚᶜ numbers. Rational numbers ℚ are ratios of regular counting numbers, 1, 2, 3, ℕ, etc., and their negatives −ℕ. However the rational part ℚ of the reals ℝ — the part that's easy to conceive and talk about and imagine — is a negligible part of the real number line. The irrational part ℚᶜ is further divided into algebraic 𝓐 and transcendental 𝓐ᶜ parts. Again the algebraic part 𝓐 is easier to explain and is, literally, negligible in size compared to the transcendental part.

Figure: decomposition of the real numbers

Algebraic numbers 𝓐 are the x's that solve various algebraic equations, like x² = 2. Whatever number x you square to get 2 is an algebraic number 𝓐. We invent a symbol √ and put it in front of the 2 symbol to express the number we're talking about. Although there is no such symbol to express the number x that solves x² + x = 2 — square this number, then add the number itself to the result, and you get two — that is also an algebraic number. Now add in all other finite-degree polynomial equations with integer or fraction coefficients. That's a lot of equations. Their solutions constitute the algebraic numbers 𝓐. But like I said above, 99% of the real numbers — those simple things you learned about in 3rd grade when they taught you the decimal system — are NOT IN THERE. (99% of infinity, what am I talking about? It doesn't make sense, I know, just work with me here.)

Figure: the algebraic numbers in the complex plane, coloured by degree

OK so now I have gotten to these hard-to-describe numbers called transcendental 𝓐ᶜ. The black part in the picture above. It took me a few paragraphs just to sloppily say what they are. If you have never thought about this issue before it might take you hours to wrap your head around them. But it's these transcendental numbers 𝓐ᶜ — can't be assembled without an infinitely long equation — which essentially make calculus work. Calculus depends upon the real numbers ℝ and continuity therein, and without this thick, dense, impenetrable subset 𝓐ᶜ called transcendentals, its theorems would be unprovable and illegitimate. I don't know about you, but I haven't seen any transcendental numbers around lately! Other than e and π, I mean.
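(A minimal numerical aside, assuming NumPy, which is obviously not part of the original post: enumerate the quadratic equations ax² + bx + c = 0 with small integer coefficients and collect their roots. Every such root is an algebraic number; √2 shows up as a root of x² − 2 = 0, while π, being transcendental, is not a root of any integer polynomial at all, so no root lands on it.)

```python
# Minimal sketch (assuming NumPy): roots of small integer quadratics are all
# algebraic numbers; sqrt(2) is among them, pi is not.
import numpy as np
from itertools import product

roots = set()
for a, b, c in product(range(-5, 6), repeat=3):
    if a == 0:
        continue
    for r in np.roots([a, b, c]):
        roots.add(complex(np.round(r, 10)))

print(f"{len(roots)} distinct roots of small quadratics found")
print("sqrt(2) among them:", any(abs(r - np.sqrt(2)) < 1e-9 for r in roots))
print("pi among them:     ", any(abs(r - np.pi) < 1e-9 for r in roots))
```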
Despite transcendental numbers being the most numerous, only a few are known, most based around e and π. That's right, these are the largest exclusive subset of the real numbers, and we don't really know that many of them. We use them in proofs but not by name. Just knowing that they're there ensures that calculus works. But in the real world, you can't buy e/3 eggs. That, among other reasons, means you can't optimize — even in principle — a purchasing decision at the grocery using calculus. (Maybe you don't think you would be using calculus anyway, but the economic theorists treat you like a gas particle dispersing in the room — and while the particle doesn't think it's using calculus to decide where to move, it obeys those laws. So they are relevant somewhere, contrary to the title of this post.) So calculus works in gas diffusion and solving various states of atoms / molecules via Schrödinger equations. But what about us people? Here is a topographical picture of where people live. Notice that there is a lot of spikiness. Sudden jumps. For a long time there are no people because you're in the middle of Nevada, and then — all of a sudden — Vegas! Holy cow there are people EVERYWHERE. Flowing in and out at a phenomenal rate. But there is zero flow and zero inhabitants just a few miles away. Molecules don't behave like that.

Economic Activity

Here is another picture — got it out of the same book, which my girlfriend is reading — of economic output by region. Again, much spikiness. Not much calculus. Discontinuous outputs. Maybe that is how we are. Maybe calculus doesn't work on us.
Singularities and Black Holes

First published Mon Jun 29, 2009

Black holes are regions of spacetime from which nothing, not even light, can escape. A typical black hole is the result of the gravitational force becoming so strong that one would have to travel faster than light to escape its pull. Such black holes contain a spacetime singularity at their center; thus we cannot fully understand a black hole without also understanding the nature of singularities. However, black holes raise several additional conceptual issues. As purely gravitational entities, black holes are at the heart of many attempts to formulate a theory of quantum gravity. Although they are regions of spacetime, black holes are also thermodynamical entities, with a temperature and an entropy; however, it is far from clear what statistical physics underlies these thermodynamical facts. The evolution of black holes is also apparently in conflict with standard quantum evolution, for such evolution rules out the sort of increase in entropy that seems to be required when black holes are present. This has led to a debate over what fundamental physical principles are likely to be preserved in, or violated by, a full quantum theory of gravity.

1. Spacetime Singularities
1.1 Path Incompleteness
1.2 Boundary Constructions
1.3 Curvature Pathology

2. The Significance of Singularities

When considering the implications of spacetime singularities, it is important to note that we have good reasons to believe that the spacetime of our universe is singular. In the late 1960s, Hawking, Penrose, and Geroch proved several singularity theorems, using the path-incompleteness definition of singularities (see, e.g., Hawking and Ellis 1973). These theorems showed that if certain reasonable premises were satisfied, then in certain circumstances singularities could not be avoided. Notable among these conditions was the "positive energy condition" that captures the idea that energy is never negative. These theorems indicate that our universe began with an initial singularity, the "Big Bang," 13.7 billion years ago. They also indicate that in certain circumstances (discussed below) collapsing matter will form a black hole with a central singularity. Should these results lead us to believe that singularities are real? Many physicists and philosophers resist this conclusion. Some argue that singularities are too repugnant to be real. Others argue that the singular behavior at the center of black holes and at the beginning of time points to the limit of the domain of applicability of general relativity. However, some are inclined to take general relativity at its word, and simply accept its prediction of singularities as a surprising, but perfectly consistent, account of the geometry of our world.

2.1 Definitions and Existence of Singularities

As we have seen, there is no commonly accepted, strict definition of singularity, no physically reasonable definition of missing point, and no necessary connection of singular structure, at least as characterized by the presence of incomplete paths, to the presence of curvature pathology. What conclusions should be drawn from this state of affairs? There seem to be two primary responses, that of Clarke (1993) and Earman (1995) on the one hand, and that of Geroch, Can-bin and Wald (1982), and Curiel (1998) on the other.
The former holds that the mettle of physics and philosophy demands that we find a precise, rigorous and univocal definition of singularity. On this view, the host of philosophical and physical questions surrounding general relativity's prediction of singular structure would best be addressed with such a definition in hand, so as better to frame and answer these questions with precision in its terms, and thus perhaps find other, even better questions to pose and attempt to answer. The latter view is perhaps best summarized by a remark of Geroch, Can-bin and Wald (1982): “The purpose of a construction [of ‘singular points’], after all, is merely to clarify the discussion of various physical issues involving singular space-times: general relativity as it stands is fully viable with no precise notion of ‘singular points’.” On this view, the specific physics under investigation in any particular situation should dictate which definition of singularity to use in that situation, if, indeed, any at all. In sum, the question becomes the following: Is there a need for a single, blanket definition of singularity or does the urge for one bespeak only an old Platonic, essentialist prejudice? This question has obvious connections to the broader question of natural kinds in science. One sees debates similar to those canvassed above when one tries to find, for example, a strict definition of biological species. Clearly part of the motivation for searching for a single exceptionless definition is the impression that there is some real feature of the world (or at least of our spacetime models) which we can hope to capture precisely. Further, we might hope that our attempts to find a rigorous and exceptionless definition will help us to better understand the feature itself. Nonetheless, it is not entirely clear why we shouldn't be happy with a variety of types of singular structure, and with the permissive attitude that none should be considered the “right” definition of singularities. Even without an accepted, strict definition of singularity for relativistic spacetimes, the question can be posed of what it may mean to ascribe “existence” to singular structure under any of the available open possibilities. It is not farfetched to think that answers to this question may bear on the larger question of the existence of spacetime points in general. It would be difficult to argue that an incomplete path in a maximal relativistic spacetime does not exist in at least some sense of the term. It is not hard to convince oneself, however, that the incompleteness of the path does not exist at any particular point of the spacetime in the same way, say, as this glass of beer at this moment exists at this point of spacetime. If there were a point on the manifold where the incompleteness of the path could be localized, surely that would be the point at which the incomplete path terminated. But if there were such a point, then the path could be extended by having it pass through that point. It is perhaps this fact that lies behind much of the urgency surrounding the attempt to define singular structure as “missing points.” The demand that singular structure be localized at a particular place bespeaks an old Aristotelian substantivalism that invokes the maxim, “To exist is to exist in space and time” (Earman 1995, p. 28). 
Aristotelian substantivalism here refers to the idea contained in Aristotle's contention that everything that exists is a substance and that all substances can be qualified by the Aristotelian categories, two of which are location in time and location in space. One need not consider anything so outré as incomplete, inextendible paths, though, in order to produce examples of entities that seem undeniably to exist in some sense of the term or other, and yet which cannot have any even vaguely determined location in time and space predicated of them. Indeed, several essential features of a relativistic spacetime, singular or not, cannot be localized in the way that an Aristotelian substantivalist would demand. For example, the Euclidean (or non-Euclidean) nature of a space is not something with a precise location. Likewise, various spacetime geometrical structures (such as the metric, the affine structure, etc.) cannot be localized in the way that the Aristotelian would demand. The existential status of such entities vis-à-vis more traditionally considered objects is an open and largely ignored issue. Because of the way the issue of singular structure in relativistic spacetimes ramifies into almost every major open question in relativistic physics today, both physical and philosophical, it provides a peculiarly rich and attractive focus for these sorts of questions.

2.2 The Breakdown of General Relativity?

At the heart of all of our conceptions of a spacetime singularity is the notion of some sort of failing: a path that disappears, points that are torn out, spacetime curvature that becomes pathological. However, perhaps the failing lies not in the spacetime of the actual world (or of any physically possible world), but rather in the theoretical description of the spacetime. That is, perhaps we shouldn't think that general relativity is accurately describing the world when it posits singular structure. Indeed, in most scientific arenas, singular behavior is viewed as an indication that the theory being used is deficient. It is therefore common to claim that general relativity, in predicting that spacetime is singular, is predicting its own demise, and that classical descriptions of space and time break down at black hole singularities and at the Big Bang. Such a view seems to deny that singularities are real features of the actual world, and to assert that they are instead merely artifices of our current (flawed) physical theories. A more fundamental theory — presumably a full theory of quantum gravity — will be free of such singular behavior. For example, Ashtekar and Bojowald (2006) and Ashtekar, Pawlowski and Singh (2006) argue that, in the context of loop quantum gravity, neither the big bang singularity nor black hole singularities appear. On this reading, many of the earlier worries about the status of singularities become moot. Singularities don't exist, nor is the question of how to define them, as such, particularly urgent. Instead, the pressing question is what marks the borders of the domain of applicability of general relativity. We pick up this question below in Section 5 on quantum black holes, for it is in this context that many of the explicit debates play out over the limits of general relativity.

3. Black Holes

The simplest picture of a black hole is that of a body whose gravity is so strong that nothing, not even light, can escape from it. Bodies of this type are already possible in the familiar Newtonian theory of gravity.
The "escape velocity" of a body is the velocity at which an object would have to travel to escape the gravitational pull of the body and continue flying out to infinity. Because the escape velocity is measured from the surface of an object, it becomes higher if a body contracts down and becomes more dense. (Under such contraction, the mass of the body remains the same, but its surface gets closer to its center of mass; thus the gravitational force at the surface increases.) If the object were to become sufficiently dense, the escape velocity could therefore exceed the speed of light, and light itself would be unable to escape. This much of the argument makes no appeal to relativistic physics, and the possibility of such classical black holes was noted in the late 18th Century by Michell (1784) and Laplace (1796). These Newtonian black holes do not precipitate quite the same sense of crisis as do relativistic black holes. While light hurled ballistically from the surface of the collapsed body cannot escape, a rocket with powerful motors firing could still gently pull itself free. Taking relativistic considerations into account, however, we find that black holes are far more exotic entities. Given the usual understanding that relativity theory rules out any physical process going faster than light, we conclude that not only is light unable to escape from such a body: nothing would be able to escape this gravitational force. That includes the powerful rocket that could escape a Newtonian black hole. Further, once the body has collapsed down to the point where its escape velocity is the speed of light, no physical force whatsoever could prevent the body from continuing to collapse down further – for this would be equivalent to accelerating something to speeds beyond that of light. Thus once this critical amount of collapse is reached, the body will get smaller and smaller, more and more dense, without limit. It has formed a relativistic black hole; at its center lies a spacetime singularity. For any given body, this critical stage of unavoidable collapse occurs when the object has collapsed to within its so-called Schwarzschild radius, which is proportional to the mass of the body. Our sun has a Schwarzschild radius of approximately three kilometers; the Earth's Schwarzschild radius is a little less than a centimeter. This means that if you could collapse all the Earth's matter down to a sphere the size of a pea, it would form a black hole. It is worth noting, however, that one does not need an extremely high density of matter to form a black hole if one has enough mass. Thus for example, if one has a couple hundred million solar masses of water at its standard density, it will be contained within its Schwarzschild radius and will form a black hole. Some supermassive black holes at the centers of galaxies are thought to be even more massive than this, at several billion solar masses. The "event horizon" of a black hole is the point of no return. That is, it comprises the last events in the spacetime around the singularity at which a light signal can still escape to the external universe. For a standard (uncharged, non-rotating) black hole, the event horizon lies at the Schwarzschild radius. A flash of light that originates at an event inside the black hole will not be able to escape, but will instead end up in the central singularity of the black hole.
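Before continuing with the event horizon, a brief numerical aside: the Schwarzschild radii quoted above follow from r_s = 2GM/c², and the following minimal sketch (SI units, rounded constants; not part of the original entry) evaluates it for the Sun and the Earth.

```python
# Minimal sketch (SI units): Schwarzschild radius r_s = 2GM/c**2.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg

print(f"Sun:   r_s = {schwarzschild_radius(M_sun) / 1e3:.2f} km")    # ~3 km
print(f"Earth: r_s = {schwarzschild_radius(M_earth) * 1e3:.2f} mm")  # just under 1 cm
```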
A light flash originating at an event outside of the event horizon will escape, but it will be red-shifted strongly to the extent that it is near the horizon. An outgoing beam of light that originates at an event on the event horizon itself, by definition, remains on the event horizon until the temporal end of the universe. General relativity tells us that clocks running at different locations in a gravitational field will generally not agree with one another. In the case of a black hole, this manifests itself in the following way. Imagine someone falls into a black hole, and, while falling, she flashes a light signal to us every time her watch hand ticks. Observing from a safe distance outside the black hole, we would find the times between the arrival of successive light signals to grow larger without limit. That is, it would appear to us that time were slowing down for the falling person as she approached the event horizon. The ticking of her watch (and every other process as well) would seem to go slower and slower as she got closer and closer to the event horizon. We would never actually see the light signals she emits when she crosses the event horizon; instead, she would seem to be eternally "frozen" just above the horizon. (This talk of "seeing" the person is somewhat misleading, because the light coming from the person would rapidly become severely red-shifted, and soon would not be practically detectable.) From the perspective of the infalling person, however, nothing unusual happens at the event horizon. She would experience no slowing of clocks, nor see any evidence that she is passing through the event horizon of a black hole. Her passing the event horizon is simply the last moment in her history at which a light signal she emits would be able to escape from the black hole. The concept of an event horizon is a global concept that depends on how the events on the event horizon relate to the overall structure of the spacetime. Locally there is nothing noteworthy about the events at the event horizon. If the black hole is fairly small, then the tidal gravitational forces there would be quite strong. This just means that the gravitational pull on one's feet, closer to the singularity, would be much stronger than the gravitational pull on one's head. That difference of force would be great enough to pull one apart. For a sufficiently large black hole the difference in gravitation at one's feet and head would be small enough for these tidal forces to be negligible. As in the case of singularities, alternative definitions of black holes have been explored. These definitions typically focus on the one-way nature of the event horizon: things can go in, but nothing can get out. Such accounts have not won widespread support, however, and we do not have space here to elaborate on them further.[4]

3.1 The Geometrical Nature of Black Holes

One of the most remarkable features of relativistic black holes is that they are purely gravitational entities. A pure black hole spacetime contains no matter whatsoever. It is a "vacuum" solution to the Einstein field equations, which just means that it is a solution of Einstein's gravitational field equations in which the matter density is everywhere zero. (Of course, one can also consider a black hole with matter present.) In pre-relativistic physics we think of gravity as a force produced by the mass contained in some matter.
In the context of general relativity, however, we do away with gravitational force, and instead postulate a curved spacetime geometry that produces all the effects we standardly attribute to gravity. Thus a black hole is not a "thing" in spacetime; it is instead a feature of spacetime itself. A careful definition of a relativistic black hole will therefore rely only on the geometrical features of spacetime. We'll need to be a little more precise about what it means to be "a region from which nothing, not even light, can escape." First, there will have to be someplace to escape to if our definition is to make sense. The most common method of making this idea precise and rigorous employs the notion of "escaping to infinity." If a particle or light ray cannot "travel arbitrarily far" from a definite, bounded region in the interior of spacetime but must always remain in the region, then, the idea goes, that region is one of no escape, and is thus a black hole. The boundary of the region is called the event horizon. Once a physical entity crosses the event horizon into the hole, it never crosses it again. Second, we will need a clear notion of the geometry that allows for "escape," or makes such escape impossible. For this, we need the notion of the "causal structure" of spacetime. At any event in the spacetime, the possible trajectories of all light signals form a cone (or, more precisely, the four-dimensional analog of a cone). Since light travels at the fastest speed allowed in the spacetime, these cones map out the possible causal processes in the spacetime. If an occurrence at an event A is able to causally affect another occurrence at event B, there must be a continuous trajectory in spacetime from event A to event B such that the trajectory lies in or on the lightcones of every event along it. (For more discussion, see the Supplementary Document: Lightcones and Causal Structure.) Figure 1 is a spacetime diagram of a sphere of matter collapsing down to form a black hole. The curvature of the spacetime is represented by the tilting of the light cones away from 45 degrees. Notice that the light cones tilt inwards more and more as one approaches the center of the black hole. The jagged line running vertically up the center of the diagram depicts the black hole central singularity. As we emphasized in Section 1, this is not actually part of the spacetime, but might be thought of as an edge of space and time itself. Thus, one should not imagine the possibility of traveling through the singularity; this would be as nonsensical as something's leaving the diagram (i.e., the spacetime) altogether.

Figure 1: A spacetime diagram of black hole formation

What makes this a black hole spacetime is the fact that it contains a region from which it is impossible to exit while traveling at or below the speed of light. This region is marked off by the events at which the outside edge of the forward light cone points straight upward. As one moves inward from these events, the light cone tilts so much that one is always forced to move inward toward the central singularity. This point of no return is, of course, the event horizon; and the spacetime region inside it is the black hole. In this region, one inevitably moves towards the singularity; the impossibility of avoiding the singularity is exactly like the impossibility of preventing ourselves from moving forward in time. Notice that the matter of the collapsing star disappears into the black hole singularity.
All the details of the matter are completely lost; all that is left is the geometrical properties of the black hole, which can be identified with mass, charge, and angular momentum. Indeed, there are so-called "no-hair" theorems which make rigorous the claim that a black hole in equilibrium is entirely characterized by its mass, its angular momentum, and its electric charge. This has the remarkable consequence that no matter what the particulars may be of any body that collapses to form a black hole—it may be as intricate, complicated and Byzantine as one likes, composed of the most exotic materials—the final result after the system has settled down to equilibrium will be identical in every respect to a black hole that formed from the collapse of any other body having the same total mass, angular momentum and electric charge. For this reason Chandrasekhar (1983) called black holes "the most perfect objects in the universe."

4. Naked Singularities and the Cosmic Censorship Hypothesis

While spacetime singularities in general are frequently viewed with suspicion, physicists often offer the reassurance that we expect most of them to be hidden away behind the event horizons of black holes. Such singularities therefore could not affect us unless we were actually to jump into the black hole. A "naked" singularity, on the other hand, is one that is not hidden behind an event horizon. Such singularities appear much more threatening because they are uncontained, accessible to vast areas of spacetime. The heart of the worry is that singular structure would seem to signify some sort of breakdown in the fundamental structure of spacetime to such a profound depth that it could wreak havoc on any region of the universe that it were visible to. Because the structures that break down in singular spacetimes are required for the formulation of our known physical laws in general, and of initial-value problems for individual physical systems in particular, one such fear is that determinism would collapse entirely wherever the singular breakdown were causally visible. As Earman (1995, pp. 65-6) characterizes the worry, nothing would seem to stop the singularity from "disgorging" any manner of unpleasant jetsam, from TVs showing Nixon's Checkers Speech to old lost socks, in a way completely undetermined by the state of spacetime in any region whatsoever, and in such a way as to render strictly indeterminable all regions in causal contact with what it spews out. One form that such a naked singularity could take is that of a white hole, which is a time-reversed black hole. Imagine taking a film of a black hole forming, and various astronauts, rockets, etc. falling into it. Now imagine that film being run backwards. This is the picture of a white hole: one starts with a naked singularity, out of which might appear people, artifacts, and eventually a star bursting forth. Absolutely nothing in the causal past of such a white hole would determine what would pop out of it (just as items that fall into a black hole leave no trace on the future). Because the field equations of general relativity do not pick out a preferred direction of time, if the formation of a black hole is allowed by the laws of spacetime and gravity, then white holes will also be permitted by these laws.
Roger Penrose famously suggested that although naked singularities are compatible with general relativity, in physically realistic situations naked singularities will never form; that is, any process that results in a singularity will safely deposit that singularity behind an event horizon. This suggestion, titled the "Cosmic Censorship Hypothesis," has met with a fair degree of success and popularity; however, it also faces several difficulties. Penrose's original formulation relied on black holes: a suitably generic singularity will always be contained in a black hole (and so causally invisible outside the black hole). As the counter-examples to various ways of articulating the hypothesis in terms of this idea have accumulated over the years, it has gradually been abandoned. More recent approaches either begin with an attempt to provide necessary and sufficient conditions for cosmic censorship itself, yielding an indirect characterization of a naked singularity as any phenomenon violating those conditions, or else they begin with an attempt to provide a characterization of a naked singularity and so conclude with a definite statement of cosmic censorship as the absence of such phenomena. The variety of proposals made using both approaches is too great to canvass here; the interested reader is referred to Joshi (2003) for a review of the current state of the art, and to Earman (1995, ch. 3) for a philosophical discussion of many of the proposals.

5. Quantum Black Holes

The problem of uniting quantum theory and general relativity in a successful theory of quantum gravity has arguably been the greatest challenge facing theoretical physics for the past eighty years. One avenue that has seemed particularly promising here is the attempt to apply quantum theory to black holes. This is in part because, as completely gravitational entities, black holes present an especially pure case to study the quantization of gravity. Further, because the gravitational force grows without bound as one nears a standard black hole singularity, one would expect quantum gravitational effects (which should come into play at extremely high energies) to manifest themselves in black holes. Studies of quantum mechanics in black hole spacetimes have revealed several surprises that threaten to overturn our traditional views of space, time, and matter. A remarkable parallel between the laws of black hole mechanics and the laws of thermodynamics indicates that spacetime and thermodynamics may be linked in a fundamental (and previously unimagined) way. This linkage hints at a fundamental limitation on how much entropy can be contained in a spatial region. A further topic of foundational importance is found in the so-called information loss paradox, which suggests that standard quantum evolution will not hold when black holes are present. While many of these suggestions are somewhat speculative, they nevertheless touch on deep issues in the foundations of physics.
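To give a sense of the "extremely high energies" at which quantum gravitational effects are generically expected to appear, the following minimal sketch (SI units, rounded constants; the choice of printed quantities is illustrative and not from the original entry) evaluates the Planck length, mass and energy built from ħ, G and c.

```python
# Minimal sketch (SI units): the Planck scale, the regime where quantum
# gravitational effects are expected to become important.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.055e-34     # J s

l_planck = math.sqrt(hbar * G / c**3)     # ~1.6e-35 m
m_planck = math.sqrt(hbar * c / G)        # ~2.2e-8 kg
E_planck = m_planck * c**2                # ~2.0e9 J

print(f"Planck length: {l_planck:.3e} m")
print(f"Planck mass:   {m_planck:.3e} kg")
print(f"Planck energy: {E_planck:.3e} J")
```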
However, adding mass to a black hole will increase its size, which led Bekenstein to suggest that the area of a black hole is a measure of its entropy. This conviction grew when, in 1972, Hawking proved that the surface area of a black hole, like the entropy of a closed system, can never decrease. The similarity between black holes and thermodynamic systems was considerably strengthened when Bardeen, Carter, and Hawking (1973) proved three other laws of black hole mechanics that parallel exactly the first, third, and “zeroth” laws of thermodynamics. Although this parallel was extremely suggestive, taking it seriously would require one to assign a non-zero temperature to a black hole, which all then agreed was absurd: All hot bodies emit thermal radiation (like the heat given off from a stove). However, according to general relativity, a black hole ought to be a perfect sink for energy, mass, and radiation, insofar as it absorbs everything (including light), and emits nothing (including light). The only temperature one might be able to assign it would be absolute zero. This obvious fact was overthrown when Hawking (1974, 1975) demonstrated that black holes are not completely “black” after all. His analysis of quantum fields in black hole spacetimes revealed that the black holes will emit particles: black holes generate heat at a temperature that is inversely proportional to their mass and directly proportional to their so-called surface gravity. It glows like a lump of smoldering coal even though light should not be able to escape from it! The temperature of this “Hawking effect” radiation is extremely low for stellar-scale black holes, but for very small black holes the temperatures would be quite high. This means that a very small black hole should rapidly evaporate away, as all of its mass-energy is emitted in high-temperature Hawking radiation. These results were taken to establish that the parallel between the laws of black hole mechanics and the laws of thermodynamics was not a mere fluke: it seems they really are getting at the same deep physics. The Hawking effect establishes that the surface gravity of a black hole can indeed be interpreted as a physical temperature. Further, mass in black hole mechanics is mirrored by energy in thermodynamics, and we know from relativity theory that mass and energy are actually equivalent. Connecting the two sets of laws also requires linking the surface area of a black hole with entropy, as Bekenstein had suggested. This black hole entropy is called its Bekenstein entropy, and is proportional to the area of the event horizon of the black hole. 5.2 The Generalized Second Law of Thermodynamics In the context of thermodynamic systems containing black holes, one can construct apparent violations of the laws of thermodynamics, and of the laws of black hole mechanics, if one considers these laws to be independent of each other. So for example, if a black hole gives off radiation through the Hawking effect, then it will lose mass – in apparent violation of the area increase theorem. Likewise, as Bekenstein argued, we could violate the second law of thermodynamics by dumping matter with high entropy into a black hole. However, the price of dropping matter into the black hole is that its event horizon will increase in size. Likewise, the price of allowing the event horizon to shrink by giving off Hawking radiation is that the entropy of the external matter fields will go up. 
We can consider a combination of the two laws that stipulates that the sum of a black hole's area, and the entropy of the system, can never decrease. This is the generalized second law of (black hole) thermodynamics. From the time that Bekenstein first proposed that the area of a black hole could be a measure of its entropy, the proposal was known to face difficulties that appeared insurmountable. Geroch (1971) proposed a scenario that seems to allow a violation of the generalized second law. If we have a box full of energetic radiation with a high entropy, that box will have a certain weight as it is attracted by the gravitational force of a black hole. One can use this weight to drive an engine to produce energy (e.g., to produce electricity) while slowly lowering the box towards the event horizon of the black hole. This process extracts energy, but not entropy, from the radiation in the box; once the box reaches the event horizon itself, it can have an arbitrarily small amount of energy remaining. If one then opens the box to let the radiation fall into the black hole, the size of the event horizon will not increase any appreciable amount (because the mass-energy of the black hole has barely been increased), but the thermodynamic entropy outside the black hole has decreased. Thus we seem to have violated the generalized second law. The question of whether we should be troubled by this possible violation of the generalized law touches on several issues in the foundations of physics. The status of the ordinary second law of thermodynamics is itself a thorny philosophical puzzle, quite apart from the issue of black holes. Many physicists and philosophers deny that the ordinary second law holds universally, so one might question whether we should insist on its validity in the presence of black holes. On the other hand, the second law clearly captures some significant feature of our world, and the analogy between black hole mechanics and thermodynamics seems too rich to be thrown out without a fight. Indeed, the generalized second law is our only law that joins together the fields of general relativity, quantum mechanics, and thermodynamics. As such, it seems the most promising window we have into the truly fundamental nature of the physical world.

5.2.1 Entropy Bounds and the Holographic Principle

In response to this apparent violation of the generalized second law, Bekenstein pointed out that one could never get all of the radiation in the box arbitrarily close to the event horizon, because the box itself would have to have some volume. This observation by itself is not enough to save the second law, however, unless there is some limit to how much entropy can be contained in a given volume of space. Current physics poses no such limit, so Bekenstein (1981) postulated that the limit would be enforced by the underlying theory of quantum gravity, which black hole thermodynamics is providing a glimpse of. However, Unruh and Wald (1982) argue that there is a less ad hoc way to save the generalized second law. The heat given off by any hot body, including a black hole, will produce a kind of "buoyancy" force on any object (like our box) that blocks thermal radiation. This means that when we are lowering our box of high-entropy radiation towards the black hole, the optimal place to release that radiation will not be just above the event horizon, but rather at the "floating point" for the container.
Unruh and Wald demonstrate that this fact is enough to guarantee that the decrease in outside entropy will be compensated by an increase in the area of the event horizon. It therefore seems that there is no reliable way to violate the generalized second law of black hole thermodynamics. There is, however, a further reason that one might think that black hole thermodynamics implies a fundamental bound on the amount of entropy that can be contained in a region. Suppose that there were more entropy in some region of space than the Bekenstein entropy of a black hole of the same size. Then one could collapse that entropic matter into a black hole, which obviously could not be larger than the size of the original region (or the mass-energy would have already formed a black hole). But this would violate the generalized second law, for the Bekenstein entropy of the resulting black hole would be less than that of the matter that formed it. Thus the second law appears to imply a fundamental limit on how much entropy a region can contain. If this is right, it seems to be a deep insight into the nature of quantum gravity. Arguments along these lines led 't Hooft (1985) to postulate the "Holographic Principle" (though the title is due to Susskind). This principle claims that the number of fundamental degrees of freedom in any spherical region is given by the Bekenstein entropy of a black hole of the same size as that region. The Holographic Principle is notable not only because it postulates a well-defined, finite number of degrees of freedom for any region, but also because this number grows as the area surrounding the region, and not as the volume of the region. This flies in the face of standard physical pictures, whether of particles or fields. According to those pictures, the entropy is the number of possible ways something can be, and that number of ways increases as the volume of any spatial region. The Holographic Principle does get some support from a result in string theory known as the "AdS/CFT correspondence." If the Principle is correct, then one spatial dimension can, in a sense, be viewed as superfluous: the fundamental physical story of a spatial region is actually a story that can be told merely about the boundary of the region.

5.2.2 What Does Black Hole Entropy Measure?

In classical thermodynamics, that a system possesses entropy is often attributed to the fact that we are in practice never able to give a "complete" description of it. When describing a cloud of gas, we do not specify values for the position and velocity of every molecule in it; we rather describe it in terms of quantities, such as pressure and temperature, constructed as statistical measures over underlying, more finely grained quantities, such as the momentum and energy of the individual molecules. The entropy of the gas then measures the incompleteness, as it were, of the gross description. In the attempt to take seriously the idea that a black hole has a true physical entropy, it is therefore natural to attempt to construct such a statistical origin for it. The tools of classical general relativity cannot provide such a construction, for it allows no way to describe a black hole as a system whose physical attributes arise as gross statistical measures over underlying, more finely grained quantities. Not even the tools of quantum field theory on curved spacetime can provide it, for they still treat the black hole as an entity defined entirely in terms of the classical geometry of the spacetime.
Any such statistical accounting, therefore, must come from a theory that attributes to the classical geometry a description in terms of an underlying, discrete collection of micro-states. Explaining what these states are that are counted by the Bekenstein entropy has been a challenge that has been eagerly pursued by quantum gravity researchers. In 1996, superstring theorists were able to give an account of how M-theory (which is an extension of superstring theory) generates a number of the string-states for a certain class of black holes, and this number matched that given by the Bekenstein entropy (Strominger and Vafa, 1996). A counting of black hole states using loop quantum gravity has also recovered the Bekenstein entropy (Ashtekar et al., 1998). It is philosophically noteworthy that this is treated as a significant success for these theories (i.e., it is presented as a reason for thinking that these theories are on the right track) even though Hawking radiation has never been experimentally observed (in part, because for macroscopic black holes the effect is minute). 5.3 Information Loss Paradox Hawking's discovery that black holes give off radiation presented an apparent problem for the possibility of describing black holes quantum mechanically. According to standard quantum mechanics, the entropy of a closed system never changes; this is captured formally by the “unitary” nature of quantum evolution. Such evolution guarantees that the initial conditions, together with the quantum Schrödinger equation, will fix the future state of the system. Likewise, a reverse application of the Schrödinger equation will take us from the later state back to the original initial state. The states at each time are rich enough, detailed enough, to fix (via the dynamical equations) the states at all other times. Thus there is a sense in which the completeness of the state is maintained by unitary time evolution. It is typical to characterize this feature with the claim that quantum evolution “preserves information.” If one begins with a system in a precisely known quantum state, then unitary evolution guarantees that the details about that system will evolve in such a way that one can infer the precise quantum state of the system at some later time (as long as one knows the law of evolution and can perform the relevant calculations), and vice versa. This quantum preservation of details implies that if we burn a chair, for example, it would in principle be possible to perform a complete set of measurements on all the outgoing radiation, the smoke, and the ashes, and reconstruct exactly what the chair looked like. However, if we were instead to throw the chair into a black hole, then it would be physically impossible for the details about the chair ever to escape to the outside universe. This might not be a problem if the black hole continued to exist for all time, but Hawking tells us that the black hole is giving off energy, and thus it will shrink down and presumably will eventually disappear altogether. At that point, the details about the chair will be irrevocably lost; thus such evolution cannot be described unitarily. This problem has been labeled the “information loss paradox” of quantum black holes. The attitude physicists adopted towards this paradox was apparently strongly influenced by their vision of which theory, general relativity or quantum theory, would have to yield to achieve a consistent theory of quantum gravity. 
Spacetime physicists tended to view non-unitary evolution as a fairly natural consequence of singular spacetimes: one wouldn't expect all the details to be available at late times if they were lost in a singularity. Hawking, for example, argued that the paradox shows that the full theory of quantum gravity will be a non-unitary theory, and he began working to develop such a theory. (He has since abandoned this position.) However, particle physicists (such as superstring theorists) tended to view black holes as being just another quantum state. If two particles were to collide at extremely high (i.e., Planck-scale) energies, they would form a very small black hole. This tiny black hole would have a very high Hawking temperature, and thus it would very quickly give off many high-energy particles and disappear. Such a process would look very much like a standard high-energy scattering experiment: two particles collide and their mass-energy is then converted into showers of outgoing particles. The fact that all known scattering processes are unitary then seems to give us some reason to expect that black hole formation and evaporation should also be unitary. These considerations led many physicists to propose scenarios that might allow for the unitary evolution of quantum black holes, while not violating other basic physical principles, such as the requirement that no physical influences be allowed to travel faster than light (the requirement of “microcausality”), at least not when we are far from the domain of quantum gravity (the “Planck scale”). Once energies do enter the domain of quantum gravity, e.g. near the central singularity of a black hole, then we might expect the classical description of spacetime to break down; thus, physicists were generally prepared to allow for the possibility of violations of microcausality in this region. A very helpful overview of this debate can be found in Belot, Earman, and Ruetsche (1999). Most of the scenarios proposed to escape Hawking's argument faced serious difficulties and have been abandoned by their supporters. The proposal that currently enjoys the most wide-spread (though certainly not universal) support is known as “black hole complementarity.” This proposal has been the subject of philosophical controversy because it includes apparently incompatible claims, and then tries to escape the contradiction by making a controversial appeal to quantum complementarity or (so charge the critics) verificationism. 5.3.1 Black Hole Complementarity The challenge of saving information from a black hole lies in the fact that it is impossible to copy the quantum details (especially the quantum correlations) that are preserved by unitary evolution. This implies that if the details pass behind the event horizon, for example, if an astronaut falls into a black hole, then those details are lost forever. Advocates of black hole complementarity (Susskind et al. 1993), however, point out that an outside observer will never see the infalling astronaut pass through the event horizon. Instead, as we saw in Section 2, she will seem to hover at the horizon for all time. But all the while, the black hole will also be giving off heat, and shrinking down, and getting hotter, and shrinking more. 
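The runaway character of this evaporation can be given a rough quantitative gloss. The sketch below (a minimal Python calculation, not part of the argument above; it simply evaluates the standard Hawking temperature and Bekenstein-Hawking entropy formulas with textbook values of the constants) shows that the Hawking temperature scales inversely with mass, so a macroscopic black hole is absurdly cold while a shrinking one gets ever hotter as it radiates.

```python
import math

# Standard Hawking temperature and Bekenstein-Hawking entropy for a Schwarzschild
# black hole of mass M (illustrative orders of magnitude only).
hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
kB   = 1.380649e-23      # J/K
M_sun = 1.989e30         # kg

def hawking_temperature(M):
    """T_H = hbar*c^3 / (8*pi*G*M*kB), in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

def bekenstein_hawking_entropy(M):
    """S = kB*c^3*A / (4*G*hbar), with A the horizon area."""
    r_s = 2 * G * M / c**2            # Schwarzschild radius
    A = 4 * math.pi * r_s**2          # horizon area
    return kB * c**3 * A / (4 * G * hbar)

for M in (M_sun, 1.0e12, 1.0):        # a solar mass, a small asteroid, one kilogram
    T = hawking_temperature(M)
    S = bekenstein_hawking_entropy(M)
    print(f"M = {M:9.3e} kg   T_H = {T:9.3e} K   S/kB = {S/kB:9.3e}")
# T_H is proportional to 1/M: as the hole radiates and loses mass, it gets hotter still.
```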
The black hole complementarian therefore suggests that an outside observer should conclude that the infalling astronaut gets burned up before she crosses the event horizon, and all the details about her state will be returned in the outgoing radiation, just as would be the case if she and her belongings were incinerated in a more conventional manner; thus the information (and standard quantum evolution) is saved. 6. Conclusion: Philosophical Issues The physical investigation of spacetime singularities and black holes has touched on numerous philosophical issues. To begin, we were confronted with the question of the definition and significance of singularities. Should they be defined in terms of incomplete paths, missing points, or curvature pathology? Should we even think that there is a single correct answer to this question? Need we include such things in our ontology, or do they instead merely indicate the break-down of a particular physical theory? Are they “edges” of spacetime, or merely inadequate descriptions that will be dispensed with by a truly fundamental theory of quantum gravity? This has obvious connections to the issue of how we are to interpret the ontology of merely effective physical descriptions. The debate over the information loss paradox also highlights the conceptual importance of the relationship between different effective theories. At root, the debate is over where and how our effective physical theories will break down: when can they be trusted, and where must they be replaced by a more adequate theory? Black holes appear to be crucial for our understanding of the relationship between matter and spacetime. As discussed in Section 3, When matter forms a black hole, it is transformed into a purely gravitational entity. When a black hole evaporates, spacetime curvature is transformed into ordinary matter. Thus black holes offer an important arena for investigating the ontology of spacetime and ordinary objects. Black holes were also seen to provide an important testing ground to investigate the conceptual problems underlying quantum theory and general relativity. The question of whether black hole evolution is unitary raises the issue of how the unitary evolution of standard quantum mechanics serves to guarantee that no experiment can reveal a violation of energy conservation or of microcausality. Likewise, the debate over the information loss paradox can be seen as a debate over whether spacetime or an abstract dynamical state space (Hilbert space) should be viewed as being more fundamental. Might spacetime itself be an emergent entity belonging only to an effective physical theory? Singularities and black holes are arguably our best windows into the details of quantum gravity, which would seem to be the best candidate for a truly fundamental physical description of the world (if such a fundamental description exists). As such, they offer glimpses into deepest nature of matter, dynamical laws, and space and time; and these glimpses seem to call for a conceptual revision at least as great as that required by quantum mechanics or relativity theory alone. • Ashtekar A, J. Baez, A. Corichi, and K. Krasnov, 1998, “Quantum Geometry and Black Hole Entropy,” Physical Review Letters, 80: 904. • Ashtekar, A. and M. Bojowald, 2006, “Quantum Geometry and the Schwarzschild Singularity,” Classical and Quantum Gravity, 23: 391-411. • Ashtekar, A., T. Pawlowski and P. Singh, 2006, “Quantum Nature of the Big Bang,” Physical Review Letters, 96: 141301. • Bardeen, J. M., B. 
Carter, and S. W. Hawking, 1973, “The Four Laws of Black Hole Mechanics”, Communications in Mathematical Physics, 31: 161-170. • Bekenstein, J. D., 1973, “Black Holes and Entropy.” Physical Review D 7: 2333-2346. • Bekenstein, J. D., 1981, “Universal Upper Bound on the Entropy-to-Energy Ratio for Bounded Systems.” Physical Review D 23: 287-298. • Belot, G., Earman, J., and Ruetsche, L., 1999, “The Hawking Information Loss Paradox: The Anatomy of a Controversy”, British Journal for the Philosophy of Science, 50: 189-229. • Bergmann, P., 1977, “Geometry and Observables,” in Earman, Glymour and Stachel (1977), 275-280. • Bergmann, P. and A. Komar, 1962, “Observables and Commutation Relations,” in A. Lichnerowicz and A. Tonnelat, eds., Les Théories Relativistes de la Gravitation, CNRS: Paris, 309-325. • Bertotti, B., 1962, “The Theory of Measurement in General Relativity,” in C. Møller, ed., Evidence for Gravitational Theories, “Proceedings of the International School of Physics ‘Enrico Fermi,’” Course XX, Academic Press: New York, 174-201. • Bokulich, P., 2001, “Black Hole Remnants and Classical vs. Quantum Gravity”, Philosophy of Science, 68: S407-S423. • Bokulich, P., 2005, “Does Black Hole Complementarity Answer Hawking's Information Loss Paradox?”, Philosophy of Science, 72: 1336-1349. • Chandrasekhar, S., 1983, The Mathematical Theory of Black Holes, Oxford: Oxford University Press • Clarke, C., 1993, The Analysis of Space-Time Singularities, Cambridge: Cambridge University Press • Coleman, R. and H. Korté, 1992, “The Relation between the Measurement and Cauchy Problems of GTR”, in H. Sato and T. Nakamura, eds., Proceedings of the 6th Marcel Grossmann Meeting on General Relativity World Scientific Press, Singapore, 97–119. Proceedings of the meeting held at Kyoto International Conference Hall, Kyoto, Japan, 23-29 June 1991. • Curiel, E., 1998, “The Analysis of Singular Spacetimes”, Philosophy of Science, 66: S119-S145 • Earman, J., 1995, Bangs, Crunches, Whimpers, and Shrieks: Singularities and Acausalities in Relativistic Spacetimes, New York: Oxford University Press • Earman, J., C. Glymour and J. Stachel, eds., 1977, Foundations of Space-Time Theories, Minnesota Studies in Philosophy of Science, vol.VIII, University of Minnesota: Minneapolis • Ellis, G. and B. Schmidt, 1977, “Singular Space-Times,” General Relativity and Gravitation, 8: 915-953 • Geroch, R., 1968, “What Is a Singularity in General Relativity?” Annals of Physics, 48: 526-40 • Geroch, R., 1968, “Local Characterization of Singularities in General Relativity”, Journal of Mathematical Physics, 9: 450-465 • Geroch, R., 1970, “Singularities”, in Relativity, eds. M. Carmeli, S. Fickler and L. Witten, New York: Plenum Press, pp. 259-291 • Geroch, R., 1971, remarks made at a colloquium in Princeton, as reported by, among others, Israel (1987, 263). • Geroch, R., 1977, “Prediction in General Relativity”, in Earman, J. and C. Glymour and J. Stachel, eds., Foundations of Spacetime Theories (Minnesota Studies in the Philosophy of Science, vol. 18, Minneapolis: University of Minnesota Press, 1977), pp. 81-93 • Geroch, R., 1981, General Relativity from A to B, Chicago: University of Chicago Press • Geroch, R., 1985, Mathematical Physics, Chicago: University of Chicago Press • Geroch, R. and L. Can-bin and R. Wald, 1982, “Singular Boundaries of Space-times” Journal of Mathematical Physics, 23: 432-435 • Geroch, R. and E. Kronheimer and R. 
Penrose, 1972, “Ideal Points in Space-time” Philosophical Transactions of the Royal Society (London), A327: 545-567 • Hawking, S., 1967, “The Occurrence of Singularities in Cosmology. III”, Philosophical Transactions of the Royal Society (London), A300: 187-210 • Hawking, S. W., 1974, “Black Hole Explosions?”, Nature, 248: 30-31. • Hawking, S. W., 1975, “Particle Creation by Black Holes”, Communications in Mathematical Physics 43: 199-220. • Hawking, S. W., 1976, “The Breakdown of Predictability in Gravitational Collapse”, Physical Review D, 14: 2460-2473. • Hawking, S. W., 1982, “The Unpredictability of Quantum Gravity”, Communications in Mathematical Physics, 87: 395-415. • Hawking, S. and G. Ellis, 1973, The Large Scale Structure of Space-Time, Cambridge: Cambridge University Press • Israel, W., 1987, “Dark Stars: The Evolution of an Idea,” in S. Hawking and W. Israel, eds., 300 Years of Gravitation, Cambridge: Cambridge University Press, 199-276. • Joshi, P., 1993, Global Aspects in Gravitation and Cosmology, Oxford: Clarendon Press. • Joshi, P., 2003, “Cosmic Censorship: A Current Perspective,” Modern Physics Letters A, 17: 1067-1079. • Kiem, Y., H. Verlinde, and E. Verlinde, 1995, “Black Hole Horizons and Complementarity.” Physical Review D, 52: 7053-7065. • Laplace, P., 1796, Exposition du System du Monde, Paris: Cercle-Social. • Lowe, D., J. Polchinski, L. Susskind, L. Thorlacius, and J. Uglum, 1995, “Black Hole Complementarity versus Locality”, Physical Review D, 52: 6997-7010. • Lowe, D. and L. Thorlacius, 1999, “AdS/CFT and the Information Paradox”, Physical Review D, 60: 104012-1 to 104012-7. • Michell, J., 1784, “On the Means of discovering the Distance, Magnitude, etc. of the Fixed Stars, in consequence of the Diminution of the velocity of their Light, in case such a Diminution should be found to take place in any of them, and such Data should be procurred from Observations, as would be farther necessary for that Purpose”, Philosophical Transactions, 74: 35-57. • Misner, C. and Thorne, K. and Wheeler, J., 1973, Gravitation, Freeman Press: San Francisco • Penrose, R., 1969, “Gravitational Collapse: The Role of General Relativity”, Revista del Nuovo Cimento, 1:272-276 • Rovelli, C., 1991, “What Is Observable in Classical and Quantum Gravity?,” Classical and Quantum Gravity, 8:297-316. • Rovelli, C., 2001, “A Note on the Foundation of Relativistic Mechanics. I: Relativistic Observables and Relativistic States,” available as arXiv:gr-qc/0111037v2. • Rovelli, C., 2002a, “GPS Observables in General Relativity,” Physical Review D, 65:044017. • Rovelli, C., 2002b, “Partial Observables,” Physical Review D, 65:124013. • Rovelli, C., 2004, Quantum Gravity, Cambridge University Press: Cambridge. • Stephans, C. R., G. ‘t Hooft, and B. F. Whiting, 1994, “Black Hole Evaporation Without Information Loss”, Classical and Quantum Gravity, 11: 621-647. • Strominger, A. and C. Vafa, 1996, “Microscopic Origin of the Bekenstein-Hawking Entropy”, Physics Letters B, 379: 99-104. • Susskind, L., 1995, “The World as a Hologram.” Journal for Mathematical Physics, 36: 6377-6396. • Susskind, L., 1997, “Black Holes and the Information Paradox.” Scientific American, 272, 4, April: 52-57. • Susskind, L. and L. Thorlacius, 1994, “Gedanken Experiments Involving Black Holes”, Physical Review D, 49: 966-974. • Susskind, L., L. Thorlacius, and J. Uglum, 1993 “The Stretched Horizon and Black Hole Complementarity”, Physical Review D, 48: 3743-3761. • Susskind, L., and J. 
Uglum, 1996, “String Physics and Black Holes”, Nuclear Physics B (Proceedings Supplement), 45: 115-134. • 't Hooft, G., 1985, “On the Quantum Structure of a Black Hole”, Nuclear Physics B, 256: 727-745. • 't Hooft, G., 1996, “The Scattering Matrix Approach for the Quantum Black Hole: an Overview”, International Journal of Modern Physics A, 11: 4623-4688. • Thorlacius, L., 1995, “Black Hole Evolution”, Nuclear Physics B (Proceedings Supplement), 41: 245-275. • Thorne, K., 1995, Black Holes and Time Warps: Einstein's Outrageous Legacy, New York: W. W. Norton and Co. • Thorne, K., R. Price, and D. Macdonald, 1986, Black Holes: The Membrane Paradigm, New Haven: Yale University Press. • Unruh, W., 1976, “Notes on Black Hole Evaporation”, Physical Review D, 14: 870-892. • Unruh, W. and R. M. Wald, 1982, “Acceleration Radiation and the Generalized Second Law of Thermodynamics”, Physical Review D, 25: 942-958. • Unruh, W. and R. M. Wald, 1995, “Evolution Laws Taking Pure States to Mixed States in Quantum Field Theory”, Physical Review D, 52: 2176-2182. • van Dongen, J. and S. de Haro, 2004, “On Black Hole Complementarity”, Studies in History and Philosophy of Modern Physics, 35: 509-525. • Wald, R. M., 1984, General Relativity, Chicago: University of Chicago Press. • Wald, R., 1992, Space, Time, and Gravity: The Theory of the Big Bang and Black Holes, second edition, Chicago: University of Chicago Press. • Wald, R. M., 1994, Quantum Field Theory in Curved Spacetimes and Black Hole Thermodynamics, Chicago: University of Chicago Press. • Wald, R. M., 2001, “The Thermodynamics of Black Holes”, Living Reviews in Relativity, 4(6): 1-44.
The SEP editors would like to thank John D. Norton, the subject editor for this entry, for the special effort he made in refereeing and guiding this entry towards publication. Copyright © 2009 by Erik Curiel and Peter Bokulich
Wavefunctions as gravitational waves

This is the paper I always wanted to write. It is there now, and I think it is good – and that’s an understatement. 🙂 It is probably best to download it as a pdf-file from the viXra.org site because this was a rather fast ‘copy and paste’ job from the Word version of the paper, so there may be issues with boldface notation (vector notation), italics and, most importantly, with formulas – which I, sadly, have to ‘snip’ into this WordPress blog, as they don’t have an easy copy function for mathematical formulas. It’s great stuff. If you have been following my blog – and many of you have – you will want to digest this. 🙂

Abstract: This paper explores the implications of associating the components of the wavefunction with a physical dimension: force per unit mass – which is, of course, the dimension of acceleration (m/s²) and gravitational fields. The classical electromagnetic field equations for energy densities, the Poynting vector and spin angular momentum are then re-derived by substituting the electromagnetic N/C unit of field strength (force per unit charge) by the new N/kg = m/s² dimension. The results are elegant and insightful. For example, the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities, which establishes a physical normalization condition. Also, Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy, and the wavefunction itself can be interpreted as a propagating gravitational wave. Finally, as an added bonus, concepts such as the Compton scattering radius for a particle, spin angular momentum, and the boson-fermion dichotomy, can also be explained more intuitively. While the approach offers a physical interpretation of the wavefunction, the author argues that the core of the Copenhagen interpretation revolves around the complementarity principle, which remains unchallenged because the interpretation of amplitude waves as traveling fields does not explain the particle nature of matter.

This is not another introduction to quantum mechanics. We assume the reader is already familiar with the key principles and, importantly, with the basic math. We offer an interpretation of wave mechanics. As such, we do not challenge the complementarity principle: the physical interpretation of the wavefunction that is offered here explains the wave nature of matter only. It explains diffraction and interference of amplitudes but it does not explain why a particle will hit the detector not as a wave but as a particle. Hence, the Copenhagen interpretation of the wavefunction remains relevant: we just push its boundaries. The basic ideas in this paper stem from a simple observation: quantum-mechanical wavefunctions and electromagnetic waves are, geometrically, remarkably similar. The components of both waves are orthogonal to the direction of propagation and to each other. Only the relative phase differs: the electric and magnetic field vectors (E and B) have the same phase. In contrast, the phases of the real and imaginary part of the (elementary) wavefunction (ψ = a·e^(iθ) = a∙cosθ – i·a∙sinθ) differ by 90 degrees (π/2).[1] Pursuing the analogy, we explore the following question: if the oscillating electric and magnetic field vectors of an electromagnetic wave carry the energy that one associates with the wave, can we analyze the real and imaginary part of the wavefunction in a similar way?
We show the answer is positive and remarkably straightforward.  If the physical dimension of the electromagnetic field is expressed in newton per coulomb (force per unit charge), then the physical dimension of the components of the wavefunction may be associated with force per unit mass (newton per kg).[2] Of course, force over some distance is energy. The question then becomes: what is the energy concept here? Kinetic? Potential? Both? The similarity between the energy of a (one-dimensional) linear oscillator (E = m·a2·ω2/2) and Einstein’s relativistic energy equation E = m∙c2 inspires us to interpret the energy as a two-dimensional oscillation of mass. To assist the reader, we construct a two-piston engine metaphor.[3] We then adapt the formula for the electromagnetic energy density to calculate the energy densities for the wave function. The results are elegant and intuitive: the energy densities are proportional to the square of the absolute value of the wavefunction and, hence, to the probabilities. Schrödinger’s wave equation may then, effectively, be interpreted as a diffusion equation for energy itself. As an added bonus, concepts such as the Compton scattering radius for a particle and spin angular, as well as the boson-fermion dichotomy can be explained in a fully intuitive way.[4] Of course, such interpretation is also an interpretation of the wavefunction itself, and the immediate reaction of the reader is predictable: the electric and magnetic field vectors are, somehow, to be looked at as real vectors. In contrast, the real and imaginary components of the wavefunction are not. However, this objection needs to be phrased more carefully. First, it may be noted that, in a classical analysis, the magnetic force is a pseudovector itself.[5] Second, a suitable choice of coordinates may make quantum-mechanical rotation matrices irrelevant.[6] Therefore, the author is of the opinion that this little paper may provide some fresh perspective on the question, thereby further exploring Einstein’s basic sentiment in regard to quantum mechanics, which may be summarized as follows: there must be some physical explanation for the calculated probabilities.[7] We will, therefore, start with Einstein’s relativistic energy equation (E = mc2) and wonder what it could possibly tell us.  I. Energy as a two-dimensional oscillation of mass The structural similarity between the relativistic energy formula, the formula for the total energy of an oscillator, and the kinetic energy of a moving body, is striking: 1. E = mc2 2. E = mω2/2 3. E = mv2/2 In these formulas, ω, v and c all describe some velocity.[8] Of course, there is the 1/2 factor in the E = mω2/2 formula[9], but that is exactly the point we are going to explore here: can we think of an oscillation in two dimensions, so it stores an amount of energy that is equal to E = 2·m·ω2/2 = m·ω2? That is easy enough. Think, for example, of a V-2 engine with the pistons at a 90-degree angle, as illustrated below. The 90° angle makes it possible to perfectly balance the counterweight and the pistons, thereby ensuring smooth travel at all times. With permanently closed valves, the air inside the cylinder compresses and decompresses as the pistons move up and down and provides, therefore, a restoring force. As such, it will store potential energy, just like a spring, and the motion of the pistons will also reflect that of a mass on a spring. Hence, we can describe it by a sinusoidal function, with the zero point at the center of each cylinder. 
We can, therefore, think of the moving pistons as harmonic oscillators, just like mechanical springs.

Figure 1: Oscillations in two dimensions

If we assume there is no friction, we have a perpetuum mobile here. The compressed air and the rotating counterweight (which, combined with the crankshaft, acts as a flywheel[10]) store the potential energy. The moving masses of the pistons store the kinetic energy of the system.[11] At this point, it is probably good to quickly review the relevant math. If the magnitude of the oscillation is equal to a, then the motion of the piston (or the mass on a spring) will be described by x = a·cos(ω·t + Δ).[12] Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and –π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t – π/2) = a·sin(ω·t). The kinetic and potential energy of one oscillator (think of one piston or one spring only) can then be calculated as:

1. K.E. = T = m·v²/2 = (1/2)·m·ω²·a²·sin²(ω·t + Δ)
2. P.E. = U = k·x²/2 = (1/2)·k·a²·cos²(ω·t + Δ)

The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. From the dynamics involved, it is obvious that k must be equal to m·ω². Hence, the total energy is equal to: E = T + U = (1/2)·m·ω²·a²·[sin²(ω·t + Δ) + cos²(ω·t + Δ)] = m·a²·ω²/2. To facilitate the calculations, we will briefly assume k = m·ω² and a are equal to 1. The motion of our first oscillator is given by the cos(ω·t) = cosθ function (θ = ω·t), and its kinetic energy will be equal to sin²θ. Hence, the (instantaneous) change in kinetic energy at any point in time will be equal to: d(sin²θ)/dθ = 2∙sinθ∙d(sinθ)/dθ = 2∙sinθ∙cosθ. Let us look at the second oscillator now. Just think of the second piston going up and down in the V-2 engine. Its motion is given by the sinθ function, which is equal to cos(θ − π/2). Hence, its kinetic energy is equal to sin²(θ − π/2), and how it changes – as a function of θ – will be equal to: 2∙sin(θ − π/2)∙cos(θ − π/2) = −2∙cosθ∙sinθ = −2∙sinθ∙cosθ. We have our perpetuum mobile! While transferring kinetic energy from one piston to the other, the crankshaft will rotate with a constant angular velocity: linear motion becomes circular motion, and vice versa, and the total energy that is stored in the system is T + U = m·a²·ω². We have a great metaphor here. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle. We know the wavefunction consists of a sine and a cosine: the cosine is the real component, and the sine is the imaginary component. Could they be equally real? Could each represent half of the total energy of our particle? Should we think of the c in our E = mc² formula as an angular velocity? These are sensible questions. Let us explore them.

II. The wavefunction as a two-dimensional oscillation

The elementary wavefunction is written as: ψ = a·e^(−i[E·t − p∙x]/ħ) = a·e^(i[p∙x − E·t]/ħ) = a·cos((p∙x − E∙t)/ħ) + i·a·sin((p∙x − E∙t)/ħ). When considering a particle at rest (p = 0) this reduces to: ψ = a·e^(−i∙E·t/ħ) = a·cos(−E∙t/ħ) + i·a·sin(−E∙t/ħ) = a·cos(E∙t/ħ) − i·a·sin(E∙t/ħ). Let us remind ourselves of the geometry involved, which is illustrated below.
Note that the argument of the wavefunction rotates clockwise with time, while the mathematical convention for measuring the phase angle (ϕ) is counter-clockwise.

Figure 2: Euler’s formula

If we assume the momentum p is all in the x-direction, then the p and x vectors will have the same direction, and px/ħ reduces to p∙x/ħ. Most illustrations – such as the one below – will either freeze x or, else, t. Alternatively, one can google web animations varying both. The point is: we also have a two-dimensional oscillation here. These two dimensions are perpendicular to the direction of propagation of the wavefunction. For example, if the wavefunction propagates in the x-direction, then the oscillations are along the y- and z-axis, which we may refer to as the real and imaginary axis. Note how the phase difference between the cosine and the sine – the real and imaginary part of our wavefunction – appears to give some spin to the whole. I will come back to this.

Figure 3: Geometric representation of the wavefunction

Hence, if we would say these oscillations carry half of the total energy of the particle, then we may refer to the real and imaginary energy of the particle respectively, and the interplay between the real and the imaginary part of the wavefunction may then describe how energy propagates through space over time. Let us consider, once again, a particle at rest. Hence, p = 0 and the (elementary) wavefunction reduces to ψ = a·e^(−i∙E·t/ħ). Hence, the angular velocity of both oscillations, at some point x, is given by ω = −E/ħ. Now, the energy of our particle includes all of the energy – kinetic, potential and rest energy – and is, therefore, equal to E = mc². Can we, somehow, relate this to the m·a²·ω² energy formula for our V-2 perpetuum mobile? Our wavefunction has an amplitude too. Now, if the oscillations of the real and imaginary wavefunction store the energy of our particle, then their amplitude will surely matter. In fact, the energy of an oscillation is, in general, proportional to the square of the amplitude: E ∝ a². We may, therefore, think that the a² factor in the E = m·a²·ω² energy will surely be relevant as well. However, here is a complication: an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. To calculate the contribution of each wave to the total, both ai as well as Ei will matter. What is Ei? Ei varies around some average E, which we can associate with some average mass m: m = E/c². The Uncertainty Principle kicks in here. The analysis becomes more complicated, but a formula such as the one below might make sense:

E = ∑i mi·ai²·ωi² = ∑i (Ei/c²)·ai²·(Ei/ħ)²

We can re-write this as:

∑i ai²·Ei³ = ħ²·c²·E

What is the meaning of this equation? We may look at it as some sort of physical normalization condition when building up the Fourier sum. Of course, we should relate this to the mathematical normalization condition for the wavefunction. Our intuition tells us that the probabilities must be related to the energy densities, but how exactly? We will come back to this question in a moment. Let us first think some more about the enigma: what is mass? Before we do so, let us quickly calculate the value of c²ħ²: it is about 1×10⁻⁵¹ N²∙m⁴.
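The value just quoted is easy to check. The snippet below (a minimal Python calculation added for convenience; it only uses the standard values of ħ and c) confirms the order of magnitude:

```python
# Quick numerical check of the value of c^2 * hbar^2 quoted above.
hbar = 1.054571817e-34      # J*s = N*m*s
c    = 2.99792458e8         # m/s
value = (c * hbar)**2       # (m/s * N*m*s)^2 has dimension N^2*m^4
print(f"c^2 * hbar^2 = {value:.3e} N^2*m^4")   # about 1.0e-51 N^2*m^4
```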
Let us also do a dimensional analysis: the physical dimensions of the E = m·a2·ω2 equation make sense if we express m in kg, a in m, and ω in rad/s. We then get: [E] = kg∙m2/s2 = (N∙s2/m)∙m2/s2 = N∙m = J. The dimensions of the left- and right-hand side of the physical normalization condition is N3∙m5.  III. What is mass? We came up, playfully, with a meaningful interpretation for energy: it is a two-dimensional oscillation of mass. But what is mass? A new aether theory is, of course, not an option, but then what is it that is oscillating? To understand the physics behind equations, it is always good to do an analysis of the physical dimensions in the equation. Let us start with Einstein’s energy equation once again. If we want to look at mass, we should re-write it as m = E/c2: [m] = [E/c2] = J/(m/s)2 = N·m∙s2/m2 = N·s2/m = kg This is not very helpful. It only reminds us of Newton’s definition of a mass: mass is that what gets accelerated by a force. At this point, we may want to think of the physical significance of the absolute nature of the speed of light. Einstein’s E = mc2 equation implies we can write the ratio between the energy and the mass of any particle is always the same, so we can write, for example:F3This reminds us of the ω2= C1/L or ω2 = k/m of harmonic oscillators once again.[13] The key difference is that the ω2= C1/L and ω2 = k/m formulas introduce two or more degrees of freedom.[14] In contrast, c2= E/m for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light c emerges here as the defining property of spacetime – the resonant frequency, so to speak. We have no further degrees of freedom here. The Planck-Einstein relation (for photons) and the de Broglie equation (for matter-particles) have an interesting feature: both imply that the energy of the oscillation is proportional to the frequency, with Planck’s constant as the constant of proportionality. Now, for one-dimensional oscillations – think of a guitar string, for example – we know the energy will be proportional to the square of the frequency. It is a remarkable observation: the two-dimensional matter-wave, or the electromagnetic wave, gives us two waves for the price of one, so to speak, each carrying half of the total energy of the oscillation but, as a result, we get a proportionality between E and f instead of between E and f2. However, such reflections do not answer the fundamental question we started out with: what is mass? At this point, it is hard to go beyond the circular definition that is implied by Einstein’s formula: energy is a two-dimensional oscillation of mass, and mass packs energy, and c emerges us as the property of spacetime that defines how exactly. When everything is said and done, this does not go beyond stating that mass is some scalar field. Now, a scalar field is, quite simply, some real number that we associate with a position in spacetime. The Higgs field is a scalar field but, of course, the theory behind it goes much beyond stating that we should think of mass as some scalar field. The fundamental question is: why and how does energy, or matter, condense into elementary particles? That is what the Higgs mechanism is about but, as this paper is exploratory only, we cannot even start explaining the basics of it. 
What we can do, however, is look at the wave equation again (Schrödinger’s equation), as we can now analyze it as an energy diffusion equation.

IV. Schrödinger’s equation as an energy diffusion equation

The interpretation of Schrödinger’s equation as a diffusion equation is straightforward. Feynman (Lectures, III-16-1) briefly summarizes it as follows: “We can think of Schrödinger’s equation as describing the diffusion of the probability amplitude from one point to the next. […] But the imaginary coefficient in front of the derivative makes the behavior completely different from the ordinary diffusion such as you would have for a gas spreading out along a thin tube. Ordinary diffusion gives rise to real exponential solutions, whereas the solutions of Schrödinger’s equation are complex waves.”[17] Let us review the basic math. For a particle moving in free space – with no external force fields acting on it – there is no potential (U = 0) and, therefore, the Uψ term disappears. Therefore, Schrödinger’s equation reduces to: ∂ψ(x, t)/∂t = i·(1/2)·(ħ/meff)·∇²ψ(x, t). The ubiquitous diffusion equation in physics is: ∂φ(x, t)/∂t = D·∇²φ(x, t). The structural similarity is obvious. The key difference between both equations is that the wave equation gives us two equations for the price of one. Indeed, because ψ is a complex-valued function, with a real and an imaginary part, we get the following equations[18]:

1. Re(∂ψ/∂t) = −(1/2)·(ħ/meff)·Im(∇²ψ)
2. Im(∂ψ/∂t) = (1/2)·(ħ/meff)·Re(∇²ψ)

These equations make us think of the equations for an electromagnetic wave in free space (no stationary charges or currents):

1. ∂B/∂t = –∇×E
2. ∂E/∂t = c²·∇×B

The above equations effectively describe a propagation mechanism in spacetime, as illustrated below.

Figure 4: Propagation mechanisms

The Laplacian operator (∇²), when operating on a scalar quantity, gives us a flux density, i.e. something expressed per square meter (1/m²). In this case, it is operating on ψ(x, t), so what is the dimension of our wavefunction ψ(x, t)? To answer that question, we should analyze the diffusion constant in Schrödinger’s equation, i.e. the (1/2)·(ħ/meff) factor: 1. As a mathematical constant of proportionality, it will quantify the relationship between both derivatives (i.e. the time derivative and the Laplacian); 2. As a physical constant, it will ensure the physical dimensions on both sides of the equation are compatible. Now, the ħ/meff factor is expressed in (N·m·s)/(N·s²/m) = m²/s. Hence, it does ensure the dimensions on both sides of the equation are, effectively, the same: ∂ψ/∂t is a time derivative and, therefore, its dimension is 1/s while, as mentioned above, the dimension of ∇²ψ is 1/m². However, this does not solve our basic question: what is the dimension of the real and imaginary part of our wavefunction? At this point, mainstream physicists will say: it does not have a physical dimension, and there is no geometric interpretation of Schrödinger’s equation. One may argue, effectively, that its argument, (px – E∙t)/ħ, is just a number and, therefore, that the real and imaginary part of ψ is also just some number. To this, we may object that ħ may be looked at as a mathematical scaling constant only. If we do that, then the argument of ψ will, effectively, be expressed in action units, i.e. in N·m·s. It then does make sense to also associate a physical dimension with the real and imaginary part of ψ. What could it be? We may have a closer look at Maxwell’s equations for inspiration here.
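Before moving on to Maxwell’s equations, it may help to see the coupled pair of equations above at work numerically. The sketch below is an illustration only – it is not part of the original derivation, and the grid, the time step and the Gaussian initial state are arbitrary choices (natural units ħ = meff = 1 are assumed) – but it shows that updating Re(ψ) from the Laplacian of Im(ψ), and Im(ψ) from the Laplacian of Re(ψ), reproduces the familiar behavior of a free wave packet: it propagates, it spreads, and the total probability stays (essentially) constant.

```python
import numpy as np

# Free-particle Schrodinger equation written as two coupled real equations
# (natural units hbar = m_eff = 1, so D = 1/2):
#   d(Re psi)/dt = -D * Laplacian(Im psi)
#   d(Im psi)/dt = +D * Laplacian(Re psi)
N, L = 400, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.1 * dx**2                 # small time step
D = 0.5                          # hbar / (2 * m_eff)

psi = np.exp(-x**2 / 4.0) * np.exp(1j * 2.0 * x)   # Gaussian packet with momentum k0 = 2
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalize
re, im = psi.real.copy(), psi.imag.copy()

def lap(f):                      # second difference, periodic boundaries
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def centre_and_width(re, im):    # centre and rms width of the probability density
    rho = re**2 + im**2
    rho = rho / (np.sum(rho) * dx)
    mean = np.sum(rho * x) * dx
    return mean, np.sqrt(np.sum(rho * (x - mean)**2) * dx)

print("t = 0:  centre, width =", [round(v, 3) for v in centre_and_width(re, im)])
for _ in range(2000):            # evolve to t = 2
    re -= dt * D * lap(im)       # update Re from the Laplacian of Im ...
    im += dt * D * lap(re)       # ... then Im from the updated Re (a stable, staggered step)
print("t = 2:  centre, width =", [round(v, 3) for v in centre_and_width(re, im)])
print("total probability:", round(np.sum(re**2 + im**2) * dx, 4))
```

The packet’s centre moves at the group velocity (k0 in these units) while its width grows, which is exactly the diffusion-like spreading Feynman alludes to in the quote above.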
The electric field vector is expressed in newton (the unit of force) per unit of charge (coulomb). Now, there is something interesting here. The physical dimension of the magnetic field is N/C divided by m/s.[19] We may write B as the following vector cross-product: B = (1/c)∙ex×E, with ex the unit vector pointing in the x-direction (i.e. the direction of propagation of the wave). Hence, we may associate the (1/c)∙ex× operator, which amounts to a rotation by 90 degrees, with the s/m dimension. Now, multiplication by i also amounts to a rotation by 90° degrees. Hence, we may boldly write: B = (1/c)∙ex×E = (1/c)∙iE. This allows us to also geometrically interpret Schrödinger’s equation in the way we interpreted it above (see Figure 3).[20] Still, we have not answered the question as to what the physical dimension of the real and imaginary part of our wavefunction should be. At this point, we may be inspired by the structural similarity between Newton’s and Coulomb’s force laws:F4Hence, if the electric field vector E is expressed in force per unit charge (N/C), then we may want to think of associating the real part of our wavefunction with a force per unit mass (N/kg). We can, of course, do a substitution here, because the mass unit (1 kg) is equivalent to 1 N·s2/m. Hence, our N/kg dimension becomes: N/kg = N/(N·s2/m)= m/s2 What is this: m/s2? Is that the dimension of the a·cosθ term in the a·eiθ a·cosθ − i·a·sinθ wavefunction? My answer is: why not? Think of it: m/s2 is the physical dimension of acceleration: the increase or decrease in velocity (m/s) per second. It ensures the wavefunction for any particle – matter-particles or particles with zero rest mass (photons) – and the associated wave equation (which has to be the same for all, as the spacetime we live in is one) are mutually consistent. In this regard, we should think of how we would model a gravitational wave. The physical dimension would surely be the same: force per mass unit. It all makes sense: wavefunctions may, perhaps, be interpreted as traveling distortions of spacetime, i.e. as tiny gravitational waves. V. Energy densities and flows Pursuing the geometric equivalence between the equations for an electromagnetic wave and Schrödinger’s equation, we can now, perhaps, see if there is an equivalent for the energy density. For an electromagnetic wave, we know that the energy density is given by the following formula:F5E and B are the electric and magnetic field vector respectively. The Poynting vector will give us the directional energy flux, i.e. the energy flow per unit area per unit time. We write:F6Needless to say, the ∙ operator is the divergence and, therefore, gives us the magnitude of a (vector) field’s source or sink at a given point. To be precise, the divergence gives us the volume density of the outward flux of a vector field from an infinitesimal volume around a given point. In this case, it gives us the volume density of the flux of S. We can analyze the dimensions of the equation for the energy density as follows: 1. E is measured in newton per coulomb, so [EE] = [E2] = N2/C2. 2. B is measured in (N/C)/(m/s), so we get [BB] = [B2] = (N2/C2)·(s2/m2). However, the dimension of our c2 factor is (m2/s2) and so we’re also left with N2/C2. 3. The ϵ0 is the electric constant, aka as the vacuum permittivity. 
As a physical constant, it should ensure the dimensions on both sides of the equation work out, and they do: [ε0] = C²/(N·m²) and, therefore, if we multiply that with N²/C², we find that it is expressed in J/m³.[21] Replacing the newton per coulomb unit (N/C) by the newton per kg unit (N/kg) in the formulas above should give us the equivalent of the energy density for the wavefunction. We just need to substitute ϵ0 for an equivalent constant. We may want to give it a try. If the energy densities can be calculated – which are also mass densities, obviously – then the probabilities should be proportional to them. Let us first see what we get for a photon, assuming the electromagnetic wave represents its wavefunction. Substituting B for (1/c)∙i∙E or for −(1/c)∙i∙E gives us the following result:

u = (1/2)·ε0·(E·E + c²·B·B) = (1/2)·ε0·[E² + c²·(1/c²)·(i·E)·(i·E)] = (1/2)·ε0·(E² − E²) = 0

Zero!? An unexpected result! Or not? We have no stationary charges and no currents: only an electromagnetic wave in free space. Hence, the local energy conservation principle needs to be respected at all points in space and in time. The geometry makes sense of the result: for an electromagnetic wave, the magnitudes of E and B reach their maximum, minimum and zero point simultaneously, as shown below.[22] This is because their phase is the same.

Figure 5: Electromagnetic wave: E and B

Should we expect a similar result for the energy densities that we would associate with the real and imaginary part of the matter-wave? For the matter-wave, we have a phase difference between a·cosθ and a·sinθ, which gives a different picture of the propagation of the wave (see Figure 3).[23] In fact, the geometry of the situation suggests some inherent spin, which is interesting. I will come back to this. Let us first guess those densities. Making abstraction of any scaling constants, we may write:

u ∝ (a·cosθ)² + (a·sinθ)² = a²·cos²θ + a²·sin²θ = a²

We get what we hoped to get: the absolute square of our amplitude is, effectively, an energy density! |ψ|² = |a·e^(i∙E·t/ħ)|² = a² = u. This is very deep. A photon has no rest mass, so it borrows and returns energy from empty space as it travels through it. In contrast, a matter-wave carries energy and, therefore, has some (rest) mass. It is therefore associated with an energy density, and this energy density gives us the probabilities. Of course, we need to fine-tune the analysis to account for the fact that we have a wave packet rather than a single wave, but that should be feasible. As mentioned, the phase difference between the real and imaginary part of our wavefunction (a cosine and a sine function) appears to give some spin to our particle. We do not have this particularity for a photon. Of course, photons are bosons, i.e. spin-one particles, while elementary matter-particles are fermions with spin-1/2. Hence, our geometric interpretation of the wavefunction suggests that, after all, there may be some more intuitive explanation of the fundamental dichotomy between bosons and fermions, which puzzled even Feynman: “Why is it that particles with half-integral spin are Fermi particles, whereas particles with integral spin are Bose particles? We apologize for the fact that we cannot give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments of quantum field theory and relativity. He has shown that the two must necessarily go together, but we have not been able to find a way of reproducing his arguments on an elementary level.
It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. The explanation is deep down in relativistic quantum mechanics. This probably means that we do not have a complete understanding of the fundamental principle involved.” (Feynman, Lectures, III-4-1) The physical interpretation of the wavefunction, as presented here, may provide some better understanding of ‘the fundamental principle involved’: the physical dimension of the oscillation is just very different. That is all: it is force per unit charge for photons, and force per unit mass for matter-particles. We will examine the question of spin somewhat more carefully in section VII. Let us first examine the matter-wave some more.  VI. Group and phase velocity of the matter-wave The geometric representation of the matter-wave (see Figure 3) suggests a traveling wave and, yes, of course: the matter-wave effectively travels through space and time. But what is traveling, exactly? It is the pulse – or the signal – only: the phase velocity of the wave is just a mathematical concept and, even in our physical interpretation of the wavefunction, the same is true for the group velocity of our wave packet. The oscillation is two-dimensional, but perpendicular to the direction of travel of the wave. Hence, nothing actually moves with our particle. Here, we should also reiterate that we did not answer the question as to what is oscillating up and down and/or sideways: we only associated a physical dimension with the components of the wavefunction – newton per kg (force per unit mass), to be precise. We were inspired to do so because of the physical dimension of the electric and magnetic field vectors (newton per coulomb, i.e. force per unit charge) we associate with electromagnetic waves which, for all practical purposes, we currently treat as the wavefunction for a photon. This made it possible to calculate the associated energy densities and a Poynting vector for energy dissipation. In addition, we showed that Schrödinger’s equation itself then becomes a diffusion equation for energy. However, let us now focus some more on the asymmetry which is introduced by the phase difference between the real and the imaginary part of the wavefunction. Look at the mathematical shape of the elementary wavefunction once again: ψ = a·ei[E·t − px]/ħa·ei[E·t − px]/ħ = a·cos(px/ħ − E∙t/ħ) + i·a·sin(px/ħ − E∙t/ħ) The minus sign in the argument of our sine and cosine function defines the direction of travel: an F(x−v∙t) wavefunction will always describe some wave that is traveling in the positive x-direction (with the wave velocity), while an F(x+v∙t) wavefunction will travel in the negative x-direction. For a geometric interpretation of the wavefunction in three dimensions, we need to agree on how to define i or, what amounts to the same, a convention on how to define clockwise and counterclockwise directions: if we look at a clock from the back, then its hand will be moving counterclockwise. So we need to establish the equivalent of the right-hand rule. However, let us not worry about that now. Let us focus on the interpretation. To ease the analysis, we’ll assume we’re looking at a particle at rest. 
Hence, p = 0, and the wavefunction reduces to: ψ = a·e^(−i∙E0·t/ħ) = a·cos(−E0∙t/ħ) + i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ). E0 is, of course, the rest energy of our particle and, now that we are here, we should probably wonder whose time we are talking about: is it our time, or is it the proper time of our particle? Well… In this situation, we are both at rest so it does not matter: t is, effectively, the proper time so perhaps we should write it as t0. It does not matter. You can see what we expect to see: E0/ħ pops up as the natural frequency of our matter-particle: (E0/ħ)∙t = ω∙t. Remembering the ω = 2π·f = 2π/T and T = 1/f formulas, we can associate a period and a frequency with this wave. Noting that ħ = h/2π, we find the following: T = 2π·(ħ/E0) = h/E0 ⇔ f = E0/h = m0·c²/h. This is interesting, because we can look at the period as a natural unit of time for our particle. What about the wavelength? That is tricky because we need to distinguish between group and phase velocity here. The group velocity (vg) should be zero here, because we assume our particle does not move. In contrast, the phase velocity is given by vp = λ·f = (2π/k)·(ω/2π) = ω/k. In fact, we’ve got something funny here: the wavenumber k = p/ħ is zero, because we assume the particle is at rest, so p = 0. So we have a division by zero here, which is rather strange. What do we get assuming the particle is not at rest? We write: vp = ω/k = (E/ħ)/(p/ħ) = E/p = E/(m·vg) = (m·c²)/(m·vg) = c²/vg. This is interesting: it establishes a reciprocal relation between the phase and the group velocity, with c² as a simple scaling constant. Indeed, the graph below shows the shape of the function does not change with the value of c, and we may also re-write the relation above as: vp/c = βp = 1/βg = 1/(vg/c).

Figure 6: Reciprocal relation between phase and group velocity

We can also write the mentioned relationship as vp·vg = c², which reminds us of the relationship between the electric and magnetic constant (1/ε0)·(1/μ0) = c². This is interesting in light of the fact we can re-write this as (c·ε0)·(c·μ0) = 1, which shows electricity and magnetism are just two sides of the same coin, so to speak.[24] Interesting, but how do we interpret the math? What about the implications of the zero value for wavenumber k = p/ħ? We would probably like to think it implies the elementary wavefunction should always be associated with some momentum, because the concept of zero momentum clearly leads to weird math: something times zero cannot be equal to c²! Such interpretation is also consistent with the Uncertainty Principle: if Δx·Δp ≥ ħ, then neither Δx nor Δp can be zero. In other words, the Uncertainty Principle tells us that the idea of a pointlike particle actually being at some specific point in time and in space does not make sense: it has to move. It tells us that our concepts of dimensionless points in time and space are mathematical notions only. Actual particles – including photons – are always a bit spread out, so to speak, and – importantly – they have to move. For a photon, this is self-evident. It has no rest mass, no rest energy, and, therefore, it is going to move at the speed of light itself. We write: p = m·c = m·c²/c = E/c. Using the relationship above, we get: vp = ω/k = (E/ħ)/(p/ħ) = E/p = c ⇒ vg = c²/vp = c²/c = c. This is good: we started out with some reflections on the matter-wave, but here we get an interpretation of the electromagnetic wave as a wavefunction for the photon.
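The reciprocal relation can also be checked numerically. The snippet below is an illustration added here, not part of the paper’s argument; it just uses the relativistic formulas E = γ·m0·c² and p = γ·m0·vg for a massive particle and verifies that vp·vg = c² at any velocity:

```python
import math

# Check of v_p * v_g = c^2 for a massive particle, using E = gamma*m0*c^2 and p = gamma*m0*v_g.
c = 2.99792458e8          # m/s
m0 = 9.1093837015e-31     # kg (an electron, but any rest mass gives the same ratio)

for beta in (0.01, 0.1, 0.5, 0.9, 0.99):
    v_g = beta * c                         # group velocity = classical velocity of the particle
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    E, p = gamma * m0 * c**2, gamma * m0 * v_g
    v_p = E / p                            # phase velocity of the de Broglie wave
    print(f"beta = {beta:4.2f}   v_p*v_g/c^2 = {v_p * v_g / c**2:.6f}")   # always 1
```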
But let us get back to our matter-wave. In regard to our interpretation of a particle having to move, we should remind ourselves, once again, of the fact that an actual particle is always localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·ei[E·t − px]/ħ or, for a particle at rest, the ψ = a·ei∙E·t/ħ function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Indeed, in section II, we showed that each of these wavefunctions will contribute some energy to the total energy of the wave packet and that, to calculate the contribution of each wave to the total, both ai as well as Ei matter. This may or may not resolve the apparent paradox. Let us look at the group velocity. To calculate a meaningful group velocity, we must assume the vg = ∂ωi/∂ki = ∂(Ei/ħ)/∂(pi/ħ) = ∂(Ei)/∂(pi) exists. So we must have some dispersion relation. How do we calculate it? We need to calculate ωi as a function of ki here, or Ei as a function of pi. How do we do that? Well… There are a few ways to go about it but one interesting way of doing it is to re-write Schrödinger’s equation as we did, i.e. by distinguishing the real and imaginary parts of the ∂ψ/∂t =i·[ħ/(2m)]·∇2ψ wave equation and, hence, re-write it as the following pair of two equations: 1. Re(∂ψ/∂t) = −[ħ/(2meff)]·Im(∇2ψ) ⇔ ω·cos(kx − ωt) = k2·[ħ/(2meff)]·cos(kx − ωt) 2. Im(∂ψ/∂t) = [ħ/(2meff)]·Re(∇2ψ) ⇔ ω·sin(kx − ωt) = k2·[ħ/(2meff)]·sin(kx − ωt) Both equations imply the following dispersion relation: ω = ħ·k2/(2meff) Of course, we need to think about the subscripts now: we have ωi, ki, but… What about meff or, dropping the subscript, m? Do we write it as mi? If so, what is it? Well… It is the equivalent mass of Ei obviously, and so we get it from the mass-energy equivalence relation: mi = Ei/c2. It is a fine point, but one most people forget about: they usually just write m. However, if there is uncertainty in the energy, then Einstein’s mass-energy relation tells us we must have some uncertainty in the (equivalent) mass too. Here, I should refer back to Section II: Ei varies around some average energy E and, therefore, the Uncertainty Principle kicks in.  VII. Explaining spin The elementary wavefunction vector – i.e. the vector sum of the real and imaginary component – rotates around the x-axis, which gives us the direction of propagation of the wave (see Figure 3). Its magnitude remains constant. In contrast, the magnitude of the electromagnetic vector – defined as the vector sum of the electric and magnetic field vectors – oscillates between zero and some maximum (see Figure 5). We already mentioned that the rotation of the wavefunction vector appears to give some spin to the particle. Of course, a circularly polarized wave would also appear to have spin (think of the E and B vectors rotating around the direction of propagation – as opposed to oscillating up and down or sideways only). In fact, a circularly polarized light does carry angular momentum, as the equivalent mass of its energy may be thought of as rotating as well. But so here we are looking at a matter-wave. The basic idea is the following: if we look at ψ = a·ei∙E·t/ħ as some real vector – as a two-dimensional oscillation of mass, to be precise – then we may associate its rotation around the direction of propagation with some torque. The illustration below reminds of the math here. 
Figure 7: Torque and angular momentum vectors A torque on some mass about a fixed axis gives it angular momentum, which we can write as the vector cross-product L = r×p or, perhaps easier for our purposes here, as the product of an angular velocity (ω) and rotational inertia (I), aka the moment of inertia or the angular mass. We write: L = I·ω Note we can write L and ω in boldface here because they are (axial) vectors. If we consider their magnitudes only, we write L = I·ω (no boldface). We can now do some calculations. Let us start with the angular velocity. In our previous posts, we showed that the period of the matter-wave is equal to T = 2π·(ħ/E0). Hence, the angular velocity must be equal to: ω = 2π/[2π·(ħ/E0)] = E0/ħ We also know the distance r, so that is the magnitude of r in the L = r×p vector cross-product: it is just a, i.e. the magnitude of ψ = a·ei∙E·t/ħ. Now, the momentum (p) is the product of a linear velocity (v) – in this case, the tangential velocity – and some mass (m): p = m·v. If we switch to scalar instead of vector quantities, then the (tangential) velocity is given by v = r·ω. So now we only need to think about what we should use for m or, if we want to work with the angular velocity (ω), the angular mass (I). Here we need to make some assumption about the mass (or energy) distribution. Now, it may or may not make sense to assume the energy in the oscillation – and, therefore, the mass – is distributed uniformly. In that case, we may use the formula for the angular mass of a solid cylinder: I = m·r2/2. If we keep the analysis non-relativistic, then m = m0. Of course, the energy-mass equivalence tells us that m0 = E0/c2. Hence, this is what we get: L = I·ω = (m0·r2/2)·(E0/ħ) = (1/2)·a2·(E0/c2)·(E0/ħ) = a2·E02/(2·ħ·c2) Does it make sense? Maybe. Maybe not. Let us do a dimensional analysis: that won’t check our logic, but it makes sure we made no mistakes when mapping mathematical and physical spaces. We have m2·J2 = m2·N2·m2 in the numerator and N·m·s·m2/s2 in the denominator. Hence, the dimensions work out: we get N·m·s as the dimension for L, which is, effectively, the physical dimension of angular momentum. It is also the action dimension, of course, and that cannot be a coincidence. Also note that the E = mc2 equation allows us to re-write it as: L = a2·m02·c2/(2·ħ) Of course, in quantum mechanics, we associate spin with the magnetic moment of a charged particle, not with its mass as such. Is there a way to link the formula above to the one we have for the quantum-mechanical angular momentum, which is also measured in N·m·s units, and which can only take on one of two possible values: J = +ħ/2 and −ħ/2? It looks like a long shot, right? How do we go from (1/2)·a2·m02·c2/ħ to ± (1/2)∙ħ? Let us do a numerical example. The energy of an electron is typically 0.510 MeV ≈ 8.1871×10−14 N∙m, and a… What value should we take for a? We have an obvious trio of candidates here: the Bohr radius, the classical electron radius (aka the Thomson scattering length), and the Compton scattering radius. Let us start with the Bohr radius, so that is about 0.53×10−10 m. We get L = a2·E02/(2·ħ·c2) = 9.9×10−31 N∙m∙s. Now that is about 1.88×104 times ħ/2. That is a huge factor. The Bohr radius cannot be right: we are not looking at an electron in an orbital here. To show it does not make sense, we may want to double-check the analysis by doing the calculation in another way. We said each oscillation will always pack 6.626070040(81)×10−34 joule in energy.
So our electron should pack about 1.24×1020 oscillations. The angular momentum (L) we get when using the Bohr radius for a and the value of 6.626×10−34 joule for E0 is equal to 6.49×10−59 N∙m∙s. So that is the angular momentum per oscillation. When we multiply this with the number of oscillations (1.24×1020), we get about 8.01×10−51 N∙m∙s, so that is a totally different number. The classical electron radius is about 2.818×10−15 m. We get an L that is equal to about 2.81×10−39 N∙m∙s, so now it is a tiny fraction of ħ/2! Hence, this leads us nowhere. Let us go for our last chance to get a meaningful result! Let us use the Compton scattering length, so that is about 2.42631×10−12 m. This gives us an L of 2.08×10−33 N∙m∙s, which is only 20 times ħ. This is not so bad, but is it good enough? Let us calculate it the other way around: what value should we take for a so as to ensure L = a2·E02/(2·ħ·c2) = ħ/2? Let us write it out: ħ/2 = a2·E02/(2·ħ·c2) ⇔ a2 = ħ2·c2/E02 ⇔ a = ħ·c/E0 = ħ/(m0·c) In fact, this is the formula for the so-called reduced Compton wavelength. This is perfect. We found what we wanted to find. Substituting this value for a (you can calculate it: it is about 3.8616×10−13 m), we get what we should find: L = a2·E02/(2·ħ·c2) = [ħ2/(m02·c2)]·[m02·c4/(2·ħ·c2)] = ħ/2 This is a rather spectacular result, and one that would – a priori – support the interpretation of the wavefunction that is being suggested in this paper.  VIII. The boson-fermion dichotomy Let us do some more thinking on the boson-fermion dichotomy. Again, we should remind ourselves that an actual particle is localized in space and that it can, therefore, not be represented by the elementary wavefunction ψ = a·ei[E·t − p∙x]/ħ or, for a particle at rest, the ψ = a·ei∙E·t/ħ function. We must build a wave packet for that: a sum of wavefunctions, each with their own amplitude ai, and their own ωi = −Ei/ħ. Each of these wavefunctions will contribute some energy to the total energy of the wave packet. Now, we can have another wild but logical theory about this. Think of the apparent right-handedness of the elementary wavefunction: surely, Nature can’t be bothered about our convention of measuring phase angles clockwise or counterclockwise. Also, the angular momentum can be positive or negative: J = +ħ/2 or −ħ/2. Hence, we would probably like to think that an actual particle – think of an electron, or whatever other particle you’d think of – may consist of right-handed as well as left-handed elementary waves. To be precise, we may think they either consist of (elementary) right-handed waves or, else, of (elementary) left-handed waves. An elementary right-handed wave would be written as: ψ(θi) = ai·(cosθi + i·sinθi) In contrast, an elementary left-handed wave would be written as: ψ(θi) = ai·(cosθi − i·sinθi) How does that work out with the E0·t argument of our wavefunction? Position is position, and direction is direction, but time? Time has only one direction, but Nature surely does not care how we count time: counting like 1, 2, 3, etcetera or like −1, −2, −3, etcetera is just the same. If we count like 1, 2, 3, etcetera, then we write our wavefunction like: ψ = a·cos(E0∙t/ħ) − i·a·sin(E0∙t/ħ) If we count time like −1, −2, −3, etcetera then we write it as: ψ = a·cos(−E0∙t/ħ) − i·a·sin(−E0∙t/ħ) = a·cos(E0∙t/ħ) + i·a·sin(E0∙t/ħ) Hence, it is just like the left- or right-handed circular polarization of an electromagnetic wave: we can have both for the matter-wave too! This, then, should explain why we can have either positive or negative quantum-mechanical spin (+ħ/2 or −ħ/2).
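For the reader who wants to double-check the ħ/2 result of Section VII numerically, here is a minimal Python sketch. It only assumes standard SI values for the constants; the helper name L(a) is an illustrative choice.

```python
hbar = 1.054571817e-34   # reduced Planck constant (J·s)
c    = 299792458.0       # speed of light (m/s)
m0   = 9.1093837015e-31  # electron rest mass (kg)

E0 = m0 * c**2           # rest energy of the electron (J)

def L(a):
    # angular momentum L = a^2·E0^2/(2·hbar·c^2), as derived in Section VII
    return a**2 * E0**2 / (2 * hbar * c**2)

a_compton = hbar / (m0 * c)   # reduced Compton wavelength (m)
print(f"a      = {a_compton:.4e} m")
print(f"L(a)   = {L(a_compton):.4e} J·s")
print(f"hbar/2 = {hbar / 2:.4e} J·s")
```

The first print statement gives the 3.8616×10−13 m value mentioned above, and the last two numbers come out equal, as they should.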
It is the usual thing: we have two mathematical possibilities here, and so we must have two physical situations that correspond to them. It is only natural. If we have left- and right-handed photons – or, generalizing, left- and right-handed bosons – then we should also have left- and right-handed fermions (electrons, protons, etcetera). Back to the dichotomy. The textbook analysis of the dichotomy between bosons and fermions may be epitomized by Richard Feynman’s Lecture on it (Feynman, III-4), which is confusing and – I would dare to say – even inconsistent: how are photons or electrons supposed to know that they need to interfere with a positive or a negative sign? They are not supposed to know anything: knowledge is part of our interpretation of whatever it is that is going on there. Hence, it is probably best to keep it simple, and think of the dichotomy in terms of the different physical dimensions of the oscillation: newton per kg versus newton per coulomb. And then, of course, we should also note that matter-particles have a rest mass and, therefore, actually carry charge. Photons do not. But both are two-dimensional oscillations, and the point is: the so-called vacuum – and the rest mass of our particle (which is zero for the photon and non-zero for everything else) – give us the natural frequency for both oscillations, which is beautifully summed up in that remarkable equation for the group and phase velocity of the wavefunction, which applies to photons as well as matter-particles: (vphase/c)·(vgroup/c) = 1 ⇔ vp·vg = c2 The final question then is: why are photons spin-zero particles? Well… We should first remind ourselves of the fact that they do have spin when circularly polarized.[25] Here we may think of the rotation of the equivalent mass of their energy. However, if they are linearly polarized, then there is no spin. Even for circularly polarized waves, the spin angular momentum of photons is a weird concept. If photons have no (rest) mass, then they cannot carry any charge. They should, therefore, not have any magnetic moment. Indeed, what I wrote above shows an explanation of quantum-mechanical spin requires both mass as well as charge.[26]  IX. Concluding remarks There are, of course, other ways to look at the matter – literally. For example, we can imagine two-dimensional oscillations as circular rather than linear oscillations. Think of a tiny ball, whose center of mass stays where it is, as depicted below. Any rotation – around any axis – will be some combination of a rotation around the two other axes. Hence, we may want to think of a two-dimensional oscillation as an oscillation of a polar and azimuthal angle. Figure 8: Two-dimensional circular movement The point of this paper is not to make any definite statements. That would be foolish. Its objective is just to challenge the simplistic mainstream viewpoint on the reality of the wavefunction. Stating that it is a mathematical construct only without physical significance amounts to saying it has no meaning at all. That is, clearly, a non-sustainable proposition. The interpretation that is offered here looks at amplitude waves as traveling fields. Their physical dimension may be expressed in force per mass unit, as opposed to electromagnetic waves, whose amplitudes are expressed in force per (electric) charge unit.
Also, the amplitudes of matter-waves incorporate a phase factor, but this may actually explain the rather enigmatic dichotomy between fermions and bosons and is, therefore, an added bonus. The interpretation that is offered here has some advantages over other explanations, as it explains the how of diffraction and interference. However, while it offers a great explanation of the wave nature of matter, it does not explain its particle nature: while we think of the energy as being spread out, we will still observe electrons and photons as pointlike particles once they hit the detector. Why is it that a detector can sort of ‘hook’ the whole blob of energy, so to speak? The interpretation of the wavefunction that is offered here does not explain this. Hence, the complementarity principle of the Copenhagen interpretation of the wavefunction surely remains relevant. Appendix 1: The de Broglie relations and energy The 1/2 factor in Schrödinger’s equation is related to the concept of the effective mass (meff). It is easy to make the wrong calculations. For example, when playing with the famous de Broglie relations – aka the matter-wave equations – one may be tempted to derive the following energy concept: 1. E = h·f and p = h/λ. Therefore, f = E/h and λ = h/p. 2. v = f·λ = (E/h)∙(h/p) = E/p 3. p = m·v. Therefore, E = v·p = m·v2 E = m·v2? This resembles the E = mc2 equation and, therefore, one may be enthused by the discovery, especially because the m·v2 also pops up when working with the Least Action Principle in classical mechanics, which states that the path that is followed by a particle will minimize the following integral: S = ∫(KE − PE)·dt Now, we can choose any reference point for the potential energy but, to reflect the energy conservation law, we can select a reference point that ensures the sum of the kinetic and the potential energy is zero throughout the time interval. If the force field is uniform, then the integrand will, effectively, be equal to KE − PE = m·v2.[27] However, that is classical mechanics and, therefore, not so relevant in the context of the de Broglie equations, and the apparent paradox should be solved by distinguishing between the group and the phase velocity of the matter wave. Appendix 2: The concept of the effective mass The effective mass – as used in Schrödinger’s equation – is a rather enigmatic concept. To make sure we are making the right analysis here, I should start by noting you will usually see Schrödinger’s equation written as: i·ħ·∂ψ/∂t = −[ħ2/(2meff)]·∇2ψ + U·ψ This formulation includes a term with the potential energy (U). In free space (no potential), this term disappears, and the equation can be re-written as: ∂ψ/∂t = i·[ħ/(2meff)]·∇2ψ We just moved the i·ħ coefficient to the other side, noting that 1/i = –i. Now, in one-dimensional space, and assuming ψ is just the elementary wavefunction (so we substitute a·ei∙[E·t − p∙x]/ħ for ψ), this implies the following: a·i·(E/ħ)·ei∙[E·t − p∙x]/ħ = −i·[ħ/(2meff)]·a·(p2/ħ2)·ei∙[E·t − p∙x]/ħ ⇔ E = p2/(2meff) ⇔ meff = m∙(v/c)2/2 = m∙β2/2 It is an ugly formula: it resembles the kinetic energy formula (K.E. = m∙v2/2) but it is, in fact, something completely different. The β2/2 factor ensures the effective mass is always a fraction of the mass itself. To get rid of the ugly 1/2 factor, we may re-define meff as two times the old meff (hence, meffNEW = 2∙meffOLD), as a result of which the formula will look somewhat better: meff = m∙(v/c)2 = m∙β2 We know β varies between 0 and 1 and, therefore, meff will vary between 0 and m.
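The identity behind that re-definition is easy to verify numerically. The minimal Python sketch below checks that, with the old definition meff = m∙β2/2, the relation E = p2/(2meff) indeed reduces to E = m·c2; the mass and velocity values are arbitrary test values, and m is the (relativistic) mass here.

```python
c = 299792458.0  # speed of light (m/s)

def energies(m, v):
    # total energy E = m·c^2, momentum p = m·v, and the 'old' effective mass
    E, p  = m * c**2, m * v
    m_eff = m * (v / c)**2 / 2
    return E, p**2 / (2 * m_eff)

for beta in (0.1, 0.5, 0.9):
    E, E_check = energies(m=9.109e-31, v=beta * c)
    print(f"beta = {beta}: E = {E:.4e} J, p^2/(2*m_eff) = {E_check:.4e} J")
```

Both columns come out the same for any β between 0 and 1, which is all the identity claims.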
Feynman drops the subscript, and just writes meff as m in his textbook (see Feynman, III-19). On the other hand, the electron mass that is used here is also the electron mass that is used to calculate the size of an atom (see Feynman, III-2-4). As such, the two mass concepts are, effectively, mutually compatible. It is confusing because the same mass is often defined as the mass of a stationary electron (see, for example, the article on it in the online Wikipedia encyclopedia[28]). In the context of the derivation of the electron orbitals, we do have the potential energy term – which is the equivalent of a source term in a diffusion equation – and that may explain why the above-mentioned meff = m∙(v/c)2 = m∙β2 formula does not apply. This paper discusses general principles in physics only. Hence, references can be limited to references to physics textbooks only. For ease of reading, any reference to additional material has been limited to a more popular undergrad textbook that can be consulted online: Feynman’s Lectures on Physics (http://www.feynmanlectures.caltech.edu). References are per volume, per chapter and per section. For example, Feynman III-19-3 refers to Volume III, Chapter 19, Section 3. [1] Of course, an actual particle is localized in space and can, therefore, not be represented by the elementary wavefunction ψ = a·ei∙θ = a·ei[E·t − p∙x]/ħ = a·(cosθ + i·sinθ). We must build a wave packet for that: a sum of wavefunctions, each with its own amplitude ak and its own argument θk = (Ek∙t – pk∙x)/ħ. This is dealt with in this paper as part of the discussion on the mathematical and physical interpretation of the normalization condition. [2] The N/kg dimension immediately, and naturally, reduces to the dimension of acceleration (m/s2), thereby facilitating a direct interpretation in terms of Newton’s force law. [3] In physics, a two-spring metaphor is more common. Hence, the pistons in the author’s perpetuum mobile may be replaced by springs. [4] The author re-derives the equation for the Compton scattering radius in section VII of the paper. [5] The magnetic force can be analyzed as a relativistic effect (see Feynman II-13-6). The dichotomy between the electric force as a polar vector and the magnetic force as an axial vector disappears in the relativistic four-vector representation of electromagnetism. [6] For example, when using Schrödinger’s equation in a central field (think of the electron around a proton), the use of polar coordinates is recommended, as it ensures the symmetry of the Hamiltonian under all rotations (see Feynman III-19-3). [7] This sentiment is usually summed up in the apocryphal quote: “God does not play dice.” The actual quote comes out of one of Einstein’s private letters to Cornelius Lanczos, another scientist who had also emigrated to the US. The full quote is as follows: “You are the only person I know who has the same attitude towards physics as I have: belief in the comprehension of reality through something basically simple and unified… It seems hard to sneak a look at God’s cards. But that He plays dice and uses ‘telepathic’ methods… is something that I cannot believe for a single moment.” (Helen Dukas and Banesh Hoffman, Albert Einstein, the Human Side: New Glimpses from His Archives, 1979) [8] Of course, both are different velocities: ω is an angular velocity, while v is a linear velocity: ω is measured in radians per second, while v is measured in meter per second. However, the definition of a radian implies radians are measured in distance units.
Hence, the physical dimensions are, effectively, the same. As for the formula for the total energy of an oscillator, we should actually write: E = m·a2∙ω2/2. The additional factor (a) is the (maximum) amplitude of the oscillator. [9] We also have a 1/2 factor in the E = mv2/2 formula. Two remarks may be made here. First, it may be noted this is a non-relativistic formula and, more importantly, incorporates kinetic energy only. Using the Lorentz factor (γ), we can write the relativistically correct formula for the kinetic energy as K.E. = E − E0 = mv∙c2 − m0∙c2 = γ∙m0∙c2 − m0∙c2 = m0∙c2∙(γ − 1). As for the exclusion of the potential energy, we may note that we may choose our reference point for the potential energy such that the kinetic and potential energy mirror each other. The energy concept that then emerges is the one that is used in the context of the Principle of Least Action: it equals E = mv2. Appendix 1 provides some notes on that. [10] Instead of two cylinders with pistons, one may also think of connecting two springs with a crankshaft. [11] It is interesting to note that we may look at the energy in the rotating flywheel as potential energy because it is energy that is associated with motion, albeit circular motion. In physics, one may associate a rotating object with kinetic energy using the rotational equivalent of mass and linear velocity, i.e. rotational inertia (I) and angular velocity ω. The kinetic energy of a rotating object is then given by K.E. = (1/2)·I·ω2. [12] Because of the sideways motion of the connecting rods, the sinusoidal function will describe the linear motion only approximately, but you can easily imagine the idealized limit situation. [13] The ω2 = 1/LC formula gives us the natural or resonant frequency for an electric circuit consisting of a resistor (R), an inductor (L), and a capacitor (C). Writing the formula as ω2 = (1/C)/L introduces the concept of elastance, which is the equivalent of the mechanical stiffness (k) of a spring. [14] The resistance in an electric circuit introduces a damping factor. When analyzing a mechanical spring, one may also want to introduce a drag coefficient. Both are usually defined as a fraction of the inertia, which is the mass for a spring and the inductance for an electric circuit. Hence, we would write the drag coefficient for a spring as γ·m, and the resistance of the circuit as R = γ·L. [15] Photons are emitted by atomic oscillators: atoms going from one state (energy level) to another. Feynman (Lectures, I-33-3) shows us how to calculate the Q of these atomic oscillators: it is of the order of 108, which means the wave train will last about 10–8 seconds (to be precise, that is the time it takes for the radiation to die out by a factor 1/e). For example, for sodium light, the radiation will last about 3.2×10–8 seconds (this is the so-called decay time τ). Now, because the frequency of sodium light is some 500 THz (500×1012 oscillations per second), this makes for some 16 million oscillations. There is an interesting paradox here: the speed of light tells us that such a wave train will have a length of about 9.6 m! How is that to be reconciled with the pointlike nature of a photon? The paradox can only be explained by relativistic length contraction: in an analysis like this, one needs to distinguish the reference frame of the photon – riding along the wave as it is being emitted, so to speak – and our stationary reference frame, which is that of the emitting atom. [16] This is a general result and is reflected in the K.E.
= T = (1/2)·m·ω2·a2·sin2(ω·t + Δ) and the P.E. = U = k·x2/2 = (1/2)·m·ω2·a2·cos2(ω·t + Δ) formulas for the linear oscillator. [17] Feynman further formalizes this in his Lecture on Superconductivity (Feynman, III-21-2), in which he refers to Schrödinger’s equation as the “equation for continuity of probabilities”. The analysis is centered on the local conservation of energy, which confirms the interpretation of Schrödinger’s equation as an energy diffusion equation. [18] The meff is the effective mass of the particle, which depends on the medium. For example, an electron traveling in a solid (a transistor, for example) will have a different effective mass than in an atom. In free space, we can drop the subscript and just write meff = m. Appendix 2 provides some additional notes on the concept. As for the equations, they are easily derived from noting that two complex numbers a + i∙b and c + i∙d are equal if, and only if, their real and imaginary parts are the same. Now, the ∂ψ/∂t = i∙[ħ/(2meff)]∙∇2ψ equation amounts to writing something like this: a + i∙b = i∙(c + i∙d). Now, remembering that i2 = −1, you can easily figure out that i∙(c + i∙d) = i∙c + i2∙d = − d + i∙c. [19] The dimension of B is usually written as N/(m∙A), using the SI unit for current, i.e. the ampere (A). However, 1 C = 1 A∙s and, hence, 1 N/(m∙A) = 1 (N/C)/(m/s). [20] Of course, multiplication with i amounts to a counterclockwise rotation. Hence, multiplication by –i also amounts to a rotation by 90 degrees, but clockwise. Now, to uniquely identify the clockwise and counterclockwise directions, we need to establish the equivalent of the right-hand rule for a proper geometric interpretation of Schrödinger’s equation in three-dimensional space: if we look at a clock from the back, then its hand will be moving counterclockwise. When writing B = (1/c)∙iE, we assume we are looking in the negative x-direction. If we are looking in the positive x-direction, we should write: B = −(1/c)∙iE. Of course, Nature does not care about our conventions. Hence, both should give the same results in calculations. We will show in a moment they do. [21] In fact, when multiplying C2/(N·m2) with N2/C2, we get N/m2, but we can multiply this with 1 = m/m to get the desired result. It is significant that an energy density (joule per unit volume) can also be measured in newton per square meter (force per unit area). [22] The illustration shows a linearly polarized wave, but the obtained result is general. [23] The sine and cosine are essentially the same functions, except for the difference in the phase: sinθ = cos(θ − π/2). [24] I must thank a physics blogger for re-writing the 1/(ε0·μ0) = c2 equation like this. See: http://reciprocal.systems/phpBB3/viewtopic.php?t=236 (retrieved on 29 September 2017). [25] A circularly polarized electromagnetic wave may be analyzed as consisting of two perpendicular electromagnetic plane waves of equal amplitude and 90° difference in phase. [26] Of course, the reader will now wonder: what about neutrons? How to explain neutron spin? Neutrons are neutral. That is correct, but neutrons are not elementary: they consist of (charged) quarks. Hence, neutron spin can (or should) be explained by the spin of the underlying quarks. [27] We detailed the mathematical framework and the calculations in the following online article: https://readingfeynman.org/2017/09/15/the-principle-of-least-action-re-visited. [28] https://en.wikipedia.org/wiki/Electron_rest_mass (retrieved on 29 September 2017).
The Liénard–Wiechert potentials and the solution for Maxwell’s equations In my post on gauges and gauge transformations in electromagnetics, I mentioned the full and complete solution for Maxwell’s equations, using the electric and magnetic (vector) potential Φ and A: Φ(1, t) = ∫ [ρ(2, t − r12/c)/(4πε0·r12)]·dV2 and A(1, t) = ∫ [j(2, t − r12/c)/(4πε0·c2·r12)]·dV2 Feynman frames it nicely, so I should print it and put it on the kitchen door, so I can look at it every day. 🙂 I should print the wave equation we derived in our previous post too. Hmm… Stupid question, perhaps, but why is there no wave equation above? I mean: in the previous post, we said the wave equation was the solution for Maxwell’s equation, didn’t we? The answer is simple, of course: the wave equation is a solution for waves originating from some source and traveling through free space, so that’s a special case. Here we have everything. Those integrals ‘sweep’ all over space, and so that’s real space, which is full of moving charges and so there are waves everywhere. So the solution above is far more general and captures it all: it’s the potential at every point in space, and at every point in time, taking into account whatever else is there, moving or not moving. In fact, it is the general solution of Maxwell’s equations. How do we find it? Well… I could copy Feynman’s 21st Lecture but I won’t do that. The solution is based on the formula for Φ and A for a small blob of charge, and then the formulas above just integrate over all of space. That solution for a small blob of charge, i.e. a point charge really, was first deduced in 1898, by a French engineer: Alfred-Marie Liénard. However, his equations did not get much attention, apparently, because a German physicist, Emil Johann Wiechert, worked on the same thing and found the very same equations just two years later. That’s why they are referred to as the Liénard-Wiechert potentials, so they both get credit for it, even if both of them worked it out independently. These are the equations: Φ(1, t) = (q/4πε0)·1/[r′·(1 − vr′/c)] and A(1, t) = (v/c2)·Φ(1, t) with r′ the (retarded) distance to the charge and vr′ the component of its velocity along r′, both evaluated at the retarded time t − r′/c. Now, you may wonder why I am mentioning them, and you may also wonder how we get those integrals above, i.e. our general solution for Maxwell’s equations, from them. You can find the answer to your second question in Feynman’s 21st Lecture. 🙂 As for the first question, I mention them because one can derive two other formulas for E and B from them. These are the formulas that Feynman uses in his first Volume, when studying light: E = −(q/4πε0)·[er′/r′2 + (r′/c)·d(er′/r′2)/dt + (1/c2)·d2er′/dt2] and B = −er′×E/c Now you’ll probably wonder how we can get these two equations from the Liénard-Wiechert potentials. They don’t look very similar, do they? No, they don’t. Frankly, I would like to give you the same answer as above, i.e. check it in Feynman’s 21st Lecture, but the truth is that the derivation is so long and tedious that even Feynman says one needs “a lot of paper and a lot of time” for that. So… Well… I’d suggest we just use all of those formulas and not worry too much about where they come from. If we can agree on that, we’re actually sort of finished with electromagnetism. All the chapters that follow Feynman’s 21st Lecture are applications indeed, so they do not add all that much to the core of the classical theory of electromagnetism. So why did I write this post? Well… I am not sure. I guess I just wanted to sum things up for myself, so I can print it all out and put it on the kitchen door indeed. 🙂 Oh, and now that I think of it, I should add one more formula, and that’s the formula for spherical waves (as opposed to the plane waves we discussed in my previous post).
It’s a very simple formula, and entirely what you’d expect to see: ψ(1, t) = ∫ [S(2, t − r12/c)/(4π·r12)]·dV2 The S function is the source function, and you can see that the formula is a Coulomb-like potential, but with the retarded argument. You’ll wonder: what is ψ? Is it E or B or what? Well… You can just substitute: ψ can be anything. Indeed, Feynman gives a very general solution for any type of spherical wave here. 🙂 So… That’s it, folks. That’s all there is to it. I hope you enjoyed it. 🙂 Addendum: Feynman’s equation for electromagnetic radiation I talked about Feynman’s formula for electromagnetic radiation before, but it’s probably good to quickly re-explain it here. Note that it talks about the electric field only, as the magnetic field is so tiny and, in any case, if we have E then we can find B. So the formula is: E = −(q/4πε0)·[er′/r′2 + (r′/c)·d(er′/r′2)/dt + (1/c2)·d2er′/dt2] The geometry of the situation is depicted below. We have some charge q that, we assume, is moving through space, and so it creates some field E at point P. The er′ vector is the unit vector from P to Q, so it points at the charge. Well… It points to where the charge was at the time just a little while ago, i.e. at the time t – r′/c. Why? Well… Because the field needs some time to travel, we don’t know where q is right now, i.e. at time t. It might be anywhere. Perhaps it followed some weird trajectory during the time r′/c, like the trajectory below. So our er′ vector moves as the charge moves, and so it will also have velocity and, likely, some acceleration, but what we measure for its velocity and acceleration, i.e. the d(er′)/dt and d2(er′)/dt2 in that Feynman equation, are also the retarded velocity and the retarded acceleration. But look at the terms in the equation. The first two terms have a 1/r′2 in them, so these two effects diminish with the square of the distance. The first term is just Coulomb’s Law (note that the minus sign in front takes care of the fact that like charges repel and so the E vector will point in the other way). Well… It is and it isn’t, because of the retarded time argument, of course. And so we have the second term, which sort of compensates for that. Indeed, the d(er′)/dt is the time rate of change of er′ and, hence, if r′/c = Δt, then (r′/c)·d(er′)/dt is a first-order approximation of Δer′. As Feynman puts it: “The second term is as though nature were trying to allow for the fact that the Coulomb effect is retarded, if we might put it very crudely. It suggests that we should calculate the delayed Coulomb field but add a correction to it, which is its rate of change times the time delay that we use. Nature seems to be attempting to guess what the field at the present time is going to be, by taking the rate of change and multiplying by the time that is delayed.” In short, the first two terms can be written as E = −(q/4πε0)/r′2·[er′ + Δer′] and, hence, it’s a sort of modified Coulomb Law that sort of tries to guess what the electrostatic field at P should be based on (a) what it is right now, and (b) how q’s direction and velocity, as measured now, would change it. Now, the third term has a 1/c2 factor in front but, unlike the other two terms, this effect does not fall off with distance. So the formula below fully describes electromagnetic radiation, indeed, because it’s the only important term when we get ‘far enough away’, with ‘far enough’ meaning that the parts that go as the square of the distance have fallen off so much that they’re no longer significant.
E = −[q/(4πε0c2)]·d2er′/dt2 Of course, you’re smart, and so you’ll immediately note that, as r increases, that unit vector keeps wiggling but that effect will also diminish. You’re right. It does, but in a fairly complicated way. The acceleration of er′ has two components indeed. One is the transverse or tangential piece, because the end of er′ goes up and down, and the other is a radial piece because it stays on a sphere and so it changes direction. The radial piece is the smallest bit, and actually also varies as the inverse square of r when r is fairly large. The tangential piece, however, varies only inversely as the distance, so as 1/r. So, yes, the wigglings of er′ look smaller and smaller, inversely as the distance, but the tangential piece is and remains significant, because it does not vary as 1/r2 but as 1/r only.  That’s why you’ll usually see the law of radiation written in an even simpler way: E = −[q/(4πε0c2)]·ax(t − r/c)/r This law reduces the whole effect to the component of the acceleration that is perpendicular to the line of sight only. It assumes the distance is huge as compared to the distance over which the charge is moving and, therefore, that r′ and r can be equated for all practical purposes. It also notes that the tangential piece is all that matters, and so it equates d2(er′)/dt2 with ax/r. The whole thing is probably best illustrated as below: we have a generator driving charges up and down in G – so it’s an antenna really – and so we’ll measure a strong signal when putting the radiation detector D in position 1, but we’ll measure nothing in position 3. [The detector is, of course, another antenna, but with an amplifier for the signal.] But so here I am starting to talk about electromagnetic radiation once more, which was not what I wanted to do here, if only because Feynman does a much better job at that than I could ever do. 🙂 Traveling fields: the wave equation and its solutions We’ve climbed a big mountain over the past few weeks, post by post, 🙂 slowly gaining height, and carefully checking out the various routes to the top. But we are there now: we finally fully understand how Maxwell’s equations actually work. Let me jot them down once more: (1) ∇•E = ρ/ε0, (2) ∇×E = −∂B/∂t, (3) ∇•B = 0, and (4) c2∇×B = j/ε0 + ∂E/∂t As for how real or unreal the E and B fields are, I gave you Feynman’s answer to it, so… Well… I can’t add to that. I should just note, or remind you, that we have a fully equivalent description of it all in terms of the electric and magnetic (vector) potential Φ and A, and so we can ask the same question about Φ and A. They explain real stuff, so they’re real in that sense. That’s what Feynman’s answer amounts to, and I am happy with it. 🙂 What I want to do here is show how we can get from those equations to some kind of wave equation: an equation that describes how a field actually travels through space. So… Well… Let’s first look at that very particular wave function we used in the previous post to prove that electromagnetic waves propagate with speed c, i.e. the speed of light. The fields were very simple: the electric field had a y-component only, and the magnetic field a z-component only. Their magnitudes, i.e. their magnitude where the field had reached, as it fills the space traveling outwards, were given in terms of J, i.e. the surface current density going in the positive y-direction, and the geometry of the situation is illustrated below. The fields were, obviously, zero where the fields had not reached as they were traveling outwards. And, yes, I know that sounds stupid.
But… Well… It’s just to make clear what we’re looking at here. 🙂 We also showed how the wave would look if we turned off its First Cause after some time T, so if the moving sheet of charge would no longer move after time T. We’d have the following pulse traveling through space, a rectangular shape really: We can imagine more complicated shapes for the pulse, like the shape shown below. J goes from one unit to two units at time t = t1 and then to zero at t = t2. Now, the illustration on the right shows the electric field as a function of x at the time t shown by the arrow. We’ve seen this before when discussing waves: if the speed of travel of the wave is equal to c, then x = c·t, and the pattern is as shown below indeed: it mirrors what happened at the source x/c seconds ago. So we write: E(x, t) = E(x = 0, t − x/c) This idea of using the retarded time t′ = t − x/c in the argument of a wave function f – or, what amounts to the same, using x − c·t – is key to understanding wave functions. I’ve explained this in very simple language in a post for my kids and, if you don’t get this, I recommend you check it out. What we’re doing, basically, is converting something expressed in time units into something expressed in distance units, or vice versa, using the velocity of the wave as the scale factor, so time and distance are both expressed in the same unit, which may be seconds, or meters. To see how it works, suppose we add some time Δt to the argument of our wave function f, so we’re looking at f[x−c(t+Δt)] now, instead of f(x−ct). Now, f[x−c(t+Δt)] = f(x−ct−cΔt), so we’ll get a different value for our function—obviously! But it’s easy to see that we can restore our wave function f to its former value by also adding some distance Δx = cΔt to the argument. Indeed, if we do so, we get f[x+Δx−c(t+Δt)] = f(x+cΔt–ct−cΔt) = f(x–ct). You’ll say: t − x/c is not the same as x–ct. It is and it isn’t: any function of x–ct is also a function of t − x/c, because we can write: f(x − ct) = f[−c·(t − x/c)], which is just some other function F(t − x/c) of the argument t − x/c. Here, I need to add something about the direction of travel. The pulse above travels in the positive x-direction, so that’s why we have x minus ct in the argument. For a wave traveling in the negative x-direction, we’ll have a wave function y = F(x+ct). In any case, I can’t dwell on this, so let me move on. Now, Maxwell’s equations in free or empty space, where there are no charges nor currents to interact with, reduce to: (1) ∇•E = 0, (2) ∇×E = −∂B/∂t, (3) ∇•B = 0, and (4) c2∇×B = ∂E/∂t Now, how can we relate this set of complicated equations to a simple wave function? Let’s do the exercise for our simple Ey and Bz wave. Let’s start by writing out the first equation, i.e. ∇•E = 0, so we get: ∂Ex/∂x + ∂Ey/∂y + ∂Ez/∂z = 0 Now, our wave does not vary in the y and z direction, so none of the components, including Ey and Ez, depend on y or z. It only varies in the x-direction, so ∂Ey/∂y and ∂Ez/∂z are zero. Note that the cross-derivatives ∂Ey/∂z and ∂Ez/∂y are also zero: we’re talking a plane wave here, the field varies only with x. However, because ∇•E = 0, ∂Ex/∂x must be zero and, hence, Ex must be zero. Huh? What? How is that possible? You just said that our field does vary in the x-direction! And now you’re saying it doesn’t? Read carefully. I know it’s complicated business, but it all makes sense. Look at the function: we’re talking Ey, not Ex. Ey does vary as a function of x, but our field does not have an x-component, so Ex = 0. We have no cross-derivative ∂Ey/∂x in the divergence of E (i.e. in ∇•E = 0). Huh? What? Let me put it differently.
E has three components: Ex, Ey and Ez, and we have three space coordinates: x, y and z, so we have nine cross-derivatives. What I am saying is that all derivatives with respect to y and z are zero. That still leaves us with three derivatives: ∂Ex/∂x, ∂Ey/∂x, and ∂Ez/∂x. So… Because all derivatives with respect to y and z are zero, and because of the ∇•E = 0 equation, we know that ∂Ex/∂x must be zero. So, to make a long story short, I did not say anything about ∂Ey/∂x or ∂Ez/∂x. These may still be whatever they want to be, and they may vary in more or in less complicated ways. I’ll give an example of that in a moment. Having said that, I do agree that I was a bit quick in writing that, because ∂Ex/∂x = 0, Ex must be zero too. Looking at the math only, Ex is not necessarily zero: it might be some non-zero constant. So… Yes. That’s a mathematical possibility. The static field from some charged condenser plate would be an example of a constant Ex field. However, the point is that we’re not looking at such static fields here: we’re talking dynamics here, and we’re looking at a particular type of wave: we’re talking a so-called plane wave. Now, the wave front of a plane wave is… Well… A plane. 🙂 So Ex is zero indeed. It’s a general result for plane waves: the electric field of a plane wave will always be at right angles to the direction of propagation. Hmm… I can feel your skepticism here. You’ll say I am arbitrarily restricting the field of analysis… Well… Yes. For the moment. It’s not an unreasonable restriction, though. As I mentioned above, the field of a plane wave may still vary in both the y- and z-directions, as shown in the illustration below (for which the credit goes to Wikipedia), which visualizes the electric field of circularly polarized light. In any case, don’t worry too much about it. Let’s get back to the analysis. Just note we’re talking plane waves here. We’ll talk about non-plane waves, i.e. incoherent light waves, later. 🙂 So we have plane waves and, therefore, a so-called transverse E field which we can resolve in two components: Ey and Ez. However, we wanted to study a very simple Ey field only. Why? Remember the objective of this lesson: it’s just to show how we go from Maxwell’s equations to the wave function, and so let’s keep the analysis as simple as we can for now: we can make it more general later. In fact, if we do the analysis now for non-zero Ey and zero Ez, we can do a similar analysis for non-zero Ez and zero Ey, and the general solution is going to be some superposition of two such fields, so we’ll have a non-zero Ey and Ez. Capito? 🙂 So let me write out Maxwell’s second equation, and use the results we got above, so I’ll incorporate the zero values for the derivatives with respect to y and z, and also the assumption that Ez is zero. So we get: ∇×E = (∂Ez/∂y − ∂Ey/∂z, ∂Ex/∂z − ∂Ez/∂x, ∂Ey/∂x − ∂Ex/∂y) = (0, 0, ∂Ey/∂x) [By the way: note that, out of the nine derivatives, the curl involves only the (six) cross-derivatives. That’s linked to the neat separation between the curl and the divergence operator. Math is great! :-)] Now, because of the flux rule (∇×E = –∂B/∂t), we can (and should) equate the three components of ∇×E above with the three components of –∂B/∂t, so we get: 1. ∂Bx/∂t = 0, 2. ∂By/∂t = 0, and 3. ∂Bz/∂t = −∂Ey/∂x. [In case you wonder what it is that I am trying to do, patience, please! We’ll get where we want to get. Just hang in there and read on.] Now, ∂Bx/∂t = 0 and ∂By/∂t = 0 do not necessarily imply that Bx and By are zero: there might be some magnets and, hence, we may have some constant static field.
However, that’s a matter of choosing a reference point or, more simply, assuming that empty space is effectively empty, and so we don’t have magnets lying around and so we assume that Bx and By are effectively zero. [Again, we can always throw more stuff in when our analysis is finished, but let’s keep it simple and stupid right now, especially because the Bx = By = 0 assumption is entirely in line with the Ex = Ez = 0 assumption.] The equations above tell us what we know already: the E and B fields are at right angles to each other. However, note, once again, that this is a more general result for all plane electromagnetic waves, so it’s not only that very special caterpillar or butterfly field that we’re looking at. [If you didn’t read my previous post, you won’t get the pun, but don’t worry about it. You need to understand the equations, not the silly jokes.] OK. We’re almost there. Now we need Maxwell’s last equation. When we write it out, we get the following monstrously looking set of equations: c2·(∂Bz/∂y − ∂By/∂z) = ∂Ex/∂t, c2·(∂Bx/∂z − ∂Bz/∂x) = ∂Ey/∂t, and c2·(∂By/∂x − ∂Bx/∂y) = ∂Ez/∂t However, because of all of the equations involving zeroes above 🙂 only ∂Bz/∂x is not equal to zero, so the whole set reduces to one simple equation only: c2·∂Bz/∂x = −∂Ey/∂t Simplifying assumptions are great, aren’t they? 🙂 Having said that, it’s easy to be confused. You should watch out for the denominators: a ∂x and a ∂t are two very different things. So we have two equations now involving first-order derivatives: 1. ∂Bz/∂t = −∂Ey/∂x 2. c2∂Bz/∂x = −∂Ey/∂t So what? Patience, please! 🙂 Let’s differentiate the first equation with respect to x and the second with respect to t. Why? Because… Well… You’ll see. Don’t complain. It’s simple. Just do it. We get: 1. ∂[∂Bz/∂t]/∂x = −∂2Ey/∂x2 2. ∂[∂Bz/∂x]/∂t = −(1/c2)·∂2Ey/∂t2 So we can equate the left-hand sides of our two equations now, and what we get is a differential equation of the second order that we’ve encountered already, when we were studying wave equations. In fact, it is the wave equation for one-dimensional waves: ∂2Ey/∂x2 = (1/c2)·∂2Ey/∂t2 In case you want to double-check, I did a few posts on this, but, if you don’t get this, well… I am sorry. You’ll need to do some homework. More in particular, you’ll need to do some homework on differential equations. The equation above is basically some constraint on the functional form of Ey. More in general, if we see an equation like: ∂2ψ/∂x2 = (1/c2)·∂2ψ/∂t2 then the function ψ(x, t) must be some function ψ(x, t) = f(x − ct) + g(x + ct) So any function ψ like that will work. You can check it out by doing the necessary derivatives and plug them into the wave equation. [In case you wonder how you should go about this, Feynman actually does it for you in his Lecture on this topic, so you may want to check it there.] In fact, the functions f(x − ct) and g(x + ct) themselves will also work as possible solutions. So we can drop one or the other, which amounts to saying that our ‘shape’ has to travel in some direction, rather than in both at the same time. 🙂 Indeed, from all of my explanations above, you know what f(x − ct) represents: it’s a wave that travels in the positive x-direction. Now, it may be periodic, but it doesn’t have to be periodic. The f(x − ct) function could represent any constant ‘shape’ that’s traveling in the positive x-direction at speed c. Likewise, the g(x + ct) function could represent any constant ‘shape’ that’s traveling in the negative x-direction at speed c. As for super-imposing both… Well… I suggest you check that post I wrote for my son, Vincent. It’s on the math of waves, but it doesn’t have derivatives and/or differential equations.
It just explains how superimposition and all that works. It’s not very abstract, as it revolves around a vibrating guitar string. So, if you have trouble with all of the above, you may want to read that first. 🙂 The bottom line is that we can get any wavefunction we want by superimposing simple sinusoidals that are traveling in one or the other direction, and so that’s what the more general solution really says. Full stop. So that’s what we’re doing really: we add very simple waves to get more complicated waveforms. 🙂 Now, I could leave it at this, but then it’s very easy to just go one step further, and that is to assume that Ez and, therefore, By are not zero. It’s just a matter of super-imposing solutions. Let me just give you the general solution. Just look at it for a while. If you understood all that I’ve said above, 20 seconds or so should be sufficient to say: “Yes, that makes sense. That’s the solution in two dimensions.” At least, I hope so! 🙂 Ey = f(x − ct) + g(x + ct) and Ez = F(x − ct) + G(x + ct), with the corresponding Bz and By components given by the same f, g, F and G functions, divided by c and with the appropriate signs. OK. I should really stop now. But… Well… Now that we’ve got a general solution for all plane waves, why not be even bolder and think about what we could possibly say about three-dimensional waves? So then Ex and, therefore, Bx would not necessarily be zero either. After all, light can behave that way. In fact, light is likely to be non-polarized and, hence, Ex and, therefore, Bx are most probably not equal to zero! Now, you may think the analysis is going to be terribly complicated. And you’re right. It would be if we’d stick to our analysis in terms of x, y and z coordinates. However, it turns out that the analysis in terms of vector equations is actually quite straightforward. I’ll just copy the Master here, so you can see His Greatness. 🙂 The three-dimensional wave equation is: ∇2E − (1/c2)·∂2E/∂t2 = 0 (this is equation (20.27) in Feynman’s Lecture) But what solution does an equation like (20.27) have? We can appreciate it’s actually three equations, i.e. one for each component, and so… Well… Hmm… What can we say about that? I’ll quote the Master on this too: “How shall we find the general wave solution? The answer is that all the solutions of the three-dimensional wave equation can be represented as a superposition of the one-dimensional solutions we have already found. We obtained the equation for waves which move in the x-direction by supposing that the field did not depend on y and z. Obviously, there are other solutions in which the fields do not depend on x and z, representing waves going in the y-direction. Then there are solutions which do not depend on x and y, representing waves travelling in the z-direction. Or in general, since we have written our equations in vector form, the three-dimensional wave equation can have solutions which are plane waves moving in any direction at all. Again, since the equations are linear, we may have simultaneously as many plane waves as we wish, travelling in as many different directions. Thus the most general solution of the three-dimensional wave equation is a superposition of all sorts of plane waves moving in all sorts of directions.” It’s the same thing once more: we add very simple waves to get more complicated waveforms. 🙂 You must have fallen asleep by now or, else, be watching something else. Feynman must have felt the same. After explaining all of the nitty-gritty above, Feynman wakes up his students. He does so by appealing to their imagination: “Try to imagine what the electric and magnetic fields look like at present in the space in this lecture room.
First of all, there is a steady magnetic field; it comes from the currents in the interior of the earth—that is, the earth’s steady magnetic field. Then there are some irregular, nearly static electric fields produced perhaps by electric charges generated by friction as various people move about in their chairs and rub their coat sleeves against the chair arms. Then there are other magnetic fields produced by oscillating currents in the electrical wiring—fields which vary at a frequency of 60 cycles per second, in synchronism with the generator at Boulder Dam. But more interesting are the electric and magnetic fields varying at much higher frequencies. For instance, as light travels from window to floor and wall to wall, there are little wiggles of the electric and magnetic fields moving along at 186,000 miles per second. Then there are also infrared waves travelling from the warm foreheads to the cold blackboard. And we have forgotten the ultraviolet light, the x-rays, and the radiowaves travelling through the room. Flying across the room are electromagnetic waves which carry music of a jazz band. There are waves modulated by a series of impulses representing pictures of events going on in other parts of the world, or of imaginary aspirins dissolving in imaginary stomachs. To demonstrate the reality of these waves it is only necessary to turn on electronic equipment that converts these waves into pictures and sounds. If we go into further detail to analyze even the smallest wiggles, there are tiny electromagnetic waves that have come into the room from enormous distances. There are now tiny oscillations of the electric field, whose crests are separated by a distance of one foot, that have come from millions of miles away, transmitted to the earth from the Mariner II space craft which has just passed Venus. Its signals carry summaries of information it has picked up about the planets (information obtained from electromagnetic waves that travelled from the planet to the space craft). There are very tiny wiggles of the electric and magnetic fields that are waves which originated billions of light years away—from galaxies in the remotest corners of the universe. That this is true has been found by “filling the room with wires”—by building antennas as large as this room. Such radiowaves have been detected from places in space beyond the range of the greatest optical telescopes. Even they, the optical telescopes, are simply gatherers of electromagnetic waves. What we call the stars are only inferences, inferences drawn from the only physical reality we have yet gotten from them—from a careful study of the unendingly complex undulations of the electric and magnetic fields reaching us on earth. There is, of course, more: the fields produced by lightning miles away, the fields of the charged cosmic ray particles as they zip through the room, and more, and more. What a complicated thing is the electric field in the space around you! Yet it always satisfies the three-dimensional wave equation.” So… Well… That’s it for today, folks. 🙂 We have some more gymnastics to do, still… But we’re really there. Or here, I should say: on top of the peak. What a view we have here! Isn’t it beautiful? It took us quite some effort to get on top of this thing, and we’re still trying to catch our breath as we struggle with what we’ve learned so far, but it’s really worthwhile, isn’t it?
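A small postscript to this post: the claim that any shape f(x − ct) solves the one-dimensional wave equation is easy to check numerically with finite differences. Here is a minimal Python sketch; the Gaussian pulse shape, the test point and the step sizes are all arbitrary illustrative choices.

```python
import math

c = 299792458.0                              # speed of light (m/s)
f = lambda u: math.exp(-u**2 / 2.0e-4)       # some pulse shape f(u)
E = lambda x, t: f(x - c * t)                # a right-travelling wave E(x, t) = f(x - c·t)

x, t   = 0.01, 1.0e-11                       # test point (m, s)
dx, dt = 1.0e-5, 1.0e-14                     # finite-difference steps

d2E_dt2 = (E(x, t + dt) - 2 * E(x, t) + E(x, t - dt)) / dt**2
d2E_dx2 = (E(x + dx, t) - 2 * E(x, t) + E(x - dx, t)) / dx**2

print(f"d2E/dt2       = {d2E_dt2:.6e}")
print(f"c^2 * d2E/dx2 = {c**2 * d2E_dx2:.6e}")
```

The two numbers agree to several digits, and they keep agreeing if you swap in any other smooth shape for f.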
🙂 A not so easy piece: introducing the wave equation (and the Schrödinger equation) The title above refers to a previous post: An Easy Piece: Introducing the wave function. Indeed, I may have been sloppy here and there – I hope not – and so that’s why it’s probably good to clarify that the wave function (usually represented as Ψ – the psi function) and the wave equation (Schrödinger’s equation, for example – but there are other types of wave equations as well) are two related but different concepts: wave equations are differential equations, and wave functions are their solutions. Indeed, from a mathematical point of view, a differential equation (such as a wave equation) relates a function (such as a wave function) with its derivatives, and its solution is that function or – more generally – the set (or family) of functions that satisfies this equation.  The function can be real-valued or complex-valued, and it can be a function involving only one variable (such as y = y(x), for example) or more (such as u = u(x, t) for example). In the first case, it’s a so-called ordinary differential equation. In the second case, the equation is referred to as a partial differential equation, even if there’s nothing ‘partial’ about it: it’s as ‘complete’ as an ordinary differential equation (the name just refers to the presence of partial derivatives in the equation). Hence, in an ordinary differential equation, we will have terms involving dy/dx and/or d2y/dx2, i.e. the first and second derivative of y respectively (and/or higher-order derivatives, depending on the degree of the differential equation), while in partial differential equations, we will see terms involving ∂u/∂t and/or ∂2u/∂x2 (and/or higher-order partial derivatives), with ∂ replacing d as a symbol for the derivative. The independent variables could also be complex-valued but, in physics, they will usually be real variables (or scalars as real numbers are also being referred to – as opposed to vectors, which are nothing but two-, three- or more-dimensional numbers really). In physics, the independent variables will usually be x – or let’s use r = (x, y, z) for a change, i.e. the three-dimensional space vector – and the time variable t. An example is that wave function which we introduced in our ‘easy piece’. Ψ(r, t) = A·ei(p·r – E·t)/ħ [If you read the Easy Piece, then you might object that this is not quite what I wrote there, and you are right: I wrote Ψ(r, t) = A·ei(p·r/ħ – ω·t). However, here I am just introducing the other de Broglie relation (i.e. the one relating energy and frequency): E = h·f = ħ·ω and, hence, ω = E/ħ. Just re-arrange a bit and you’ll see it’s the same.] From a physics point of view, a differential equation represents a system subject to constraints, such as the energy conservation law (the sum of the potential and kinetic energy remains constant), and Newton’s law of course: F = d(mv)/dt. A differential equation will usually also be given with one or more initial conditions, such as the value of the function at point t = 0, i.e. the initial value of the function.
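Here is the simplest possible illustration of that last point – a differential equation plus an initial condition pinning down one specific solution. A minimal Python sketch, with arbitrary values: dy/dt = a·y with y(0) = 1 has the solution y(t) = ea·t, and a crude numerical integration confirms it.

```python
import math

a, y, t, dt = 0.5, 1.0, 0.0, 1.0e-4   # dy/dt = a*y, with the initial condition y(0) = 1
while t < 2.0:
    y += a * y * dt                   # one forward-Euler step
    t += dt
print(f"numerical y(2) = {y:.5f}, exact exp(a*2) = {math.exp(a * 2.0):.5f}")
```

Change the initial condition and you get a different member of the same family of solutions.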
To use Wikipedia’s definition: “Differential equations arise whenever a relation involving some continuously varying quantities (modeled by functions) and their rates of change in space and/or time (expressed as derivatives) is known or postulated.” That sounds a bit more complicated, perhaps, but it means the same: once you have a good mathematical model of a physical problem, you will often end up with a differential equation representing the system you’re looking at, and then you can do all kinds of things, such as analyzing whether or not the actual system is in an equilibrium and, if not, whether it will tend to equilibrium or, if not, what the equilibrium conditions would be. But here I’ll refer to my previous posts on the topic of differential equations, because I don’t want to get into these details – as I don’t need them here. The one thing I do need to introduce is an operator referred to as the gradient (it’s also known as the del operator, but I don’t like that word because it does not convey what it is). The gradient – denoted by ∇ – is a shorthand for the partial derivatives of our function u or Ψ with respect to space, so we write: ∇ = (∂/∂x, ∂/∂y, ∂/∂z) You should note that, in physics, we apply the gradient only to the spatial variables, not to time. For the derivative in regard to time, we just write ∂u/∂t or ∂Ψ/∂t. Of course, an operator means nothing until you apply it to a (real- or complex-valued) function, such as our u(x, t) or our Ψ(r, t): ∇u = ∂u/∂x and ∇Ψ = (∂Ψ/∂x, ∂Ψ/∂y, ∂Ψ/∂z) As you can see, the gradient operator returns a vector with three components if we apply it to a real- or complex-valued function of r, and so we can do all kinds of funny things with it combining it with the scalar or vector product, or with both. Here I need to remind you that, in a vector space, we can multiply vectors using either (i) the scalar product, aka the dot product (because of the dot in its notation: a•b) or (ii) the vector product, aka the cross product (yes, because of the cross in its notation: a×b). So we can define a whole range of new operators using the gradient and these two products, such as the divergence and the curl of a vector field. For example, if E is the electric field vector (I am using an italic bold-type E so you should not confuse E with the energy E, which is a scalar quantity), then div E = ∇•E, and curl E = ∇×E. Taking the divergence of a vector will yield some number (so that’s a scalar), while taking the curl will yield another vector.  I am mentioning these operators because you will often see them. A famous example is the set of equations known as Maxwell’s equations, which integrate all of the laws of electromagnetism and from which we can derive the electromagnetic wave equation: (1) ∇•E = ρ/ε0 (Gauss’ law) (2) ∇×E = –∂B/∂t (Faraday’s law) (3) ∇•B = 0 (4) c2∇×B = j/ε0 + ∂E/∂t   I should not explain these but let me just remind you of the essentials: 1.
1. The first equation (Gauss' law) can be derived from the equations for Coulomb's law and the forces acting upon a charge q in an electromagnetic field: F = q(E + v×B), with B the magnetic field vector (F is also referred to as the Lorentz force: it's the combined force on a charged particle caused by the electric and magnetic fields), v the velocity of the (moving) charge, ρ the charge density (so charge is thought of as being distributed in space, rather than being packed into points, and that's OK because our scale is not the quantum-mechanical one here) and, finally, ε0 the electric constant (some 8.854×10⁻¹² farads per meter).

2. The second equation (Faraday's law) gives the electric field associated with a changing magnetic field.

3. The third equation basically states that there is no such thing as a magnetic charge: there are only electric charges.

4. Finally, in the last equation, we have a vector j representing the current density: indeed, remember that magnetism only appears when (electric) charges are moving, so if there's an electric current. As for the equation itself, well… That's a more complicated story so I will leave that for the post scriptum.

We can do many more things: we can also take the curl of the gradient of some scalar, or the divergence of the curl of some vector (both have the interesting property that they are zero), and there are many more possible combinations – some of them useful, others not so useful. However, this is not the place to introduce differential calculus of vector fields (because that's what it is). The only other thing I need to mention here is what happens when we apply this gradient operator twice. Then we have a new operator ∇•∇ = ∇², which is referred to as the Laplacian. In fact, when we say 'apply ∇ twice', we are actually doing a dot product. Indeed, ∇ returns a vector, and so we are going to multiply this vector once again with a vector using the dot product rule a·b = ∑aibi (so we multiply the individual vector components and then add them). In the case of our functions u and Ψ, we get:

∇•(∇u) = ∇•∇u = (∇•∇)u = ∇²u = ∂²u/∂x²
∇•(∇Ψ) = ∇²Ψ = ∂²Ψ/∂x² + ∂²Ψ/∂y² + ∂²Ψ/∂z²

Now, you may wonder what it means to take the derivative (or partial derivative) of a complex-valued function (which is what we are doing in the case of Ψ) but don't worry about that: a complex-valued function of one or more real variables, such as our Ψ(x, t), can be decomposed as Ψ(x, t) = ΨRe(x, t) + iΨIm(x, t), with ΨRe and ΨIm two real-valued functions representing the real and imaginary part of Ψ(x, t) respectively. In addition, the rules for differentiating (and integrating) complex-valued functions are, to a large extent, the same as for real-valued functions. For example, if z is a complex number, then de^z/dz = e^z and, hence, using this and other very straightforward rules, we can indeed find the partial derivatives of a function such as Ψ(r, t) = A·e^(i(p·r – Et)/ħ) with respect to all the (real-valued) variables in the argument.

The electromagnetic wave equation

OK. That's enough math now. We are ready now to look at – and to understand – a real wave equation – I mean one that actually represents something in physics. Let's take Maxwell's equations as a start. To make it easy – and also to ensure that you have easy access to the full derivation – we'll take the so-called Heaviside form of these equations:

[Figure: the Heaviside form of Maxwell's equations]

This Heaviside form assumes a charge-free vacuum space, so there are no external forces acting upon our electromagnetic wave.
There are also no other complications such as electric currents. Also, the c² (i.e. the square of the speed of light) is written here as c² = 1/με, with μ and ε the so-called permeability (μ) and permittivity (ε) respectively (c0, μ0 and ε0 are the values in a vacuum: indeed, light travels slower elsewhere (e.g. in glass) – if at all). Now, these four equations can be replaced by just two, and it's these two equations that are referred to as the electromagnetic wave equation(s):

[Figure: the electromagnetic wave equation(s)]

The derivation is not difficult. In fact, it's much easier than the derivation for the Schrödinger equation which I will present in a moment. But, even if it is very short, I will just refer to Wikipedia in case you would be interested in the details (see the article on the electromagnetic wave equation). The point here is just to illustrate what is being done with these wave equations and why – not so much how. Indeed, you may wonder what we have gained with this 'reduction'. The answer to this very legitimate question is easy: the two equations above are second-order partial differential equations which are relatively easy to solve. In other words, we can find a general solution, i.e. a set or family of functions that satisfy the equation and, hence, can represent the wave itself. Why a set of functions? If it's a specific wave, then there should only be one wave function, right? Right. But to narrow our general solution down to a specific solution, we will need extra information, which is referred to as initial conditions, boundary conditions or, in general, constraints. [And if these constraints are not sufficiently specific, then we may still end up with a whole bunch of possibilities, even if they narrowed down the choice.] Let's give an example by re-writing the above wave equation using our function u(r, t) or, to simplify the analysis, u(x, t) – so we're looking at a plane wave traveling in one dimension only:

[Figure: the wave equation for u]

There are many functional forms for u that satisfy this equation. One of them is the following:

[Figure: a general solution of the wave equation]

This resembles the one I introduced when presenting the de Broglie equations, except that – this time around – we are talking about a real electromagnetic wave, not some probability amplitude. Another difference is that we allow a composite wave with two components: one traveling in the positive x-direction, and one traveling in the negative x-direction. Now, if you read the post in which I introduced the de Broglie wave, you will remember that these A·e^(i(kx–ωt)) or B·e^(–i(kx+ωt)) waves give strange probabilities. However, because we are not looking at some probability amplitude here – so it's not a de Broglie wave but a real wave (so we use complex-number notation only because it's convenient but, in practice, we're only considering the real part) – this functional form is quite OK. That being said, the following functional form, representing a wave packet (aka a wave train), is also a solution (or, better, a set of solutions):

[Figure: the wave packet equation]

Huh? Well… Yes. If you really can't follow here, I can only refer you to my post on Fourier analysis and Fourier transforms: I cannot reproduce that one here because that would make this post totally unreadable. We have a wave packet here, and so that's the sum of an infinite number of component waves that interfere constructively in the region of the envelope (so that's the location of the packet) and destructively outside.
The integral is just the continuum limit of a summation of n such waves. So this integral will yield a function u with x and t as independent variables… If we know A(k) that is. Now that's the beauty of these Fourier integrals (because that's what this integral is). Indeed, in my post on Fourier transforms I also explained how these amplitudes A(k) in the equation above can be expressed as a function of u(x, t) through the inverse Fourier transform. In fact, I actually presented the Fourier transform pair Ψ(x) and Φ(p) in that post, but the logic is the same – except that we're inserting the time variable t once again (but with its value fixed at t = 0):

[Figure: the Fourier transform pair]

OK, you'll say, but where is all of this going? Be patient. We're almost done. Let's now introduce a specific initial condition. Let's assume that we have the following functional form for u at time t = 0:

[Figure: u at time t = 0]

You'll wonder where this comes from. Well… I don't know. It's just an example from Wikipedia. It's random but it fits the bill: it's a localized wave (so that's a wave packet) because of the very particular form of the exponent (–x² + ik0x: the real part –x² gives the localized, Gaussian-shaped envelope). The point to note is that we can calculate A(k) when inserting this initial condition in the equation above, and then – finally, you'll say – we also get a specific solution for our u(x, t) function by inserting the value for A(k) in our general solution. In short, we get:

[Figure: u(x, t) – the final form]

As mentioned above, we are actually only interested in the real part of this equation (so that's the exponential factor (note there is no i in its exponent, so it's just some real number) multiplied with the cosine term). However, the example above shows how easy it is to extend the analysis to a complex-valued wave function, i.e. a wave function describing a probability amplitude. We will actually do that now for Schrödinger's equation. [Note that the example comes from Wikipedia's article on wave packets, and so there is a nice animation which shows how this wave packet (be it the real or imaginary part of it) travels through space. Do watch it!]

Schrödinger's equation

Let me just write it down:

[Figure: Schrödinger's equation (in simplified form)]

That's it. This is the Schrödinger equation – in a somewhat simplified form but it's OK. […] You'll find that equation above either very simple or, else, very difficult depending on whether or not you understood most or nothing at all of what I wrote above it. If you understood something, then it should be fairly simple, because it hardly differs from the other wave equation. Indeed, we have that imaginary unit (i) in front of the left term, but then you should not panic over that: when everything is said and done, we are working here with the derivative (or partial derivative) of a complex-valued function, and so it should not surprise us that we have an i here and there. It's nothing special. In fact, we had them in the equation above too, but they just weren't explicit. The second difference with the electromagnetic wave equation is that we have a first-order derivative of time only (in the electromagnetic wave equation we had ∂²u/∂t², so that's a second-order derivative). Finally, we have a –1/2 factor in front of the right-hand term, instead of c². OK, so what? It's a different thing – but that should not surprise us: when everything is said and done, it is a different wave equation because it describes something else (not an electromagnetic wave but a quantum-mechanical system).
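The simplest way to see just how different the two equations really are is to plug the same trial wave e^(i(kx – ωt)) into both and look at the constraint that pops out – the so-called dispersion relation. The little sympy sketch below is, again, just my own check (the symbols are arbitrary): the electromagnetic wave equation forces ω = c·k, so all wavelengths travel at the same speed c, while the simplified Schrödinger-type equation i(∂u/∂t) = –(1/2)∂²u/∂x² forces ω = k²/2, so different wavelengths travel at different speeds – which is exactly why the quantum-mechanical wave packet we will meet in a moment disperses.

```python
# Compare the dispersion relations of the two wave equations (my own check).
import sympy as sp

x, t, k, w, c = sp.symbols('x t k w c', positive=True)
u = sp.exp(sp.I * (k * x - w * t))   # the same trial wave for both equations

em_eq = sp.Eq(sp.diff(u, t, 2), c**2 * sp.diff(u, x, 2))                    # EM wave equation
qm_eq = sp.Eq(sp.I * sp.diff(u, t), -sp.Rational(1, 2) * sp.diff(u, x, 2))  # simplified Schrodinger

print(sp.solve(em_eq, w))   # -> [c*k] (up to a sign): omega proportional to k
print(sp.solve(qm_eq, w))   # -> [k**2/2]: omega proportional to k squared
```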
To understand why it's different, I'd need to give you the equivalent of Maxwell's set of equations for quantum mechanics, and then show how this wave equation is derived from them. I could do that. The derivation is somewhat lengthier than for our electromagnetic wave equation but not all that much. The problem is that it involves some new concepts which we haven't introduced as yet – mainly some new operators. But then we have introduced a lot of new operators already (such as the gradient and the curl and the divergence) so you might be ready for this. Well… Maybe. The treatment is a bit lengthy, and so I'd rather do it in a separate post. Why? […] OK. Let me say a few things about it then. Here we go:

• These new operators involve matrix algebra. Fine, you'll say. Let's get on with it. Well… It's matrix algebra with matrices with complex elements, so if we write an n×m matrix A as A = (aij), then the elements aij (i = 1, 2,… n and j = 1, 2,… m) will be complex numbers.
• That allows us to define Hermitian matrices: a Hermitian matrix is a square matrix A which is the same as the complex conjugate of its transpose.
• We can use such matrices as operators indeed: transformations acting on a column vector X to produce another column vector AX.
• Now, you'll remember – from your course on matrix algebra with real (as opposed to complex) matrices, I hope – that we have this very particular matrix equation AX = λX, which has non-trivial solutions (i.e. solutions X ≠ 0) if and only if the determinant of A – λI is equal to zero. This condition (det(A – λI) = 0) is referred to as the characteristic equation.
• This characteristic equation is a polynomial of degree n in λ and its roots are called eigenvalues or characteristic values of the matrix A. The non-trivial solutions X ≠ 0 corresponding to each eigenvalue are called eigenvectors or characteristic vectors.

Now – just in case you're still with me – it's quite simple: in quantum mechanics, we have the so-called Hamiltonian operator. The Hamiltonian in classical mechanics represents the total energy of the system: H = T + V (total energy H = kinetic energy T + potential energy V). Here we have got something similar but different. 🙂 The Hamiltonian operator is written as H-hat, i.e. an H with an accent circonflexe (as they say in French). Now, we need to let this Hamiltonian operator act on the wave function Ψ and, if the result is proportional to the same wave function Ψ, then Ψ is a so-called stationary state, and the proportionality constant will be equal to the energy E of the state Ψ. These stationary states correspond to standing waves, or 'orbitals', such as in atomic orbitals or molecular orbitals. So we have:

EΨ = ĤΨ

I am sure you are no longer there but, in fact, that's it. We're done with the derivation. The equation above is the so-called time-independent Schrödinger equation. It's called like that not because the wave function is time-independent (it is), but because the Hamiltonian operator is time-independent: that obviously makes sense because stationary states are associated with specific energy levels indeed. However, if we do allow the energy level to vary in time (which we should do – if only because of the uncertainty principle: there is no such thing as a fixed energy level in quantum mechanics), then we cannot use some constant for E, but we need a so-called energy operator.
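Since I have just thrown eigenvalues and eigenvectors at you, a small numerical aside may help (it's my own sketch, with an arbitrary grid and units in which ħ = m = 1, so don't look for it in a textbook in this exact form): if we chop the x-axis into N little pieces, the Hamiltonian Ĥ = –(1/2)d²/dx² + V(x) becomes an N×N Hermitian matrix (here even a real, symmetric one), its eigenvalues are the allowed energies E, and its eigenvectors are the stationary states Ψ. For a particle trapped in a box of length L, the lowest eigenvalues should come out close to the textbook values En = n²π²/2L² – and they do. After this aside, back to that energy operator.

```python
# Stationary states as an eigenvalue problem (illustration only: the grid size,
# box length and potential are arbitrary choices, in units with hbar = m = 1).
import numpy as np

N, L = 500, 1.0                        # number of grid points, box length
x = np.linspace(0, L, N + 2)[1:-1]     # interior points (the wave function is 0 at the walls)
dx = x[1] - x[0]
V = np.zeros(N)                        # particle in a box: zero potential inside

# Finite-difference approximation of -(1/2) d2/dx2 + V -> a real symmetric (Hermitian) matrix
d2 = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / dx**2
H = -0.5 * d2 + np.diag(V)

energies, states = np.linalg.eigh(H)   # eigenvalues (energies) and eigenvectors (stationary states)
print(energies[:3])                    # close to the exact values printed below
print(0.5 * (np.arange(1, 4) * np.pi / L)**2)   # ~ [4.93, 19.74, 44.41]
```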
Fortunately, this energy operator has a remarkably simple functional form:

ÊΨ = iħ(∂Ψ/∂t) = EΨ

Now if we plug that in the equation above, we get our time-dependent Schrödinger equation:

iħ(∂Ψ/∂t) = ĤΨ

OK. You probably did not understand one iota of this but, even then, you will object that this does not resemble the equation I wrote at the very beginning: i(∂u/∂t) = –(1/2)∇²u. You're right, but we only need one more step for that. If we leave out potential energy (so we assume a particle moving in free space), then the Hamiltonian can be written as:

Ĥ = –(ħ²/2m)∇²

You'll ask me how this is done but I will be short on that: the relationship between energy and momentum is being used here (and so that's where the 2m factor in the denominator comes from). However, I won't say more about it because this post would become way too lengthy if I included each and every derivation and, remember, I just want to get to the result because the derivations here are not the point: I want you to understand the functional form of the wave equation only. So, using the above identity and – OK, let's be somewhat more complete and include potential energy once again – we can write the time-dependent wave equation as:

iħ ∂Ψ(r, t)/∂t = –(ħ²/2m)∇²Ψ(r, t) + V(r, t)Ψ(r, t)

Now, how is the equation above related to i(∂u/∂t) = –(1/2)∇²u? It's a very simplified version of it: potential energy is, once again, assumed to be not relevant (so we're talking a free particle again, with no external forces acting on it) but the real simplification is that we give m and ħ the value 1, so m = ħ = 1. Why? Well… My initial idea was to do something similar as I did above and, hence, actually use a specific example with an actual functional form, just like we did for the real-valued u(x, t) function. However, when I look at how long this post has become already, I realize I should not do that. In fact, I would just copy an example from somewhere else – probably Wikipedia once again, if only because their examples are usually nicely illustrated with graphs (and often animated graphs). So let me just refer you here to the other example given in the Wikipedia article on wave packets: that example uses that simplified i(∂u/∂t) = –(1/2)∇²u equation indeed. It actually uses the same initial condition:

[Figure: u at time t = 0]

However, because the wave equation is different, the wave packet behaves differently. It's a so-called dispersive wave packet: it delocalizes. Its width increases over time and so, after a while, it effectively vanishes because it diffuses all over space. So there's a solution to the wave equation, given this initial condition, but it's just not stable – as a description of some particle that is (from a mathematical point of view – or even a physical point of view – there is no issue). In any case, this probably all sounds like Chinese – or Greek if you understand Chinese :-). I actually haven't worked with these Hermitian operators yet, and so it's pretty shaky territory for me. However, I felt like I had picked up enough math and physics on this long and winding Road to Reality (I don't think I am even halfway) to give it a try. I hope I succeeded in passing the message, which I'll summarize as follows:
1. Schrödinger's equation is just like any other differential equation used in physics, in the sense that it represents a system subject to constraints, such as the relationship between energy and momentum.

2. It will have many general solutions. In other words, the wave function – which describes a probability amplitude as a function in space and time – will have many general solutions, and a specific solution will depend on the initial conditions.

3. The solution(s) can represent stationary states, but not necessarily so: a wave (or a wave packet) can be non-dispersive or dispersive. However, when we plug the wave function into the wave equation, it will satisfy that equation.

That's neither spectacular nor difficult, is it? But, perhaps, it helps you to 'understand' wave equations, including the Schrödinger equation. But what is understanding? Dirac once famously said: "I consider that I understand an equation when I can predict the properties of its solutions, without actually solving it." Hmm… I am not quite there yet, but I am sure some more practice with it will help. 🙂

Post scriptum: On Maxwell's equations

First, we should say something more about these two other operators which I introduced above: the divergence and the curl. First on the divergence. The divergence of a field vector E (or B) at some point r represents the so-called flux of E, i.e. the 'flow' of E per unit volume. So flux and divergence both deal with the 'flow' of electric field lines away from (positive) charges. [The 'away from' is from positive charges indeed – as per the convention: Maxwell himself used the term 'convergence' to describe flow towards negative charges, but so his 'convention' did not survive. Too bad, because I think convergence would be much easier to remember.] So if we write that ∇•E = ρ/ε0, then it means that we have some constant flux of E because of some (fixed) distribution of charges. Now, we already mentioned that equation (3) in Maxwell's set meant that there is no such thing as a 'magnetic' charge: indeed, ∇•B = 0 means there is no magnetic flux. But, of course, magnetic fields do exist, don't they? They do. A current in a wire, for example, i.e. a bunch of steadily moving electric charges, will induce a magnetic field according to Ampère's law, which is part of equation (4) in Maxwell's set: c²∇×B = j/ε0, with j representing the current density and ε0 the electric constant. Now, at this point, we have this curl: ∇×B. Just like divergence (or convergence as Maxwell called it – but then with the sign reversed), curl also means something in physics: it's the amount of 'rotation', or 'circulation' as Feynman calls it, around some loop. So, to summarize the above, we have (1) flux (divergence) and (2) circulation (curl) and, of course, the two must be related. And, while we do not have any magnetic charges and, hence, no flux for B, the current in that wire will cause some circulation of B, and so we do have a magnetic field. However, that magnetic field will be static, i.e. it will not change. Hence, the time derivative ∂B/∂t will be zero and, hence, from equation (2) we get that ∇×E = 0, so our electric field will be static too. The time derivative ∂E/∂t which appears in equation (4) also disappears and we just have c²∇×B = j/ε0. This situation – of a constant magnetic and electric field – is described as electrostatics and magnetostatics respectively. It implies a neat separation of the four equations, and it makes magnetism and electricity appear as distinct phenomena.
Indeed, as long as charges and currents are static, we have:

[I] Electrostatics: (1) ∇•E = ρ/ε0 and (2) ∇×E = 0
[II] Magnetostatics: (3) c²∇×B = j/ε0 and (4) ∇•B = 0

The first two equations describe a vector field with zero curl and a given divergence (i.e. the electric field) while the third and fourth equations describe a seemingly separate vector field with a given curl but zero divergence. Now, I am not writing this post scriptum to reproduce Feynman's Lectures on Electromagnetism, and so I won't say much more about this. I just want to note two points:

1. The first point to note is that factor c² in the c²∇×B = j/ε0 equation. That's something which you don't have in the ∇•E = ρ/ε0 equation. Of course, you'll say: So what? Well… It's weird. And if you bring it to the other side of the equation, it becomes clear that you need an awful lot of current for a tiny little bit of magnetic circulation (because you're dividing by c², so that's a factor 9 with 16 zeroes after it (9×10¹⁶): an awfully big number in other words). Truth be told, it reveals something very deep. Hmm? Take a wild guess. […] Relativity perhaps? Well… Yes! It's obvious that we buried v somewhere in this equation, the velocity of the moving charges. But then it's part of j of course: the charge that flows through a unit area per second. But – Hey! – velocity as compared to what? What's the frame of reference? The frame of reference is us obviously or – somewhat less subjective – the stationary charges determining the electric field according to equation (1) in the set above: ∇•E = ρ/ε0. But so here we can ask the same question: stationary in what reference frame? As compared to the moving charges? Hmm… But so how does it work with relativity? I won't copy Feynman's 13th Lecture here, but so, in that lecture, he analyzes what happens to the electric and magnetic force when we look at the scene from another coordinate system – let's say one that moves parallel to the wire at the same speed as the moving electrons, so – because of our new reference frame – the 'moving electrons' now appear to have no speed at all but, of course, our stationary charges will now seem to move. What Feynman finds – and his calculations are very easy and straightforward – is that, while we will obviously insert different input values into Maxwell's set of equations and, hence, get different values for the E and B fields, the actual physical effect – i.e. the final Lorentz force on a (charged) particle – will be the same. To be very specific, in a coordinate system at rest with respect to the wire (so we see charges move in the wire), we find a 'magnetic' force indeed, but in a coordinate system moving at the same speed as those charges, we will find an 'electric' force only. And from yet another reference frame, we will find a mixture of E and B fields. However, the physical result is the same: there is only one combined force in the end – the Lorentz force F = q(E + v×B) – and it's always the same, regardless of the reference frame (inertial or moving at whatever speed – relativistic (i.e. close to c) or not). In other words, Maxwell's description of electromagnetism is invariant or, to say exactly the same in yet other words, electricity and magnetism taken together are consistent with relativity: they are part of one physical phenomenon: the electromagnetic interaction between (charged) particles.
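To get a feel for the numbers involved, here is a rough back-of-the-envelope sketch in Python – my own, not a calculation from Feynman's Lecture, using the copper-wire example that comes back in the note below (radius 1 mm, 3 A) together with a standard approximate value for the free-electron density in copper. For two charges drifting along side by side with speed v, the magnetic force between them is smaller than the electric force by a factor v²/c², and at the drift velocities we find in an ordinary wire that factor is almost unimaginably small – which is just another way of seeing why you need such an enormous amount of current for a tiny little bit of magnetic circulation.

```python
# Back-of-the-envelope numbers (illustration only; n is an approximate value for copper).
import math

c = 3.0e8        # speed of light (m/s)
I = 3.0          # current (A)
r = 1.0e-3       # wire radius (m)
n = 8.5e28       # free electrons per cubic meter in copper (approximate)
q = 1.6e-19      # elementary charge (C)

area = math.pi * r**2
v_drift = I / (n * q * area)      # drift velocity of the conduction electrons
ratio = (v_drift / c)**2          # magnetic force relative to the electric force

print(v_drift)   # ~ 7e-5 m/s: a fraction of a millimeter per second
print(ratio)     # ~ 5e-26: the 'relativistic' factor is mind-bogglingly small
```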
So electric and magnetic fields appear in different 'mixtures' if we change our frame of reference, and so that's why magnetism is often described as a 'relativistic' effect – although that's not very accurate. However, it does explain that c² factor in the equation for the curl of B. [How exactly? Well… If you're smart enough to ask that kind of question, you will be smart enough to find the derivation on the Web. :-)] Note: Don't think we're talking astronomical speeds here when comparing the two reference frames. It would also work for astronomical speeds but, in this case, we are talking about the speed of the electrons moving through a wire. Now, the so-called drift velocity of electrons – which is the one we have to use here – in a copper wire of radius 1 mm carrying a steady current of 3 Amps is only a fraction of a millimeter per second, i.e. well under a meter per hour! So the relativistic effect is tiny – but still measurable!

2. The second thing I want to note is that Maxwell's set of equations with non-zero time derivatives for E and B clearly show that it's changing electric and magnetic fields that sort of create each other, and it's this that's behind electromagnetic waves moving through space without losing energy. They just travel on and on. The math behind this is beautiful (and the animations in the related Wikipedia articles are equally beautiful – and probably easier to understand than the equations), but that's stuff for another post. As the electric field changes, it induces a magnetic field, which then induces a new electric field, etc., allowing the wave to propagate itself through space. I should also note here that the energy is in the field and so, when electromagnetic waves, such as light, or radio waves, travel through space, they carry their energy with them. Let me be fully complete here, and note that there's energy in electrostatic fields as well, and the formula for it is remarkably beautiful. The total (electrostatic) energy U in an electrostatic field generated by charges located within some finite distance is equal to:

[Figure: the energy of an electrostatic field]

This equation introduces the electrostatic potential. This is a scalar field Φ from which we can derive the electric field vector just by applying the gradient operator. In fact, all curl-free fields (such as the electric field in this case) can be written as the gradient of some scalar field. That's a universal truth. See how beautiful math is? 🙂

An easy piece: introducing quantum mechanics and the wave function

After all those boring pieces on math, it is about time I got back to physics. Indeed, what's all that stuff on differential equations and complex numbers good for? This blog was supposed to be a journey into physics, wasn't it? Yes. But wave functions – functions describing physical waves (in classical mechanics) or probability amplitudes (in quantum mechanics) – are the solution to some differential equation, and they will usually involve complex-number notation. However, I agree we have had enough of that now. Let's see how it works. By the way, the title of this post – An Easy Piece – is an obvious reference to (some of) Feynman's Lectures on Physics (1963–65), some of which were re-packaged in 1994 (six years after his death that is) in 'Six Easy Pieces' indeed – but, IMHO, it makes more sense to read all of them as part of the whole series. Let's first look at one of the most used mathematical shapes: the sinusoidal wave.
The illustration below shows the basic concepts: we have a wave here – some kind of cyclic thing – with a wavelength λ, an amplitude (or height) with a maximum value A0, and a so-called phase shift equal to φ. The Wikipedia definition of a wave is the following: "a wave is a disturbance or oscillation that travels through space and matter, accompanied by a transfer of energy." Indeed, a wave transports energy as it travels (oh – I forgot to mention the speed or velocity of a wave (v) as an important characteristic of a wave), and the energy it carries is directly proportional to the square of the amplitude of the wave: E ∝ A² (this is true not only for waves like water waves, but also for electromagnetic waves, like light).

[Figure: cosine wave concepts]

Let's now look at how these variables get into the argument – literally: into the argument of the wave function. Let's start with that phase shift. The phase shift is usually defined referring to some other wave or reference point (in this case the origin of the x and y axes). Indeed, the amplitude – or 'height' if you want (think of a water wave, or the strength of the electric field) – of the wave above depends on (1) the time t (not shown above) and (2) the location (x), but so we will need to have this phase shift φ in the argument of the wave function because at x = 0 we do not have a zero height for the wave. So, as we can see, we can shift the x-axis left or right with this φ. OK. That's simple enough. Let's look at the other independent variables now: time and position. The height (or amplitude) of the wave will obviously vary both in time as well as in space. On this graph, we fixed time (t = 0) – and so it does not appear as a variable on the graph – and show how the amplitude y = A varies in space (i.e. along the x-axis). We could also have looked at one location only (x = 0 or x1 or whatever other location) and shown how the amplitude varies over time at that location only. The graph would be very similar, except that we would have a 'time distance' between two crests (or between two troughs or between any other two points separated by a full cycle of the wave) instead of the wavelength λ (i.e. a distance in space). This 'time distance' is the time needed to complete one cycle and is referred to as the period of the wave (usually denoted by the symbol T or T0 – in line with the notation for the maximum amplitude A0). In other words, we will also see time (t) as well as location (x) in the argument of this cosine or sine wave function. By the way, it is worth noting that it does not matter if we use a sine or cosine function because we can go from one to the other using the basic trigonometric identities cos θ = sin(π/2 – θ) and sin θ = cos(π/2 – θ). So all waves of the shape above are referred to as sinusoidal waves even if, in most cases, the convention is to actually use the cosine function to represent them. So we will have x, t and φ in the argument of the wave function. Hence, we can write A = A(x, t, φ) = cos(x + t + φ) and there we are, right? Well… No. We're adding very different units here: time is measured in seconds, distance in meter, and the phase shift is measured in radians (i.e. the unit of choice for angles). So we can't just add them up. The argument of a trigonometric function (like this cosine function) is an angle and, hence, we need to get everything in radians – because that's the unit we use to measure angles. So how do we do that? Let's do it step by step.
First, it is worth noting that waves are usually caused by something. For example, electromagnetic waves are caused by an oscillating point charge somewhere, and radiate out from there. Physical waves – like water waves, or an oscillating string – usually also have some origin. In fact, we can look at a wave as a way of transmitting energy originating elsewhere. In the case at hand here – i.e. the nice regular sinusoidal wave illustrated above – it is obvious that the amplitude at some time t = t1 at some point x = x1 will be the same as the amplitude of that wave at point x = 0 some time ago. How much time ago? Well… The time (t1) that was needed for that wave to travel from point x = 0 to point x = x1 is easy to calculate: indeed, if the wave originated at t = 0 and x = 0, then x1 (i.e. the distance traveled by the wave) will be equal to its velocity (v) multiplied by t1, so we have x1 = v·t1 (note that we assume the wave velocity is constant – which is a very reasonable assumption). In other words, inserting x1 and t1 in the argument of our cosine function should yield the same value as inserting zero for x and t. Distance and time can be substituted so to say, and that's why we will have something like x – vt or vt – x in the argument in that cosine function: we measure both time and distance in units of distance so to say. [Note that x – vt and –(x – vt) = vt – x are equivalent because cos θ = cos(–θ).] Does this sound fishy? It shouldn't. Think about it. In the (electric) field equation for electromagnetic radiation (that's one of the examples of a wave which I mentioned above), you'll find the so-called retarded acceleration a(t – x/c) in the argument: that's the acceleration (a) of the charge causing the electric field at point x to change not at time t but at time t – x/c. So that's the retarded acceleration indeed: x/c is the time it took for the wave to travel from its origin (the oscillating point charge) to x and so we subtract that from t. [When talking electromagnetic radiation (e.g. light), the wave velocity v is obviously equal to c, i.e. the speed of light, or of electromagnetic radiation in general.] Of course, you will now object that t – x/c is not the same as vt – x, and you are right: we need time units in the argument of that acceleration function, not distance. We can get to distance units if we would multiply the time with the wave velocity v but that's complicated business because the velocity of that moving point charge is not a constant. […] I am not sure if I made myself clear here. If not, so be it. The thing to remember is that we need an input expressed in radians for our cosine function, not time, nor distance. Indeed, the argument in a sine or cosine function is an angle, not some distance. We will call that angle the phase of the wave, and it is usually denoted by the symbol θ – which we also used above. But so far we have been talking about amplitude as a function of distance, and we expressed time in distance units too – by multiplying it with v. How can we go from some distance to some angle? It is simple: we'll multiply x – vt with 2π/λ. Huh? Yes. Think about it. The wavelength will be expressed in units of distance – typically 1 m in the SI International System of Units but it could also be angstrom (10⁻¹⁰ m = 0.1 nm) or nanometer (10⁻⁹ m = 10 Å). A wavelength of two meter (2 m) means that the wave only completes half a cycle per meter of travel.
So we need to translate that into radians, which – once again – is the measure used to… well… measure angles, or the phase of the wave as we call it here. So what's the 'unit' here? Well… Remember that we can add or subtract 2π (and any multiple of 2π, i.e. ±2nπ with n = ±1, ±2, ±3,…) to the argument of all trigonometric functions and we'll get the same value as for the original argument. In other words, a cycle characterized by a wavelength λ corresponds to the angle θ going around the origin and describing one full circle, i.e. 2π radians. Hence, it is easy: we can go from distance to radians by multiplying our 'distance argument' x – vt with 2π/λ. If you're not convinced, just work it out for the example I gave: if the wavelength is 2 m, then 2π/λ equals 2π/2 = π. So traveling 6 meters along the wave – i.e. we're letting x go from 0 to 6 m while fixing our time variable – corresponds to our phase θ going from 0 to 6π: both the 'distance argument' as well as the change in phase cover three cycles (three times two meter for the distance, and three times 2π for the change in phase) and so we're fine. [Another way to think about it is to remember that the circumference of the unit circle is also equal to 2π (2π·r = 2π·1 in this case), so the ratio of 2π to λ measures how many times the circumference contains the wavelength.] In short, if we put time and distance in the (2π/λ)(x – vt) formula, we'll get everything in radians and that's what we need for the argument for our cosine function. So our sinusoidal wave above can be represented by the following cosine function:

A = A(x, t) = A0cos[(2π/λ)(x – vt)]

We could also write A = A0cosθ with θ = (2π/λ)(x – vt). […] Both representations look rather ugly, don't they? They do. And it's not only ugly: it's not the standard representation of a sinusoidal wave either. In order to make it look 'nice', we have to introduce some more concepts here, notably the angular frequency and the wave number. So let's do that. The angular frequency is just like the… well… the frequency you're used to, i.e. the 'non-angular' frequency f, as measured in cycles per second (i.e. in Hertz). However, instead of measuring change in cycles per second, the angular frequency (usually denoted by the symbol ω) will measure the rate of change of the phase with time, so we can write or define ω as ω = ∂θ/∂t. In this case, we can easily see that ω = –2πv/λ. [Note that we'll take the absolute value of that derivative because we want to work with positive numbers for such properties of functions.] Does that look complicated? In doubt, just remember that ω is measured in radians per second and then you can probably better imagine what it is really. Another way to understand ω somewhat better is to remember that the product of ω and the period T is equal to 2π, so that's a full cycle. Indeed, the time needed to complete one cycle multiplied with the phase change per second (i.e. per unit time) is equivalent to going round the full circle: 2π = ω.T. Because f = 1/T, we can also relate ω to f and write ω = 2π.f = 2π/T. Likewise, we can measure the rate of change of the phase with distance, and that gives us the wave number k = ∂θ/∂x, which is like the spatial frequency of the wave. So k is to distance what ω is to time: it measures the change in phase per unit distance, i.e. in radians per meter. From the function above, it is easy to see that k = 2π/λ. The interpretation of this equality is similar to the ω.T = 2π equality.
Indeed, we have a similar equation for k: 2π = k.λ, so the wavelength (λ) is for k what the period (T) is for ω. If you're still uncomfortable with it, just play a bit with some numerical examples and you'll be fine. To make a long story short, this, then, allows us to re-write the sinusoidal wave equation above in its final form (and let me include the phase shift φ again in order to be as complete as possible at this stage):

A(x, t) = A0cos(kx – ωt + φ)

You will agree that this looks much 'nicer' – and also more in line with what you'll find in textbooks or on Wikipedia. 🙂 I should note, however, that we're not adding any new parameters here. The wave number k and the angular frequency ω are not independent: this is still the same wave (A = A0cos[(2π/λ)(x – vt)]), and so we are not introducing anything more than the frequency and – equally important – the speed with which the wave travels, which is usually referred to as the phase velocity. In fact, it is quite obvious from the ω.T = 2π and the k = 2π/λ identities that kλ = ω.T and, hence, taking into account that λ is obviously equal to λ = v.T (the wavelength is – by definition – the distance traveled by the wave in one period), we find that the phase (or wave) velocity v is equal to the ratio of ω and k, so we have that v = ω/k. So x, t, ω and k could be re-scaled somehow, but their ratio cannot change: the velocity of the wave is what it is. In short, I am introducing two new concepts and symbols (ω and k) but there are no new degrees of freedom in the system so to speak. [At this point, I should probably say something about the difference between the phase velocity and the so-called group velocity of a wave. Let me do that in as brief a way as I can manage. Most real-life waves travel as a wave packet, aka a wave train. So that's like a burst, or an "envelope" (I am shamelessly quoting Wikipedia here…), of "localized wave action that travels as a unit." Such a wave packet has no single wave number or wavelength: it actually consists of a (large) set of waves with phases and amplitudes such that they interfere constructively only over a small region of space, and destructively elsewhere. The famous Fourier analysis (or infamous if you have problems understanding what it is really) decomposes this wave train in simpler pieces. While these 'simpler' pieces – which, together, add up to form the wave train – are all 'nice' sinusoidal waves (that's why I call them 'simple'), the wave packet as such is not. In any case (I can't be too long on this), the speed with which this wave train itself is traveling through space is referred to as the group velocity. The phase velocity and the group velocity are usually very different: for example, a wave packet may be traveling forward (i.e. its group velocity is positive) but the phase velocity may be negative, i.e. traveling backward. However, I will stop here and refer to the Wikipedia article on group and phase velocity: it has wonderful illustrations which are much and much better than anything I could write here. Just one last point that I'll use later: regardless of the shape of the wave (sinusoidal, sawtooth or whatever), we have a very obvious relationship relating wavelength and frequency to the (phase) velocity: v = λ.f, or f = v/λ. For example, a wave traveling at 3 meter per second with a wavelength of 1 meter will obviously have a frequency of three cycles per second (i.e. 3 Hz). Let's go back to the main story line now.]
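If the little forest of symbols (λ, T, f, ω, k, v) starts to swim before your eyes, a quick numerical check may help. The sketch below is just my own sanity check, with arbitrary numbers: it takes a wave with λ = 2 m traveling at v = 3 m/s, computes k, T, f and ω from the relations above, and then verifies (i) that v = ω/k = λ·f and (ii) that the 'nice' form A0cos(kx – ωt + φ) gives exactly the same numbers as the 'ugly' form A0cos[(2π/λ)(x – vt) + φ].

```python
# A numerical sanity check on the wave relations (my own example values).
import numpy as np

lam, v, A0, phi = 2.0, 3.0, 1.5, 0.3   # wavelength (m), phase velocity (m/s), amplitude, phase shift

k = 2 * np.pi / lam     # wave number (rad/m)
T = lam / v             # period (s)
f = 1 / T               # frequency (Hz)
w = 2 * np.pi * f       # angular frequency (rad/s)

print(np.isclose(v, w / k), np.isclose(v, lam * f))   # True True: v = omega/k = lambda*f

x = np.linspace(0.0, 10.0, 1001)                       # a stretch of the x-axis
t = 0.7                                                # some fixed point in time
ugly = A0 * np.cos((2 * np.pi / lam) * (x - v * t) + phi)
nice = A0 * np.cos(k * x - w * t + phi)
print(np.allclose(ugly, nice))                         # True: same wave, nicer notation
```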
With the rather lengthy 'introduction' to waves above, we are now ready for the thing I really wanted to present here. I will go much faster now that we have covered the basics. Let's go. From my previous posts on complex numbers (or from what you know on complex numbers already), you will understand that working with cosine functions is much easier when writing them as the real part of a complex number A0e^(iθ) = A0e^(i(kx – ωt + φ)). Indeed, A0e^(iθ) = A0(cosθ + i·sinθ) and so the cosine function above is nothing else but the real part of the complex number A0e^(iθ). Working with complex numbers makes adding waves and calculating interference effects and whatever we want to do with these wave functions much easier: we just replace the cosine functions by complex numbers in all of the formulae, solve them (algebra with complex numbers is very straightforward), and then we look at the real part of the solution to see what is happening really. We don't care about the imaginary part, because that has no relationship to the actual physical quantities – for physical and electromagnetic waves that is, or for any other problem in classical wave mechanics. Done. So, in classical mechanics, the use of complex numbers is just a mathematical tool. Now, that is not the case for the wave functions in quantum mechanics: the imaginary part of a wave function – yes, let me write one down here – such as Ψ = Ψ(x, t) = (1/x)e^(i(kx – ωt)) is very much part and parcel of the so-called probability amplitude that describes the state of the system here. In fact, this Ψ function is an example taken from one of Feynman's first Lectures on Quantum Mechanics (i.e. Volume III of his Lectures) and, in this case, Ψ(x, t) = (1/x)e^(i(kx – ωt)) represents the probability amplitude of a tiny particle (e.g. an electron) moving freely through space – i.e. without any external forces acting upon it – to go from 0 to x and actually be at point x at time t. [Note how it varies inversely with the distance because of the 1/x factor, so that makes sense.] In fact, when I started writing this post, my objective was to present this example – because it illustrates the concept of the wave function in quantum mechanics in a fairly easy and relatively understandable way. So let's have a go at it. First, it is necessary to understand the difference between probabilities and probability amplitudes. We all know what a probability is: it is a real number between 0 and 1 expressing the chance of something happening. It is usually denoted by the symbol P. An example is the probability that monochromatic light (i.e. one or more photons with the same frequency) is reflected from a sheet of glass. [To be precise, this probability is anything between 0 and 16% (i.e. P = 0 to 0.16). In fact, this example comes from another fine publication of Richard Feynman – QED (1985) – in which he explains how we can calculate the exact probability, which depends on the thickness of the sheet.] A probability amplitude is something different. A probability amplitude is a complex number (3 + 2i, or 2.6e^(i·1.34), for example) and – unlike its equivalent in classical mechanics – both the real and imaginary part matter. That being said, probabilities and probability amplitudes are obviously related: to be precise, one calculates the probability of an event actually happening by taking the square of the modulus (or the absolute value) of the probability amplitude associated with that event. Huh? Yes. Just let it sink in.
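Maybe a one-liner in Python helps it sink in (my own toy numbers, chosen small enough to pass for a genuine probability): take some complex amplitude, square its modulus, and note that the real and the imaginary part both contribute.

```python
# Probability = squared modulus of the (complex) probability amplitude.
phi = 0.3 + 0.2j           # a made-up probability amplitude
P = abs(phi) ** 2          # |phi|^2 = 0.3**2 + 0.2**2
print(P)                   # 0.13 (up to floating-point rounding)
```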
So, if we denote the probability amplitude by Φ, then we have the following relationship:

P = |Φ|², with P the probability and Φ the probability amplitude.

In addition, where we would add and multiply probabilities in the classical world (for example, to calculate the probability of an event which can happen in two different ways – alternative 1 and alternative 2 let's say – we would just add the individual probabilities to arrive at the probability of the event happening in one or the other way, so P = P1 + P2), in the quantum-mechanical world we should add and multiply probability amplitudes, and then take the square of the modulus of that combined amplitude to calculate the combined probability. So, formally, the probability of a particle to reach a given state by two possible routes (route 1 or route 2 let's say) is to be calculated as follows:

Φ = Φ1 + Φ2 and P = |Φ|² = |Φ1 + Φ2|²

Also, when we have only one route, but that one route consists of two successive stages (for example: to go from A to C, the particle would first have to go from A to B, and then from B to C, with different probabilities of stage AB and stage BC actually happening), we will not multiply the probabilities (as we would do in the classical world) but the probability amplitudes. So we have:

Φ = ΦAB·ΦBC and P = |Φ|² = |ΦAB·ΦBC|²

In short, it's the probability amplitudes (and, as mentioned, these are complex numbers, not real numbers) that are to be added and multiplied etcetera and, hence, the probability amplitudes act as the equivalent, so to say, in quantum mechanics, of the conventional probabilities in classical mechanics. The difference is not subtle. Not at all. I won't dwell too much on this. Just re-read any account of the double-slit experiment with electrons which you may have read and you'll remember how fundamental this is. [By the way, I was surprised to learn that the double-slit experiment with electrons has apparently only been done in 2012 in exactly the way Feynman described it. So when Feynman described it in his 1965 Lectures, it was still very much a 'thought experiment' only – even if a 1961 experiment (not mentioned by Feynman) had already clearly established the reality of electron interference.] OK. Let's move on. So we have this complex wave function in quantum mechanics and, as Feynman writes, "It is not like a real wave in space; one cannot picture any kind of reality to this wave as one does for a sound wave." That being said, one can, however, get pretty close to 'imagining' what it actually is IMHO. Let's go by the example which Feynman gives himself – on the very same page where he writes the above actually. The amplitude for a free particle (i.e. with no forces acting on it) with momentum p = mv to go from location r1 to location r2 is equal to:

Φ12 = (1/r12)e^(ip·r12/ħ) with r12 = r2 – r1

I agree this looks somewhat ugly again, but so what does it say? First, be aware of the difference between bold and normal type: I am writing p and v in bold type above because they are vectors: they have a magnitude (which I will denote by p and v respectively) as well as a direction in space. Likewise, r12 is a vector going from r1 to r2 (and r1 and r2 are space vectors themselves obviously) and so r12 (non-bold) is the magnitude of that vector. Keeping that in mind, we know that the dot product p·r12 is equal to the product of the magnitudes of those vectors multiplied by cosα, with α the angle between those two vectors. Hence, p·r12 = p·r12·cosα (with the non-bold symbols denoting the magnitudes).
Now, if p and r12 have the same direction, the angle α will be zero and so cosα will be equal to one and so we just have p·r12 = p·r12 or, if we're considering a particle going from 0 to some position x, p·r12 = p·x = px. Now we also have Planck's constant there, in its reduced form ħ = h/2π. As you can imagine, this 2π has something to do with the fact that we need radians in the argument. It's the same as what we did with x in the argument of that cosine function above: if we have to express stuff in radians, then we have to absorb a factor of 2π in that constant. However, here I need to make an additional digression. Planck's constant is obviously not just any constant: it is the so-called quantum of action. Indeed, it appears in what may well be the most fundamental relations in physics. The first of these fundamental relations is the so-called Planck relation: E = hf. The Planck relation expresses the wave-particle duality of light (or electromagnetic waves in general): light comes in discrete quanta of energy (photons), and the energy of these 'wave particles' is directly proportional to the frequency of the wave, and the factor of proportionality is Planck's constant. The second fundamental relation, or relations – in plural – I should say, are the de Broglie relations. Indeed, Louis-Victor-Pierre-Raymond, 7th duc de Broglie, turned the above on its head: if the fundamental nature of light is (also) particle-like, then the fundamental nature of particles must (also) be wave-like. So he boldly associated a frequency f and a wavelength λ with all particles, such as electrons for example – but larger-scale objects, such as billiard balls, or planets, also have a de Broglie wavelength and frequency! The de Broglie relation determining the de Broglie frequency is – quite simply – the re-arranged Planck relation: f = E/h. So this relation relates the de Broglie frequency with energy. However, in the above wave function, we've got momentum, not energy. Well… Energy and momentum are obviously related, and so we have a second de Broglie relation relating momentum with wavelength: λ = h/p. We're almost there: just hang in there. 🙂 When we presented the sinusoidal wave equation, we introduced the angular frequency (ω) and the wave number (k), instead of working with f and λ. That's because we want an argument expressed in radians. Here it's the same. The two de Broglie equations have an equivalent using angular frequency and wave number: ω = E/ħ and k = p/ħ. So we'll just use the second one (i.e. the relation with the momentum in it) to associate a wave number with the particle (k = p/ħ). Phew! So, finally, we get that formula which we introduced a while ago already: Ψ(x) = (1/x)e^(ikx) or, including time as a variable as well (we made abstraction of time so far), Ψ(x, t) = (1/x)e^(i(kx – ωt)). The formula above obviously makes sense. For example, the 1/x factor makes the probability amplitude decrease as we get farther away from where the particle started: in fact, this 1/x or 1/r variation is what we see with electromagnetic waves as well: the amplitude of the electric field vector E varies as 1/r and, because we're talking some real wave here and, hence, its energy is proportional to the square of the field, the energy that the source can deliver varies inversely as the square of the distance.
[Another way of saying the same is that the energy we can take out of a wave within a given conical angle is the same, no matter how far away we are: the energy flux is never lost – it just spreads over a greater and greater effective area. But let's go back to the main story.] We've got the math – I hope. But what does this equation mean really? What's that de Broglie wavelength or frequency in reality? What wave are we talking about? Well… What's reality? As mentioned above, the famous de Broglie relations associate a wavelength λ and a frequency f to a particle with momentum p and energy E, but it's important to mention that the associated de Broglie wave function yields probability amplitudes. So it is, indeed, not a 'real wave in space' as Feynman would put it. It is a quantum-mechanical wave function. Huh? […] It's obviously about time I add some illustrations here, and so that's what I'll do. Look at the two cases below. The case on top is pretty close to the situation I described above: it's a de Broglie wave – so that's a complex wave – traveling through space (in one dimension only here). The real part of the complex amplitude is in blue, and the green is the imaginary part. So the probability of finding that particle at some position x is the modulus squared of this complex amplitude. Now, this particular wave function ignores the 1/x variation and, hence, the squared modulus of A·e^(i(kx – ωt)) is equal to a constant. To be precise, it's equal to A² (check it: the squared modulus of a complex number z equals the product of z and its complex conjugate, and so we get A² as a result indeed). So what does this mean? It means that the probability of finding that particle (an electron, for example) is the same at all points! In other words, we don't know where it is! In the illustration below (top part), that's shown as the (yellow) color opacity: the probability is spread out, just like the wave itself, so there is no definite position of the particle indeed. [Note that the formula in the illustration above (which I took from Wikipedia once again) uses p instead of k as the factor in front of x. While it does not make a big difference from a mathematical point of view (ħ is just a factor of proportionality: k = p/ħ), it does make a big difference from a conceptual point of view and, hence, I am puzzled as to why the author of this article did this. Also, there is some variation in the opacity of the yellow (i.e. the color of our tennis (or ping pong) ball representing our 'wavicle') which shouldn't be there because the probability associated with this particular wave function is a constant indeed: so there is no variation in the probability (when squaring the absolute value of a complex number, the phase factor does not come into play). Also note that, because all probabilities have to add up to 100% (or to 1), a wave function like this is quite problematic. However, don't worry about it just now: just try to go with the flow.] By now, I must assume you shook your head in disbelief a couple of times already. Surely, this particle (let's stick to the example of an electron) must be somewhere, yes? Of course. The problem is that we gave an exact value to its momentum and its energy and, as a result, through the de Broglie relations, we also associated an exact frequency and wavelength to the de Broglie wave associated with this electron.
Hence, Heisenberg's Uncertainty Principle comes into play: if we have exact knowledge on momentum, then we cannot know anything about its location, and so that's why we get this wave function covering the whole space, instead of just some region only. Sort of. Here we are, of course, talking about that deep mystery about which I cannot say much – if only because so many eminent physicists have already exhausted the topic. I'll just state Feynman once more: "Things on a very small scale behave like nothing that you have any direct experience with. […] It is very difficult to get used to, and it appears peculiar and mysterious to everyone – both to the novice and to the experienced scientist. Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not because all of direct, human experience and of human intuition applies to large objects. We know how large objects will act, but things on a small scale just do not act that way. So we have to learn about them in a sort of abstract or imaginative fashion and not by connection with our direct experience." And, after describing the double-slit experiment, he highlights the key conclusion: "In quantum mechanics, it is impossible to predict exactly what will happen. We can only predict the odds [i.e. probabilities]. Physics has given up on the problem of trying to predict exactly what will happen. Yes! Physics has given up. We do not know how to predict what will happen in a given circumstance. It is impossible: the only thing that can be predicted is the probability of different events. It must be recognized that this is a retrenchment in our ideal of understanding nature. It may be a backward step, but no one has seen a way to avoid it." […] That's enough on this I guess, but let me – as a way to conclude this little digression – just quickly state the Uncertainty Principle in a more or less accurate version here, rather than all of the 'descriptions' which you may have seen of it: the Uncertainty Principle refers to any of a variety of mathematical inequalities asserting a fundamental limit (fundamental means it's got nothing to do with observer or measurement effects, or with the limitations of our experimental technologies) to the precision with which certain pairs of physical properties of a particle (these pairs are known as complementary variables) such as, for example, position (x) and momentum (p), can be known simultaneously. More in particular, for position and momentum, we have that σxσp ≥ ħ/2 (and, in this formulation, σ is, obviously, the standard symbol for the standard deviation of our measurements of x and p respectively). OK. Back to the illustration above. A particle that is to be found in some specific region – rather than just 'somewhere' in space – will have a probability amplitude resembling the wave function in the bottom half: it's a wave train, or a wave packet, and we can decompose it, using Fourier analysis, in a number of sinusoidal waves, but so we do not have a unique wavelength for the wave train as a whole, and that means – as per the de Broglie equations – that there's some uncertainty about its momentum (or its energy). I will let this sink in for now. In my next post, I will write some more about these wave functions. They are usually a solution to some differential equation – and that's where my next post will connect with my previous ones (on differential equations).
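Before signing off, let me make that last point a bit more concrete with one more numerical sketch – my own, with arbitrary parameters, and in units where ħ = 1 so that the wave number k plays the role of the momentum p. It builds a wave packet by adding up plane waves with a Gaussian spread of wave numbers around some k0, and then measures the spread of the resulting |Ψ|² in x and the spread of the component amplitudes in k. Squeeze the packet in k and it widens in x (and vice versa), while the product of the two spreads stays put at about 0.5, i.e. ħ/2 – the lower bound in the uncertainty relation above (a Gaussian packet is the limiting, 'minimum-uncertainty' case).

```python
# A wave packet as a sum of plane waves, and the uncertainty trade-off (hbar = 1).
import numpy as np

def packet_spreads(sigma_k, k0=5.0):
    """Superpose plane waves with a Gaussian spread of wave numbers around k0 and
    return the standard deviations of |psi|^2 in x and of |A|^2 in k."""
    x = np.linspace(-40.0, 40.0, 4000)
    k = np.linspace(k0 - 5 * sigma_k, k0 + 5 * sigma_k, 400)
    A = np.exp(-(k - k0) ** 2 / (2 * sigma_k ** 2))                 # amplitude of each component
    psi = (A[:, None] * np.exp(1j * np.outer(k, x))).sum(axis=0)    # the packet at t = 0

    def spread(values, weights):
        w = weights / weights.sum()
        mean = (values * w).sum()
        return np.sqrt(((values - mean) ** 2 * w).sum())

    return spread(x, np.abs(psi) ** 2), spread(k, np.abs(A) ** 2)

for sigma_k in (1.0, 0.5, 0.25):
    sigma_x, sigma_p = packet_spreads(sigma_k)
    print(round(sigma_x, 2), round(sigma_p, 2), round(sigma_x * sigma_p, 2))   # product ~ 0.5
```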
Just to say goodbye – as for now that is – I will just copy another beautiful illustration from Wikipedia. See below: it represents the (likely) space in which a single electron on the 5d atomic orbital of a hydrogen atom would be found. The solid body shows the places where the electron’s probability density (so that’s the squared modulus of the probability amplitude) is above a certain value – so it’s basically the area where the likelihood of finding the electron is higher than elsewhere. The hue on the colored surface shows the complex phase of the wave function. It is a wonderful image, isn’t it? At the very least, it increased my understanding of the mystery surrounding quantum mechanics somewhat. I hope it helps you too. 🙂 Post scriptum 1: On the need to normalize a wave function In this post, I wrote something about the need for probabilities to add up to 1. In mathematical terms, this condition will resemble something like ∫|ψ0|2 dV = a2, with the integral taken over all of space (Rn). In this integral, we’ve got – once again – the squared modulus of the wave function, and so that’s the probability of finding the particle somewhere. The integral just states that all of the probabilities added all over space (Rn) should add up to some finite number (a2). Hey! But that’s not equal to 1 you’ll say. Well… That’s a minor problem only: we can create a normalized wave function ψ out of ψ0 by simply dividing ψ0 by a, so we have ψ = ψ0/a, and then all is ‘normal’ indeed. 🙂 Post scriptum 2: On using colors to represent complex numbers When inserting that beautiful 3D graph of that 5d atomic orbital (again acknowledging its source: Wikipedia), I wrote that “the hue on the colored surface shows the complex phase of the wave function.” Because this kind of visual representation of complex numbers will pop up in other posts as well (and you’ve surely encountered it a couple of times already), it’s probably useful to be explicit on what it represents exactly. Well… I’ll just copy the Wikipedia explanation, which is clear enough: “Given a complex number z = reiθ, the phase (also known as argument) θ can be represented by a hue, and the modulus r = |z| is represented by either intensity or variations in intensity. The arrangement of hues is arbitrary, but often it follows the color wheel. Sometimes the phase is represented by a specific gradient rather than hue.” So here you go…
Unit circle domain coloring
Post scriptum 3: On the de Broglie relations The de Broglie relations are a wonderful pair. They’re obviously equivalent: energy and momentum are related, and wavelength and frequency are obviously related too through the general formula relating frequency, wavelength and wave velocity: fλ = v (the product of the frequency and the wavelength must yield the wave velocity indeed). However, when it comes to the relation between energy and momentum, there is a little catch. What kind of energy are we talking about? We were describing a free particle (e.g. an electron) traveling through space, but with no (other) charges acting on it (in other words: no potential acting upon it), and so we might be tempted to conclude that we’re talking about the kinetic energy (K.E.) here. So, at relatively low speeds (v), we could be tempted to use the equations p = mv and K.E. = p2/2m = mv2/2 (the one electron in a hydrogen atom travels at less than 1% of the speed of light, and so that’s a non-relativistic speed indeed) and try to go from one equation to the other with these simple formulas. Well… Let’s try it.
f = E/h according to de Broglie and, hence, substituting E with p2/2m and f with v/λ, we get v/λ = m2v2/2mh. Some simplification and re-arrangement should then yield the second de Broglie relation: λ = 2h/mv = 2h/p. So there we are. Well… No. The second de Broglie relation is just λ = h/p: there is no factor 2 in it. So what’s wrong? The problem is the energy equation: de Broglie does not use the K.E. formula. [By the way, you should note that the K.E. = mv2/2 equation is only an approximation for low speeds – low compared to c that is.] He takes Einstein’s famous E = mc2 equation (which I am tempted to explain now but I won’t) and just substitutes c, the speed of light, with v, the velocity of the slow-moving particle. This is a very fine but also very deep point which, frankly, I do not yet fully understand. Indeed, Einstein’s E = mc2 is obviously something much ‘deeper’ than the formula for kinetic energy. The latter has to do with forces acting on masses and, hence, obeys Newton’s laws – so it’s rather familiar stuff. As for Einstein’s formula, well… That’s a result from relativity theory and, as such, something that is much more difficult to explain. While the difference between the two energy formulas is just a factor of 1/2 (which is usually not a big problem when you’re just fiddling with formulas like this), it makes a big conceptual difference. Hmm… Perhaps we should do some examples. So these de Broglie equations associate a wave with frequency f and wavelength λ with particles with energy E, momentum p and mass m traveling through space with velocity v: E = hf and p = h/λ. [And, if we would want to use some sine or cosine function as an example of such wave function – which is likely – then we need an argument expressed in radians rather than in units of time or distance. In other words, we will need to convert frequency and wavelength to angular frequency and wave number respectively by using the 2π = ωT = ω/f and 2π = kλ relations, with the wavelength (λ), the period (T) and the velocity (v) of the wave being related through the simple equations f = 1/T and λ = vT. So then we can write the de Broglie relations as: E = ħω and p = ħk, with ħ = h/2π.] In these equations, the Planck constant (be it h or ħ) appears as a simple factor of proportionality (we will worry about what h actually is in physics in later posts) – but a very tiny one: approximately 6.626×10–34 J·s (Joule is the standard SI unit to measure energy, or work: 1 J = 1 kg·m2/s2), or 4.136×10–15 eV·s when using a more appropriate (i.e. larger) measure of energy for atomic physics: still, 10–15 is only 0.000 000 000 000 001. So how does it work? First note, once again, that we are supposed to use the equivalent for slow-moving particles of Einstein’s famous E = mc2 equation as a measure of the energy of a particle: E = mv2. We know velocity adds mass to a particle – with mass being a measure for inertia. In fact, the mass of so-called massless particles, like photons, is nothing but their energy (divided by c2). In other words, they do not have a rest mass, but they do have a relativistic mass m = E/c2, with E = hf (and with f the frequency of the light wave here). Particles, such as electrons, or protons, do have a rest mass, but then they don’t travel at the speed of light. So how does that work out in that E = mv2 formula which – let me emphasize this point once again – is not the standard formula (for kinetic energy) that we’re used to (i.e. E = mv2/2)? Let’s do the exercise.
For photons, we can re-write E = hf as E = hc/λ. The numerator hc in this expression is 4.136×10–15 eV·s (i.e. the value of the Planck constant h expressed in eV·s) multiplied with 2.998×108 m/s (i.e. the speed of light c) so that’s (more or less) hc ≈ 1.24×10–6 eV·m. For visible light, the denominator will range from 0.38 to 0.75 micrometer (1 μm = 10–6 m), i.e. 380 to 750 nanometer (1 nm = 10–9 m), and, hence, the energy of the photon will be in the range of 3.263 eV to 1.653 eV. So that’s only a few electronvolt (an electronvolt (eV) is, by definition, the amount of energy gained (or lost) by a single electron as it moves across an electric potential difference of one volt). So that’s 2.6×10–19 to 5.2×10–19 Joule (1 eV = 1.6×10–19 Joule) and, hence, the equivalent relativistic mass of these photons is E/c2, or 2.9 to 5.8×10–36 kg. That’s tiny – but not insignificant. Indeed, let’s look at an electron now. The rest mass of an electron is about 9.1×10−31 kg (so that’s a scale factor of a few hundred thousand as compared to the values we found for the relativistic mass of photons). Also, in a hydrogen atom, it is expected to speed around the nucleus with a velocity of about 2.2×106 m/s. That’s less than 1% of the speed of light but still quite fast obviously: at this speed (2,200 km per second), it could travel around the earth in less than 20 seconds (a photon does better: it travels not less than 7.5 times around the earth in one second). In any case, the electron’s energy – according to the formula to be used as input for calculating the de Broglie frequency – is 9.1×10−31 kg multiplied with the square of 2.2×106 m/s, and so that’s about 44×10–19 Joule or about 27 eV (1 eV = 1.6×10–19 Joule). So that’s – roughly – ten times more than the energy associated with a photon. The frequency we should associate with 27 eV can be calculated from E = hv/λ (we should, once again, use v instead of c), but we can also simplify and calculate directly from the mass: λ = hv/E = hv/mv2 = h/mv (however, make sure you express h in J·s in this case): we get a value for λ equal to 0.33 nanometer, so that’s more than one thousand times shorter than the above-mentioned wavelengths for visible light. So here we have a scale factor of about a thousand. That’s reasonable, no? [There is a similar scale factor when moving to the next level: the mass of protons and neutrons is about 2000 times the mass of an electron.] Indeed, note that we would get a value of 0.510 MeV if we would apply the E = mc2 equation to the above-mentioned (rest) mass of the electron (in kg): MeV stands for mega-electronvolt, so 0.510 MeV is 510,000 eV. So that’s a few hundred thousand times the energy of a photon and, hence, it is obvious that we are not using the energy equivalent of an electron’s rest mass when using de Broglie’s equations. No. It’s just that simple but rather mysterious E = mv2 formula. So it’s not mc2 nor mv2/2 (kinetic energy). Food for thought, isn’t it? Let’s look at the formulas once again. They can easily be linked: we can re-write the frequency formula as λ = hv/E = hv/mv2 = h/mv and then, using the general definition of momentum (p = mv), we get the second de Broglie equation: p = h/λ. In fact, de Broglie‘s rather particular definition of the energy of a particle (E = mv2) makes v a simple factor of proportionality between the energy and the momentum of a particle: v = E/p or E = pv. [We can also get this result in another way: we have h = E/f = pλ and, hence, E/p = fλ = v.]
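Just to check the arithmetic above, here is a quick Python sketch using the same rounded values (the 2.2×106 m/s electron speed and the constants are simply the ones quoted in the text, so the outputs come out rounded too):

```python
# Quick numerical check of the photon and electron numbers discussed above.
h   = 6.626e-34      # Planck's constant, J*s
c   = 2.998e8        # speed of light, m/s
eV  = 1.602e-19      # J per electronvolt
m_e = 9.1e-31        # electron rest mass, kg
v_e = 2.2e6          # electron speed in a hydrogen atom, m/s (value used above)

# photons: E = h*c/lambda, relativistic mass m = E/c^2
for lam in (380e-9, 750e-9):
    E = h * c / lam
    print(f"photon {lam*1e9:.0f} nm: E = {E/eV:.2f} eV, m = E/c^2 = {E/c**2:.1e} kg")

# electron: de Broglie's E = m*v^2 and lambda = h/(m*v)
E_e   = m_e * v_e**2
lam_e = h / (m_e * v_e)
print(f"electron: E = m*v^2 = {E_e/eV:.0f} eV, lambda = {lam_e*1e9:.2f} nm")
print(f"electron: E/p = {E_e/(m_e*v_e):.1e} m/s (i.e. the velocity v, as E = pv says)")
```

It reproduces the few-eV photon energies and their ~10–36 kg relativistic mass, the ~27 eV value for mv2, the 0.33 nanometer electron wavelength, and it confirms that E/p comes out as the velocity v.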
Again, this is serious food for thought: I have not seen any ‘easy’ explanation of this relation so far. To appreciate its peculiarity, just compare it to the usual relations relating energy and momentum: E = p2/2m or, in its relativistic form, p2c2 = E2 – m02c4. So these two equations are both not to be used when going from one de Broglie relation to another. [Of course, it works for massless photons: using the relativistic form, we get p2c2 = E2 – 0 or E = pc, and the de Broglie relation becomes the Planck relation: E = hf (with f the frequency of the photon, i.e. the light beam it is part of). We also have p = h/λ = hf/c, and, hence, the E/p = c relation comes naturally. But that’s not the case for (slower-moving) particles with some rest mass: why should we use mv2 as an energy measure for them, rather than the kinetic energy formula?] But let’s just accept this weirdness and move on. After all, perhaps there is some mistake here and so, perhaps, we should just accept that factor 2 and replace λ = h/p by λ = 2h/p. Why not? 🙂 In any case, both the λ = h/mv and λ = 2h/p = 2h/mv expressions give the impression that the mass of a particle and its velocity are on a par, so to say, when it comes to determining the numerical value of the de Broglie wavelength: if we double the speed, or the mass, the wavelength gets shortened by half. So, one would think that larger masses can only be associated with extremely short de Broglie wavelengths if they move at a fairly considerable speed. But that’s where the extremely small value of h changes the arithmetic we would expect to see. Indeed, things work differently at the quantum scale, and it’s the tiny value of h that is at the core of this. Indeed, it’s often referred to as the ‘smallest constant’ in physics, and so here’s the place where we should probably say a bit more about what h really stands for. Planck’s constant h describes the tiny discrete packets in which Nature packs energy: one cannot find any smaller ‘boxes’. As such, it’s referred to as the ‘quantum of action’. But, surely, you’ll immediately say that its cousin, ħ = h/2π, is actually smaller. Well… Yes. You’re actually right: ħ = h/2π is actually smaller. It’s the so-called quantum of angular momentum, also (and probably better) known as spin. Angular momentum is a measure of… Well… Let’s call it the ‘amount of rotation’ an object has, taking into account its mass, shape and speed. Just like p, it’s a vector. To be precise, it’s the product of a body’s so-called rotational inertia (so that’s similar to the mass m in p = mv) and its rotational velocity (so that’s like v, but it’s ‘angular’ velocity), so we can write L = Iω but we’ll not go into any more detail here. The point to note is that angular momentum, or spin as it’s known in quantum mechanics, also comes in discrete packets, and these packets are multiples of ħ. [OK. I am simplifying here but the idea or principle that I am explaining here is entirely correct.] But let’s get back to the de Broglie wavelength now. As mentioned above, one would think that larger masses can only be associated with extremely short de Broglie wavelengths if they move at a fairly considerable speed. Well… It turns out that the extremely small value of h upsets our everyday arithmetic.
Indeed, because of the extremely small value of h as compared to the objects we are used to (in one grain of salt alone, we will find about 1.2×1018 atoms – just write a 1 with 18 zeroes behind and you’ll appreciate this immense number somewhat more), it turns out that speed does not matter all that much – at least not in the range we are used to. For example, the de Broglie wavelength associated with a baseball weighing 145 grams and traveling at 90 mph (i.e. approximately 40 m/s) would be 1.1×10–34 m. That’s immeasurably small indeed – literally immeasurably small: not only technically but also theoretically because, at this scale (i.e. the so-called Planck scale), the concepts of size and distance break down as a result of the Uncertainty Principle. But, surely, you’ll think we can improve on this if we’d just be looking at a baseball traveling much slower. Well… It does not get much better for a baseball traveling at a snail’s pace – let’s say 1 cm per hour, i.e. 2.7×10–6 m/s. Indeed, we get a wavelength of 17×10–28 m, which is still nowhere near the nanometer range we found for electrons. Just to give an idea: the resolving power of the best electron microscope is about 50 picometer (1 pm = 10–12 m) and so that’s the size of a small atom (the size of an atom ranges between 30 and 300 pm). In short, for all practical purposes, the de Broglie wavelength of the objects we are used to does not matter – and then I mean it does not matter at all. And so that’s why quantum-mechanical phenomena are only relevant at the atomic scale.
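And the same quick check for the baseball (again just plugging the rounded values from the text into λ = h/mv):

```python
# De Broglie wavelength of the baseball at the two speeds mentioned above.
h = 6.626e-34                 # Planck's constant, J*s
m_ball = 0.145                # mass of the baseball, kg
for v, label in ((40.0, "90 mph"), (2.7e-6, "1 cm per hour")):
    print(f"baseball at {label}: lambda = h/(m*v) = {h / (m_ball * v):.1e} m")
```

Both wavelengths come out as quoted – about 1.1×10–34 m and about 17×10–28 m – hopelessly far below anything measurable.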
0c697c22d25e76fa
Friday, July 31, 2009 OU Summer School (Quantum Mechanics - SMXR358) Just back from Open University summer school, a week at Sussex University near Brighton doing experiments in quantum mechanics (course SMXR358). Saturday July 25th The school proper stared with a lecture on spectroscopy and notation. I guess this was never going to be riveting, but it’s essential for many of the experiments. The problem is that the necessary – and quite complex – theory is covered in the first few chapters of Book 3, which we're not meant to have started yet. So if you were orthodox on the study schedule, the lecture (and, sadly some of the experiments) must seem pretty incomprehensible. However, there is always someone in the audience smart and well-informed enough to point to a subtle error in the lecturer’s slide pack (no such thing as a 1p state). How the other students must have hated that person! Afterwards we all retired across the road to the lab common room for the ice-breaker. This started as doleful and frozen as I had anticipated. I wandered around looking for people with group B on their badges, as these people were candidates to be partners tomorrow in the lab, but without much success. Mark, the course director, then intervened to correctly separate people into their groups for a pub quiz (sample question: how many concrete cows in Milton Keynes?). Each of the four groups A-D was split into two, and of the eight contending teams there were three winners with 7/10. I was ridiculously pleased that one of these groups was mine. An early breakfast tomorrow as we start at 9 a.m. Sunday July 26th Sunday is a working day at summer school. After the 9 a.m. lecture, we assembled in the lab, milling around until the fateful moment when the tutor said “Oh, by the way, we do these experiments in pairs. Perhaps you’d like to team up?” I think there must have been some covert pairing the previous evening as twosomes quickly began to drift off in search of apparatus. I looked around for anyone still free and quickly cut a deal with a guy who, I found out, works at the Joint European Torus (JET) fusion project in Oxford. Good choice! In fact our first experiment was a simple measurement of exponential radioactive decay, and to handle the slightly-radioactive Barium137 I had to dress up in a pinny (pictured), or as we like to say, in a very scientific and vaguely-white coat. The author in authentic scientific gear for handling 137Barium When you’ve finished taking measurements, you enter the results into an Excel spreadsheet on one of the many PCs in the lab. Then you push in a memory stick to take the resulting file home with you. I walked across to a PC which someone else had been using – there was a memory stick already pushed, but the owner had wandered off. How cute! The memory stick had some holiday photos on it which had auto-opened and were brightly displayed on the screen in preview mode. Most of the photos showed tropical beach scenes, but right in the centre there’s this attractive girl, big smile for the camera, lifting her tee-shirt up to her chin. Which I should add was all she was wearing. So I’m transfixed in front of the screen, unable to avert my eyes. The tutor (yes, it was Stan) wanders by, takes in the scene and asks laconically “Yours?” I weakly shake my head and flee the scene, followed by a calculating look. Monday July 27th OK, it’s summer so it has to rain on campus. Today it’s the longer experiments in the lab. 
We’re looking at the Zeeman effect, and then measuring the spin of the caesium nucleus. The Zeeman effect equipment - for measuring the fine structure of energy levels in the neon atom in the presence of a magnetic field - is complicated, as is the spectrum we observe through three devices in series: a Fabry-Perot etalon, followed by a spectrometer and finally a telescope.
Equipment for measuring the Zeeman Effect
Here’s the diffraction pattern of the neon spectral lines of interest - seen through the telescope before the magnetic field is turned on to split them.
Neon spectral lines from the Fabry-Perot etalon
In the coffee breaks there was much gloom from staff and some of the better-informed students about the financial future of OU – talk of the end of summer schools as far too expensive, of courses without final exams because too many students weren’t turning up (the OU then loses its Government grant for that student), and of contraction in the number of intellectually-rigorous courses in favour of softer subjects where there is greater popular interest or vocational business sponsorship. Apparently there's to be a formal announcement at the end of the year. What a depressing prospect. Please Mr Cameron, don’t hit the OU too hard with all those public spending cuts! Tuesday July 28th Let me get on to swine flu. On the first day the chief OU person here mentioned the procedure in case anyone came down with it. He was vague about the details: call security on 3333 and ‘measures will be taken’. Right, I’ll be looking out for the guys in biowar suits then. But despite the hundreds of people we have here on campus from across the UK, no-one seems ill. Where has the epidemic gone? I grabbed one of the OU physics faculty here and asked about progress as regards an OU theoretical physics MSc. As I understand the response, there is a desire to do it but progress is at a very early stage. The most likely route is to base such a course on the existing maths MSc programme and add some extra physics modules which would create a course with a mathematical physics feel. Erwin Schrodinger’s advice to his beginning graduate students, who asked what they should study. “Year 1, study maths; year 2, study maths; year 3, come back and ask me again.” Wednesday July 29th If you’d have asked me before I came, I’d have said that the typical OU student of quantum mechanics would be young, white, male and an NT. (The NT part is the Myers-Briggs/Keirsey personality type they call Rational, aka an intellectual). Score 2.5 out of 4. The seventy-odd students here are overwhelmingly white and mostly male. The oldsters are outnumbered by the thirty-somethings but not by much. But the intellectuals are truly in short supply. It seems to me that a very prevalent personality type here is the early 30-40 year old ISTJ who’s a hands-on engineer in his day-job, and is using this course to brush up on the theory. A guy who’s bright but non-abstract, dogged but not big on the big picture. You might say that’s what you would expect on an experimental course, except that everyone here is doing the mainline QM theory course as well. What’s the effect? I think a lot of people are finding difficulty in seeing the wood for the trees, which is of course an endemic problem in QM. It does require really good lectures, though, to draw out and emphasise the foundational concepts and put some shape on what’s been learned to-date. So far, I’ve found the two-per-day lectures quite uneven.
Thursday July 30th At this stage of the school, sexual deprivation is beginning to kick in. The polarization experiment requires graphing sin2(2θ) from 0 to π. Strange to see guys lingering over a piece of mathematics on their computer screens! Perhaps the following is the answer.
Flyer for the disco
Friday July 31st With all required experiments finished yesterday, it’s home again from my last ever OU summer school. I always dread it at the start, thinking about the hordes of strangers, the endlessly complex experiments, the long hours in the lab. And at the end I predictably feel it wasn’t so bad and the stuff all makes more sense now. Without that level of immersion, the maths MSc next year is going to be that much harder. See also Summer School Vignettes. UPDATE: Dec 14th 2009. Letter this morning - I received a distinction on this course. Saturday, July 25, 2009 Hot spots I still don't know anyone who has swine flu, despite the media-epidemic. Wednesday, July 22, 2009 More worthy of pity ... I remember all the times I saw old, overweight guys in baggy shorts and bulging tee-shirts shuffling along pavements, at no more than a fast walking pace. I laughed inside at such unfitness, thought to myself 'who are you trying to kid?' and 'why are you bothering?'. They never looked very happy, always seeming almost terminal. As I shuffled along the street this morning, carefully pacing myself, I had a curious, holistic identification with those objects of contempt. Yes, I am now indeed one myself. I console myself: as I get fitter, in careful stages of course, I will soon turn into a lean, mean athletic machine - purposefully advancing as a predator along rural pathways to the amusement of bucolic bunnies and wheezing lorry drivers. That is, assuming my heart and lungs hold out. I read "Schrodinger: life and thought" by Walter J. Moore and was impressed by the freshness of Moore's writing and his diligence in unearthing the daily life of Erwin over so many years. What do you make of a guy who spent his life falling in love easily with so many women and then seducing them? A man who in his forties suffers what Moore euphemistically calls a 'Lolita complex'. He ends up with three daughters, none by his wife, who he remains married to until the end. At least the girls got good intellectual genes. Schrodinger was no friend to the concept of 'bourgeois marriage', and it might be argued in these enlightened times that he was doing nothing wrong. However, his lifelong self-centred and adolescent attitude to relationships led to collateral damage to many (not all) of the women with whom he involved himself. Typically it was the younger or less well-educated who were left holding the baby, or worse. His work was mostly blindingly competent in the spirit of mathematical physics. A strong visualiser, he was close in philosophy to Einstein and had little patience with the Bohr-Born interpretation of his wave equation. His culture, approach, techniques and beliefs all seem curiously dated now, but this was a first-rate scientific biography. The other book was Paul McAuley's "The Quiet War" which I finished but with limited enthusiasm. Stereotyped characters, massive sci-tech data dumps, clumsy writing: McAuley surely can do better than this? I commend to you Abigail Nussbaum's excellent review here. Monday, July 20, 2009 Note: Pauli's Exclusion Principle Pauli's Exclusion Principle states that no two identical fermions in a given system can ever have the same set of quantum numbers. Here's how it works. 1.
Fermions (particles with half-integer spin - 1/2, 3/2, ...) have antisymmetric total wave functions. 2. Consider two bound-state electrons as an example (electrons are spin 1/2 fermions), e.g. in a helium atom. 3. If the spin component is in one of the three symmetric triplet spin-states, then the spatial wave function must be antisymmetric: (ψk(x1)ψn(x2) - ψn(x1)ψk(x2))/√2 where ψk and ψn are energy eigenfunctions. Clearly n ≠ k here. 4. However, if we tried to put n=k, then the spatial wave function would vanish - this electron mode cannot exist (see the short symbolic check further below). 5. However, we can have a symmetric spatial wave function: in which case the spin state vector must be antisymmetric, viz. the singlet state, where the two electrons are each in a superposition of spin-up and spin-down: (|up,down> - |down,up>)/√2. As a consequence, only these two electrons can occupy the n=1 (1s) shell, differing in their spin orientation which is the only degree of freedom they possess. 6. Note: if n ≠ k then the spatial wavefunction can still be symmetric for a pair of fermions thus: (ψk(x1)ψn(x2) + ψn(x1)ψk(x2))/√2 In this case the spin state must be the antisymmetric singlet state as above. Exercise (redux) I decided to start running again. Last time I used to run just a couple of miles two or three times a week. I surely felt better for it overall, although my performance soon plateaued. However, I had persistent pains in my hips and knees which never seemed to get any better, so eventually I stopped (February 2006). Perhaps it was a perception of my general flabbiness, perhaps it was the sight of Lance Armstrong, still - at 37 - battling the alpine slopes. Anyway, I have resolved to take things a little more easy, not push it so much, to care for my joints. And not rush to do too much too soon. I said to Clare just before I left, about half an hour ago: "If I should die, think only this of me ... it was a big mistake." Anyway, I took it easy and returned unscathed. We shall see.
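Coming back to the Pauli note above, here is the short symbolic check announced there (a sketch only: the particle-in-a-box eigenfunctions are just convenient stand-ins for the energy eigenfunctions ψk and ψn), showing that the antisymmetric spatial combination vanishes identically as soon as we try to set n = k:

```python
# Symbolic check that the antisymmetric spatial wave function vanishes for n = k.
import sympy as sp

x1, x2, L = sp.symbols('x1 x2 L', positive=True)
n, k = sp.symbols('n k', integer=True, positive=True)

def psi(m, x):
    # 1-D particle-in-a-box eigenfunctions, used as stand-in energy eigenfunctions
    return sp.sqrt(2 / L) * sp.sin(m * sp.pi * x / L)

antisym = (psi(k, x1) * psi(n, x2) - psi(n, x1) * psi(k, x2)) / sp.sqrt(2)
print(sp.simplify(antisym.subs(n, k)))   # prints 0: no such two-electron mode exists
```

With n ≠ k the same expression does not collapse to zero, which is the whole point: two electrons can only share the same spatial mode by pairing up in the antisymmetric spin singlet.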
The remains of the amphitheatre, added about AD 70-80 and situated outside the city walls, can also be clearly seen. By contrast, the area inside the walls is now largely farmland with no visible distinguishing features, other than the enclosing earthworks and walls, together with a tiny mediaeval church at the east gate." Calleva Atrebatum The current excavation (pictured below) is of the iron age settlement which pre-dates the Romans. The dig looking east Making jewelry the iron-age way After a while the clouds reappeared and the wind got up, so we retreated home. Oh, and I bought the tee-shirt. Friday, July 17, 2009 Ambiguous Cream Dept. "Does Germaloid cream work?" This query is the most common reason why people end up here, so let me answer it. Yes, Germaloid cream does work in shrinking haemorrhoids provided you use it on a regular basis. If the cream is old, from a tube which has been lying around in your drawer for years, it won't work nearly so well. Keep at it and follow the instructions on the tube. And now to the story. Someone with whom I live in close proximity came up to me this afternoon and asked "Is there any difference between Germaloid cream and Germolene?" "Why" Iasked. "Well, I found this tube of Germaloid cream in the living room and I've been using it as an antiseptic. Like for cuts, and on my face." "OK," I reply, "Germaloid cream is for haemorrhoids treatment" - look of shock/horror - "but it's not as bad as you think ... "Germaloid cream has two active ingredients: one, like in Germolene is a mild anaesthetic; the other is an astringent agent, zinc oxide, which serves to shrivel up the haemorrhoids. "In fact, if anyone were to suggest you might have the odd stray wrinkle on your face, it might even have helped?" One non-amused moment later and I believe the tubes have been switched. (|wood> + |trees>)/sqrt(2) I think that SM358, the Open University's quantum mechanics course, is solid but perhaps slightly conservative (perhaps cut-down and lean for distance-learning is more accurate). One of the challenges in teaching and learning QM is that the central organising concepts of the theory can't be comprehended until quite a bit of the machinery has been taken on board. So there's a lot which has to be taken on faith and is therefore mysterious to the student for a while - the wood and the trees problem. Later in the course, it helps to try to set what has been learned into some kind of structured context, and here I think SM358 falters a bit. Here are some of the things which have mystified me, and my own views on their resolution. Q. What's so special about the concepts of eigenfunction and eigenvalue? A. These are foundational concepts in QM but the reason why is initially not very clear. The real explanation is that in QM, unlike classical mechanics, the problem-solving act is in two steps: (i) find the correct wave function or state vector; (ii) apply the boundary conditions of the specific problem to find the probabilities of the possible observable values to be measured. In classical mechanics one simply solves the 'well-known' equations in the presence of the boundary conditions. Finding the correct wave function often comes down to solving Schrodinger's time-independent equation, Hψ = Eψ where E is a constant (eigenvalue), for unknown functions ψ. Solutions to this equation are indeed eigenfunctions - due to the form of the equation - and that's where the utility of the concept comes from. Q. What is the significance of a quantum mechanical operator? A. 
I was puzzled by this for a long time. Were operators in QM something to do with the act of observation (it is said that operators 'represent' observables)? Perhaps an operator corresponds to a piece of apparatus? No, none of this is true. The operator appears at the earlier step, where the correct wavefunction for the problem has to be determined. The operator is a constituent of the ψ-equation which determines the correct wavefunction (or state-vector or wavepacket) for the problem under consideration (free particle, harmonic oscillator, Coulomb model of the hydrogen atom, ...). The second stage, working out the probability of different observables being measured, is a calculation of amplitudes using the wavefunction/state-vector already found - it's this stage which is relevant to the apparatus configuration and the measurement process. Thursday, July 16, 2009 Time-independent approximation methods 1. The Variational Method Purpose: to calculate the ground energy state (e.g. of an atom) when we don't know the correct eigenfunction. Method: guess the eigenfunction and compute the eigenvalue (= the ground-state energy). If we guess a function with a free parameter, we may adjust this parameter for fine-tuning. Let the ground-state have quantum number n=1 and actual eigenfunction/value ψ1, E1. We have: E1 = <ψ1|H|ψ1>/<ψ1|ψ1> (the denominator to make sure the equation is correctly normalised). Since we don't know ψ1, we approximate it by φ1, giving E'1 = <φ1|H|φ1>/<φ1|φ1>. If φ contains a variable b, then E'1 will be a function of b, E'1(b), and we can differentiate to find the value of b (the 'best' eigenfunction φ(b)) which minimises E'1. This is our required approximation. The only practical issue with this method is the labour involved in evaluating E'1 = <φ1|H|φ1>/<φ1|φ1> - multiple integrals, and the need to guess a 'good' eigenfunction which closely approximates ψ. Note that it's much harder to use this method to compute higher energy states, where n > 1. 2. Perturbation Methods. Purpose: to calculate the energy state E' (e.g. of an atom) where the Hamiltonian H' is too complex to solve directly. (We do know the relevant eigenfunctions for the related unperturbed Hamiltonian H). Method: Split the Hamiltonian function H' into a simple unperturbed part H, which we can solve, and a first-order 'perturbation' δH which we can also solve. So H' = H + δH -- (to first order). Accuracy may be improved by going to second or higher orders. Note that E'n = <ψ'n|H'|ψ'n> where ψ' is an eigenfunction of H'. Let E'n ≈ <ψn|H'|ψn> where ψ is an eigenfunction of H, = <ψn|(H + δH)|ψn> = En + <ψn|δH|ψn>. We can work out En which is just the eigenvalue of the unperturbed Hamiltonian H. The expected value <ψn|δH|ψn> of the first order perturbation δH, the first-order energy 'correction', is also intended to be easy to work out. So we hopefully have a good approximation to E'n. Wednesday, July 15, 2009 Losses for the army in Afghanistan 1. Hitler was of the view that even after the final victory of the Third Reich, it was desirable that low-level warfare should continue on the Eastern Front - to keep the military sharp and prevent it lapsing into a bureaucratized merely peacetime army. 2. When the British army started to take serious losses in Northern Ireland a few years back, recruiters initially tried to minimise talk of operations there. Instead it was skiing in Cyprus, sports in HK.
But actually, they found that the danger and excitement of real ops were actually good not just for the quantity, but also quality of new recruits. 3. The probability of dying in Afghanistan is still relatively low. However, the real danger increases the kudos and prestige of every single squaddie out there, as they're increasingly finding when they get back. My vote? More helicopters + reduce the mission to hunting down the global jihadists and building a proper spy network for after we withdraw from the major military occupation - which should be sooner not later. It might be argued that the Americans wouldn't tolerate a Taliban government on Pakistan's northern border - what a risk to stability, but the Taliban are really the politico-military wing of the Pashtuns. The non-Taliban Northern Alliance, suitably provisioned by the US, should be able to keep Afghanistan in a suitably chaotic state of civil war for many a decade yet. Note: since the Afghan National Army is mostly made up of Northern Alliance personnel, this is probably Washington's game plan on a longer timeframe anyway, discounting short-term 'nation-building' rhetoric. Schrödinger -- Painting I've been reading "Schrödinger: Life and Thought" by Walter Moore, an excellent biography. Schrödinger was an unusual scientist. Very bright and always top of the class in maths and physics, he was appointed to Einstein's chair in Zurich but, as Moore observes, by age 37 - in 1924 - he had accomplished much that was competent but nothing earth-shattering. If he had died at that point, he would have been a mere footnote in physics history. The event which propelled him to immortality was his development of his eponymous wave equation over the Christmas of 1925. According to Moore, Schrodinger guessed it based on his deep knowledge of classical physics and more immediately the thesis work of Louis de Broglie. The equation is simple, but its consequences explain much. Turning his equation upon the leading question of the day, the spectrum of hydrogen, Schrödinger found the energy eigenfunction/eigenvalue calculations easy but didn't know how to solve the radial differential equation until Hermann Weyl told him in early January 1926. Given that Schrödinger was incredibly bright, hard-working and experienced at this stage of his life, Moore finds this surprising. It is slightly scary to realise just how much maths and physics need to be in your head before you have the tools to make any kind of breakthrough. It must be even harder today, when there is so much more to select from. Family News My mother stayed with us from last Friday to yesterday in Andover, giving a chance for Elaine and Chris to paint her house in Bristol on Monday. We took my mother back yesterday and completed the job. Here are some pictures - click on them to make larger. Painting the upper hall Beryl Seel coping in the chaos It's great when it's mostly finished! Wednesday, July 08, 2009 'The City and the City' - China Miéville Inspector Tyador Borlu of the Beszel Extreme Crime Squad find the body of a young woman in a rundown neighbourhood. No-one knows who she is, and the inquiry is going nowhere. Then Borlu gets a tip-off by phone from a mysterious caller in Beszel's parallel city-state Ul Qoma: the investigation has just gone international. China Miéville's beautiful writing illuminates the dank, decayed, vaguely Slavic Beszel, and the brittle, flashy, nationalist Ul Qoma. 
Characters on both sides of the divide are richly drawn: real people with real relationships, personalities and career objectives. The novel rapidly turns into an unputdownable page turner. What is really going on? At the heart of this novel is the weird relationship between Beszel and Ul Qoma. To say any more would be to spoil the impact of the story, but Miéville has come up with the strangest new idea I have encountered for a long while: this is social-science fiction, the personal and political implications of the central concept driving the intricate plot. Draped around the central, startling idea is a classic detective story, albeit with no explicit sex, no more than a few shootings and a certain amount of low-level police brutality. The dynamics are those of the cities, with Borlu the central character who in the end resolves the mysteries and is thereby propelled finally to a new reality. I thought Borlu was surprisingly good at coming up with new lines of inquiry on little evidence, ideas which seemed – unfeasibly - to always pan out. And he seems curiously asexual, not even flirting with his feisty assistant Lizbyet Corwi, although she seems quite interested in him. Perhaps the author intends Borlu to be totally mission-focused but it seems to detract from a fully rounded personality. These are merely quibbles. ‘The City and the City’ is by far the best, most awe-inspiring book I have read this year and if it also ends up on required reading lists for ethnomethodology courses covering the social construction of reality, I wouldn't be surprised at all. Spin and Entanglement In Bristol on Saturday for an OU day school for SM358, Quantum Mechanics. Subsequently working hard since Monday on TMA 03, which is mostly about spin and entanglement. Just has to be checked, scanned and sent now. While I worked away, I was frequently accompanied by torrential rain outside, with thunder booming remotely from Salisbury plain. Not a good week for camping. Also just finished China Mieville's completely excellent new book "The City and the City" which I'll review shortly. Otherwise not much else to report: Clare has been refilling the oven's overhead fan filters with activated carbon granules from a set of filters sent to us in error, which don't fit. Since the filter-containers are not meant to be refilled, she has been actively bodging with a screwdriver and funnel, and claims total success. Thankfully nothing will fall out of the sky as a result, right? The cat is well, has recently caught and killed a vole which it left as a morning gift in the hall, and has resolutely refused to eat any of the kitekat from the six cans I bought over Clare's warnings, on the basis they were cheap. I now discover it only eats felix or whiskas. Tastewise I must say I couldn't tell the difference. Thursday, July 02, 2009 Our trip to France - with pictures We drove down to the Pyrenees and camped a while in the mountains. It was cold, there were flies and our air-mattress deflated. Next we drove north to the Dordogne where we visited chateaux before continuing to the Loire valley where we visited more chateaux and saw Segways for tourists. Finally we got to St. Malo where we spent a pleasant evening before coming home. This is Clare in the McDonalds at St. Malo just before embarkation back to England. The full story here or here: PDF, 2.7 MB. Wednesday, July 01, 2009 PPP – Product Placement in Photographs A New Product Idea for Google User-submitted photos are image-analyzed into their constituent objects.
Generic objects (bottles, cars, watches) are selectively (and somewhat inconspicuously) replaced by product-specific iconic versions (e.g. Coca-cola bottle, Tesla car, Rolex watch). The manufacturer pays Google for this service. The user can opt for the generic versions on payment of a small subscription. Product Description Holiday photos are frequently unsatisfactory. The resolution is too low, the digital zoom has removed all detail, there is motion blur – and 2D is so flat. All of this is fixable by algorithms. STEP 1: User submits a .jpg or similar to Google Images. STEP 2: Google applies a palette of image-processing algorithms to deblur, identify object edges, resolve ambiguity through contextual knowledge and inference resulting in the creation of a 3D scene description. This probably looks like an XML file. In many cases the resolution will identify a manufacturer-specific entity, e.g. a jeep. However, in other cases there will just be a generic match such as a bottle, a car, a journal. The business opportunity is to replace the generic object description by product placement of a distinctive version emblematic of a particular company, which could include adding a logo. Google would charge manufacturers for this service in a variant of their current business model. STEP 3: The image description is then served back to the user where it can be viewed/rendered via a Google image viewer. This could include viewing at any resolution and 3D rotation. If the user objects to branded products appearing within their picture (the point would be to make product placement somewhat unobtrusive) then they could avoid advertising by making a payment for the service. To accurately reconstruct a scene from an image requires contextual knowledge. The Google service should include an interactive function whereby users can correct the results, corrections which could be used to tune the knowledge/inference engine. All claims to this idea I freely cede to Google as I’d like to use this service. Ask me also about fixing video, especially from low-resolution camera-phones (and CCTV). Conceived Wednesday 24th June in the Dordogne. My holiday reading was "Dune" by Frank Herbert, which I first read when I was but a teenager. Re-reading it has been a strange experience: the intelligence, plot complexity and sophisticated back-story are as I recall them from so many years ago. However, I'm a little more experienced in literary analysis these days and assess the quality of writing more critically. Herbert is good, certainly, but there is a kind of plodding, painting-by-numbers methodicalness especially in the earlier parts. It's very much a boy's coming-of-age fantasy with, I suspect, limited interest for girls. Anyway, Clare didn't seem very gripped when I read her parts of it in camp, and on the ferry. I am sufficiently engaged, however, to continue the journey I first took so many years ago, and the next three (Dune Messiah, Children of Dune, God Emperor of Dune ... sorry titles, aren't they) are on their way. While we were en vacances, I erratically engaged with SM358, my OU Quantum Mechanics course, steadily working through the thickets of algebra constituting the time-independent Schrödinger equation of the Coulomb interaction Hydrogen atom. I'm probably going to say this with more conviction after a later chapter looks at relativistic corrections, but atoms seem very clunky, unmodular things in this universe: not very well designed at all. 
On the strength of seeing the extraordinary magic Schrödinger conjured from his equation, I ordered Schrödinger: Life and Thought by Walter J. Moore. Schrödinger was an extraordinary character, whose scientific genius was basically powered by sex with his friends' wives. They don't recommend that in OU courses.
d8ae6b6e36e14a23
Quantum Effects in Biology. 1. Mar 3, 2009 #1 In my biology class, I've learned that most of the fundamental processes of genetics occur on an atomic level (i.e. DNA, RNA, etc.). Can quantum mechanics be applied? 2. jcsd 3. Mar 3, 2009 #2 Science Advisor It can and is, in the sense that all chemistry is inherently quantum. There are no 'classical' atoms, so to speak. Then there's of course the field of Quantum Chemistry (a part of theoretical chemistry/physical chemistry/chemical physics), which naturally involves quantum mechanics directly, either by solving the Schrödinger equation, or using density-functional theory (which is essentially a reformulation of the Schrödinger equation, although most DFT methods used now are somewhat 'semi-empirical'). However, quantum mechanical calculations of molecular systems are computationally expensive, limited to about 100-200 atoms or so. Which isn't to say that biochemical systems aren't studied using model systems though. (e.g. while enzymes are huge, the 'active site' where the actual reaction occurs isn't very large) If you're wondering whether there are purely 'quantum' effects involved (as opposed to 'chemical' effects), there isn't much. Tunneling is important for electron-transfer kinetics. The effect of proton tunneling has been measured in some very accurate kinetics studies, but it's really just barely measurable in the best of circumstances. 4. Mar 3, 2009 #3 George Jones Staff Emeritus Science Advisor Gold Member Some researchers think that quantum (entanglement) effects are responsible for biological effects at much larger scales. See the linked article, from which it should be possible to look up more rigorous references. 5. Mar 3, 2009 #4 Science Advisor Nothing in that article talks about any large-scale effects of quantum entanglements. And there is no evidence supporting that. In general, you're not going to see any large-scale entanglement in biological systems. The decoherence time is far too short. (the wild and speculative claims of "quantum consciousness" fans from Penrose to Deepak Chopra notwithstanding). Besides which, if there were any significant such effects, it's much much more likely that they would have been discovered already within inorganic or organic chemistry, which are both far more mature fields that have the added benefit of working at a smaller scale. There is no 'gap' in knowledge, no big unexplained phenomenon, between chemistry and quantum physics or between biochemistry and other chemistry. 6. Mar 3, 2009 #5 George Jones Staff Emeritus Science Advisor Gold Member The article uses the word "entanglement" three times, and most certainly does imply what I wrote, i.e., that some researchers (besides Hameroff) think that quantum effects are responsible for biological effects at larger scales than in the original post in this thread. For example, the article starts with "Graham Fleming," http://chem.berkeley.edu/faculty/fleming/index.html, and, looking at the projection operators in Fleming's technical articles, it seems to me that Fleming is invoking quantum entanglement to explain photosynthesis. The article also talks about the role which some researchers believe quantum tunneling plays in some biological processes. Note that I did not say that I believed all this (I expressed much skepticism to my wife upon first seeing the article), nor did I say that these were conventional positions. I don't know enough to make that call. 7.
Mar 3, 2009 #6 Science Advisor And that's valid work, but also entirely unrelated to biological systems specifically. The transfer of vibrational states in any (chemical) system is simply not something that's been studied much until the very recent advent of femtosecond spectroscopy. This is not an example of a quantum effect on a macro (molecular) scale, an effect unique to biochemistry, or something just fundamentally different from 'chemistry as we know it'. It's more that photosynthesis was/is an important and interesting system to study. Which is a silly angle to put on it. Electron transfer plays an important role in quite a few biological processes. I already mentioned tunneling effects are quite important to electron-transfer kinetics. Again, it's not something unique to biochemistry. Well, the work cited there does seem mostly legit (although the subheading makes reference to 'quantum consciousness' which I think is more or less crackpottery). It's just that the journalist seems to put a silly angle on it. It bears repeating: Everything in chemistry is quantum in nature. And like all chemistry, biochemical systems can be studied and explained most accurately in quantum-mechanical terms. And there are plenty of problems in biochemistry for quantum chemists to solve. OTOH, what I do say is that there's nothing in biochemistry that involves truly 'macroscopic' quantum effects, and that the quantum mechanical explanations that do explain biochemical phenomena are never going to be unique to biochemistry. At least not on the fundamental level. The molecules may be bigger, reactions may have more steps, but they're not fundamentally different. 8. Mar 3, 2009 #7 Science Advisor Just as a side note, I think it's more fun when biological systems work 'against' the rules of quantum mechanics. I don't remember the enzyme, but there's at least one that catalyzes an apparently spin-forbidden reaction, by creating a tyrosine radical as an intermediate. By doing so it isolates the electron involved in the reaction, allowing it to flip its spin. It's quite cute. 9. Mar 4, 2009 #8 Another question: is it possible QM will ever be used in the theory of evolution? 10. Mar 4, 2009 #9 Evolution is Biologicals - Darwin BUT QM is physics and mathematics. 11. Mar 4, 2009 #10 Science Advisor It already is, in the sense that evolution can be explained in terms of biochemistry, which is explained by chemistry, which is explained by QM. QM isn't likely to play a direct role in explaining evolution. Not any more than QM would be used in the theory of the motion of billiard balls. In other words, you can use QM to explain evolution/billiard balls, but there's no reason to do so. 12. Mar 5, 2009 #11 Yes - come to think of it, evolution is (usually) too macroscopic to even remotely consider the effects of QM - but the possibility that quantum mechanical effects could give rise to properties of life we see is intriguing. 13. Mar 5, 2009 #12 This mean nothing then quantum in ALL atom and cell. It always do things all times - so no good. 14. Mar 8, 2009 #13 The Apostle of Quantum Evolution has to be Johnjoe McFadden, a molecular biologist at the U of Surrey who also writes occasional pop science columns for The Guardian. Also he disagrees with Hameroff's quantum description of brain function, but has his own. Unfortunately (IMO) he muddies the water by dragging in multiverse theory.
But it's still interesting stuff: Coming closer to earth, there's a lot of fascinating stuff at the University of Illinois at Urbana-Champaign / Theoretical and Computational Biophysics Group's website: http://www.ks.uiuc.edu/Research/Categories/Quantum/all.cgi [Broken] Good paper from last year on quantum effects in avian magnetoreception (or, how birds manage to navigate all over hell and gone): Last edited by a moderator: May 4, 2017 15. Mar 9, 2009 #14 This theory NOT prooved, not good for University to make it. Better science FICTION - yes, no good.
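As a rough illustration of the tunneling point made earlier in the thread (electron tunneling matters for electron-transfer kinetics, while proton tunneling is only barely measurable), here is a back-of-the-envelope square-barrier estimate; the 1 eV barrier height and 0.5 nm width are made-up, illustrative numbers, not values taken from any of the papers mentioned:

```python
# WKB-style estimate T ~ exp(-2*kappa*d) for tunneling through a square barrier.
import math

hbar = 1.055e-34          # J*s
eV   = 1.602e-19          # J
m_e  = 9.11e-31           # electron mass, kg
m_p  = 1836 * m_e         # proton mass, kg
V, d = 1.0 * eV, 0.5e-9   # assumed barrier height and width

def transmission(m):
    kappa = math.sqrt(2 * m * V) / hbar   # decay constant inside the barrier
    return math.exp(-2 * kappa * d)

print(f"electron: T ~ {transmission(m_e):.1e}")
print(f"proton:   T ~ {transmission(m_p):.1e}")
```

With these numbers the electron gets through with odds of very roughly one in a couple of hundred, while the proton's probability is of the order of 10–96, i.e. effectively zero; only over much shorter, sub-ångström distances does proton tunneling become a measurable correction.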
04f146c60b5c6bff
Condensed matter physics From Wikipedia, the free encyclopedia   (Redirected from Condensed matter) Jump to: navigation, search Condensed matter physics is a branch of physics that deals with the physical properties of condensed phases of matter.[1] Condensed matter physicists seek to understand the behavior of these phases by using physical laws. In particular, these include the laws of quantum mechanics, electromagnetism and statistical mechanics. The most familiar condensed phases are solids and liquids, while more exotic condensed phases include the superconducting phase exhibited by certain materials at low temperature, the ferromagnetic and antiferromagnetic phases of spins on atomic lattices, and the Bose–Einstein condensate found in cold atomic systems. The study of condensed matter physics involves measuring various material properties via experimental probes along with using techniques of theoretical physics to develop mathematical models that help in understanding physical behavior. The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists identify themselves as condensed matter physicists,[2] and The Division of Condensed Matter Physics (DCMP) is the largest division of the American Physical Society.[3] The field overlaps with chemistry, materials science, and nanotechnology, and relates closely to atomic physics and biophysics. Theoretical condensed matter physics shares important concepts and techniques with theoretical particle and nuclear physics.[4] A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas, until the 1940s when they were grouped together as Solid state physics. Around the 1960s, the study of physical properties of liquids was added to this list, and it came to be known as condensed matter physics.[5] According to physicist Phil Anderson, the term was coined by him and Volker Heine when they changed the name of their group at the Cavendish Laboratories, Cambridge from "Solid state theory" to "Theory of Condensed Matter",[6] as they felt it did not exclude their interests in the study of liquids, nuclear matter and so on.[7] The Bell Labs (then known as the Bell Telephone Laboratories) was one of the first institutes to conduct a research program in condensed matter physics.[5] References to "condensed" state can be traced to earlier sources. For example, in the introduction to his 1947 "Kinetic theory of liquids" book,[8] Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of "condensed bodies". Classical physics[edit] Heike Kamerlingh Onnes and Johannes van der Waals with the helium "liquefactor" in Leiden (1908) One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the 19th century. Davy observed that of the 40 chemical elements known at the time, 26 had metallic properties such as lustre, ductility and high electrical and thermal conductivity.[9] This indicated that the atoms in Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. 
Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen could be liquefied under the right conditions and would then behave as metals.[10][notes 1] In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements, with the exception of nitrogen, hydrogen and oxygen.[9] Shortly after, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases,[12] and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures.[13] By 1908, James Dewar and H. Kamerlingh Onnes were successfully able to liquefy hydrogen and then newly discovered helium, respectively.[9] Paul Drude proposed the first theoretical model for a classical electron moving through a metallic solid.[4] Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law.[14][15] However, despite the success of Drude's free electron model, it had one notable problem, in that it was unable to correctly explain the electronic contribution to the specific heat of metals, as well as the temperature dependence of resistivity at low temperatures.[16] In 1911, three years after helium was first liquefied, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value.[17] The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades.[18] Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that “with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas”.[19] Advent of quantum mechanics[edit] Drude's classical model was augmented by Felix Bloch, Arnold Sommerfeld, and independently by Wolfgang Pauli, who used quantum mechanics to describe the motion of a quantum electron in a periodic lattice. 
In particular, Sommerfeld's theory accounted for the Fermi–Dirac statistics satisfied by electrons and was better able to explain the heat capacity and resistivity.[16] The structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals and concluded that crystals get their structure from periodic lattices of atoms.[20] The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables for Crystallography, first published in 1935.[21] Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics.[4]

A replica of the first point-contact transistor in Bell Labs

In 1879, Edwin Herbert Hall, working at Johns Hopkins University, discovered the development of a voltage across conductors transverse to an electric current in the conductor and magnetic field perpendicular to the current.[22] This phenomenon, arising due to the nature of charge carriers in the conductor, came to be known as the Hall effect, but it was not properly explained at the time, since the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization of electrons in magnetic fields, laying the groundwork for the later explanation of the quantum Hall effect.[23]

Magnetism as a property of matter has been known since prehistoric times.[24] However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included the classification of materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization.[25] Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials.[24] In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets.[26] The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model, which described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization.[24] The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research, such as by Bloch on spin waves and Néel on antiferromagnetism, led to the development of new magnetic materials with applications to magnetic storage devices.[24]

Modern many-body physics[edit]

The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect.[27] After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective modes of excitation of solids and the important notion of a quasiparticle.
Russian physicist Lev Landau used the idea for the Fermi liquid theory, wherein low energy properties of interacting fermion systems were given in terms of what are now known as Landau quasiparticles.[27] Landau also developed a mean field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases.[28] Eventually in 1957, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons can give rise to a bound state called a Cooper pair.[29]

The study of phase transitions and the critical behavior of observables, known as critical phenomena, was a major field of interest in the 1960s.[30] Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and scaling. These ideas were unified by Kenneth Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory.[30]

The quantum Hall effect was discovered by Klaus von Klitzing in 1980, when he observed the Hall conductivity to be integer multiples of a fundamental constant.[31] The effect was observed to be independent of parameters such as the system size and impurities, and in 1981, theorist Robert Laughlin proposed a theory describing the integer states in terms of a topological invariant called the Chern number.[32] Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect, where the conductivity was now a rational multiple of a constant. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational solution, known as the Laughlin wavefunction.[33] The study of topological properties of the fractional Hall effect remains an active field of research.

In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 kelvin. It was realized that the high temperature superconductors are examples of strongly correlated materials, where the electron–electron interactions play an important role.[34] A satisfactory theoretical description of high-temperature superconductors is still not known, and the field of strongly correlated materials continues to be an active research topic. In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases.
This has more recently expanded to form the research area of spontelectrics.[35] Main article: Emergence Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents.[29] For example, a range of phenomena related to high temperature superconductivity are not well understood, although the microscopic physics of individual electrons and lattices is well known.[36] Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon.[37] Emergent properties can also occur at the interface between materials: one example is the lanthanum-aluminate-strontium-titanate interface, where two non-magnetic insulators are joined to create conductivity, superconductivity, and ferromagnetism. Electronic theory of solids[edit] The metallic state has historically been an important building block for studying properties of solids.[38] The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of then-newly discovered electrons. This classical model was then improved by Arnold Sommerfeld who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law.[38] In 1913, X-ray diffraction experiments revealed that metals possess periodic lattice structure. Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, called the Bloch wave.[39] Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence, approximation techniques are necessary to obtain meaningful predictions.[40] The Thomas–Fermi theory, developed in the 1920s, was used to estimate electronic energy levels by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions, but not for their Coulomb interaction. Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory which gave realistic descriptions for bulk and surface properties of metals. The density functional theory (DFT) has been widely used since the 1970s for band structure calculations of variety of solids.[40] Symmetry breaking[edit] Ice melting into water. Liquid water has continuous translational symmetry, which is broken in crystalline ice. Main article: Symmetry breaking Phase transition[edit] Main article: Phase transition The study of critical phenomena and phase transitions is an important part of modern condensed matter physics.[43] Phase transition refers to the change of phase of a system, which is brought about by change in an external parameter such as temperature. In particular, quantum phase transitions refer to transitions where the temperature is set to zero, and the phases of the system refer to distinct ground states of the Hamiltonian. 
Systems undergoing phase transition display critical behavior, wherein several of their properties such as correlation length, specific heat and susceptibility diverge. Continuous phase transitions are described by the Ginzburg–Landau theory, which works in the so-called mean field approximation. However, several important phase transitions, such as the Mott insulatorsuperfluid transition, are known that do not follow the Ginzburg–Landau paradigm.[44] The study of phase transitions in strongly correlated systems is an active area of research.[45] Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Experimental probes include effects of electric and magnetic fields, measurement of response functions, transport properties and thermometry.[8] Commonly used experimental techniques include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering; study of thermal response, such as specific heat and measurement of transport via thermal and heat conduction. Image of X-ray diffraction pattern from a protein crystal. Main article: Scattering Several condensed matter experiments involve scattering of an experimental probe, such as X-ray, optical photons, neutrons, etc., on constituents of a material. The choice of scattering probe depends on the observation energy scale of interest.[46] Visible light has energy on the scale of 1 eV and is used as a scattering probe to measure variations in material properties such as dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density. Neutrons can also probe atomic length scales and are used to study scattering off nuclei and electron spins and magnetization (as neutrons themselves have spin but no charge).[46] Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes,[47] and similarly, positron annihilation can be used as an indirect measurement of local electron density.[48] Laser spectroscopy is used as a tool for studying phenomena with energy in the range of visible light, for example, to study non-linear optics and forbidden transitions in media.[49] External magnetic fields[edit] In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems.[50] Nuclear magnetic resonance (NMR) is a technique by which external magnetic fields can be used to find resonance modes of individual electrons, thus giving information about the atomic, molecular and bond structure of their neighborhood. 
NMR experiments can be made in magnetic fields with strengths up to 65 Tesla.[51] Quantum oscillations is another experimental technique where high magnetic fields are used to study material properties such as the geometry of the Fermi surface.[52] The quantum hall effect is another example of measurements with high magnetic fields where topological properties such as Chern–Simons angle can be measured experimentally.[49] Cold atomic gases[edit] Main article: Optical lattice Cold atom trapping in optical lattices is an experimental tool commonly used in condensed matter as well as atomic, molecular, and optical physics.[53] The technique involves using optical lasers to create an interference pattern, which acts as a "lattice", in which ions or atoms can be placed at very low temperatures.[54] Cold atoms in optical lattices are used as "quantum simulators", that is, they act as controllable systems that can model behavior of more complicated systems, such as frustrated magnets.[55] In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters.[56] and to study phase transitions for Néel and spin liquid ordering.[53] Computer simulation of "nanogears" made of fullerene molecules. It is hoped that advances in nanoscience will lead to machines working on the molecular scale. Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor,[4] and laser technology.[49] Several phenomena studied in the context of nanotechnology come under the purview of condensed matter physics.[58] Techniques such as scanning-tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication.[59] Several condensed matter systems are being studied with potential applications to quantum computation,[60] including experimental systems like quantum dots, SQUIDs, and theoretical models like the toric code and the quantum dimer model.[61] Condensed matter systems can be tuned to provide the conditions of coherence and phase-sensitivity that are essential ingredients for quantum information storage.[59] Spintronics is a new area of technology that can be used for information processing and transmission, and is based on spin, rather than electron transport.[59] Condensed matter physics also has important applications to biophysics, for example, the experimental technique of magnetic resonance imaging, which is widely used in medical diagnosis.[59] See also[edit] 1. ^ Both hydrogen and nitrogen have since been liquified, however ordinary liquid nitrogen and hydrogen do not possess metallic properties. Physicists Eugene Wigner and Hillard Bell Huntington predicted in 1935[11] that a state metallic hydrogen exists at sufficiently high pressures (over 25 GPa), however this has not yet been observed. 4. ^ a b c d Cohen, Marvin L. (2008). "Essay: Fifty Years of Condensed Matter Physics". Physical Review Letters 101 (25). Bibcode:2008PhRvL.101y0001C. doi:10.1103/PhysRevLett.101.250001. Retrieved 31 March 2012.  5. ^ a b Kohn, W. (1999). "An essay on condensed matter physics in the twentieth century". Reviews of Modern Physics 71 (2): S59. Bibcode:1999RvMPS..71...59K. doi:10.1103/RevModPhys.71.S59. Retrieved 27 March 2012.  7. ^ "More and Different". World Scientific Newsletter 33: 2. November 2011.  8. ^ a b Frenkel, J. (1947). Kinetic Theory of Liquids. Oxford University Press.  9. 
^ a b c Goodstein, David; Goodstein, Judith (2000). "Richard Feynman and the History of Superconductivity". Physics in Perspective 2 (1): 30. Bibcode:2000PhP.....2...30G. doi:10.1007/s000160050035. Retrieved 7 April 2012.  11. ^ Silvera, Isaac F.; Cole, John W. (2010). "Metallic Hydrogen: The Most Powerful Rocket Fuel Yet to Exist". Journal of Physics 215: 012194. Bibcode:2010JPhCS.215a2194S. doi:10.1088/1742-6596/215/1/012194.  12. ^ Rowlinson, J. S. (1969). "Thomas Andrews and the Critical Point". Nature 224 (8): 541. Bibcode:1969Natur.224..541R. doi:10.1038/224541a0.  16. ^ a b Csurgay, A. The free electron model of metals. Pázmány Péter Catholic University.  17. ^ van Delft, Dirk; Kes, Peter (September 2010). "The discovery of superconductivity". Physics Today 63 (9): 38. Bibcode:2010PhT....63i..38V. doi:10.1063/1.3490499. Retrieved 7 April 2012.  19. ^ Schmalian, Joerg (2010). "Failed theories of superconductivity". Modern Physics Letters B 24 (27): 2679. arXiv:1008.0447. Bibcode:2010MPLB...24.2679S. doi:10.1142/S0217984910025280.  20. ^ Eckert, Michael (2011). "Disputed discovery: the beginnings of X-ray diffraction in crystals in 1912 and its repercussions". Acta Crystallographica A 68 (1): 30. Bibcode:2012AcCrA..68...30E. doi:10.1107/S0108767311039985.  21. ^ Aroyo, Mois, I.; Müller, Ulrich and Wondratschek, Hans (2006). "Historical introduction". International Tables for Crystallography. International Tables for Crystallography A: 2–5. doi:10.1107/97809553602060000537. ISBN 978-1-4020-2355-2.  22. ^ Hall, Edwin (1879). "On a New Action of the Magnet on Electric Currents". American Journal of Mathematics 2 (3): 287–92. doi:10.2307/2369245. JSTOR 2369245. Retrieved 2008-02-28.  24. ^ a b c d Mattis, Daniel (2006). The Theory of Magnetism Made Simple. World Scientific. ISBN 9812386718.  25. ^ Chatterjee, Sabyasachi (August 2004). "Heisenberg and Ferromagnetism". Resonance 9 (8): 57. doi:10.1007/BF02837578. Retrieved 13 June 2012.  26. ^ Visintin, Augusto (1994). Differential Models of Hysteresis. Springer. ISBN 3540547932.  27. ^ a b Coleman, Piers (2003). "Many-Body Physics: Unfinished Revolution". Annales Henri Poincaré 4 (2): 559. arXiv:cond-mat/0307004v2. Bibcode:2003AnHP....4..559C. doi:10.1007/s00023-003-0943-9.  28. ^ Kadanoff, Leo, P. (2009). Phases of Matter and Phase Transitions; From Mean Field Theory to Critical Phenomena. The University of Chicago.  29. ^ a b Coleman, Piers (2011). Introduction to Many Body Physics. Rutgers University.  30. ^ a b Fisher, Michael E. (1998). "Renormalization group theory: Its basis and formulation in statistical physics". Reviews of Modern Physics 70 (2): 653. Bibcode:1998RvMP...70..653F. doi:10.1103/RevModPhys.70.653. Retrieved 14 June 2012.  31. ^ Panati, Gianluca (April 15, 2012). "The Poetry of Butterflies". Irish Times. Retrieved 14 June 2012. [dead link] 32. ^ Avron, Joseph E.; Osadchy, Daniel and Seiler, Ruedi (2003). "A Topological Look at the Quantum Hall Effect". Physics Today 56 (8): 38. Bibcode:2003PhT....56h..38A. doi:10.1063/1.1611351.  33. ^ Wen, Xiao-Gang (1992). "Theory of the edge states in fractional quantum Hall effects". International Journal of Modern Physics C 6 (10): 1711. Bibcode:1992IJMPB...6.1711W. doi:10.1142/S0217979292000840. Retrieved 14 June 2012.  34. ^ Quintanilla, Jorge; Hooley, Chris (June 2009). "The strong-correlations puzzle". Physics World. Retrieved 14 June 2012.  36. ^ "Understanding Emergence". National Science Foundation. Retrieved 30 March 2012.  37. ^ Levin, Michael; Wen, Xiao-Gang (2005). 
"Colloquium: Photons and electrons as emergent phenomena". Reviews of Modern Physics 77 (3): 871. arXiv:cond-mat/0407140. Bibcode:2005RvMP...77..871L. doi:10.1103/RevModPhys.77.871.  38. ^ a b Ashcroft, Neil W.; Mermin, N. David (1976). Solid state physics. Harcourt College Publishers. ISBN 978-0-03-049346-1.  39. ^ Han, Jung Hoon (2010). Solid State Physics. Sung Kyun Kwan University.  40. ^ a b Perdew, John P.; Ruzsinszky, Adrienn (2010). "Fourteen Easy Lessons in Density Functional Theory". International Journal of Quantum Chemistry 110 (15): 2801–2807. doi:10.1002/qua.22829. Retrieved 13 May 2012.  41. ^ Nayak, Chetan. Solid State Physics. UCLA.  42. ^ Leutwyler, H. (1996). "Phonons as Goldstone bosons". ArXiv: 9466. arXiv:hep-ph/9609466v1.  43. ^ "Chapter 3: Phase Transitions and Critical Phenomena". Physics Through the 1990s. National Research Council. 1986. ISBN 0-309-03577-5.  44. ^ Balents, Leon; Bartosch, Lorenz; Burkov, Anton; Sachdev, Subir and Sengupta, Krishnendu (2005). "Competing Orders and Non-Landau–Ginzburg–Wilson Criticality in (Bose) Mott Transitions". Progress of Theoretical Physics. Supplement (160): 314. arXiv:cond-mat/0504692. Bibcode:2005PThPS.160..314B. doi:10.1143/PTPS.160.314.  45. ^ Sachdev, Subir; Yin, Xi (2010). "Quantum phase transitions beyond the Landau–Ginzburg paradigm and supersymmetry". Annals of Physics 325 (1): 2. arXiv:0808.0191v2. Bibcode:2010AnPhy.325....2S. doi:10.1016/j.aop.2009.08.003.  47. ^ Riseborough, Peter S. (2002). Condensed Matter Physics I.  48. ^ Siegel, R. W. (1980). "Positron Annihilation Spectroscopy". Annual Review of Materials Science 10: 393–425. Bibcode:1980AnRMS..10..393S. doi:10.1146/  49. ^ a b c Commission on Physical Sciences, Mathematics, and Applications (1986). Condensed Matter Physics. National Academies Press. ISBN 978-0-309-03577-4.  50. ^ Committee on Facilities for Condensed Matter Physics (2004). "Report of the IUPAP working group on Facilities for Condensed Matter Physics : High Magnetic Fields". International Union of Pure and Applied Physics.  51. ^ Moulton, W. G. and Reyes, A. P. (2006). "Nuclear Magnetic Resonance in Solids at very high magnetic fields". In Herlach, Fritz. High Magnetic Fields. Science and Technology. World Scientific. ISBN 9789812774880.  52. ^ Doiron-Leyraud, Nicolas; et al. (2007). "Quantum oscillations and the Fermi surface in an underdoped high-Tc superconductor". Nature 447 (7144): 565–568. arXiv:0801.1281. Bibcode:2007Natur.447..565D. doi:10.1038/nature05872. PMID 17538614.  53. ^ a b Schmeid, R.; Roscilde, T.; Murg, V.; Porras, D. and Cirac, J. I. (2008). "Quantum phases of trapped ions in an optical lattice". New Journal of Physics 10 (4): 045017. arXiv:0712.4073. Bibcode:2008NJPh...10d5017S. doi:10.1088/1367-2630/10/4/045017.  54. ^ Greiner, Markus; Fölling, Simon (2008). "Condensed-matter physics: Optical lattices". Nature 453 (7196): 736–738. Bibcode:2008Natur.453..736G. doi:10.1038/453736a. PMID 18528388.  55. ^ Buluta, Iulia; Nori, Franco (2009). "Quantum Simulators". Science 326 (5949): 108–11. Bibcode:2009Sci...326..108B. doi:10.1126/science.1177838. PMID 19797653.  56. ^ Jaksch, D.; Zoller, P. (2005). "The cold atom Hubbard toolbox". Annals of Physics 315 (1): 52–79. arXiv:cond-mat/0410614. Bibcode:2005AnPhy.315...52J. doi:10.1016/j.aop.2004.09.010.  58. ^ Lifshitz, R. (2009). "Nanotechnology and Quasicrystals: From Self-Assembly to Photonic Applications". NATO Science for Peace and Security Series B. Silicon versus Carbon: 119. doi:10.1007/978-90-481-2523-4_10. 
ISBN 978-90-481-2522-7.  59. ^ a b c d Yeh, Nai-Chang (2008). "A Perspective of Frontiers in Modern Condensed Matter Physics". AAPPS Bulletin 18 (2). Retrieved 31 March 2012.  60. ^ Privman, Vladimir. "Quantum Computing in Condensed Matter Systems". Clarkson University. Retrieved 31 March 2012.  61. ^ Aguado, M; Cirac, J. I. and Vidal, G. (2007). "Topology in quantum states. PEPS formalism and beyond". Journal of Physics: Conference Series 87: 012003. Bibcode:2007JPhCS..87a2003A. doi:10.1088/1742-6596/87/1/012003.  Further reading[edit]
05c2d39e519330cd
Saturday, March 20, 2010 Touching Women Today I want to share two useful tidbits about touch and women that I think should be better known, but aren't because people get embarrassed to talk about this stuff. The first is a pressure point to help menstrual cramps. Everyone knows about pinching next to the thumb to help with headaches. It doesn't take the pain away, but it dulls it and makes it more bearable. There is a spot that does about the same thing with menstrual cramps. It is located just above your foot, between your tendon and your ankle. To get it properly you want to use a "fat pinch". You get this by folding your index finger over, putting that on one side of the ankle, and pinching with the thumb on the other. So you get a pinch spread out over the soft flesh between the bone and Achilles tendon. I've offered this advice to multiple women who suffer menstrual cramps. None have ever heard it before, but it has invariably helped. The other is more *ahem* intimate. This would be a good time to stop reading if that bothers you. There are various parts of your body where you have a lot of exposed nerves. A light brushing motion over them will set up a tingling/itching sensation. A good place to experience this is the palm of your hand. Gently stroke towards the wrist, then pay attention to how your hand feels. Yes, that. And thinking about it brings it back. This happens anywhere where nerves get exposed. One place where that reliably happens is the inside of any joint. For instance the inside of your elbow. (Not as much is exposed there as the palm of the hand, but it is still exposed.) The larger the joint, the more nerves, the more this effect exists. The largest joint, of course, is the hip. And the corresponding sensitive area is the crease between leg and crotch on each side. This works on both genders. But for various reasons is more interesting for women... Enjoy. ;-) Wednesday, March 17, 2010 Address emotions in your forms I learned quite a few things at SXSW. Many are interesting but potentially useless, such as how unexpectedly interesting the reviews for Tuscan Whole Milk, 1 Gallon, 128 fl oz are. However the one that I found most fascinating, and is relevant to a lot of people, was from the panel that I was on. Kevin Hale, the CEO of Wufoo gave an example from their support form. In the process of trying to fill out a ticket you have the option of reporting your emotional state. Which can be anything from "Excited" to "Angry". This seems to be a very odd thing to do. They did this to see whether they could get some useful tracking data which could be used to more directly address their corporate goal of making users happy. They found they could. But, very interestingly, they had an unexpected benefit. People who were asked their emotional state proceeded to calm down, write less emotional tickets, and then the support calls went more smoothly. Asking about emotional state, which has absolutely no functional impact on the operation of the website, is a social lubricant of immense value in customer support. Does your website ask about people's emotional state? Should it? In what other ways do we address the technical interaction and forget about the emotions of the humans involved, to the detriment of everyone? Serendipity at SXSW This year I had the great fortune to be asked to be on a panel at SXSW. It was amazingly fun. However there was only one person I had ever met in person at the conference this year. So I was swimming in a sea of strangers. 
But apparently there were a lot of people that I was tangentially connected to in some way. I was commenting to one of my co-panelists, Victoria Ransom that a previous co-worker of mine looked somewhat similar to her, had a similar accent, and also had a Harvard MBA. Victoria correctly guessed the person I was talking about and had known her for longer than I had. I was at the Google booth, relating an anecdote about a PDF that I had had trouble reading on a website, when I realized that the person from Australia who had uploaded said PDF was standing right there. Another person had worked with the identical twin brother of Ian Siegel. Ian has been my boss for most of the last 7 years. (At 2 different companies.) One of the last people I met was a fellow Google employee whose brother in law was Mark-Jason Dominus. I've known Mark through the Perl community for about a decade. And these are just the people that I met and talked with long enough to find out how I was connected to them. Other useful takeaways? Dan Roam is worth reading. Emergen-C before bed helps prevent hangovers. Kick-Ass is hilarious, you should catch it when it comes out next month. And if you're in the USA then Hubble 3D is coming to an IMAX near you this Friday. You want to see it. I'll be taking my kids. And advice for anyone going to SXSW next year? Talk to everyone. If you're standing in line for a movie, talk to the random stranger behind you. Everyone is there to meet people. Most of the people there are interesting. You never know. Talking to the stranger behind you in line might lead to meeting an astronaut. (Yes, this happened to me.) Monday, March 8, 2010 Rogue Waves Today I ran across an interesting essay on our changing understanding of scurvy. As often happens when you learn history better, the simple narratives turn out to be wrong. And you get strange things where as science progressed it discovered a good cure for scurvy, they lost the cure, they proved that their understanding was wrong, then wound up unable to provide any protection from the disease, and only accidentally eventually learned the real cause. The question was asked about how much else science has wrong. This will be a shorter version of a cautionary tale about science getting things wrong. I thought of it because of a a hilarious comedy routine I saw today. (If you should stop reading here, do yourself a favor and watch that for 2 minutes. I guarantee laughter.) That is based on a major 1991 oil spill. There is no proof, but one possibility for the cause of that accident was a rogue wave. (Rogue waves are also called freak waves.) If so then, comedy notwithstanding, the ship owners could in no way be blamed for the ship falling apart. Because the best science of the day said that such waves were impossible. Here is some background on that. The details of ocean waves are very complex. However if you look at the ratio between the height of waves and the average height of waves around it you get something very close to a Rayleigh distribution, which is what would be predicted based on a Gaussian random model. And indeed if you were patient enough to sit somewhere in the ocean and record waves for a month, the odds are good that you'd find a nice fit with theory. There was a lot of evidence in support of this theory. It was accepted science. There were stories of bigger waves. Much bigger waves. There were strange disasters. But science discounted them all until New Years Day, 1995. 
That is when the Draupner platform recorded a wave that should only happen once in 10,000 years. Then in case there was any doubt that something odd was going on, later that year the RMS Queen Elizabeth II encountered another "impossible" wave. Remember what I said about a month of data providing a good fit to theory? Well Julian Wolfram carried out the same experiment for 4 years. He found that the model fit observations for all but 24 waves. About once every other month there was a wave that was bigger than theory predicted. A lot bigger. If you got one that was 3x the sea height in a 5 foot sea, that was weird but not a problem. If it happened in a 30 foot sea, you had a monster previously thought to be impossible. One that would hit with many times the force that any ship was built to withstand. A wall of water that could easily sink ships. Once the possibility was discovered, it was not hard to look through records of shipwrecks and damage to see that it had happened. When this was done it was quickly discovered that huge waves appeared to be much more common in areas where wind and wave travel opposite to an ocean current. This data had been littering insurance records and ship yards for decades. But until scientists saw direct proof that such large waves existed, it was discounted. Unfortunately there were soon reports such as The Bremen and the Caledonian Star of rogue waves that didn't fit this simple theory. Then satellite observations of the open ocean over 3 weeks found about a dozen deadly giants in the open ocean. There was proof that rogue waves could happen anywhere. Now the question of how rogue waves can form is an active research topic. Multiple possibilities are known, including things from reflections of wave focusing to the Nonlinear Schrödinger equation. While we know a lot more about them, we know we don't know the whole story. But now we know that we must design ships to handle this. This leads to the question of how bad a 90 foot rogue wave is. Well it turns out that typical storm waves exert about 6 tonnes of pressure per square meter. Ships were designed to handle 15 tonnes of pressure per square meter without damage, and perhaps twice that with denting, etc. But due to their size and shape, rogue waves can hit with about 100 tonnes of pressure per square meter. Are you surprised that a major oil tanker could see its front fall off? If you want to see what one looks like, see this video. Monday, March 1, 2010 Fun with Large Numbers I haven't been blogging much. In part that is because I've been using buzz instead. (Mostly to tell a joke a day.) However I've got a topic of interest to blog about this time. Namely large numbers. Be warned. If thinking about how big numbers like 9999 really are hurts your head, you may not want to read on. It isn't hard to find lots of interesting discussion of large numbers. See Who can name the bigger number? for an example. However when math people go for big numbers they tend to go for things like the Busy Beaver problem. However there are a lot of epistemological issues involved with that, for instance there is a school of mathematical philosophy called constructivism which denies that the Busy Beaver problem is well-formulated or that that sequence is well-defined. I may discuss mathematical philosophy at some future point, but that is definitely for another day. So I will stick to something simpler. 
Many years ago in sci.math we had a discussion that saw several of us attempt to produce the largest number we could following a few simple ground rules. The rules were that we could use the symbols 0 through 9, variables, functions (using f(x, y) notation), +, *, the logical operators & (and), ^ (or), ! (not), and => (implies). All numbers are non-negative integers. The goal was to use at most 100 non-whitespace characters and finish off with Z = (the biggest number we can put here). (A computer science person might note that line endings express syntactic intent and should be counted. We did not so count.)

A non-mathematician's first approach would likely be to write down Z = 999...9 for a 98 digit number. Of course 9^9^9^9 is much larger - you would need an 8 digit number just to write out how many digits it has. But unfortunately we have not defined exponentiation. However that is easily fixed:

p(n, 0) = 1
p(n, m+1) = n * p(n, m)

We now have used up 25 characters and have enough room to pile up a tower of exponents 6 deep. Of course you can do better than that. Anyone with a CS background will start looking for the Ackermann function.

A(0, n) = n+1
A(m+1, 0) = A(m, 1)
A(m+1, n+1) = A(m, A(m+1, n))

That's 49 characters. Incidentally there are many variants of the Ackermann function out there. This one is sometimes called the Ackermann–Péter function in the interest of pedantry. But it was actually first written down by Raphael M. Robinson. (A random note. When mathematicians define rapidly recursing functions they often deliberately pick ones with rules involving +1, -1. This is not done out of some desire to get a lot done with a little. It is done so that they can try to understand the pattern of recursion without being distracted by overly rapid initial growth.)

However the one thing that all variants on the Ackermann function share is an insane growth rate. Don't let the little +1s fool you - what really matters to growth is the pattern of recursion, and this function has that in spades. As it recurses into itself, its growth keeps on speeding up. Here is its growth pattern for small n. (The n+3/-3 meme makes the general form easier to recognize.)

A(1, n) = 2 + (n+3) - 3
A(2, n) = 2 * (n+3) - 3
A(3, n) = 2^(n+3) - 3
A(4, n) = 2^2^...^2 - 3 (the tower of 2s is n+3 high)

There is no straightforward way to describe A(5, n). Basically it takes the stacked exponent that came up with A(4, n) and iterates that operation n+3 times. Then subtract 3. Which is the starting point for the next term. And so on. By most people's standards, A(9, 9) would be a large number. We've got about 50 characters left to express something large with this function. :-)

It is worth noting that historically the importance of the Ackermann function was not just to make people's heads hurt, but to demonstrate that there are functions that can be expressed with recursion that grow too quickly to fall into a simpler class of primitive recursive functions. In CS terms you can't express the Ackermann function with just nested loops with variable iteration counts. You need a while loop, recursion, goto, or some other more flexible programming construct to generate it.

Of course with that many characters to work with, we can't be expected to be satisfied with the paltry Ackermann function. No, no, no. We're much more clever than that! But getting to our next entry takes some background. Let us forget the rules of the contest so far, and try to dream up a function that in some way generalizes the Ackermann function's approach to iteration.
Except we'll use more variables to express ever more intense levels of recursion. Let's use an unbounded number of variables. I will call the function D for Dream function, because we're just dreaming at this point. Let's give it these properties:

D(b, 0, ...) = b + 1
D(b, a_0 + 1, a_1, a_2, ..., a_n, 0, ...) = D(D(b, a_0, a_1, ..., a_n, 0, ...), a_0, a_1, ..., a_n, 0, ...)
D(b, 0, ..., 0, a_i + 1, a_{i+1}, a_{i+2}, ..., a_n, 0, ...) = D(b, b-1, b-1, ..., b-1, a_i, a_{i+1}, ..., a_n, 0, ...)

There is a reason for some of the odd details of this dream. You'll soon see b and b-1 come into things. But for now notice that the pattern with a_0 and a_1 is somewhat similar to m and n in the Ackermann function. Details differ, but recursive patterns similar to ones that crop up in the Ackermann function crop up here.

D(b, a_0, 0, ...) = b + a_0 + 1
D(b, a_0, 1, 0, ...) ≈ 3^(2^(a_0)) b

And if a_1 is 2, then you get something like a stacked tower of exponentials (going 2,3,2,3,... with some complex junk). And you continue on through various such growth patterns. But then we hit D(b, a_0, a_1, 1, 0, ...). That is kind of like calling the Ackermann function to decide how many times we will iterate calling the Ackermann function against itself. In the mathematical literature this process is called diagonalization. And it grows much, much faster than the Ackermann function. With each increment of a_2 we grow much faster. And each higher variable folds in on itself to speed up even more. The result is that we get a crazy hierarchy of insane growth functions that grow much, much, much faster. Don't bother thinking too hard about how much faster, our brains aren't wired to really appreciate it.

Now we've dreamed up an insanely fast function, but isn't it too bad that we need an unbounded number of variables to write this down? Well actually, if we are clever, we don't. Suppose that b is greater than a_0, a_1, ..., a_n. Then we can represent that whole set of variables with a single number, namely m = a_0 + a_1*b + ... + a_n*b^n. Our dream function can be recognized to be the result of calculating D(b, m+1) by subtracting one and then replacing the base b with D(b, m) (but leaving all of the coefficients alone). So this explains why I introduced b, and all of the details about the -1s in the dream function I wrote.

Now can we encode this using addition, multiplication, non-negative integers, functions and logic? With some minor trickiness we can write the base rewriting operation:

B(b, c, 0) = 0
i < b => B(b, c, i + j*b) = i + B(b, c, j) * c

Since all numbers are non-negative integers the second rule leads to an unambiguous result. The first and second rules can both apply when the third argument is 0, but that is OK since they lead to the same answer. And so far we've used 40 symbols (remember that => counts as 1 in our special rules). This leads us to be able to finish off defining our dream function with:

D(b, 0) = b + 1
D(b, n+1) = D(D(b, n), B(b, D(b, n), n))

This took another 42 characters. This leaves us 18 characters left, two of which have to be Z=. So we get Z = D(2, D(2, D(2, 9))). So our next entry is

B(b, c, 0) = 0
i < b => B(b, c, i + j*b) = i + B(b, c, j) * c
D(b, 0) = b + 1
D(b, n+1) = D(D(b, n), B(b, D(b, n), n))
Z = D(2, D(2, D(2, 9)))

We're nearly done. The only thing I know to improve is one minor tweak:

B(b, c, 0) = 0
i < b => B(b, c, i + j*b) = i + B(b, c, j) * c
T(b, 0) = b * b
T(b, n+1) = T(T(b, n), B(b, T(b, n), n))
Z = T(2, T(2, T(2, 9)))

Here I changed D into T, and made the 0 case be something that had some growth. This starts us off with the slowest growth, T(b, i), being around b^(2^i), and then everything else gets sped up from there.
This is a trivial improvement in overall growth - adding a couple more to the second parameter would be a much bigger win. But if you're looking for largest, every small bit helps. And modulo a minor reformatting and a slight change in the counting, this is where the conversation ended. Is this the end of our ability to discuss large numbers? Of course not. As impressive as the function that I provided may be, there are other functions that grow faster. For instance consider Goodstein's function. All of the growth patterns in the function that I described are realized there before you get to bb. In a very real sense the growth of that function is as far beyond the one that I described as the one that I described is beyond the Ackermann function. If anyone is still reading and wants to learn more about attempts by mathematicians to discuss large (but finite) numbers in a useful way, I recommend Large Numbers at MRROB.
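For anyone who wants to experiment with the definitions in this post, here is a minimal Python sketch. It is a direct, hedged transcription of the rules given above (the function names follow the post; nothing here is optimized), and of course only the tiniest arguments finish before the numbers and the recursion depth explode.

import sys

sys.setrecursionlimit(100000)  # these recursions get deep very quickly

def A(m, n):
    """Ackermann(-Peter) function: A(0,n)=n+1, A(m+1,0)=A(m,1), A(m+1,n+1)=A(m,A(m+1,n))."""
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

def B(b, c, n):
    """Rewrite n from base b to base c, leaving the digits (coefficients) alone."""
    if n == 0:
        return 0
    i, j = n % b, n // b          # n = i + j*b with i < b
    return i + B(b, c, j) * c

def D(b, n):
    """The encoded dream function: D(b,0)=b+1, D(b,n+1)=D(D(b,n), B(b, D(b,n), n))."""
    if n == 0:
        return b + 1
    prev = D(b, n - 1)
    return D(prev, B(b, prev, n - 1))

if __name__ == "__main__":
    print(A(2, 3))      # 9
    print(A(3, 3))      # 61, i.e. 2^(3+3) - 3
    print(B(2, 10, 6))  # 6 is 110 in base 2, so rewriting to base 10 gives 110
    print(D(2, 2))      # 6; anything much bigger is hopeless to evaluate directly

The tweaked T variant is the same D with the base case changed to b * b, so it is a one-line edit if you want to try it; just do not expect Z = T(2, T(2, T(2, 9))) to ever return.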
33183c2aae47fc57
Search This Blog Physics Book Face Off: Hyperspace Vs. The Elegant Universe I've always been interested in physics. It's the subject that tries to answer the ultimate question of how the universe, and everything in it, works at its most fundamental level. I took a few physics courses in college, but I started to shy away from the subject after taking modern physics and having it go way over my head. I had a hard time grasping the concepts at the time, but recently, like with mathematics, I've been thinking about getting back into studying it more. To kick off that activity, I started with two popular physics books that may be a little outdated, but should still have plenty of relevant, intriguing material on what has happened in the field post-Einstein. Both books are by prominent string theorists. The Elegant Universe was written in 1999 by Brian Greene, a professor of physics and mathematics at Columbia University. Hyperspace was written five years earlier in 1994 by Michio Kaku, a professor in theoretical physics at The City University of New York. The Elegant Universe front coverVS.Hyperspace front cover From what I've read, there's a huge debate going on in theoretical physics right now about whether or not string theory is the future of how we will understand the universe, or if it's a dead end that will never produce meaningful predictions about how the universe works. I'm certainly not qualified to make any judgements about this debate, but I still believe that the investigations of string theory have merit because the exploration of ideas has value in and of itself. String theory has also made significant contributions to both mathematics and physics by developing new mathematical constructs and bringing various far-flung ideas between the two subjects together under one framework. That, however, is not the point of this book. The point is to give the reader a basic understanding of what string theory is about and how it affects our idea of how space-time works. Brian Greene is an excellent writer, and he does a great job of conveying his ideas in a way that non-theoretical physicists can understand. He starts out with a detailed description of Einstein's theories of Special and General Relativity and how they change our concept of the flow of time and the structure of space. Then he leaves the expanse of space to describe the main features of the very small particles of quantum mechanics. He wraps up this introductory material by explaining how these two sides of the universe—the very large and the very small—are incompatible when viewed within the confines of relativity and quantum mechanics. The two fields even come in direct conflict when trying to calculate what happens inside black holes or during the Big Bang. This conflict is what string theory attempts to resolve. The rest of the book describes what sting theory is, how it can combine relativity and quantum mechanics into one overarching Theory of Everything, and goes into a number of issues that the theory must address before it can be considered valid. Towards the end of the book, Greene gets caught up in generalities and doesn’t do as good of a job relating the physics he's describing to everyday reality. He talks about strings wrapping around curled up dimensions and branes covering tears in the fabric of space. 
It's very hard to visualize what he's talking about and what implications it has for the behavior of space-time, but maybe the vagueness betrays the fact that no one really understands what's going on here, yet. His other explanations are quite good, and reading the book generated tons of questions in my mind about how the concepts he was describing could be extended. For example, when he was describing how it's known that gravity travels at the speed of light, he goes through a thought experiment about what would happen to the planets if the sun suddenly exploded. The gist of the explanation is that the planets would not immediately leave their orbits because it would take time for the change in gravity to reach each planet. As I was reading, I wondered what would happen if instead of exploding, the sun disappeared entirely, just winking out of existence (never mind how that might physically happen). Would it still take time for the change in gravity to reach the planets? At first I struggled with this idea because it seemed to me that in the first case of the sun exploding, the change in gravity would move along with the remnants of the sun, so the fact that gravity would be limited to the speed of light wasn't surprising. The matter that was traveling outward from the blast would be limited by the speed of light, and the force of gravity would change based on what happened to the matter as it sped outward. What was surprising was that, according to Einstein, even if the sun just disappeared, the planets would still take time to notice the absence of the sun's gravity because the sun was warping the space around it and the planets were following the curve of space in their orbits. The speed at which space would flatten out in the absence of the sun would happen at the speed of light. Another great part of the book was the discussion of how to visualize higher dimensions. The concept of extra dimensions beyond three is especially hard to grasp because we experience the world in three dimensions, and we have no reference for what a fourth dimension (let alone a tenth dimension) would be like. One way to think about this—the way the book describes—is to imagine how a being in a one dimensional world would see a two dimensional object, and then work your way up to higher dimensions. Another way, that the book doesn't go into, is to think about how we already see our three dimensional world as a two dimensional projection. Our eyes actually see in 2D, and we build 3D models of objects in our minds. Similarly, we can project 4D objects into 3D spaces with computer simulations to try to get a better idea of what they are. One common example is the tesseract, or four dimensional cube. Here is a video showing what it looks like to rotate and unwrap a tesseract: Even after watching the video, it's still really hard to visualize what's going on, but every way of trying to imagine higher dimensions adds a bit more to our understanding. Understanding higher dimensions is a big part of string theory because in higher dimensions there is more room to unite all of the forces and fundamental particles into one theory. Greene does a great job describing the issues and implications with string theory, as well as much of the physics history that has led up to it. It was a great read that gave me a much better understanding of what string theory is all about, and I'm looking forward to reading more of his books. 
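Since the tesseract came up above as a way to build intuition for extra dimensions, here is a small sketch of the same idea in code: generate the 16 vertices of a 4D hypercube, rotate them in a plane that involves the fourth axis, and perspective-project the result down to 3D, in the same way our eyes project 3D down to 2D. This is only an illustrative sketch assuming NumPy is available; the function names and the choice of rotation plane are mine, not anything from the books.

import numpy as np

# The 16 vertices of a tesseract: every combination of +/-1 in four coordinates.
vertices = np.array([[x, y, z, w]
                     for x in (-1, 1) for y in (-1, 1)
                     for z in (-1, 1) for w in (-1, 1)], dtype=float)

def rotate_xw(points, theta):
    """Rotate 4D points by angle theta in the x-w plane (a rotation with no 3D analogue)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.eye(4)
    rot[0, 0], rot[0, 3] = c, -s
    rot[3, 0], rot[3, 3] = s, c
    return points @ rot.T

def project_to_3d(points, viewer_distance=3.0):
    """Perspective projection from 4D to 3D: scale by how far each point sits along w."""
    scale = viewer_distance / (viewer_distance - points[:, 3])
    return points[:, :3] * scale[:, None]

# One frame of the rotating tesseract's 3D "shadow"; sweep theta to animate it.
shadow = project_to_3d(rotate_xw(vertices, np.pi / 6))
print(shadow.round(2))

Sweeping theta and drawing the edges between neighboring vertices gives exactly the kind of morphing shape the video shows: the 3D shadow keeps changing even though the 4D object itself is rigid.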
Hyperspace is another book about string theory, although the focus here is much more on higher dimensions than the details of vibrating strings and how they interact. Michio Kaku also spends a lot of time on the history of physics, and the older I get the more I appreciate the context that history provides. It will also be interesting to see how his ideas change in later books because of the impact of new technologies that have come into play since this book was written. He talks a fair amount about the possibility of the universe ending in the Big Crunch, but since the Hubble Telescope has been in service and our measurements of the cosmic microwave background (CMB) radiation have gotten better, we have pretty much ruled that out as a possibility. The LHC has also added many new discoveries to physics research and our understanding of quantum mechanics that would impact string theory as well. Despite its age, this book was a fascinating read. Kaku explores so many interesting topics, including time travel, the tenth dimension, worm holes, and the future of our civilization. Like Greene, he attempts to describe how to visualize higher dimensions, and the additional perspective is helpful. He has an easy conversational style that clearly conveys his ideas, and I especially enjoyed his discussion of how civilizations are predicted to develop: A Type I civilization is one that controls the energy resources of an entire planet. This civilization can control the weather, prevent earthquakes, mine deep in the earth’s crust, and harvest the oceans. This civilization has already completed the exploration of its solar system. A Type II civilization is one that controls the power of the sun itself. This does not mean passively harnessing solar energy; this civilization mines the sun. The energy needs of this civilization are so large that it directly consumes the power of the sun to drive its machines. This civilization will begin the colonization of local star systems. A Type III civilization is one that controls the power of an entire galaxy. For a power source, it harnesses the power of billions of star systems. It has probably mastered Einstein’s equations and can manipulate space-time at will. We are currently a Type 0 civilization, and when put in this context, it's pretty clear that the most important advances we can make right now are in energy production. Making progress in new computing devices, robotics, and transportation are still important, of course, but we're not going to get anywhere until we dramatically increase the amount of energy available to us. I also wonder if the process of advancement to a Type I civilization would go faster if more resources were allocated to it. We chronically underfund space exploration and basic research. However, Kaku makes an interesting argument that technological advancement should not outpace social development or else we'll be in critical danger of self-annihilation. We still need to figure out how to function productively as one world-wide civilization instead of a collection of nation-states in constant conflict. We will continue to be on the edge of destruction until we solve the political, social, and environmental problems that we're dealing with. Technology that's too advanced for our social structures will only make things worse. 
Putting our sociopolitical issues aside and turning back to string theory, Kaku goes beyond the normal assertions that it has the potential to unify our separate models of the universe, from the Standard Model to the four fundamental forces. He thinks it also has the potential to unify much of the separate fields of mathematics: One consequence of this formulation is that a physical principle that unites many smaller physical theories must automatically unite many seemingly unrelated branches of mathematics. This is precisely what string theory accomplishes. In fact, of all physical theories, string theory unites by far the largest number of branches of mathematics into a single coherent picture. Perhaps one of the by-products of the physicists’ quest for unification will be the unification of mathematics as well. This seems like a pivotal accomplishment for our civilization, when and if it occurs. The book is packed with high-flying ideas like this that make you think and wonder about what is possible in our future. Kaku strikes a good balance between explaining the history of physics and its future potential. Like The Elegant Universe, I thoroughly enjoyed reading this book, and I look forward to more of Michio Kaku's books. Wish I Would Have Read These Sooner  It would have been very useful to have read books like these in college while taking difficult math and physics courses. It would have helped give context to the things I was learning, and motivated me to pursue a deeper understanding of things I struggled with. I remember having a really difficult time conceptualizing things like black body radiation and the Schrödinger equation at the time, and Kaku's example of students not understanding the implications of an exam problem with an intriguing application sounded eerily familiar to me:  In the autumn of 1985, on the final exam in a course on general relativity given at Caltech, Thorne gave the worm-hole solution to the students without telling them what it was, and they were asked to deduce its physical properties. (Most students gave detailed mathematical analyses of the solution, but they failed to grasp that they were looking at a solution that permitted time travel.) If I had read books like these in college, I would have had a much better grasp of the general concepts of physics, and some of the things that flew over my head may have found a better place to stick instead. I'm sure I would have been much more motivated to understand the complex equations involved if I would have known that books like these existed (and read them). I'm definitely motivated now. Every aspiring physics major, and even hobbyists, should take a look at these books to see what wonders and paradoxes the universe holds.
Wednesday, May 25, 2016

On the Quantum God of the Gaps

I recently listened to an episode of the Atheistically Speaking podcast on the subject of Einstein's and Gödel's belief (or lack thereof) in God. The podcast overall is quite enjoyable and going strong after more than 200 episodes; I've only started listening in the past few weeks and have been alternating between listening to new and archived episodes. At a point in this particular show, the host (Thomas) and his guest (Kurt) quite literally say: "Hey, if there's a physicist out there listening to this, let us know what you think..." So, this post is the result of taking them at their word, given that so much of the show had me shaking my head throughout. I should also mention that Thomas was kind enough to invite me to write to him directly, and the following is a slightly polished version of the e-mail I sent him. Here we go:

Quantum Mechanics (QM) is part of a core of courses that all physicists take, no matter what their field. Obviously, people who specialize in QM will get into it a lot deeper than those who don't. There are some complicated areas of research going on right now, and there's plenty of debate going on--but the "basics", if we could call them that, are more than enough to refute just about everything Kurt said on episode 242.

First, the easy point: if you're going to take the approach "some smart guy believes X, so X is true," you're already in trouble because, as Bertrand Russell said, if you rely on an authority for your argument, there will always be other authorities who disagree. So if we were to tally the John Polkinghornes of the world as evidence that there could be a god hiding somewhere in QM--or free will, or the soul, or whatever--then we would have to be intellectually honest and tally all the other guys who say he's full of shit as evidence that there's no such thing, which is everybody else. People in this latter camp include Sean Carroll, Lawrence Krauss, Richard Feynman, Tim Maudlin, Steven Weinberg, and many others. Polkinghorne is a theologian, so he's intellectually compromised by definition. This is not an ad hominem: this is a simple statement of the fact that he can't be relied upon to be intellectually honest. If he were, he wouldn't be a theologian.

Now, on to actual QM: Just as in high-school algebra we had some complicated equation with numbers and letters and we were asked to solve for "x", so too in QM there's an equation to solve, only mere algebra won't quite do the trick. This is the Schrödinger equation, which looks something like this: \[ i \hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi}{\partial x^2} + V\Psi. \] It may look complicated, but the idea is the same as in high-school algebra: solve for the wave function, \(\Psi\) (Greek letter "Psi"). After you do that, what you will get is an equation for \(\Psi\) that tells you what its value is in terms of x (position) and t (plain old time), plus some other physical constants and the specifics of your system (these are included in the V term above). Just as "x" was uniquely determined by all the other variables in high-school algebra, so too is \(\Psi\) determined--fully, for all time--by all the other squiggles in the Schrödinger equation. This is a second-order partial differential equation and, with the right input, is fully deterministic. The subtlety arises in the fact that \(\Psi\) is not an ordinary number but a probability amplitude--its squared magnitude gives a probability density--and that's where laymen get mixed up.
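To make the "solve for \(\Psi\)" point concrete, here is a minimal numerical sketch. It is not from the original post or e-mail; it solves the time-independent version of the equation (standing waves of a particle in a box), and the grid size, box width, and natural units ħ = m = 1 are all illustrative assumptions on my part.

```python
# A minimal sketch (illustrative, not from the original post): solve the
# time-independent Schrödinger equation for a particle in a box by finite
# differences. Units hbar = m = 1 and the grid parameters are arbitrary choices.
import numpy as np

hbar, m = 1.0, 1.0            # natural units for the sketch
N, L = 500, 1.0               # number of grid points, box width
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

# Kinetic term: -hbar^2/(2m) d^2/dx^2 as a tridiagonal matrix
main = np.full(N, -2.0)
off = np.full(N - 1, 1.0)
T = -(hbar**2) / (2 * m * dx**2) * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))

V = np.zeros(N)               # V = 0 inside the box; the walls come from the grid edges
H = T + np.diag(V)

# Solving H psi = E psi gives the allowed energies and wave functions
energies, states = np.linalg.eigh(H)
psi0 = states[:, 0]
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)   # normalize so |psi|^2 integrates to 1

print(energies[:3])           # lowest levels, close to n^2 * pi^2 / 2 in these units
```

Given the same inputs, the same \(\Psi\) and the same energies come out every time; the probabilistic part only enters when \(|\Psi|^2\) is read as a probability density.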
Here is where an analogy will help and, if you were to remember any part of this post as particularly useful, this would be it. Imagine you have a fair die that you can throw. For any given throw, it's pretty hard to know what number is going to come up. Naively, one would think that the die coughs up a random number between 1 and 6 each time. However, if you throw the die many times, eventually you'll see a pattern: each number comes up roughly 1/6 of the time. That 1/6 is determined by the geometry of the die. No matter what individual result you look at--1, 4, 2, 5, 3, etc.--the 1/6 is always the same for every throw. That 1/6, roughly, plays the role of \(\Psi\). The geometry of the die is everything else in the Schrödinger equation. So even though any given throw is undetermined, the overall 1/6 is fixed for as long as the geometry is fixed.

Every experiment ever done confirms that the above equation, or some version of it, is true. If Kurt (or Polkinghorne, or whoever) wants to say that god is hiding somewhere in \(\Psi\), then they're making a claim that god can do no better than to show up as what probability theory would predict anyway without him, which seems like a pretty lame god to me. As if that weren't enough, there's an entire field of research on hidden variable theory--the idea that there is a more fundamental, fully deterministic physics underneath the probabilistic character of the wave function. So the statement by Kurt, that it's just "a brute fact that QM is this way," is not obviously true either.

Anyway, on to Gödel: First, the necessary caveat: I'm a physicist, not a mathematician. With that said, Gödel's incompleteness theorems state that, for any sufficiently strong mathematical system based on arithmetic and axioms, one of two outcomes is inevitable: 1) there will be some statements that are true but unprovable within the system, or 2) the system will be inconsistent, proving statements that contradict each other.

As an example, consider the sentence "This statement is true." Each word there is well-defined and fits the usual syntax to create a meaningful idea. You can think of each one of these words as an axiom (a basic assumption that doesn't need to be proven), and the syntax is the logic that is used to put them together and derive a theorem, which is a conclusion--the meaning of the statement itself. However, we can follow the same axioms and rules to construct a statement like this: "This statement is false." Each word there is well-defined, and all the words are arranged properly, but we can't decide on the meaning of the statement because the content of the statement refutes its logical construction!

Now, even though this is certainly important for the philosophy of mathematics, there is no way that you can get from it to the claim made by Kurt that some things are, and always will be, unknowable. That's a non sequitur at worst, and a trivial argument from ignorance at best. A key distinction to keep in mind is that unprovable does not mean unknowable: remember, item 1) of the incompleteness theorems says "true, but not provable". In a way, this is part of what theoretical physics is about: there are some things we know are true (thanks to experiment) and we work to explain them--that is, to prove why it must be the case that they are true. In any case, when people speak of god, they always mess up and fall into contradictions in the axioms themselves. Their conclusions are unsound because their premises are gibberish and the arguments never get off the ground--there are no theorems to be derived at all!
This happens to all definitions of god that include omnipotence, for example. Once they include that as a basic assumption, the rest is white noise. Thomas covered many other areas of disagreement with Kurt on the show itself, and some commenters on the episode site have done so as well, so I'll leave it here for now. As a conclusion, I would say that people aren't smuggling contraband into QM (god, free will, the soul, consciousness) as much as they're smuggling it into their misunderstanding of QM. Sometimes, these people even have PhDs and have published peer-reviewed papers. Bad ideas in science are weeded out eventually, but someone has to actually get on the ground and do it. This must be done by the experts, but it can be a slow, thankless process, and so most of them just stay out of it and focus on their research; only a few jump in and get down and dirty. In the meantime, one has to be patient and wait it out.

Friday, May 13, 2016

I’m still alive!

So, my last post was a few months ago, which is a surprise to me, since I have felt as if I’ve only been away for a couple of weeks. Well, that’s what diving into General Relativity will do to you, I guess. Anyway, the point is I’ve been busy, but I’ve constantly thought of resuming regular posting on physics and all kinds of other things here. I’ve finally felt guilty enough to at least post this and, as I write, I don’t really know what to say other than that I’ll do my best to neglect this blog a little less. (I’ve managed a handful of posts in my other blog, in Spanish, but even posting over there is too scarce for my taste as well.)

Most of the CUCEI campus is being renovated, which means lots of dust all over the place, but the renovations will be worth it, as far as I can tell. For example, in order to get to the graduate physics building, I have to get through this: The tiny gray structure behind the (brand new) yellow building is Building Z, where I and other graduate physics students dwell on campus. Other parts of CUCEI are quite lovely, and the students are quite peaceful and dedicated to their studies. Here are a few samples:

Also, I’ve made a somewhat successful effort to exercise. I jog for a few minutes on the track that’s a few minutes’ walk from Building Z. Sometimes I can get a short run in on three weekdays, though mostly I get one or two and another one on the weekend. This track was just renovated as well, so I’ll be running on a nice new track starting next week.

Anyway, as far as actual physics goes, I’m almost done with a “first pass” over Relativity. I’ve used the textbooks by Lambourne, Schutz, and Carroll. Lambourne is surprisingly easy to digest, though it may be too lenient for some people’s taste. Schutz is more like the usual modern approach to Relativity at the undergrad level, and Carroll is much more advanced and directed explicitly at graduates. I’ve been assigned to write a short essay explaining de Sitter space for my Relativity course, and the text is pretty much ready (though in Spanish). If all goes well, I’ll resume posting here regularly quite soon, and I’ll use that essay as a crutch to get started. After that, I hope to resume topics on Classical Mechanics, and then work my way into other core subjects of graduate physics. Perhaps I’ll do asides on mathematical concepts as well, and many other topics that interest me (politics, religion, books, philosophy). This is a blog, after all, and I’m sort of making it up as I go along.
In an ideal world with plenty of time and no procrastination, I would have a physics/math post and an unrelated post each week. I know I’m supposed to focus on my studies, but I just love writing and hate leaving it for (possibly several years) later.
Entanglement (physics)

From Citizendium, the Citizens' Compendium

(CC) Photo: Mike Seyfang. Photonics is widely used when creating entanglement.

There are three interrelated meanings of the word entanglement in physics. They are listed below and then discussed, both separately and in relation to each other.

• A combination of empirical facts, observed or only hypothetical, incompatible with the conjunction of three fundamental assumptions about nature, called "counterfactual definiteness", "relativistic local causality" and "no-conspiracy" (see below), but compatible with the conjunction of the last two of them ("relativistic local causality" and "no-conspiracy"). Such a combination will be called "empirical entanglement" (which is not a standard terminology[1]).

• A prediction of the quantum theory stating that the empirical entanglement must occur in appropriate physical experiments (called "quantum entanglement").

• In quantum theory there is a technical notion of "entangled state".

Entanglement cannot be reduced to shared randomness, and does not imply faster-than-light communication. Due to quantum entanglement, quantum information is different from classical information, which leads to quantum communication, quantum games, quantum cryptography and quantum computation.

Empirical entanglement

Some people understand it easily, others find it difficult and confusing. It is easy, since no physical or mathematical prerequisites are needed. Nothing like Newton's laws, the Schrödinger equation, conservation laws, nor even particles or waves. Nothing like differentiation or integration, nor even linear equations. It is difficult and confusing for the very same reason! It is highly abstract. Many people feel uncomfortable in such a vacuum of concepts and rush to return to the particles and waves.

The framework, and local causality

The following concepts are essential here.

• A physical apparatus that has a switch and several lights. The switch can be set to one of several possible positions. A little after that, the apparatus flashes one of its lights.

• "Local causality": widely separated apparata are incapable of signaling to each other.

Otherwise the apparata are not restricted; they may use all kinds of physical phenomena. In particular, they may receive any kind of information that reaches them. We treat each apparatus as a black box: the switch position is its input, the light flashed is its output; we need not ask about its internal structure. However, not knowing what is inside the black boxes, can we know that they do not signal to each other? There are two approaches, non-relativistic ("loose") and relativistic ("strict"). The loose approach: we open the black boxes, look, see nothing like mobile phones, and rely on our knowledge and intuition. The strict approach: we do not open the black boxes. Rather, we place them, say, 360,000 km apart (the least Earth-Moon distance) and restrict the experiment to a time interval of, say, 1 sec. Relativity theory states that they cannot signal to each other, for a good reason: a faster-than-light communication in one inertial reference frame would be a backwards-in-time communication in another inertial reference frame! Below, the strict approach is used (unless explicitly stated otherwise). Thus, the apparata are not restricted. They may contain mobile phones or whatever.
They may interact with any external equipment, be it cell sites or whatever.

Falsifiability, and the no-conspiracy assumption

A claim is called falsifiable (or refutable) if it has observable implications. If some of these implications contradict some observed facts then the claim is falsified (refuted). Otherwise it is corroborated. The relativistic local causality was never falsified; that is, a faster-than-light signaling was never observed. Does it mean that local causality is corroborated? This question is more intricate than it may seem.

Let A, B be two widely separated apparata, xA the input (the switch position) of A, and yB the output (the light flashed) of B. (For now we do not need yA and xB.) Local causality claims that xA has no influence on yB. An experiment consisting of n trials is described by xA(i), yB(i) for i = 1,2,...,n. Imagine that n = 4 and xA(1) = 1, xA(2) = 2, xA(3) = 1, xA(4) = 2, yB(1) = 1, yB(2) = 2, yB(3) = 1, yB(4) = 2. The data suggest that xA influences yB, but do not prove it. Two alternative explanations are possible:

• the apparatus B chooses yB at random (say, tossing a coin); the four observed equalities yB(i) = xA(i) are just a coincidence (of probability 1/16);

• the apparatus B alternates 1 and 2, that is, yB(i) = 1 for all odd i but yB(i) = 2 for all even i.

Consider a more thorough experiment: n = 1000, and the xA(i) are chosen at random, say, by tossing a coin. Imagine that yB(i) = xA(i) for all i = 1,2,...,n. The influence of xA on yB is shown very convincingly! But still, an alternative explanation is possible. For choosing xA, the coin must be tossed within the time interval scheduled for the trial, since otherwise a slower-than-light signal can transmit the result to the apparatus B before the end of the trial. However, is the result really unpredictable in principle (not just in practice)? Not necessarily so. Moreover, according to classical mechanics, the future is uniquely determined by the past! In particular, the result of the coin toss exists in the past as a complicated function of a huge number of coordinates and momenta of micro particles. It is logically possible, but quite unbelievable, that the future result of the coin toss is somehow spontaneously singled out in the microscopic chaos and transmitted to the apparatus B in order to influence yB. The no-conspiracy assumption claims that such exotic scenarios may be safely neglected.

The conjunction of the two assumptions, relativistic local causality and no-conspiracy, is falsifiable, but was never falsified; thus, both assumptions are corroborated. Below, no-conspiracy is always assumed (unless explicitly stated otherwise).

Counterfactual definiteness

In this section a single apparatus is considered. A trial is described by a pair (x,y) where x is the input (the switch position) and y is the output (the light flashed). Is y a function of x? We may repeat the trial with the same x and get a different y (especially if the apparatus tosses a coin). We can set the switch to x again, but we cannot set all molecules to the same microstate. Still, we may try to imagine the past changed, asking a counterfactual question:[2]

• Which outcome would the experimenter have received (in the same trial) if he/she had set the switch to another position?

It is meant that only the input x is changed in the past, nothing else. The question may seem futile, since an answer cannot be verified empirically. Strangely enough, the question will appear to be very useful in the next section.
Classical physics can interpret the question as a change of external forces acting on a mechanical system of a large number of microscopic particles. It is unfeasible to calculate the answer, but anyway, the question makes sense, and the answer exists in principle: y = f(x) for some function f : X → Y, where X is the finite set of all possible inputs, and Y is the finite set of all possible outputs. Existence of this function f is called "counterfactual definiteness". Repeating the experiment we get y(i) = fi(x(i)) for i = 1,2,... Each time a new function fi appears; thus x(i) = x(j) does not imply y(i) = y(j). In the case of a single apparatus, counterfactual definiteness is not falsifiable, that is, has no observable implications. Surprisingly, for two (and more) apparata the situation changes dramatically.

Local causality and counterfactual definiteness

For two apparata, A and B, an experiment is described by two pairs, (xA,yA) and (xB,yB) or, equivalently, by a combined pair ((xA,xB), (yA,yB)). Counterfactual definiteness alone (without local causality) takes the form (yA,yB) = f(xA,xB) or, equivalently, yA = fA(xA,xB), yB = fB(xA,xB). Assume in addition that A and B are widely separated and the local causality applies. Then xA cannot influence yB, and xB cannot influence yA, therefore yA = fA(xA) and yB = fB(xB). These fA, fB are one-time functions; another trial may involve different functions.

An alternative language is logically equivalent, but makes the presentation more vivid. Imagine an experimenter, Alice, near the apparatus A, and another experimenter, Bob, near the apparatus B. Alice is given some input xA and must provide an output yA. The same holds for Bob, xB and yB. Once the inputs are received, no communication is permitted between Alice and Bob until the outputs are provided. The input xA is an element of a prescribed finite set XA (not necessarily a number); the same holds for yA and YA, xB and XB, yB and YB. It may seem that the apparata A, B are of no use for Alice and Bob. Significantly, this is an illusion.

Example

The simplest example of empirical entanglement is presented here. First, its idea is explained informally. Alice and Bob pretend that they know a 2×2 matrix (with entries a, b in the first row and c, d in the second) consisting of numbers 0 and 1 only, satisfying four conditions: a = b,   c = d,   a = c,   but   b ≠ d. Surely they lie; these four conditions are evidently incompatible. Nevertheless Alice commits herself to show on request any row of the matrix, and Bob commits himself to show on request any column. We expect the lie to manifest itself on the intersection of the row and the column (not always but sometimes). However, Alice and Bob promise to always agree on the intersection!

More formally, xA=1 requests from Alice the first row, xA=2 the second; in every case yA must be either (0,0) or (1,1). From Bob, xB=1 requests the first column, in which case yB must be (0,0) or (1,1); and xB=2 requests the second column, in which case yB must be (0,1) or (1,0). The agreement on the intersection means that, for example, if xA=2 and xB=1 then the first element of the row yA must be equal to the second element of the column yB.

Without special apparata (A and B), Alice and Bob surely cannot fulfill their promise. Can the apparata help? This crucial question is postponed to the section "Quantum entanglement". Here we consider a different question: is it logically possible, under given assumptions, that Alice and Bob fulfill their promise?

Under all three assumptions (counterfactual definiteness, local causality and no-conspiracy) we have yA = fA(xA) and yB = fB(xB) for some functions fA, fB.
(These functions may change from one trial to another.) Specifically, fA(1) and fA(2), being two rows, form a 2×2 matrix satisfying the conditions a=b, c=d. Also fB(1) and fB(2), being two columns, form a 2×2 matrix satisfying the conditions a=c, b≠d. These two matrices necessarily differ in at least one of the four elements (since the four conditions are incompatible). Therefore it can happen that Alice and Bob disagree on the intersection, and moreover, it happens with probability at least 0.25. In the long run, Alice and Bob cannot fulfill their promise.

Waiving the counterfactual definiteness (but retaining local causality and no-conspiracy) we get the opposite result: Alice and Bob can fulfill their promise. Here is how. Given xA and xB, there are two allowed yA and two allowed yB, thus 4 allowed combinations (yA, yB). Two of them agree on the intersection of the row and the column; the other two disagree. Imagine that the apparata A, B choose at random (with equal probabilities 0.5, 0.5) one of the two combinations (yA, yB) that agree on the intersection. For example, given xA=2 and xB=1, we get either yA = (0,0) and yB = (0,0), or yA = (1,1) and yB = (1,1). This situation is compatible with local causality, since yB gives no information about xA; also yA gives no information about xB. For example, given xA=2 and xB=1, we get either yB = (0,0) or yB = (1,1), with probabilities 0.5, 0.5; exactly the same holds given xA=1 and xB=1. Thus, empirical entanglement is logically possible. The question of its existence in nature is addressed in the section "Quantum entanglement".

Entanglement is not just shared randomness

Widely separated apparata, unable to signal to each other, can be correlated. Correlations are established routinely by communication. For example, Alice and Bob, reading their copies of a newspaper, learn the result of yesterday's lottery drawing. This is called shared randomness. Likewise, the apparata A, B can share randomness by receiving signals from some external common source. However, shared randomness obeys the three assumptions (counterfactual definiteness, local causality and no-conspiracy) and therefore cannot produce entanglement. In other words, entanglement as a resource is substantially stronger than shared randomness.

Quantum entanglement

Classical bounds and quantum bounds

Classical physics obeys counterfactual definiteness and therefore negates entanglement. Classical apparata A, B cannot help Alice and Bob to always win (that is, agree on the intersection). What about quantum apparata? The answer is quite unexpected.

First, quantum apparata cannot ensure that Alice and Bob win always. Moreover, the winning probability does not exceed cos²(π/8) = (2+√2)/4 ≈ 0.854, no matter which quantum apparata are used.

Second, there exist quantum apparata that ensure a winning probability higher than 3/4 = 0.75. This is a manifestation of entanglement, since under the three classical assumptions (counterfactual definiteness, local causality and no-conspiracy) the winning probability cannot exceed 3/4 (the classical bound); a brute-force check of this classical bound is sketched after the magic-square conditions below. But moreover, ideal quantum apparata can reach the winning probability cos²(π/8) ≈ 0.854 (the quantum bound), and non-ideal quantum apparata can get arbitrarily close to this bound.

Third, a modification of the game, called the "magic square game", makes it possible to win always. To this end we replace 2×2 matrices with 3×3 matrices, still of numbers 0 and 1 only, with the following conditions:

• the parity of each row is even,

• the parity of each column is odd.
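As an aside not present in the original article, the classical bound of 3/4 for the 2×2 row/column game above can be checked by brute force over deterministic strategies. The sketch below assumes the four input pairs (xA, xB) occur equally often; the encoding of a strategy as four bits is an illustrative choice.

```python
# Brute-force check (illustrative, not from the article) of the classical bound 3/4
# for the 2x2 row/column game: rows must have equal entries, column 1 must have
# equal entries, column 2 must have unequal entries, and the players must agree
# on the intersection. A deterministic strategy is four bits:
#   a1, a2 : the repeated bit of Alice's row 1 and row 2
#   b1     : the repeated bit of Bob's column 1
#   s      : the top bit of Bob's column 2 (its bottom bit is then 1 - s)
from itertools import product

best = 0.0
for a1, a2, b1, s in product((0, 1), repeat=4):
    wins = 0
    wins += (a1 == b1)       # xA=1, xB=1: intersection is (row 1, column 1)
    wins += (a2 == b1)       # xA=2, xB=1: intersection is (row 2, column 1)
    wins += (a1 == s)        # xA=1, xB=2: intersection is (row 1, column 2)
    wins += (a2 == 1 - s)    # xA=2, xB=2: intersection is (row 2, column 2)
    best = max(best, wins / 4)

print(best)  # 0.75 -- no deterministic strategy wins more than 3 of the 4 input pairs
```

Mixing deterministic strategies with shared randomness cannot do better than the best deterministic strategy, which is why 3/4 is the classical bound; exceeding it, up to the quantum bound cos²(π/8) quoted above, requires entangled apparata.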
For the magic square game, the classical bound is equal to 8/9; the quantum bound is equal to 1.

Experimental status

Many amazing entanglement-related predictions of the quantum theory were tested in ingenious experiments using high-tech equipment. All tested predictions are confirmed. Still, each one of these experiments has a "loophole", that is, admits alternative, entanglement-free explanations. Such explanations are highly contrived. They would be rejected as unbelievable in a routine development of science. However, the entanglement problem is exceptional: fundamental properties of nature are at stake! Entanglement is also unbelievable for many people. Thus, the problem is still open; finer experiments will follow until an unambiguous result is achieved.

Communication channels

According to the quantum theory, quantum objects manifest themselves via their influence on classical objects (more exactly, on classically described degrees of freedom). Every object admits a quantum description, but some objects may be described classically for all practical purposes, since their thermal fluctuations hide their quantal properties. These are called classical objects. Macroscopic bodies (more exactly, their coordinates) under usual conditions are classical. Digital information in computers is also classical.

A communication channel may be thought of as a chain of physical objects and physical interactions between adjacent objects. If all objects in the chain are quantal, the channel is called quantal. If at least one object in the chain is classical, the channel is called classical. For example, newspapers, television, mobile phones and the Internet implement only classical channels. Quantum channels are usually implemented by sending a particle (photon, electron) or another microscopic object (ion) from a nonclassical source to a nonclassical detector through a low-noise medium.

Classical communication (that is, communication through a classical channel) can create shared randomness, but cannot create entanglement. Moreover, entanglement creation is impossible when Alice's apparatus A is connected to a source S by a quantum channel but Bob's apparatus B is connected to S by a classical channel. Here is an explanation. The classical channel S–B is a chain containing a classical object C. By assumption, no chain of interactions connects A and B (via S, or otherwise) bypassing C. Therefore A and B are conditionally independent given a possible state c of C. The response yA of A to xA given c need not be a function gA(c,xA) of c and xA (uniqueness is not guaranteed), but still, we may choose one of the possible responses yA and let gA(c,xA) = yA (so-called uniformization). Similarly, gB(c,xB) = yB. Now, given c, the two one-time functions fA(xA) = gA(c,xA) and fB(xB) = gB(c,xB) lead to a possible disagreement of Alice and Bob (on the intersection of the row and the column) by the argument used before (in the section "Example"). A more thorough analysis shows that the classical bound on the winning probability, deduced before from counterfactual definiteness, holds also in the case treated here.

Entangled quantum states

A bipartite or multipartite quantum state, pure or mixed, is called entangled if it cannot be prepared by means of shared randomness and local quantum operations. A quantum state that can be used for violating classical bounds, that is, for producing empirical entanglement, is necessarily entangled. It is unclear whether the converse implication holds or not.
Some entangled mixed states, so-called Werner states, obey classical bounds for all one-stage experiments. But multi-stage experiments in general are still far from being well understood.

Nonlocality and entanglement

In general

The words "nonlocal" and "nonlocality" occur frequently in the literature on entanglement, which creates a lot of confusion: it seems that entanglement means nonlocality! This situation has two causes, pragmatical and philosophical.

Here is the pragmatical cause. The word "nonlocal" sounds good. The phrase "non-CFD" (where CFD denotes counterfactual definiteness) sounds much worse, but is also incorrect; the correct phrase, involving both CFD and locality (and no-conspiracy, see the lead), is prohibitively cumbersome. Thus, "nonlocal" is often used as a conventional substitute for "able to produce empirical entanglement".[3]

The philosophical cause: many people feel that CFD is more trustworthy than RLC (relativistic local causality), and NC (no-conspiracy) is even more trustworthy. Being forced to abandon one of them, these people are inclined to retain NC and CFD at the expense of abandoning RLC. However, the quantum theory is compatible with RLC+NC. A violation of RLC+NC is called faster-than-light communication (rather than entanglement); it was never observed, and never predicted by the quantum theory. Thus RLC and NC are corroborated, while CFD is not. In this sense CFD is less trustworthy than RLC and NC.

For quantum states

Quantitative measures of entanglement are scantily explored in general. However, for pure bipartite quantum states the amount of entanglement is usually measured by the so-called entropy of entanglement. On the other hand, several natural measures of nonlocality have been invented (see above about the meaning of "nonlocality"). Strangely enough, non-maximally entangled states appear to be more nonlocal than maximally entangled states, which is known as the "anomaly of nonlocality"; nonlocality and entanglement are not only different concepts, but are really quantitatively different resources.[4] According to the asymptotic theory of Bell inequalities, even though entanglement is necessary to obtain violation of Bell inequalities, the entropy of entanglement is essentially irrelevant in obtaining large violation.[5]

1. Experts often call it "nonlocality", thus confusing non-experts; see Sect. 4.1.
2. "Die Geschichte kennt kein Wenn" ("History knows no 'if'") (Karl Hampe). Whether physics has a subjunctive mood or not — this is the question of counterfactual definiteness.
3. Physical terminology can mislead non-experts. Some examples: "quantum telepathy"; "quantum teleportation"; "Schrödinger cat state"; "charmed particle".
4. A. A. Methot and V. Scarani, "An anomaly of non-locality" (2007), Quantum Information and Computation, 7:1/2, 157-170; also arXiv:quant-ph/0601210.
5. M. Junge and C. Palazuelos, "Large violation of Bell inequalities with low entanglement" (2010), arXiv:1007.3043.
Mauro Murzi's pages on Philosophy of Science - Quantum mechanics

Features of Schrödinger quantum mechanics

1. Introduction. The main goal of this article is to provide a mathematical introduction to Schrödinger quantum mechanics suitable for people interested in its philosophical implications. A brief explanation of complex functions, including derivatives and partial derivatives, is given. The first and second Schrödinger equations are formulated and some of their physical consequences are analysed, particularly the derivation of the Bohr energy levels, the prediction of the tunnel effect and an explanation of alpha radioactivity. These examples are chosen in order to show real physical applications of the Schrödinger equations. The exposition of the Heisenberg indeterminacy principle begins with an analysis of the properties of commutative and non-commutative operators, continues with a brief explanation of mean values and ends with some physical applications. Schrödinger quantum theory is formulated in an axiomatic fashion. No historical analysis is developed to justify the formulation of the two Schrödinger equations: their only justification derives from their success in explaining physical facts. The philosophical background I use in this article is due to logical positivism and its analysis of the structure of a scientific theory. In this perspective, the Schrödinger equations are the theoretical axioms of the theory; the probabilistic interpretation of the Schrödinger equations plays the role of the rules of correspondence, establishing a correlation between real objects and the abstract concepts of the theory; the observational part of the theory describes observations about radioactivity, spectral wavelengths and similar events.
I just started watching the Coursera lectures on the basics of quantum mechanics, and one of the first lectures was on deriving Schrödinger's equation and interpreting it under Born's interpretation. What I want to ask is what the wave function \begin{equation} \psi({\bf r},t) \end{equation} returns and represents. Now I know that \begin{equation} |\psi({\bf r},t)|^2dxdydz \end{equation} is the probability of finding the quantum particle described by \begin{equation} \psi({\bf r},t) \end{equation} in the volume element \begin{equation} dV = dxdydz \end{equation} at time t. But I'm not sure what the wave function $\psi$ returns. Could someone please explain in layman's terms the return type of the $\psi$ function and what the lone $\psi$ function represents.

If you assume that probability is an inherent part of nature, then looking at this calculation books.google.ie/… we see probability (here it is the 'expected value') results in some vectors and matrices. We let matrix operators represent things like position, velocity and momentum (things we measure), & they operate on systems: applying the momentum operator on a vector is analogous to measuring the momentum of a system represented by that vector, thus the wave function (a vector) represents the state of a system. –  bolbteppa Aug 5 at 2:02

3 Answers

If this were computer science, we might say $\psi$ takes a $d$-tuple of reals ($r$) and another real ($t$) and returns a complex number with the attached unit of $L^{-d/2}$ in $d$ dimensions (with $L$ being the unit of length).1

If you want any more of an interpretation, well then you've already given it: $\psi(r,t)$ is the thing such that $\int_R\ \lvert \psi(r,t) \rvert^2\ \mathrm{d}V$ is the probability of the particle being observed in the region $R$ at time $t$. You can loosely think of it as a "square root" of a probability distribution.

The reason the "square root" interpretation is not quite right, and probably the reason you aren't satisfied with the $\int_R\ \lvert \psi(r,t) \rvert^2\ \mathrm{d}V$ definition, is that any particular instance of $\psi(r,t)$ carries extraneous information beyond what is needed to fully specify the physics. In particular, if we have $\psi_1$ describing a situation, then the wavefunction defined by $\psi_2(r,t) = \mathrm{e}^{i\phi} \psi_1(r,t)$ gives identical physics for any real phase $\phi$. So the return value of the wavefunction itself is not a physical observable -- one always takes a square magnitude or does some other such thing that projects many mathematically distinct functions onto the same physical state. Even once you've taken the square magnitude, $\lvert \psi(r,t) \rvert^2$ arguably isn't directly observable, as all we can measure is $\int_R\ \lvert \psi(r,t) \rvert^2\ \mathrm{d}V$ (though admittedly for arbitrary regions $R$).

1You can check that $-d/2$ is necessarily the exponent. We need some unit such that squaring it and multiplying by the $d$-dimensional volume becomes a probability (i.e. is unitless). That is, we are solving $X^2 L^d = 1$, from which we conclude $X = L^{-d/2}$.

Great answer! especially for a programmer like myself :) Everything was super clear except for when you said "a complex number with the attached unit of L^−d/2 in d dimensions"; could you please expand on that statement and explain why it is negative d-halves instead of just d. Thank you!
–  Armen Aghajanyan Aug 5 at 2:32

@ArmenAghajanyan see footnote. Also note there's no rush to accept answers on this site; even better explanations may come in after a while. –  Chris White Aug 5 at 2:38

Thank you for the advice, I am relatively new to this site. What do you mean by attached unit? –  Armen Aghajanyan Aug 5 at 2:40

Continuing with programming terminology: if you want to type every quantity you see in physics, specifying integer or real or complex is often not sufficient. There is a difference between a real mass quantity and a real velocity quantity; their product is a real momentum, not just a real, and their sum is not defined. Here $\psi$ returns a complex length^(-d/2), so to speak. –  Chris White Aug 5 at 2:45

I'm not sure what $C^{-d/2}$ is meant to represent. There are quantities of type complex == complex length^0, complex length^(-1/2), complex length^(-1), etc. In 3D, $\psi$ returns a complex length^(-3/2), and the product of such a quantity with itself and 3 quantities of type real length is a dimensionless complex (and if you did everything right will turn out to be purely real). But perhaps I've taken the typing analogy too far... –  Chris White Aug 5 at 2:57

The wavefunction returns a complex number whose modulus-squared is a probability density and whose phase is related to the probability current, i.e., where probability is flowing to. If you write it in the form $$\Psi({\mathbf x},t) = \sqrt{\rho({\mathbf x},t)}\exp\left(i\frac{S({\mathbf x},t)}{\hbar}\right)\text{,}$$ where $\rho(\mathbf{x},t)\geq0$ and $S(\mathbf{x},t)$ is real, then $\rho(\mathbf{x},t)$ is the probability density of measuring the position of the particle at $\mathbf{x}$ at time $t$. Plugging this form into the Schrödinger equation with Hamiltonian $\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{x})$ gives: $$\begin{eqnarray*} \left[\frac{1}{2m}\left|\nabla S\right|^2 + V\right] + \frac{\partial S}{\partial t} &=& \frac{\hbar^2}{2m}\rho^{-1/2}\nabla^2\sqrt{\rho}\text{,}\\ \frac{\partial\rho}{\partial t} + \nabla\cdot\left(\frac{1}{m}\rho\nabla S\right) &=& 0\text{.} \end{eqnarray*}$$ The first equation is not important for our immediate purposes, but the second one is the continuity equation for the probability density $\rho$, forcing probability to be conserved. In other words, if we define $$\mathbf{J} = \frac{\rho}{m}\nabla S\text{,}$$ then, with $\rho$ the probability density, $\mathbf{J}$ is the probability current representing where the probability is flowing.

Just to make sure we're on the same page: what is $S(x,t)$? –  Armen Aghajanyan Aug 5 at 2:37

@ArmenAghajanyan: $S$ is the phase of the complex number that $\Psi$ returns multiplied by $\hbar$ ($S/\hbar$ is the phase itself). See Euler's formula to see that every complex number $z$ can be written in the form $z = re^{i\phi}$, where $r$ is the modulus and $\phi$ is the phase. This is the polar form of the complex number. –  Stan Liou Aug 5 at 2:41

Alright, thanks. I think I understand it now. –  Armen Aghajanyan Aug 5 at 2:46

I will answer in layman's terms as, from your age in your profile, I would not expect a very strong background in the necessary mathematics. When we have a function f(x) it returns a value at the point x, a real number. If it is the potential, V(r)=1/r, we are able to calculate the potential and solve simple problems, or enter the potential in complicated equations and solve complicated problems. This is a real function. One can extend functions using complex numbers.
Complex number functions are in reality two functions: f1(x,y,z,t) + f2(x,y,z,t)*i. Here i is the square root of the real number (-1). Using complex numbers simplifies the form of equations generally and allows for easier manipulation of theoretical quantities. The Schrödinger equation is an equation in complex numbers, and thus \begin{equation} \psi(r,t) \end{equation} is a solution of the equation in our everyday space and time with two functions attached; it returns a complex value with two real numbers, the second one attached to i. Our measurements in real life return real numbers, so as it is, Psi has no use. That is why it is multiplied by its complex conjugate, to give a real number which will represent the probability in the volume element. This last is a postulate, the Born rule, and it has been found to work and has not been falsified experimentally, and thus we accept the quantum mechanical framework of nature.

A bit of an oversimplification, but nonetheless a good answer! So for my background in math I have done basic calculus, first order/second order differential equations, path integrals, and worked extensively with the Laplace transform to solve differential equations. On the linear algebra side, apart from the basics I have studied both eigen and Jordan decompositions. I have not yet got around to working on complex analysis, and have done most frequency analysis (Fourier) only on the computational/coding side. Could you please recommend what type of math I should pursue? –  Armen Aghajanyan Aug 5 at 6:46

Sorry Armen, but if you look at my profile my math days were over by 1963. A lot more books than the ones I know have come up since then, and most are probably more appropriate to physics. Better ask Chris White. –  anna v Aug 5 at 12:04
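As a small addendum not from the thread itself, the decomposition in the second answer can be checked numerically: writing ψ = √ρ · exp(iS/ħ), the current J = (ρ/m)∇S should agree with the textbook expression J = (ħ/m) Im(ψ* ∇ψ). The specific choices of ρ, S, the grid, and the units ħ = m = 1 below are arbitrary and only for illustration.

```python
# A small numerical check (illustrative, not from the thread) that the current
# J = rho * dS/dx / m from the polar form psi = sqrt(rho) * exp(i S / hbar)
# matches the standard expression J = (hbar/m) * Im(conj(psi) * dpsi/dx).
import numpy as np

hbar, m = 1.0, 1.0
x = np.linspace(-5.0, 5.0, 2001)

rho = np.exp(-x**2) / np.sqrt(np.pi)   # a normalized probability density (Gaussian)
S = 0.3 * x + 0.1 * x**2               # an arbitrary smooth phase function
psi = np.sqrt(rho) * np.exp(1j * S / hbar)

dpsi = np.gradient(psi, x)
dS = np.gradient(S, x)

J_standard = (hbar / m) * np.imag(np.conj(psi) * dpsi)
J_polar = rho * dS / m

print(np.max(np.abs(J_standard - J_polar)))   # ~0, up to finite-difference error
```

The two expressions agree up to finite-difference error, which is one way to see that the phase of the returned complex number carries physical content (the flow of probability), even though an overall constant phase does not.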
Definition of Schrödinger equation in English:

Schrödinger equation

• A differential equation which forms the basis of the quantum-mechanical description of matter in terms of the wave-like properties of particles in a field. Its solution is related to the probability density of a particle in space and time.

• ‘Shortly after the advancement of the Schrödinger equation German physicist Max Born postulated that the wave function could be used to determine the probability of finding a particle in a particular region at a specific time.’

• ‘The Schrödinger equation then gives the function for any subsequent time interval.’

• ‘It is clear that we could try to recover realism and determinism if we allowed the view that the Schrödinger equation, and the wave-function or state-vector, might not contain all the information that is available about the system.’

• ‘The fundamental equation of quantum mechanics is known as the Schrödinger equation.’

• ‘Once the Schrödinger equation was developed to show the exact three-dimensional structure of the hydrogen atom, the theory of resonance was developed to explain the multiple solutions sometimes obtained using this equation.’
May 15 2012

Another Blogger Jumps Into the Dualism Fray

It has been a while since I wrote about dualism – the notion that the mind is something more than the functioning of the brain. Previously I had a blog duel about dualism with creationist neurosurgeon, Michael Egnor. Now someone else has jumped into that discussion: blogger, author, and computer engineer Bernardo Kastrup has taken me on directly. The result is a confused and poorly argued piece all too typical of metaphysical apologists.

Kastrup's major malfunction is to create a straw man of my position and then proceed to argue against that. He so blatantly misrepresents my position, in fact, that I have to wonder if he has serious problems with reading comprehension or is just so blinkered by his ideology that he cannot think straight (of course, these options are not mutually exclusive). I further think that he probably just read one blog post in the long chain of my posts about dualism and so did not make a sufficient effort to actually understand my position.

Kastrup is responding specifically to this blog post by me, a response to one by Egnor. Kastrup begins with this summary:

I found it to contain a mildly interesting but otherwise trite, superficial, and fallacious argument. Novella's main point seems to be that correlation suffices to establish causation. He claims that Egnor denies that neuroscience has found sufficient correlation between brain states and mind states because subjective mind states cannot be measured.

There is the crux of the straw man – I never claimed that correlation is sufficient to establish causation. The entire premise of Kastrup's piece is therefore false, creating a straw man logical fallacy. He goes on at length explaining that correlation does not equal causation. Regular readers of this blog are likely chuckling at this point, knowing that I have written often about this fallacy myself.

If you read Kastrup's piece you will notice that at no point does he provide a quote from me claiming that correlation is sufficient to establish causation. He seems to understand also that I was responding directly to Egnor, who was claiming that brain states do not correlate with mind states, so of course I was making the point that they do. But I went much further (perhaps Kastrup did not read my entire post). I wrote:

In fact I would add another prediction to the list, one that I have discussed but have not previously added explicitly to the list – if brain causes mind then brain activity and changes will precede the corresponding mental activity and changes. Causes come before their effects. This too has been validated.

The list I am referring to are the predictions generated by the hypothesis that the brain causes the mind. I contend that all of these predictions have been validated by science. This does not mean the hypothesis has been definitively proven, a claim I never make, just that the best evidence we have so far confirms the predictions of brain causing mind, and there is no evidence that falsifies this hypothesis. Because mere correlation does not prove causation (although it can be compelling if the correlation is tight and multifaceted) I felt compelled to add additional points, like the one above. Brain states do not just correlate with mental states, they precede them. Causes precede effects, so again if the brain causes mind then we would expect changes to brain states to precede their corresponding mental states, and in every case of which we are currently aware, they do.
We would not expect this temporal relationship if the mind caused the brain, and it would not be necessary if some third thing causes both or, as Kastrup claims, the correlation is a pattern without causation.

Further, in a section of my post titled "Correlation and Causation" I pointed out that it is highly reproducible that changes in brain states precede their corresponding changes in mental states. For example, we can stimulate or inhibit parts of the brain and thereby reliably increase or decrease corresponding mental activity. The temporal arrow of correlation extends to things that change brain states. You get drunk after you drink alcohol, not before. When researchers use transcranial magnetic stimulation to inhibit the functioning of the temporal parietal junction, subjects then have an out of body experience.

To further demonstrate that I was not relying upon mere correlation to make the case for causation, I wrote:

Egnor would have you believe that this growing body of scientific evidence only shows that brain states correlate with the behavior of subjects reporting their experience, and not with the experiences themselves. He would have you believe that even if turning on and off a light switch reliably precedes and correlates with a light turning on and off, the switch does not actually control the light – not even that, he would have you believe that the scientific inference that the switch controls the light (absent any other plausible hypothesis) is materialist pseudoscience.

Perhaps Kastrup does not understand the meaning of the word "inference." That the brain causes the mind is not a philosophical proof (something I never claimed), but a scientific inference. Correlation is one pillar of that inference, but so is the fact that brain states precede mental states. Further, I am clearly invoking Occam's razor in the example above with the fairies and the light switch. The same correlation exists in that example – flipping a light switch precedes and correlates with the lights turning on and off. The simplest explanation is that the light switch controls the light – it is causing the lights to go on or off. But let's say you didn't know that light switches work by opening and closing a circuit, and you could not break open the wall to investigate the mechanism. You could still come to the confident scientific inference that the light switch was doing something to directly turn the light on or off. You would not need to hypothesize that there were light switch fairies who were doing it.

I also felt compelled to add, for completeness, "absent any other plausible hypothesis." Why would I specifically add this caveat if I thought correlation proved causation? Of course, in this one blog post I could not go into a thorough exploration of every supernatural claim made for anomalous cognition. I maintain that there is no compelling evidence of mental states separate from brain states, and I refer you to my many other blog posts to support this position.

Here we see that Kastrup's clumsy and, dare I say, trite, superficial, and fallacious arguments about correlation not equaling causation are really cover for his true position and agenda – he believes that there is evidence for mental activity separate from brain activity. He writes:

There is an increasing amount of evidence that there are non-ordinary states of consciousness where the usual correlations between brain states and mind states break (see details here).
If only one of these cases proves to be true (and I think at least one of them, the psilocybin study at Imperial College, has been proven true beyond reasonable doubt; see my debate on this with Christoph Koch here.), then the hypothesis that the brain causes the mind is falsified. Novella ignores all this evidence in this opinion piece, and writes as if it didn't exist.

You can also watch the video embedded in his post for an explanation of his position. I will address his two main points, both of which are erroneous.

He seems highly impressed by the fact that neuroscientific studies have shown that psilocybin decreases brain activity and causes a "mystical" experience, as if this contradicts the prediction that the brain correlates with the mind (so in reality he does not accept the correlation, and that is the reason for his rejection of the brain-mind hypothesis, not his obvious straw man about correlation and causation). Kastrup's conclusion, however, is hopelessly naive. There are many examples where inhibiting the activity in one part of the brain enhances the activity in another part of the brain through disinhibition. In fact the very study he cites for support concludes:

These results strongly imply that the subjective effects of psychedelic drugs are caused by decreased activity and connectivity in the brain's key connector hubs, enabling a state of unconstrained cognition.

Unconstrained cognition is another way of saying disinhibition. The concept is simple – there are many brain areas all interacting and processing information. This allows for complex information processing but also slows down the whole process – slows down cognition. That is the price we pay for complexity. If, however, we inhibit one part of the brain we lose some functionality, but the other parts of the brain are unconstrained and free to process information and function more quickly. The psilocybin study is a perfect example of this. The drug is inhibiting the reality-testing parts of the brain, causing a psychedelic experience that is disinhibited and intense. This is similar to really intense dreams. You may have noticed that sometimes in dreams emotions and experiences can be more intense than anything experienced while awake. This is due to a decrease in brain activity in certain parts of the brain compared to the full waking state. Kastrup seems to be completely unaware of the critical concept of disinhibition and therefore completely misinterprets the significance of the neuroscience research.

His next point is equally naive. He claims that near death experiences, in which people have intense experiences without brain activity, are further evidence of a lack of correlation between brain states and mental states. I have already dealt with this claim here. Briefly, there is no evidence that people are having experiences while their brain is not functioning. What we do have are reports of memories that could have formed days or even weeks later, during the recovery period following a near death experience. At the very least one has to admit that NDE claims are controversial. They are certainly not established scientific facts that can be used as a premise to counter the materialist hypothesis of brain and mind.

Once again we see a hopelessly naive and confused defense of the mystical position that the mind is something more than the brain.
To explicitly detail my position, so that it cannot easily be misrepresented again – if we look at the claim that the brain causes the mind as a scientific hypothesis, based upon the current findings of neuroscience we can make a few conclusions:

– There is a tight correlation between brain states and mental states that holds up to the limits of resolution of our ability to measure both.

– There are no proven examples of mental states absent brain function.

– Brain states precede their corresponding mental states, and changes to the brain precede the corresponding changes to the mind.

– At present the best scientific inference we can make from all available evidence is that the brain causes the mind. This inference is strong enough to treat it as an established scientific fact (as much as evolution, for example) but that, of course, is not the same thing as absolute proof.

– There are other hypotheses that can also explain the correlation, but they all add unnecessary elements and are therefore eliminated by the application of Occam's razor. They are the equivalent of light-switch fairies.

I have made all these points before, but given the fact that Kastrup completely misinterpreted my previous writings it cannot hurt to summarize them so explicitly. Kastrup himself adds nothing of interest to the discussion. He flogs the "correlation is not causation" logical fallacy as if that's a deep insight, and is unaware of the fact that his application of it is just a straw man. He pays lip service to the notion that brain function correlates with mental states, getting up on his logical fallacy high horse, but this all appears to be a misdirection because his real point is that brain function does not correlate with mental states. He then trots out the long-debunked notion of near death experiences as his big evidence for this conclusion, without addressing the common criticisms of this position (even by the person he is currently criticizing). His only other evidence is a complete misunderstanding of pharmacological neuroscience research.

I can see no better way to end this piece than with a quote from Kastrup himself, which applies in a way I believe he did not intend: "In my personal view, this superficial and intellectually light-weight opinion piece adds nothing of value to the debate about the mind-body problem."

44 responses so far

44 Responses to "Another Blogger Jumps Into the Dualism Fray"

1. daedalus2u on 15 May 2012 at 9:42 am

It is worth pointing out that the technique used to infer brain activity (fMRI) doesn't really measure brain activity; what it measures is differential changes in relative quantities of oxy- and deoxyhemoglobin. It is blood flow that is being measured. That blood flow change is caused by nitric oxide. A change in nitric oxide levels will change brain activity and brain behavior because that is how the brain controls itself.

It is not just that the idea of an immaterial mind would be greatly complicated by something like "mind fairies" (I personally don't like arguments from Occam's razor when evaluating what reality actually is; using Occam's razor to evaluate a model is ok, but we know that "all models are wrong, some are useful"). There is the problem of conservation of mass/energy, momentum, spin, charge, etc. The brain certainly is made out of materials with all of those conserved things. That matter cannot be influenced except via processes that also conserve those things (so far as we know as in the Standard Model, General Relativity and so on).
Those conservation laws have been tested to energies ~12 orders of magnitude greater than the energies relevant to brain activity. There has been a complete absence of deviations from those conservation laws. Our default hypothesis should be that the brain is matter just like everything else we are able to interact with. There is no datum that is inconsistent with that default hypothesis. If the brain can only be affected by processes which conserve various quantities, then the idea that there is an immaterial mind is not correct.
2. SARAon 15 May 2012 at 9:47 am
I can understand the desire to make the mind be the cause. If you think about brain causing mind, all of life becomes a sort of non-choice. All of my life becomes a slavery to brain chemicals and reactions, over which I have little or no control. Because the "I" of my brain is really just a reaction of brain function.
But, I will never understand why people will take slivers and the barest threads of evidence and build a case against a mountain. Suppose their few examples are true. They are all contrived or extreme situations. So, are they really making a case for our everyday lives to be caused by this outside "mind" rather than brain? It feels like they're arguing that this outside soul (because let's face it – their mind/consciousness argument is a thinly veiled attempt to infuse us with a soul) is just a voyeuristic rider who only makes an appearance when you sleep, take drugs, strangle yourself, etc. and then it makes a spirited escape when you die? That doesn't seem to be an argument for that being the actual "I" of the mind, does it?
Frankly, I'm less disturbed by the idea of being a slave to the chemicals and electrical reactions in my brain.
3. Bernardoon 15 May 2012 at 10:49 am
4. Gallenodon 15 May 2012 at 11:01 am
SARA: While we all may be "moist robots" (per Scott Adams), you're not so much a slave to brain chemicals and electrical reactions as you are constrained by them. You're still free to make choices within the limits of human perception, comprehension and thought.
Dualism supports the human desire for immortality beyond the limits of physical bodies. It's a popular prop to the idea that some part of us will survive death. Therefore many people will want to believe in it despite any and all evidence to the contrary; even the most hardened skeptics likely want to exist forever. The tragedy is that the current evidence says we won't and if you accept that you need to find another reason for existence than living a life that gets you into the afterlife of your choice. And that generally involves the realm of philosophy, not hard science.
5. RickKon 15 May 2012 at 11:23 am
Steve – it's "Fray", no?
6. tyler the new ageron 15 May 2012 at 11:41 am
Hi Dr. Novella, I would like to see this debate between you and Bernardo Kastrup continue in a pleasant manner; I find your personal attacks on him a little off-putting. You are also ignoring a large body of evidence we have for the survival of consciousness after death and consciousness being something more than brain activity. Proxy sittings (Mrs. Piper in particular), drop-in communicators, cross correspondences, shared near death experiences and veridical NDEs, shared death bed visions, multiple witness apparitions, children with past life memories, hauntings not associated with one person and the ending of such hauntings by spirit rescue mediumship. I can go on and on. We should not be bigoted against the evidence, Dr. Novella.
Finally I would like to share something from the late great researcher Montague Keene: The challenge to Mr. Randi and friends (written by late Montague Keen) I present Mr. Randi, and any of his fellow-skeptics, with a list of some of the classical cases of paranormality with most or all of which Mr. Randi will be familiar. I know he will be because he has been studying the subject for half a century, he tells us. ….. I would not imply that Mr. Randi is ignorant of these cases, many of which have long awaited the advent of a critic who could discover flaws in the paranormality claims. For me to suggest this would imply the grossest hypocrisy on Mr. Randi’s part. But to refresh his memory, and help him along, and despite the refusal of some of his colleagues like Professor Kurtz, Professor Hyman and Dr. Susan Blackmore to meet the challenge, I list the requisite references. They are based on (although not identical to) a list of twenty cases suggestive of survival prepared by Professor Archie Roy and published some years ago in the SPR’s magazine, The Paranormal Review as an invitation or challenge to skeptics to demonstrate how any of these cases could be explained by “normal” i.e. non-paranormal, means. Thus far there have been no takers. It is now Mr. Randi’s chance to vindicate his claims. 1. The Watseka Wonder, 1887. Stevens, E.W. 1887 The Watseka Wonder, Chicago; Religio-philosophical Publishing House, and Hodgson R., Religio-Philosophical Journal Dec. 20th, 1890, investigated by Dr. Hodgson. 2. Uttara Huddar and Sharada. Stevenson I. and Pasricha S, 1980. A preliminary report on an unusual case of the reincarnation type with Xenoglossy. Journal of the American Society for Psychical Research 74, 331-348; and Akolkar V.V. Search for Sharada: Report of a case and its investigation. Journal of the American SPR 86,209-247. 3. Sumitra and Shiva-Tripathy. Stevenson I. and Pasricha S, and McLean-Rice, N 1989. A Case of the Possession Type in India with evidence of Paranormal Knowledge. Journal of the Society for Scientific Exploration 3, 81-101. 4. Jasbir Lal Jat. Stevenson, I, 1974. Twenty Cases Suggestive of Reincarnation (2nd edition) Charlottesville: University Press of Virginia. 5. The Thompson/Gifford case. Hyslop, J.H. 1909. A Case of Veridical Hallucinations Proceedings, American SPR 3, 1-469. 6. Past-life regression. Tarazi, L. 1990. An Unusual Case of Hypnotic Regression with some Unexplained Contents. Journal of the American SPR, 84, 309-344. 7. Cross-correspondence communications. Balfour J. (Countess of) 1958-60 The Palm Sunday Case: New Light On an Old Love Story. Proceedings of the Society for Psychical Research, 52, 79-267. 8. Book and Newspaper Tests. Thomas, C.D. 1935. A Proxy Case extending over Eleven Sittings with Mrs Osborne Leonard. Proceedings SPR 43, 439-519. 9. “Bim’s” book-test. Lady Glenconnor. 1921. The Earthen Vessel, London, John Lane. 10. The Harry Stockbridge communicator. Gauld, A. 1966-72. A Series of Drop-in Communicators. PSPR 55, 273-340. 11. The Bobby Newlove case. Thomas, C. D. 1935. A proxy case extending over Eleven Sittings with Mrs. Osborne Leonard. PSPR 43, 439-519. 12. The Runki missing leg case. Haraldsson E. and Stevenson, I, 1975. A Communicator of the Drop-in Type in Iceland: the case of Runolfur Runolfsson. JASPR 69. 33-59. 13. The Beidermann drop-in case. Gauld, A. 1966-72. A Series of Drop-in Communicators. PSPR 55, 273-340. 14. The death of Gudmundur Magnusson. Haraldsson E. and Stevenson, I, 1975. 
A Communicator of the Drop-in Type in Iceland: the case of Gudni Magnusson, JASPR 69, 245-261. 15. Identification of deceased officer. Lodge, O. 1916. Raymond, or Life and Death. London. Methuen & Co. Ltd.16. Mediumistic evidence of the Vandy death. Gay, K. 1957. The Case of Edgar Vandy, JSPR 39, 1-64; Mackenzie, A. 1971. An Edgar Vandy Proxy Sitting. JSPR 46, 166-173; Keen, M. 2002. The case of Edgar Vandy: Defending the Evidence, JSPR 64.3 247-259; Letters, 2003, JSPR 67.3. 221-224. 17. Mrs Leonore Piper and the George “Pelham” communicator. Hodgson, R. 1897-8. A Further Record of Observations of Certain Phenomena of Trance. PSPR, 13, 284-582. 18. Messages from “Mrs. Willett” to her sons. Cummins, G. 1965. Swan on a Black Sea. London: Routledge and Kegan Paul. 19. Ghostly aeroplane phenomena. Fuller, J.G. 1981 The Airmen Who Would Not Die, Souvenir Press, London. 20. Intelligent responses via two mediums: the Lethe case. Piddington, J.G. 1910. Three incidents from the Sittings. Proc. SPR 24, 86-143; Lodge, O. 1911. Evidence of Classical Scholarship and of Cross-Correspondence in some New Automatic Writing. Proc. 25, 129-142 7. SARAon 15 May 2012 at 11:50 am # Gallenod I have a hard time fully wrapping my head around this thought, so tell me where I’m wrong. I want to be wrong. But since our perception, comprehension and thought are only defined by the chemicals and electrical reactions in our brain and since every neural reaction is merely caused by the previous ones, how can we be anything but puppets to those reactions? Since there is no first cause mind to change the course of the brain, there is only a cascade of mindless reactions being perceived as mindful ones. Isn’t anything else merely an illusion created by our brain? 8. tyler the new ageron 15 May 2012 at 12:00 pm Many of the phenomena that point to an afterlife are anecdotal, but not all anecdotal evidence must be rejected outright, because anecdotal evidence can be valid if the witnesses are competent and are in good standing. Then there is evidence of the afterlife that is not anecdotal, but not experimental, but is evidence of field research showing a systematic pattern that repeats, such as cross-correspondences and children seem to remember past lives. Here are a few questions for your Dr.Novella? Have you actually followed the NDE literature? Also what do you have to say about the Ring Study which demonstrated that NDErs who were born blind or became blind at a young age had powerful visual components in their NDE? Many NDEr’s have correctly identified conversations and visual aspects of their environment. In any case I will save my time because all my arguments for the evidence for survival of consciousness after death will be strongly rejected on this blog on the basis of Wishful thinking Holding on to cherished beliefs Laws of physics being broken Experimenter error File drawer effect Will to believe I sincerely hope this debate with Bernardo Kastrup continues. 9. bgoudieon 15 May 2012 at 12:33 pm I’d like to propose the law of conservation of piss poor thinkers. At any given time at least one must appear on any skeptical blog, making arguments long since discredited, yet insisting that they are the one drawing the correct conclusion by looking at the “real” evidence. Should such a poster go away they will be replaced within hours. It’s as if ignorance has existence beyond the physical brain. Astounding to consider the implications. 10. SARAon 15 May 2012 at 12:42 pm I think you could just call it The Law of Trolls. 
It's not actually limited to skeptics vs nonskeptics. It's anywhere that a controversy creates a gateway for attention mongering.
11. Steven Novellaon 15 May 2012 at 1:05 pm
Tyler – I am not ignoring NDEs. I linked to a prior post in which I assessed the evidence. The bottom line is that the evidence for anything paranormal, including NDEs, is all weak and anecdotal. None of these phenomena are well established and generally accepted by scientists. There is a good reason for that.
12. locutusbrgon 15 May 2012 at 1:11 pm
What I am enjoying are trolls who prove themselves wrong in their own statements by including superficial criticism with other non-sense. Glad I did not have to point out how obviously you are trying to disarm the argument utilizing straw-man, and no true Scotsman logical fallacies. Just keep rambling on and on. It always impresses me that volumes of arguments are posted to refute one point. An attempt to confuse and distract like a good magician should.
13. ccbowerson 15 May 2012 at 1:20 pm
"I personally don't like arguments from Occam's razor when evaluating what reality actually is…"
The way I think about it in instances like these is that Occam's razor is not used as an argument for the "way things are," but it is useful in pointing out where the burden of proof lies.
14. daedalus2uon 15 May 2012 at 1:43 pm
I saw a good blog post at another site (which I have now lost track of), by a physicist who was trying to respond to those who posit an immaterial mind or some sort of spiritual energy. He wrote the Schrödinger equation of the electron in terms of energy, and all of the terms were recognized, total energy equals kinetic energy plus potential energy, and asked the question where is the "spiritual term". If an electron is going to be influenced by something, its Schrödinger equation has to have a term for that effect.
15. Shelleyon 15 May 2012 at 1:45 pm
". . . anecdotal evidence can be valid if the witnesses are competent and are in good standing."
Not really. Anecdotal 'evidence' is based on one's experiences and perceptions. It is, in effect, single witness testimony. Please do a literature review on the many factors that impair, affect, and weaken the accuracy of eyewitness testimony (even the testimony of those of good character) before you decide that anecdotal evidence should be taken as valid. Anecdotal evidence is extremely weak, and is useful only to the extent that it can sometimes lead to testable scientific hypotheses. Whenever anecdotal evidence has led to testable hypotheses in NDEs etc, it has not held up.
Really, most of us would love to be proven wrong on this topic. (How cool would that be?) Unfortunately, there is simply no compelling evidence that the mind is anything more than what the brain does.
16. daedalus2uon 15 May 2012 at 2:39 pm
CC, the nature of reality is what it is. There isn't a "burden of proof" to establish what the nature of reality is. I find "argument from burden of proof" to be unsatisfactory.
This is really the crux of the difficult problem that Kuhn noticed. The usual default is the current scientific understanding, even when that understanding is known to be wrong. This is not what the usual default should be. The usual default should be whatever model is most consistent with the most data that is reliable.
The reason this is so problematic is that humans adopt the “conventional wisdom” not because it corresponds with the most data, but because of normal human feelings, my friend said so, it feels right, my intuition says so, the experts all agree. These are all arguments from authority, someone else believes it, so it must be right. That is not an argument. Human hyperactive agency detection pulls for this type of belief adoption mechanism. This was the problem that Einstein had getting Relativity accepted. Einstein didn’t get the Nobel Prize for Relativity because those on the Nobel Committee didn’t think it was correct. The conventional wisdom of the time was that there had to be and was absolute time and space. It turns out there isn’t. There wasn’t any data that required absolute time and space. It was human conceptual limitations that required absolute time and space. That is one of the consequences of our evolved human brains. Some things are easy to do and understand because they are hard-wired. Some things are not, and our neuroanatomy induces errors. These are like the optical illusions introduced by the neuroanatomy of our visual processing systems. We can recognize that they are optical illusions because we have a model of reality that is independent of our visual system and we can use that model of reality to recognize and override optical illusions. You should also never confuse reality with the model of reality that you are using. The same thing happens with cognition. Humans have cognitive illusions which are analogous to optical illusions which are brought on by our cognitive neuroanatomy. It is only by recognizing, acknowledging and compensating for our cognitive illusions that we can get beyond them. SARA, Yes, our brains are made of meat, and meat as a computational device has its limitations. Either you understand those limitations and compensate by working around them, or you don’t and are stuck believing things that are demonstrably wrong, just like optical illusions. Rejecting things which feel right but which are demonstrably wrong is difficult to do, but it gets easier over time. It is easier to do if you argue from data. That is why I like to always emphasize that there is no data in support of a non-material mind by using the singular form “datum”. Anecdotes are perfectly acceptable as data, once they are recognized for their limitations, the extreme lack of statistical power. Some arguments don’t need statistical power. If you have the hypothesis that all swans are white, a single anecdote of a black swan refutes that argument with virtually 100% certainty (the swan could have been painted black, you could be having a delusion, it is opposites day, it is a clever mechanical flying device that looks like a black swan). 17. SARAon 15 May 2012 at 3:03 pm I accept that the brain is directing our mind. What I question is if we only perceive that we are making adjustments for our cognitive deficiencies, directing our brain in a direction of thought, or whether that directional choice is only the illusion of the “meat”. I’m honestly having a hard time putting into words this concept that is bouncing around in head that we really have no choice. I choose to go on a diet. Or do I? Since I have no “first cause” mind directing my brain, then the choice is an outcome of undirected neural reaction. My brain is just a program following pathways and rules and firing off commands, but all of it is really just a predetermined outcome of the previous conditions. 
My idea that it is a choice is an artifact of all of those cognitive deficiencies. Please show me I'm wrong. I am really wishing I hadn't followed this line of thinking.
18. Stefanon 15 May 2012 at 3:03 pm
Not surprisingly, Bernardo does not accept any comments that might hurt his book sales… I posted twice this morning and neither post ever appeared, while several others, including his own in support of his fans, have.
19. DOYLEon 15 May 2012 at 3:13 pm
The brain-mind idea of cause and effect seems analogous to the mechanical properties of an engine or car. All the qualities that are essential to a vehicle (lights, signals, climate control, audio, reclining movement, window movement, navigation) are dependent upon the function of power, the function of catalyst. You need combustion and current to enable a car to portray a car.
20. daedalus2uon 15 May 2012 at 4:24 pm
Doyle, I think what you are looking for is the analogy of agency. Top-down control of something by an agent. In a car, you have the designer who designed the hardware, the fabricator who puts the pieces together, and the driver who controls the mechanism once assembled.
The problem with the brain-mind analogy is that it presupposes that there is top-down control of the brain by the mind. There is no "top" in the brain, there is no "ghost" in the machine that is controlling things. The default assumption that humans have that there is top-down control has to do with human hyperactive agency detection. Human hyperactive agency detection applies to self-detection too. The idea that there is an "I" that is a unique entity continuous over the lifespan is an illusion (albeit a persistent one). The reason there is such an illusion is because the resolution of the self-identity detection is so poor. Organisms don't need to identify self to a high degree of resolution because there is only a single "self" inside our brain. All you need is an "I am me" module that identifies the self as me whenever it is interrogated. That is why when people experience brain damage, they still self-identify as themselves (except in rare instances where that particular part of the brain is damaged). There is no great utility for higher fidelity resolution beyond that which prioritizes self-preservation, so evolution didn't provide it.
Positing top-down control doesn't provide any answers because it simply moves the control problem to a different level. If the mind controls the brain, then what controls the mind? If the soul controls the mind, then what controls the soul? There is no "top" from which top-down control can be exerted. The brain forms from a single cell. How can that single cell exert top-down control of neuroanatomy? Very clearly it can't, and it doesn't. We know that we don't understand how the neuroanatomy of the brain is created and how it does all of the things that it does, but we do know that "it" can't exert top-down control before "it" exists.
This is also an example of why defaulting to "I don't know" is a lot better than guessing or taking somebody's superficial idea.
21. ccbowerson 15 May 2012 at 4:51 pm
If we are looking at this from a scientific perspective, that is the best we can do: a theory that best fits reality. What I mean by burden of proof (I assumed that I was understood) is that if there are 2 alternate explanations for the same phenomenon, then an explanation that adds another layer of complexity should be tentatively rejected in favor of a simpler explanation, assuming that it does not add any explanatory power.
It appears that you don't find that sufficient, because there is no reason to believe that the simpler explanation is more likely the "Truth," but that is not why it is useful. It is the 'best bet,' and the main utility is to eliminate the infinite number of alternative theories that add nothing but further complexity. In order to get closer to the Truth one must distinguish between theories in ways that one appears to fit with reality better. I'm not sure there is a way of accessing 'reality' in the way you imply. Perhaps we are talking past each other here, I'm not sure.
22. gr8googlymooglyon 15 May 2012 at 8:23 pm
Funny how none of Tyler the Newager's references are newer than 1990 (and a full 9 of them are pre-1940). Have we not learned anything new in the past 22 years about this supernatural pseudoscientific 'theory' of duality? Apparently not. Is it really lost on the Tylers and Kastrups of the world that the more you have to apologize for your pet magic theory, the less likely it is to be real?
23. NewRonon 15 May 2012 at 11:23 pm
I may be missing something but I do not know of any scientific answers to the following: How do thoughts arise from physical processes? How do physical processes give rise to a conscious state? How does preconscious physical activity produce a conscious experience of an event? What produces a sense of self from physical brain activity?
If I have not missed something and there are no scientific answers to the above, then am I remiss in tentatively holding that the mind is separate from the brain? Or perhaps I should just trust that scientific answers will be forthcoming.
24. BillyJoe7on 16 May 2012 at 12:26 am
You don't believe in freewill?…welcome to the club. :)
The whole concept of freewill is irrational. But you know this. I'm just offering support. The illusion of freewill is pretty convincing, however, which is why even those of us who recognise that that is all it is, still act as if we have freewill.
25. Bernardoon 16 May 2012 at 1:48 am
"# Stefanon 15 May 2012 at 3:03 pm"
I moderate all comments to avoid the uncontrolled level of spam I once had, because I do not require registration (as here). When you posted your comments I was asleep and could not release them immediately. Sorry it took a couple of hours. I approved (as usual) all comments posted yesterday and replied to a couple today. Gr, B.
26. Bernardoon 16 May 2012 at 1:59 am
Hhmm… I released several comments now but none from a "Stefanon." Did you post your comment under another name? Can you have a look to make sure it has been published? If not, please let me know. Gr, B.
27. Mantikion 16 May 2012 at 7:27 am
I am surprised that the interpretation of Libet's work has gained such traction amongst materialists. Although I am a theist (via a range of ESP and spiritual experiences) I grappled with the issue of sub/pre conscious decision making following experiences where I made one-handed catches of balls or apples without being consciously aware they were passing within reach. In one case, someone threw an apple at the side of my head and the first thing I knew I was staring at it in my hand (despite being a poor catch). After consideration, it occurred to me that this action is explicable in the same way that we perform innumerable daily actions "unconsciously", such as navigation, dressing and walking etc. In these circumstances, our automatic actions are the result of programmed responses and reflexes laid down from early learning experiences. How does this relate to Libet etc?
Well I note that the researchers' observed responses in advance of the subjects' conscious awareness of decision making are not 100 percent but more of a ten percent shift in terms of the number of subjects. My thinking therefore is that what is being detected neurologically is some form of pre-decision-making algorithmic "tipping point". In other words, a proportion of subjects are using a number of pre-determined factors as inputs into an algorithm which they use to make the decision and when the individual results from those inputs reach a "critical point", a decision is made. The EEGs therefore are measuring the critical point at which the decision is triggered. Thus for that significant group of subjects, the EEG displays the tipping-point of decision (not the decision itself) before the subject is aware of it. My conclusion therefore is (unless I have misunderstood the experimental method) that such experiments in no way demolish the concept of conscious free-will. And in turn have nothing to say about the materialist concept of mind being an epiphenomenon of electro-chemical brain activity.
28. daedalus2uon 16 May 2012 at 7:39 am
New Ron, to answer your rhetorical questions. I don't know. I don't know. I don't know. I don't know.
Not knowing doesn't give one license to make stuff up just because you want it to be that way. I don't know what is at the bottom of the Atlantic Ocean. That doesn't mean I can assume that there is a vast undersea civilization called Atlantis ruled by someone named Aquaman who can breathe water and can control aquatic animals telepathically.
I do know there is conservation of mass/energy. Any answers to your questions that violate conservation of mass/energy are virtually certain to be wrong. They are so likely to be wrong that they are not worth serious consideration. All of your questions could be reposed in the form "how does a non-material mind do …." Assuming a non-material mind doesn't answer any of your questions; it is the equivalent of saying "it is magic". Maybe it is magic, but assuming something is magic because it is not yet understood is not how science works.
29. RickKon 16 May 2012 at 7:41 am
NewRon – can I paraphrase your question a different way? "In these cases (consciousness, etc.) is it safe to assume that a natural phenomenon does not have a natural cause?"
There are a lot of things science hasn't answered – that's always been true. And literally millions of times we've seen supernatural explanations definitively replaced by explainable natural causes. In all that time, we've never once seen a natural explanation definitively replaced by a supernatural explanation. I'd be quite happy to learn that my "mind" is really an ethereal thing that lives on after my brain is wormfood. But I don't like the idea of betting on a team that has all failures and no successes. My money is on "mind" and consciousness being no more than a product of natural mechanisms found within our brains and bodies.
Oh, and since consciousness, sense of self, etc. can be affected/altered/influenced by physical mechanisms (drugs, stroke, mental training and meditation, etc.), then the data points heavily to "mind" being a manifestation of a physical brain.
You concluded with: "perhaps I should just trust that scientific answers will be forthcoming." It's a pretty safe bet to assume "natural phenomena have natural causes". But if you're not sure, feel free to review history for definitive exceptions to that assumption.
30. SteveAon 16 May 2012 at 7:51 am
I'm not sure what you're saying here. Is there some text missing? Someone saying they saw a black swan proves nothing; a person holding a feather from a black swan (or the whole darned thing) has to be taken seriously. In a real-life situation I would give the black swan spotter the benefit of the doubt (because there is abundant evidence they exist); but someone who comes to me with stories of purple swans really does need to bring a feather with them.
tyler the new ager: "anecdotal evidence can be valid if the witnesses are competent and are in good standing"
Really? Did you ever see the 'Surgeon's Photograph' of the Loch Ness Monster? Even as a child I thought it looked fake. The only credibility it had was that it was taken by a surgeon, a professional gentleman who 'had no reason to lie'. That's why they gave it the name they did – Look! A surgeon took this! A real surgeon. But it was a fake and he eventually confessed to it.
PS your list of papers has probably given Brian Dunning his next 20 episodes of Skeptoid.
31. Ufoon 16 May 2012 at 8:09 am
Some of you might be interested in this interview of Kastrup for background: The show is run by a "true believer" so don't get fooled by the name of the podcast.
32. Steven Novellaon 16 May 2012 at 8:21 am
New Ron – don't confuse different levels of questions. We can know scientifically that the brain causes consciousness without knowing exactly how it does. These are separate questions. We also don't know nothing about how consciousness emerges from brain function. Knowledge is not all or nothing. We know quite a bit, but also there is a lot we don't yet understand.
More telling is the fact that the materialist paradigm of neuroscience is working fine and progressing rapidly. There is no need to hypothesize a magical non-corporeal source of mental function, and such a notion is of no value in our ongoing attempts to understand the mind.
33. Eric Thomsonon 16 May 2012 at 10:35 am
I am a bit surprised by all this talk of the brain 'causing' the mind. This way of characterizing it invokes the old 19th century view in which brains and minds are obviously different sorts of things, and minds are caused by brains the way bile is secreted by the pancreas, or steam blown off by a train. After all, if X causes Y, X is usually different from Y, like the rock thrown that caused the glass to break.
Shouldn't we just say that conscious experiences simply are complex brain states? By analogy, the exchange of oxygen/carbon dioxide from our blood does not cause respiration: it is respiration. The voltage spikes in a neuron don't cause the action potential: they are the action potential.
While some might say such loose language is innocuous, it actually leads you to make some claims that strike a strange chord. Aside from abetting dualism (see below), it seems likely false that changes in brain activity precede mental state changes. What we typically observe, when it comes to experiences (as in binocular rivalry and such) is that both change at exactly the same time. And this is what we expect from a conscious brain (versus a brain that produces mind). I don't expect brain state changes to precede changes in conscious states: I expect them to perfectly mirror one another, the way that action potentials are perfectly mirrored by voltage spikes. [see note below]
Also, saying the brain 'causes' the mind abets dualism. I could be a substance dualist and agree that the brain causes the mind.
They are different, but I'm fine with a bidirectional coupling between brain and mind. For me, the mind is a nonphysical substance that interacts with the brain.
We materialists should be more clear. The brain does not generate consciousness as some separate thing going on in parallel. Putting it in such terms generates really weird questions like those from New Ron about how the "two things" are connected. That is, dualism. Rather, conscious experiences are complex states of certain brains, period. Until I read this post, I thought such talk of brain states 'causing' mental states was largely innocuous.
34. Eric Thomsonon 16 May 2012 at 12:12 pm
Ugh I said pancreas produces bile. Umm…that should be 'liver.'
35. Kawarthajonon 16 May 2012 at 2:33 pm
Light switch fairies – I had no idea that they were involved in turning lights on. Those light switch fairies were wreaking havoc with my light in my 120 year old house with 1930's wiring, making it fizzle and spark when I tried to turn the light on. Changing the switch and rewiring the light seems to have scared them away for now.
36. NewRonon 16 May 2012 at 7:24 pm
I am bemused at how a simple set of questions and an admission that I hold a tentative belief – that is one that is uncertain and open to change – can evoke such derisive terms from Steven and Deadalus2u as "magical" and "magic". At least RickK and Eric Thompson were able to respond (although for me not convincingly) without resorting to such language.
37. Alastair F. Paisleyon 16 May 2012 at 9:23 pm
@ Steven Novella
The bottom line is that you really don't have any objective scientific evidence that subjective awareness is physical. If you did, then you could furnish us with the physical properties of consciousness.
38. Alastair F. Paisleyon 16 May 2012 at 9:27 pm
@ Eric Thomson
> I am a bit surprised by all this talk of the brain 'causing' the mind. <
I'm not. Most materialists presuppose some form of dualism.
39. BillyJoe7on 17 May 2012 at 12:13 am
"For me, the mind is a nonphysical substance that interacts with the brain."
Nonphysical substance? Actually, the mind IS physical. It is, however, non-material.
40. gervasiumon 17 May 2012 at 9:01 am
Eric Thompson, the brain simply is not the mind, even from a materialist perspective, the same way that the lungs are not respiration, but lungs cause respiration. A dead brain is still a brain and it produces no consciousness.
41. eiskrystalon 17 May 2012 at 10:23 am
Yes, Dr. Novella, that bit about it being a "superficial and intellectually light-weight opinion piece" was particularly cutting. How could you!!1!
42. Mantikion 17 May 2012 at 5:19 pm
Further to my earlier post, the recent successful biological implant which enabled the paralysed woman to control a mechanical arm using electrical impulses from her brain can only work via a conscious decision on her part. Any attempt to rationalise a source for those signals beyond her freewill leads nowhere. Turtles all the way down so to speak! Given time and practice, familiarity would allow her to develop algorithms to look for "primers" that would allow her to control the robotic arm more smoothly without conscious thought, but this would in no way prove that some other sum of parts of her brain was combining to "produce" consciousness. Materialist explanations for consciousness are speculation unsupported by evidence.
43. etatroon 17 May 2012 at 6:16 pm
@ New Ron – Have you read a neuroscience textbook? Scientific answers to those questions are in a textbook.
Go to and type in those key words, you'll find the "physical processes" in the form of biological activity leading to those mental states. The "how" question is often complicated and nuanced. Just because you can't explain how the brain causes the mind doesn't mean that the brain doesn't cause the mind. For centuries, we didn't know "how" gravity worked, but we didn't assume that it didn't exist.
44. Mantikion 20 May 2012 at 10:48 pm
Hi etatro
Just because you can't explain how the mind can be independent of the brain doesn't mean that the brain causes the mind. The explanation that consciousness is something fundamental rather than emergent can sit neatly within all scientific paradigms (including neuroscience).
Atomic orbital

The shapes of the first five atomic orbitals: 1s, 2s, 2px, 2py, and 2pz. The colors show the wave function phase. These are graphs of ψ(x, y, z) functions which depend on the coordinates of one electron. To see the elongated shape of ψ(x, y, z)2 functions that show probability density more directly, see the graphs of d-orbitals below.

An atomic orbital is a mathematical function that describes the wave-like behavior of either one electron or a pair of electrons in an atom.[1] This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. The term may also refer to the physical region or space where the electron can be calculated to be present, as defined by the particular mathematical form of the orbital.[2]

Each orbital in an atom is characterized by a unique set of values of the three quantum numbers n, ℓ, and m, which correspond to the electron's energy, angular momentum, and an angular momentum vector component, respectively. Any orbital can be occupied by a maximum of two electrons, each with its own spin quantum number. The simple names s orbital, p orbital, d orbital and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2 and 3 respectively. These names, together with the value of n, are used to describe the electron configurations of atoms. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically, omitting j (g, h, i, k, ...).[3][4][5]

Electron properties

Wave-like properties:
1. The electrons do not orbit the nucleus in the sense of a planet orbiting the sun, but instead exist as standing waves. The lowest possible energy an electron can take is therefore analogous to the fundamental frequency of a wave on a string. Higher energy states are then similar to harmonics of the fundamental frequency.

Particle-like properties:
1. There is always an integer number of electrons orbiting the nucleus.
2. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon.
3. The electrons retain particle-like properties such as: each wave state has the same electrical charge as the electron particle. Each wave state has a single discrete spin (spin up or spin down).

Thus, despite the obvious analogy to planets revolving around the Sun, electrons cannot be described simply as solid particles. In addition, atomic orbitals do not closely resemble a planet's elliptical path in ordinary atoms. A more accurate analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the atomic nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when a single electron is present in an atom. When more electrons are added to a single atom, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection (sometimes termed the atom's "electron cloud"[6]) tends toward a generally spherical zone of probability describing where the atom's electrons will be found.

Formal quantum mechanical definition

Atomic orbitals may be defined more precisely in formal quantum mechanical language.
Specifically, in quantum mechanics, the state of an atom, i.e. an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single electron wave functions.[7] (The London dispersion force, for example, depends on the correlations of the motion of the electrons.)

Fundamentally, an atomic orbital is a one-electron wave function, even though most electrons do not exist in one-electron atoms, and so the one-electron view is an approximation. When thinking about orbitals, we are often given an orbital vision which (even if it is not spelled out) is heavily influenced by this Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory.

Types of orbitals

False-color density images of some hydrogen-like atomic orbitals (f orbitals and higher are not shown)

1. The hydrogen-like atomic orbitals are derived from the exact solution of the Schrödinger Equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on the distance from the nucleus has nodes (radial nodes) and decays as e−(constant × distance).
2. The Slater-type orbital (STO) is a form without radial nodes but decays from the nucleus as does a hydrogen-like orbital.
3. The form of the Gaussian type orbital (Gaussians) has no radial nodes and decays as e−(distance squared).

Main article: Atomic theory

The term "orbital" was coined by Robert Mulliken in 1932 as an abbreviation for one-electron orbital wave function.[8] However, the idea that electrons might revolve around a compact nucleus with definite angular momentum was convincingly argued at least 19 years earlier by Niels Bohr,[9] and the Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electronic behavior as early as 1904.[10] Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics.[11]

Early models

With J.J. Thomson's discovery of the electron in 1897,[12] it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolved in orbit-like rings within a positively charged jelly-like substance,[13] and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure. Shortly after Thomson's discovery, Hantaro Nagaoka, a Japanese physicist, predicted a different model for electronic structure.[10] Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings.
Few people took notice of Nagaoka's work at the time,[14] and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation.[15] Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries. Bohr atom In 1909, Ernest Rutherford discovered that bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. Shortly after, in 1913, Rutherford's post-doctoral student Niels Bohr proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were only permitted to have discrete values of angular momentum, quantized in units h/2π.[9] This constraint automatically permitted only certain values of electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines. The Rutherford–Bohr model of the hydrogen atom. With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of hydrogen-like atom, a Bohr electron "wavelength" could be seen to be a function of its momentum, and thus a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength (this physically incorrect Bohr model is still often taught to beginning students). The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed. Modern conceptions and connections to the Heisenberg Uncertainty Principle Immediately after Heisenberg discovered his uncertainty relation,[16] it was noted by Bohr that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself.[17] In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy. Thus, the binding energy to contain or trap a particle in a smaller region of space, increases without bound, as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require an infinite particle momentum. 
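As a rough numerical illustration of this scaling (a sketch added here for concreteness, not part of the original article; the constants and the function name are mine), the following Python snippet estimates the kinetic energy implied by confining an electron to a region of width Δx, using the uncertainty-principle estimate p ≈ ħ/Δx:

HBAR = 1.054571817e-34   # reduced Planck constant (J s)
M_E = 9.1093837015e-31   # electron mass (kg)
EV = 1.602176634e-19     # joules per electron volt

def confinement_energy_eV(width_m):
    """Rough kinetic energy (eV) of an electron localized to a region of size width_m (metres)."""
    p = HBAR / width_m           # momentum spread implied by Delta-x * Delta-p ~ hbar
    return p ** 2 / (2.0 * M_E) / EV

for width in (1e-10, 1e-12, 1e-15):   # atomic, picometre, and nuclear length scales
    print(f"{width:.0e} m  ->  {confinement_energy_eV(width):.3g} eV")

At atomic sizes (~10^−10 m) this gives energies of a few electron volts, while squeezing the electron toward nuclear dimensions drives the estimate up without bound, as described above (the non-relativistic formula is only indicative at the smallest scales).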
In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere[citation needed] in a three dimensional atom and was pictured as the mean energy of the probability cloud of the electron's wave packet which surrounded the atom.

Orbital names

Orbitals are given names in the form X type^y, where X is the energy level corresponding to the principal quantum number n, type is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular quantum number ℓ, and y is the number of electrons in that orbital. For example, the orbital 1s2 (pronounced "one ess two") has two electrons, is the lowest energy level (n = 1), and has an angular quantum number of ℓ = 0. In X-ray notation, the principal quantum number is given a letter associated with it. For n = 1, 2, 3, 4, 5, …, the letters associated with those numbers are K, L, M, N, O, ... respectively.

Hydrogen-like orbitals

Main article: Hydrogen-like atom

Quantum numbers

Main article: Quantum number

Complex orbitals

The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer n0, ℓ ranges across all (integer) values satisfying the relation 0 ≤ ℓ ≤ n0 − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. The set of orbitals associated with a particular value of ℓ are sometimes collectively called a subshell.

The magnetic quantum number, mℓ, describes the magnetic moment of an electron in an arbitrary direction, and is also always an integer. Within a subshell where ℓ is some integer ℓ0, mℓ ranges thus: −ℓ0 ≤ mℓ ≤ ℓ0.

The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of mℓ available in that subshell; empty cells represent subshells that do not exist. For the first two shells:

n = 1: ℓ = 0 → mℓ = 0
n = 2: ℓ = 0 → mℓ = 0; ℓ = 1 → mℓ = −1, 0, 1

Subshells are usually identified by their n- and ℓ-values. n is represented by its numerical value, but ℓ is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 and ℓ = 0 as a '2s subshell'.

Each electron also has a spin quantum number, s, which describes the spin of each electron (spin up or spin down). The number s can be +1/2 or −1/2. The Pauli exclusion principle states that no two electrons can occupy the same quantum state: every electron in an atom must have a unique combination of quantum numbers.

Real orbitals

An atom that is embedded in a crystalline solid feels multiple preferred axes, but no preferred direction. Instead of building atomic orbitals out of the product of radial functions and a single spherical harmonic, linear combinations of spherical harmonics are typically used, designed so that the imaginary parts of the spherical harmonics cancel out. These real orbitals are the building blocks most commonly shown in orbital visualizations. In the real hydrogen-like orbitals, for example, n and ℓ have the same interpretation and significance as their complex counterparts, but m is no longer a good quantum number (though its absolute value is). The orbitals are given new names based on their shape with respect to a standardized Cartesian basis.
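As a quick numerical check of the cancellation just described (an illustrative sketch, not part of the original article; the combinations it uses are the ones written out below), the following Python/NumPy snippet builds the ℓ = 1 complex spherical harmonics from their standard closed forms (Condon–Shortley phase) and verifies that the px- and py-type combinations are purely real:

import numpy as np

def Y1(m, theta, phi):
    """Complex spherical harmonics for l = 1 (Condon-Shortley phase convention)."""
    if m == 1:
        return -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(1j * phi)
    if m == 0:
        return np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(theta)
    if m == -1:
        return np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(-1j * phi)
    raise ValueError("m must be -1, 0 or 1")

theta = np.random.uniform(0.0, np.pi, 5)      # random sample directions
phi = np.random.uniform(0.0, 2 * np.pi, 5)

p_x = (Y1(-1, theta, phi) - Y1(+1, theta, phi)) / np.sqrt(2.0)
p_y = 1j * (Y1(-1, theta, phi) + Y1(+1, theta, phi)) / np.sqrt(2.0)

# The imaginary parts cancel, leaving real angular functions proportional
# to sin(theta)*cos(phi) and sin(theta)*sin(phi).
print(np.max(np.abs(p_x.imag)), np.max(np.abs(p_y.imag)))   # both ~0

Up to floating-point rounding, the printed imaginary parts are zero, which is exactly the cancellation the real-orbital construction is designed to achieve.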
The real hydrogen-like p orbitals are given by the following[19][20][21]:

p_z = p_0
p_x = (1/√2)(p_−1 − p_+1)
p_y = (i/√2)(p_−1 + p_+1)

where p_0 = R_n1 Y_1^0, p_+1 = R_n1 Y_1^1, and p_−1 = R_n1 Y_1^−1 are the complex orbitals corresponding to ℓ = 1.

Shapes of orbitals

Cross-section of computed hydrogen atom orbital (ψ(r, θ, φ)2) for the 6s (n = 6, ℓ = 0, m = 0) orbital. Note that s orbitals, though spherically symmetrical, have radially placed wave-nodes for n > 1. However, only s orbitals invariably have a center anti-node; the other types never do.

The lobes can be viewed as interference patterns between the two counter-rotating "m" and "−m" modes, with the projection of the orbital onto the xy plane having m resonant wavelengths around the circumference. For each m there are two of these, ⟨m⟩ + ⟨−m⟩ and ⟨m⟩ − ⟨−m⟩. For the case where m = 0 the orbital is vertical, counter-rotating information is unknown, and the orbital is z-axis symmetric. For the case where ℓ = 0 there are no counter-rotating modes. There are only radial modes and the shape is spherically symmetric. For any given n, the smaller ℓ is, the more radial nodes there are. Loosely speaking n is energy, ℓ is analogous to eccentricity, and m is orientation.

Generally speaking, the number n determines the size and energy of the orbital for a given nucleus: as n increases, the size of the orbital increases. However, in comparing different elements, the higher nuclear charge Z of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the overall size of the whole atom remains very roughly constant, even as the number of electrons in heavier elements (higher Z) increases.

The five d orbitals in ψ(x, y, z)2 form, with a combination diagram showing how they fit together to fill space around an atomic nucleus. Four of the five d-orbitals for n = 3 look similar, each with four pear-shaped lobes, each lobe tangent at right angles to two others, and the centers of all four lying in one plane. Three of these planes are the xy-, xz-, and yz-planes—the lobes are between the pairs of primary axes—and the fourth has the centres along the x and y axes themselves. The fifth and final d-orbital consists of three regions of high probability density: a torus with two pear-shaped regions placed symmetrically on its z axis. The overall total of 18 directional lobes point in every primary axis direction and between every pair.

Orbitals table

Images of the real orbitals s; pz, px, py; dz2, dxz, dyz, dxy, dx2−y2; and fz3, fxz2, fyz2, fxyz, fz(x2−y2), fx(x2−3y2), fy(3x2−y2), for the shells n = 1 up through n = 6 and beyond (image grid not reproduced here).

Qualitative understanding of shapes

Below, a number of drum membrane vibration modes are shown. The analogous wave functions of the hydrogen atom are indicated. A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system ψ(r, θ) and the wave functions for a vibrating sphere are three-coordinate ψ(r, θ, φ).
Drum-membrane vibration modes analogous to the s-type, p-type, and d-type orbitals (images not reproduced here).

Orbital energy

Main article: Electron shell

In atoms with a single electron (hydrogen-like atoms), the energy of an orbital (and, consequently, of any electrons in the orbital) is determined exclusively by n. The n = 1 orbital has the lowest possible energy in the atom. Each successively higher value of n has a higher level of energy, but the difference decreases as n increases. For high n, the level of energy becomes so high that the electron can easily escape from the atom. In single electron atoms, all levels with different ℓ within a given n are (to a good approximation) degenerate, and have the same energy. This approximation is broken to a slight extent by the effect of the magnetic field of the nucleus, and by quantum electrodynamics effects. The latter induce tiny binding energy differences especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift.

In atoms with multiple electrons, the energy of an electron depends not only on the intrinsic properties of its orbital, but also on its interactions with the other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on n but also on ℓ. Higher values of ℓ are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When ℓ = 2, the increase in energy of the orbital becomes so large as to push the energy of the orbital above the energy of the s-orbital in the next higher shell; when ℓ = 3 the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled.

The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms of higher atomic number, the ℓ of electrons becomes more and more of a determining factor in their energy, and the principal quantum number n of electrons becomes less and less important in their energy placement.

The energy sequence of the first 24 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with n and ℓ given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below.

        s    p    d    f    g
n = 1:  1
n = 2:  2    3
n = 3:  4    5    7
n = 4:  6    8   10   13
n = 5:  9   11   14   17   21
n = 6: 12   15   18   22
n = 7: 16   19   23
n = 8: 20   24

Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could exist, but which do not hold electrons in any element currently known.

Electron placement and the periodic table

Atomic orbitals and periodic table construction:

2s | 2p 2p 2p
3s | 3p 3p 3p
4s | 3d 3d 3d 3d 3d | 4p 4p 4p
5s | 4d 4d 4d 4d 4d | 5p 5p 5p
6s | (4f) | 5d 5d 5d 5d 5d | 6p 6p 6p
7s | (5f) | 6d 6d 6d 6d 6d | 7p 7p 7p

Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element are also dependent upon additional details of the atoms (see Electron configuration#Atoms: Aufbau principle and Madelung rule).
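The ordering shown above can also be generated directly from the Madelung (n + ℓ) rule. The short Python sketch below is an illustration added here (the helper name and letter list are mine, and the bare rule ignores the real-world exceptions just mentioned): subshells are sorted by increasing n + ℓ, with ties broken by smaller n.

SUBSHELL_LETTERS = "spdfghik"   # letters for l = 0, 1, 2, ... (the letter j is skipped)

def madelung_order(n_max):
    """Return subshell labels sorted by (n + l), ties broken by smaller n."""
    subshells = [(n, l) for n in range(1, n_max + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{SUBSHELL_LETTERS[l]}" for n, l in subshells]

print(madelung_order(8)[:24])
# ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '5s', '4d', '5p', '6s',
#  '4f', '5d', '6p', '7s', '5f', '6d', '7p', '8s', '5g', '6f', '7d', '8p']

This reproduces positions 1 through 24 in the table above (1s = 1, …, 8p = 24).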
Relativistic effects

Examples of significant physical outcomes of relativistic effects include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium (which results from narrowing of the 6s to 5d transition energy to the point that visible light begins to be absorbed).[26]

In the Bohr Model, an n = 1 electron has a velocity given by v = Zαc, where Z is the atomic number, α is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wavefunction of the electron for atoms with Z > 137 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy)[citation needed]. However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and the very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than Z. The critical Z value which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. See Extension of the periodic table beyond the seventh period.

There are no nodes in relativistic orbital densities, although individual components of the wavefunction will have nodes.[27]

Transitions between orbitals

Bound quantum states have discrete energy levels. When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e. an electron absorbing or emitting a photon) can thus only happen if the photon has an energy corresponding with the exact energy difference between said states. Consider two states of the hydrogen atom:

State 1) n = 1, ℓ = 0, m = 0 and s = +1/2
State 2) n = 2, ℓ = 0, m = 0 and s = +1/2

See also

3. ^ Griffiths, David (1995). Introduction to Quantum Mechanics. Prentice Hall. pp. 190–191. ISBN 0-13-124405-1.
4. ^ Levine, Ira (2000). Quantum Chemistry (5 ed.). Prentice Hall. pp. 144–145. ISBN 0-13-685512-1.
5. ^ Laidler, Keith J.; Meiser, John H. (1982). Physical Chemistry. Benjamin/Cummings. p. 488. ISBN 0-8053-5682-7.
6. ^ Feynman, Richard; Leighton, Robert B.; Sands, Matthew (2006). The Feynman Lectures on Physics - The Definitive Edition, Vol. 1, lect. 6. Pearson PLC, Addison Wesley. p. 11. ISBN 0-8053-9046-4.
7. ^ Roger Penrose, The Road to Reality.
8. ^ Mulliken, Robert S. (July 1932). "Electronic Structures of Polyatomic Molecules and Valence. II. General Considerations". Physical Review 41 (1): 49–71. Bibcode:1932PhRv...41...49M. doi:10.1103/PhysRev.41.49.
9. ^ a b Bohr, Niels (1913). "On the Constitution of Atoms and Molecules". Philosophical Magazine 26 (1): 476. Bibcode:1914Natur..93..268N. doi:10.1038/093268a0.
10. ^ a b Nagaoka, Hantaro (May 1904). "Kinetics of a System of Particles illustrating the Line and the Band Spectrum and the Phenomena of Radioactivity". Philosophical Magazine 7 (41): 445–455. doi:10.1080/14786440409463141.
Philosophical Magazine 7 (41): 445–455. doi:10.1080/14786440409463141.  11. ^ Bryson, Bill (2003). A Short History of Nearly Everything. Broadway Books. pp. 141–143. ISBN 0-7679-0818-X.  12. ^ Thomson, J. J. (1897). "Cathode rays". Philosophical Magazine 44 (269): 293. doi:10.1080/14786449708621070.  13. ^ Thomson, J. J. (1904). "On the Structure of the Atom: an Investigation of the Stability and Periods of Oscillation of a number of Corpuscles arranged at equal intervals around the Circumference of a Circle; with Application of the Results to the Theory of Atomic Structure" (extract of paper). Philosophical Magazine Series 6 7 (39): 237. doi:10.1080/14786440409463107.  16. ^ Heisenberg, W. (March 1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". Zeitschrift für Physik A 43 (3–4): 172–198. Bibcode:1927ZPhy...43..172H. doi:10.1007/BF01397280.  17. ^ Bohr, Niels (April 1928). "The Quantum Postulate and the Recent Development of Atomic Theory". Nature 121 (3050): 580–590. Bibcode:1928Natur.121..580B. doi:10.1038/121580a0.  18. ^ Gerlach, W.; Stern, O. (1922). "Das magnetische Moment des Silberatoms". Zeitschrift für Physik 9: 353–355. Bibcode:1922ZPhy....9..353G. doi:10.1007/BF01326984.  19. ^ Levine, Ira (2000). Quantum Chemistry. Upper Saddle River, NJ: Prentice-Hall. p. 148. ISBN 0-13-685512-1.  20. ^ C.D.H. Chisholm (1976). Group theoretical techniques in quantum chemistry. New York: Academic Press. ISBN 0-12-172950-8. 21. ^ Blanco, Miguel A.; Flórez, M.; Bermejo, M. (December 1997). "Evaluation of the rotation matrices in the basis of real spherical harmonics". Journal of Molecular Structure: THEOCHEM 419 (1-3): 19–27. doi:10.1016/S0166-1280(97)00185-1.  22. ^ Powell, Richard E. (1968). "The five equivalent d orbitals". Journal of Chemical Education 45 (1): 45. Bibcode:1968JChEd..45...45P. doi:10.1021/ed045p45.  23. ^ Kimball, George E. (1940). "Directed Valence". The Journal of Chemical Physics 8 (2): 188. Bibcode:1940JChPh...8..188K. doi:10.1063/1.1750628.  24. ^ Cazenave, Lions, T., P.; Lions, P. L. (1982). "Orbital stability of standing waves for some nonlinear Schrödinger equations". Communications in Mathematical Physics 85 (4): 549–561. Bibcode:1982CMaPh..85..549C. doi:10.1007/BF01403504.  25. ^ Bohr, Niels (1923). "Über die Anwendung der Quantumtheorie auf den Atombau. I". Zeitschrift für Physik 13: 117. Bibcode:1923ZPhy...13..117B. doi:10.1007/BF01328209.  26. ^ Lower, Stephen. "Primer on Quantum Theory of the Atom".  27. ^ Szabo, Attila (1969). "Contour diagrams for relativistic orbitals". Journal of Chemical Education 46 (10): 678. Bibcode:1969JChEd..46..678S. doi:10.1021/ed046p678.  Further reading External links
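As a quick numerical illustration of the Bohr-model estimate v = Zαc quoted in the relativistic-effects section above (a rough sketch; the value of α and the element examples are chosen for demonstration and are not from the article):

```python
# Bohr-model speed of a 1s electron: v = Z * alpha * c.
ALPHA = 1.0 / 137.035999  # fine-structure constant (approximate)

for Z, name in [(1, "hydrogen"), (79, "gold"), (137, "'feynmanium'")]:
    v_over_c = Z * ALPHA
    print(f"Z = {Z:3d} ({name}): v/c = {v_over_c:.3f}")
# Z = 137 gives v/c very close to 1, which is the naive argument, discussed
# above, for why Z ~ 137 was once thought to limit the periodic table.
```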
Qualitative Behaviour and Controllability of Partial Differential Equations / Comportement qualitatif et controlabilité des EDP (Org: Holger Teismann, Acadia University)

DAVID AMUNDSEN, Carleton University
Resonant Solutions of the Forced KdV Equation

The forced Korteweg-de Vries (fKdV) equation provides a canonical model for the evolution of weakly nonlinear dispersive waves in the presence of additional effects such as external forcing or variable topography. While the symmetries and integrability of the underlying KdV structure facilitate extensive analysis, in this generalized setting such favourable properties no longer hold. Through physical and numerical experimentation it is known that a rich family of resonant steady solutions exists, yet qualitative analytic insight into them is limited. Based on hierarchical perturbative and matched asymptotic approaches we present a formal mathematical framework for the construction of solutions in the small dispersion limit. In this way we obtain not only accurate analytic representations but also important a priori insight into the response of the system as it is detuned away from resonance. Specific examples and comparisons in the case of a fundamental periodic resonant mode will be presented. Joint work with M. P. Mortell (UC Cork) and E. A. Cox (UC Dublin).

SEAN BOHUN, Penn State
The Wigner-Poisson System with an External Coulomb Field

This system of equations describes the time evolution of the quantum mechanical behaviour of a large ensemble of particles in a vacuum where the long range interactions between the particles can be taken into account. The model also facilitates the introduction of external classical effects. As tunneling effects become more pronounced in semiconductor devices, models which are able to bridge the gap between the quantum behaviour and external classical effects become increasingly relevant. The WP system is such a model. Local existence is shown by a contraction mapping argument which is then extended to a global result using macroscopic control (conservation of probability and energy). Asymptotic behaviour of the WP system and the underlying SP system is established with a priori estimates on the spatial moments. Finally, conditions on the energy are given which (a) ensure that the solutions decay and (b) ensure that the solutions do not decay.

SHAOHUA CHEN, University College of Cape Breton
Boundedness and Blowup for the Solution of an Activator-Inhibitor Model

We consider a general activator-inhibitor model

  u_t = ε∆u − µu + u^p / v^q
  v_t = D∆v − νv + u^r / v^s

with Neumann boundary conditions, where rq > (p−1)(s+1). We show that if r > p−1 then the solutions exist for a long time for all initial values, and if r > p−1 and q < s+1 then the solutions are bounded for all initial values. However, if r < p−1 then, for some special initial values, the solutions will blow up.

STEPHEN GUSTAFSON, University of British Columbia, Mathematics Department, 1984 Mathematics Rd., Vancouver, BC V6T 1Z2
Scattering for the Gross-Pitaevskii Equation

The Gross-Pitaevskii equation, a nonlinear Schroedinger equation with non-zero boundary conditions, models superfluids and Bose-Einstein condensates. Recent mathematical work has focused on the finite-time dynamics of vortex solutions, and the existence of vortex-pair traveling waves. However, little seems to be known about the long-time behaviour (e.g. scattering theory, and the asymptotic stability of vortices).
We address the simplest such problem, scattering around the vacuum state, which is already tricky due to the non-self-adjointness of the linearized operator, and the "long-range" nonlinearity. In particular, our present methods are limited to higher dimensions. This is joint work in progress with K. Nakanishi and T.-P. Tsai.

HORST LANGE, Universitaet Köln, Weyertal 86-90, 50931 Köln, Germany
Noncontrollability of the nonlinear Hartree-Schrödinger and Gross-Pitaevskii-Schrödinger equations

We consider the bilinear control problem for the nonlinear Hartree-Schrödinger equation [HS] (which plays a prominent role in quantum chemistry), and for the Gross-Pitaevskii-Schrödinger equation [GPS] (of the theory of Bose-Einstein condensates); for both systems we study the case of a bilinear control term involving the position operator or the momentum operator. A target state u_T ∈ L²(R³) is said to be reachable from an initial state u_0 ∈ L²(R³) in time T > 0 if there exists a control such that the system allows a solution state u(t,x) with u(0,x) = u_0(x), u(T,x) = u_T(x). We prove that, for any T > 0 and any initial datum u_0 ∈ L²(R³) \ {0}, the set of non-reachable target states (in time T > 0) is relatively L²-dense in the sphere {u ∈ L²(R³) : ||u||_L² = ||u_0||_L²} (for both [HS] and [GPS]). The proof uses the Fourier transform, estimates for Riesz potentials for [HS], and estimates for the Schrödinger group associated with the Hamiltonian −∆ + x² for [GPS].

HAILIANG LI, Department of Pure and Applied Mathematics, Osaka University, Japan
On Well-posedness and Asymptotics of Multi-dimensional Quantum Hydrodynamics

In the modelling of semiconductor devices at the nano-scale, for instance MOSFETs and RTDs, where quantum effects (like particle tunnelling through potential barriers and build-up in quantum wells) take place, the quantum hydrodynamical equations are important and dominant in the description of the motion of electron or hole transport under the self-consistent electric field. These quantum hydrodynamic equations consist of conservation laws of mass, balance laws of momentum forced by an additional nonlinear dispersion (caused by the quantum (Bohm) potential), and the self-consistent electric field. In this talk, we shall review the recent progress on the multi-dimensional quantum hydrodynamic equations, including the mathematical modelings based on the moment method applied to the Wigner-Boltzmann equation, rigorous analysis of the well-posedness for general, nonconvex pressure-density relations and regular large initial data, long time stability of steady states under a quantum subsonic condition, and the global-in-time relaxation limit from the quantum hydrodynamic equations to the quantum drift-diffusion equations, and so on. Joint with A. Jüngel, P. Marcati, and A. Matsumura.

DONG LIANG, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3
Analysis of the S-FDTD Method for Three-Dimensional Maxwell Equations

The finite-difference time-domain (FDTD) method for Maxwell's equations, first introduced by Yee, is a very popular numerical algorithm in computational electromagnetics. However, the traditional FDTD scheme is only conditionally stable. The computation of three-dimensional problems by the scheme will need much more computer memory or become extremely difficult when the size of the spatial steps becomes very small. Recently, there has been considerable interest in developing efficient schemes for these problems.
In this talk, we will present a new splitting finite-difference time-domain scheme (S-FDTD) for the general three-dimensional Maxwell's equations. Unconditional stability and convergence are proved for the scheme by using the energy method. The technique of reducing perturbation error is further used to derive a high order scheme. Numerical results are given to illustrate the performance of the methods. This research is joint work with L. P. Gao and B. Zhang.

KIRSTEN MORRIS, University of Waterloo
Controller Design for Partial Differential Equations

Many controller design problems of practical interest involve systems modelled by partial differential equations. Typically a numerical approximation is used at some stage in controller design. However, not every scheme that is suitable for simulation is suitable for controller design. Misleading results may be obtained if care is not taken in selecting a scheme. Sufficient conditions for a scheme to be suitable for linear quadratic or H∞ controller design have been obtained. Once a scheme is chosen, the resulting approximation will in general be a large system of ordinary differential equations. Standard control algorithms are only suitable for systems with model order less than 100, and special techniques are required.

KEITH PROMISLOW, Michigan State University
Nonlocal Models of Membrane Hydration in PEM Fuel Cells

Polymer electrolyte membrane (PEM) fuel cells are unique energy conversion devices, efficiently generating useful electric voltage from chemical reactants without combustion. They have recently captured public attention for automotive applications, for which they promise high performance without the pollutants associated with combustion. From a mathematical point of view the device is governed by coupled systems of elliptic, parabolic, and degenerate parabolic equations describing the heat, mass, and ion transport through porous media and polymer electrolyte membranes. This talk will describe the overall functionality of the PEM fuel cell, presenting analysis of the slow, nonlocal propagation of hydration fronts within the polymer electrolyte membrane.

TAI-PENG TSAI, University of British Columbia, Vancouver
Boundary regularity criteria for suitable weak solutions of Navier-Stokes equations

I will present some new regularity criteria for suitable weak solutions of the Navier-Stokes equations near the boundary in space dimension 3. Partial regularity is also analyzed. This is joint work with Stephen Gustafson and Kyungkeun Kang.
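The S-FDTD abstract above builds on Yee's classic leapfrog FDTD scheme, which is only conditionally stable. As a rough illustration of that baseline scheme (a minimal one-dimensional Python sketch, not the splitting method from the talk; the grid size, Courant number, and source are arbitrary demonstration choices):

```python
import math

# Minimal 1-D Yee/FDTD leapfrog update in normalized units (c = 1, dx = 1).
# Ez lives on integer grid points, Hy on the staggered half-integer points.
NX = 200        # number of spatial cells (arbitrary)
STEPS = 300     # number of time steps (arbitrary)
COURANT = 0.5   # dt/dx; the explicit Yee scheme in 1-D is stable only for dt/dx <= 1

ez = [0.0] * NX
hy = [0.0] * NX

for n in range(STEPS):
    # half-step update of the magnetic field
    for i in range(NX - 1):
        hy[i] += COURANT * (ez[i + 1] - ez[i])
    # full-step update of the electric field
    for i in range(1, NX):
        ez[i] += COURANT * (hy[i] - hy[i - 1])
    # soft Gaussian source injected at the centre of the grid
    ez[NX // 2] += math.exp(-((n - 40) ** 2) / 100.0)

print("peak |Ez| on the grid:", max(abs(v) for v in ez))
```

Raising COURANT above 1 makes the field values blow up, which is the conditional-stability limitation that unconditionally stable splitting schemes such as S-FDTD aim to remove.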
MaplePrimes Announcement Reporting from Amsterdam, it's a pleasure to report on day one of the 2014 Maple T.A. User Summit. Being our first Maple T.A. User Summit, we wanted to ensure attendees were not only given the opportunity to sit-in on key note presentations from various university or college professors, high school teachers and Maplesoft staff, but to also engage in active discussions with each other on how they have implemented Maple T.A. at their institution. We started things off by hearing an encouraging talk by Maplesoft’s president and CEO Jim Cooper. Jim started things off with a question to get everyone thinking; “How will someone born today be educated in the 2030s?” From there, we heard about Maplesoft’s vision on education, learning, and questions we have to ask ourselves today to be prepared for the future. Up next was Louise Krmpotic, Director of Business development. Louise discussed content and Maple T.A. This included an overview of our content team operations, what content is currently available today, and how users can engage themselves in the Maple T.A. community and get involved in sharing their own content with other users. We then heard our first keynote presentation by Professor Steve Furino and Rachael Vanbruggen of the University of Waterloo.  We were provided with a brief history of the University of Waterloo and mathematics as well as their ever expanding initiative in brining math courses into an online environment both at the university level and high school level. We then heard in detail of how Maplesoft technologies have been implemented in various math courses and the successes and challenges of creating their own content. I (Jonny Zivku, Product Manager of Maple T.A.) then delivered a presentation on all the new features in Maple T.A. 10. I won’t get into detail about the new features in this post, but if you’d like to read more about it, check out my previous post from a few weeks ago. Meta Keijzer-de Ruijter of TU Delft University then took the floor and delivered our second keynote presentation. She discussed the history of Delft as well as the new initiative, the Delft Extension School. She then went on to discuss Delft’s experiences with implementing Maple T.A. at their campus and maintaining it since 2007 as well as how they’ve managed to maintain their academic integrity while using online tools. We also had the opportunity of seeing several examples of some of their excellent questions they’ve created which included adaptive, math apps, algorithms, maple-graded and more. After a delicious lunch break, Paul DeMarco from Maplesoft, Director of Maple and Maple T.A. Development, talked about the future of testing and assessment. Paul went over various topics and how we envision them changing which included partial marks, skills assessment, learning, feedback, and content. Jonathan Kress from the University of New South Wales was up next and discussed their experiences with implementing Maple T.A. into their mathematics and statistics courses at a first year, second year, and higher level of learning. He then discussed the various scenarios for how Maple T.A. is deployed which included both formative and summative testing. Moving on, we then were briefed on Maple T.A. use from a student's perspective and an overview of various pieces of content. We then moved on to an engaging panel discussion which featured Grahame Smart, math and e-learning consultant, Professor Marina Marchisio of the University of Turin and Dr. 
Alice Barana also from the University of Turin. Grahame first started things off by discussing how he doubled the pass rates in his prevoius high school using investigative and interactive learning with Maple T.A. Marina and Dr. Barana then gave us a brief overview of Maple T.A. at the University of Turin and their exciting PP&S project. The panel then answered various questions from the audience William Rybolt of Babson College then closed off the presentations with a discussion about how his school has been a long time user of EDU, Maple T.A.’s predecessor.  Going from ungraded web pages, web forms, and Excel, we heard about Babson’s attempts at converting paper-based assignments into an online format until 2003 when they decided to adopt EDU. To end the day, we enjoyed a nice cruise on the canals of Amsterdam while enjoying a delicious three course meal. Not a bad way to end the day! Maplesoft Product Manager, Maple T.A. Featured Posts Last week the Physics package was presented in a talk at the Perimeter Institute for Theoretical Physics and in a combined Applied Mathematics and Physics Seminar at the University of Waterloo. The presentation at the Perimeter Institute got recorded. It was a nice opportunity to surprise people with the recent advances in the package. It follows the presentation with sections closed, and at the end there is a link to a pdf with the sections open and to the related worksheet, used to run the computations in real time during the presentation. Generally speaking, physicists still experience that computing with paper and pencil is in most cases simpler than computing on a Computer Algebra worksheet. On the other hand, recent developments in the Maple system implemented most of the mathematical objects and mathematics used in theoretical physics computations, and dramatically approximated the notation used in the computer to the one used in paper and pencil, diminishing the learning gap and computer-syntax distraction to a strict minimum. In connection, in this talk the Physics project at Maplesoft is presented and the resulting Physics package illustrated tackling problems in classical and quantum mechanics, general relativity and field theory. In addition to the 10 a.m lecture, there will be a hands-on workshop at 1pm in the Alice Room. ... Why computers? 
We can concentrate more on the ideas instead of on the algebraic manipulations
We can extend results with ease
We can explore the mathematics surrounding a problem
We can share results in a reproducible way

Representation issues that were preventing the use of computer algebra in Physics

Notation and related mathematical methods that were missing:
coordinate-free representations for vectors and vectorial differential operators
covariant tensors distinguished from contravariant tensors
functional differentiation, relativity differential operators and the sum rule for tensor contracted (repeated) indices
Bras, Kets, projectors and all related to Dirac's notation in Quantum Mechanics

Inert representations of operations, mathematical functions, and related typesetting were missing:
inert versus active representations for mathematical operations
ability to move from inert to active representations of computations and vice versa as necessary
hand-like style for entering computations and textbook-like notation for displaying results

Key elements of the computational domain of theoretical physics were missing:
ability to handle products and derivatives involving commutative, anticommutative and noncommutative variables and functions
ability to perform computations taking into account custom-defined algebra rules of different kinds (problem-related commutator, anticommutator, bracket, etc. rules)

Vector and tensor notation in mechanics, electrodynamics and relativity
Dirac's notation in quantum mechanics

Computer algebra systems were not originally designed to work with this compact notation, with such dense mathematical content attached, active and inert representations of operations, a noncommutative and customizable algebraic computational domain, and the related mathematical methods, all of this typically present in computations in theoretical physics. This situation has changed. The notation and related mathematical methods are now implemented.

Tackling examples with the Physics package

Classical Mechanics

Inertia tensor for a triatomic molecule
Problem: Determine the inertia tensor of a triatomic molecule that has the form of an isosceles triangle with two masses m[1] at the extremes of the base and mass m[2] at the third vertex. The distance between the two masses m[1] is equal to a, and the height of the triangle is equal to h.

Quantum mechanics

Quantization of the energy of a particle in a magnetic field
Show that the energy of a particle in a constant magnetic field oriented along the z axis can be written as

  H = ℏ ω_c (a† a + 1/2)

where a† and a are creation and annihilation operators.

The quantum operator components of the angular momentum vector L satisfy [L_j, L_k] = i ε_{j,k,m} L_m.

Unitary Operators in Quantum Mechanics (with Pascal Szriftgiser, from Laboratoire PhLAM, Université Lille 1, France)

A linear operator U is unitary if U⁻¹ = U†, in which case U U† = U† U = 1. Unitary operators are used to change the basis inside a Hilbert space, which physically means changing the point of view of the considered problem, but not the underlying physics. Examples: translations, rotations and the parity operator.
1) Eigenvalues of a unitary operator and exponential of Hermitian operators
2) Properties of unitary operators
3) Schrödinger equation and unitary transform
4) Translation operators

Classical Field Theory

The field equations for a quantum system of identical particles
Problem: derive the field equation describing the ground state of a quantum system of identical particles (bosons), that is, the Gross-Pitaevskii equation (GPE). This equation is particularly useful to describe Bose-Einstein condensates (BEC).

The field equations for the λ Φ⁴ model

Maxwell equations departing from the 4-dimensional Action for Electrodynamics

General Relativity

Given the spacetime metric

  g[μ,ν] = diag(−exp(λ(r)), −r², −r² sin²(θ), exp(ν(r)))

a) Compute the trace of

  Z[α]^(β) = Φ R[α]^(β) + 𝒟[α] 𝒟^(β) Φ + T[α]^(β)

where Φ ≡ Φ(r) is some function of the radial coordinate, R[α]^(β) is the Ricci tensor, 𝒟[α] is the covariant derivative operator and T[α]^(β) is the stress-energy tensor

  T[α,β] = diag(8π exp(λ(r)), 8π r², 8π r² sin²(θ), 8π ε exp(ν(r)))

b) Compute the components of W[α]^(β), the traceless part of Z[α]^(β) of item a)

c) Compute an exact solution to the nonlinear system of differential equations formed by the components of W[α]^(β) obtained in b)

Background: paper from February/2013, "Withholding Potentials, Absence of Ghosts and Relationship between Minimal Dilatonic Gravity and f(R) Theories", by P. Fiziev.

c) An exact solution for the nonlinear system of differential equations formed by the components of W[α]^(β)

The Physics Project

"Physics" is a software project at Maplesoft that started in 2006. The idea is to develop a computational symbolic/numeric environment specifically for Physics, targeting educational and research needs on an equal footing, and resembling as much as possible the flexible style of computations used with paper and pencil. The main reference for the project is the Landau and Lifshitz Course of Theoretical Physics.

A first version of "Physics" with basic functionality appeared in 2007. Since then the package has been growing every year, including now, among other things, a searchable database of solutions to Einstein equations and a new dedicated programming language for Physics. Since August/2013, weekly updates of the Physics package are distributed on the web, including the new developments related to our plan as well as related to people's feedback.

Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft

After lots of hard work, vast amounts of testing, and enormous anticipation, Maple T.A. 10 is now available! Maple T.A. 10 is by far our biggest release to date - and we're not just saying that. When we compare the list of new features and improvements in Maple T.A. 10 with that of previous releases, it's clear that Maple T.A. 10 has the largest feature set and improvements to date.
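The first worked problem in the presentation outline above (the inertia tensor of the isosceles triatomic molecule) is easy to check numerically. The following Python sketch is not the Maple worksheet from the post; it assumes one concrete coordinate choice, with the two masses m1 at (±a/2, 0, 0) and m2 at (0, h, 0), and computes the tensor about the centre of mass.

```python
# Inertia tensor of an isosceles-triangle molecule about its centre of mass.
# Assumed coordinates (one possible choice): m1 at (-a/2, 0, 0) and (a/2, 0, 0), m2 at (0, h, 0).
def inertia_tensor(masses, positions):
    total = sum(masses)
    cm = [sum(m * r[k] for m, r in zip(masses, positions)) / total for k in range(3)]
    shifted = [[r[k] - cm[k] for k in range(3)] for r in positions]
    I = [[0.0] * 3 for _ in range(3)]
    for m, r in zip(masses, shifted):
        r2 = sum(x * x for x in r)
        for i in range(3):
            for j in range(3):
                I[i][j] += m * ((r2 if i == j else 0.0) - r[i] * r[j])
    return I

m1, m2, a, h = 1.0, 16.0, 2.0, 1.5   # arbitrary demonstration values
I = inertia_tensor([m1, m1, m2], [(-a / 2, 0, 0), (a / 2, 0, 0), (0, h, 0)])
for row in I:
    print(["%.4f" % x for x in row])
# For this planar geometry the tensor comes out diagonal, with
# I_xx = 2*m1*m2*h^2/(2*m1 + m2), I_yy = m1*a^2/2, and I_zz = I_xx + I_yy.
```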
Particles can also be called wave packets. There is some probability function that determines which part of the wave packet the mass of the particle is in. The tail of this probability function can extend into a separate neighboring object, during which time the particle could decide to jump to that other place and therefore reshape its probability distribution.

An example would be a scanning tunneling microscope. It has a tiny probe-tip of conducting wire mounted on a piezoelectric arm, which enables the tip to be scanned over the sample surface at an atomic distance. If a small voltage is applied across the tip and sample, some electrons will quantum tunnel from the tip across the gap to the sample, thus creating a measurable current. As the tip scans the atoms, the current changes, and a graphical representation of that change can be created.

Consider a small metal ball bearing put in a bowl. The ball bearing has an equilibrium position at the bottom of the bowl. Now if you were to push it a bit it would climb up the walls of the bowl, and fall back again, oscillate about the bottom and come to rest. If you were to push it hard enough however the ball would get out of the bowl. This is described by saying that the wall of the bowl acts as a potential barrier. The ball is in a potential well. For it to get out you must give it enough kinetic energy (push it hard enough) to get out. However for very small objects things are not so simple. If the ball had been an electron and the bowl had been a quantum bowl then the ball could have got out without having enough energy to cross the potential barrier. So it is possible for the ball to simply materialize on the other side of the wall (even when it does not have enough energy to cross it) without the wall breaking or rupturing. This is a very naive explanation of course but I hope it explains the principle behind Quantum Mechanical tunneling.

Consider a particle with energy E moving towards a potential barrier of height U0 and width a:

                 U0
              _______
             |       |
   __________|       |__________   --> x
             0       a

Using Schrödinger's (time-independent, one-dimensional) equation, we can solve for the wave function of the particle (using h for h-bar):

  −(h²/2m) d²ψ/dx² + U(x) ψ = E ψ

The potential U(x) is divided into three parts:

  U(x) = 0    for x < 0
         U0   for 0 < x < a
         0    for x > a

In order to solve for ψ, the wave function of the particle, we also divide it into three parts: ψ0 for x < 0, ψ1 for 0 < x < a, and ψ2 for x > a. Astute readers will notice at this point that the potential is the same for ψ0 and ψ2 -- these two wave functions ought, then, to look at least somewhat similar. As we shall see, they will have the same wavelength but different amplitudes.

Since U = 0 for both ψ0 and ψ2, they each take the same form as the wave function for a free particle with energy E, or:

  ψ(x) = A e^(i k0 x) + B e^(−i k0 x)    (where k0 = √(2mE/h²))

The first portion of this equation corresponds to a wave moving rightwards while the second portion corresponds to a wave moving to the left. Or they would, had we folded in time-dependence (see the note at the bottom).

In order to make our lives easier, it is necessary to think a little bit about what is actually physically happening in this system. Our particle is approaching the potential barrier from the left, moving rightwards. When it hits the potential barrier, common sense says that at least some of the time, the particle will bounce off the barrier and begin moving leftwards.
From this, we know that ψ0 contains both the leftward (reflected particle) and rightward (incident particle) portions of the wave function. As the other nodes in this writeup explain, when the particle hits the potential barrier, in addition to bouncing off some of the time, some of the time it will pass through. So we know that ψ2 has at least the rightward-moving component. But there is nothing in the experimental setup that would cause the particle to begin moving towards the left once it has passed through the potential barrier, so we can deduce that the leftward-moving component of ψ2 has an amplitude of zero.

Now, to deal with the particle while it is inside the barrier. Common sense would suggest that the particle can never actually exist within the barrier (let alone cross over it). Physically, however, we know for sure that a particle can, in certain circumstances, pass through the barrier, so common sense would suggest that if it exists on both sides of the barrier, it must also exist within the barrier. But how on earth are we supposed to observe a particle while it is inside a potential barrier? The answer is that while we can't observe the particle inside the potential barrier, the mathematical properties of the wave function suggest that it does in fact exist while it is inside the barrier.

Since the only thing that matters in physics is relative potential, we can pretend that the particle, while it is inside the potential barrier, isn't in a potential of U0, but rather simply has an energy of E − U0 = −(U0 − E) (since U0 > E). As before, the equation for this situation is the free-particle wave equation, but with the energy replaced by the negative value E − U0 ('tis a very good thing we can't physically observe the particle while it is inside the barrier, since negative energies can't exist). The wave number is therefore imaginary: it equals i·k1, where k1 = √(2m(U0 − E)/h²) is real, so inside the barrier the spatial dependence is given by real exponentials e^(±k1 x).

We now know enough to write out all three parts of the wave equation:

  ψ(x) = A e^(i k0 x) + B e^(−i k0 x)    for x < 0
         C e^(k1 x) + D e^(−k1 x)        for 0 < x < a
         E e^(i k0 x)                    for x > a

The wave function and its first derivative have to be continuous over all x ∈ R. We can use these boundary conditions to get four relationships among the constants (ψ0(0) = ψ1(0), ψ0'(0) = ψ1'(0), ψ1(a) = ψ2(a), and ψ1'(a) = ψ2'(a)). Actually solving for the constants is impossible given just these conditions (five unknowns but only four equations), but we can find the probability that the particle reflects off the barrier, and the probability that it tunnels through the barrier.

Recall that the probability function of a particle with wave function ψ is P(x) = |ψ(x)|².

Since we know that the first portion of ψ0 (with amplitude A) represents the incident particle, and the second portion (with amplitude B) represents the reflected particle, the ratio |B|²/|A|² is the fraction of the time that the incident particle will reflect off the barrier. Similarly, the ratio |E|²/|A|² is the fraction of the time that the particle will tunnel through the barrier. After a bit of extraordinarily ugly algebra (don't try this at home), we find that:

  |E|²/|A|² = 1 / ( 1 + (1/4) · U0² sinh²(k1 a) / (E (U0 − E)) )

It shouldn't be too hard to convince yourself that since the particle has to do something after hitting the barrier, the probability that it will reflect off is just 1 − |E|²/|A|².
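Here is a small numerical check of the transmission formula just derived. It is a rough sketch only, working in units where h-bar = 1 and m = 1; the barrier height, particle energy, and widths are arbitrary demonstration values.

```python
import math

HBAR = 1.0  # working in units where hbar = m = 1 (an arbitrary choice for illustration)
M = 1.0

def transmission(energy, U0, a):
    """Tunneling probability |E|^2/|A|^2 (transmitted over incident amplitude squared)
    for a particle of energy < U0 hitting a rectangular barrier of height U0, width a."""
    k1 = math.sqrt(2.0 * M * (U0 - energy)) / HBAR
    return 1.0 / (1.0 + (U0 ** 2) * math.sinh(k1 * a) ** 2 / (4.0 * energy * (U0 - energy)))

U0 = 10.0
for a in (0.5, 1.0, 2.0):
    print(a, transmission(5.0, U0, a))
# The printed probabilities fall off roughly exponentially as the width a grows,
# which is exactly the point made in the discussion that follows.
```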
This probability decreases exponentially with a (since sinh(x) = (e^x − e^(−x))/2), so the largest factor in determining tunneling probability is the width of the potential barrier. tdent notes that since the probability also depends exponentially on k1, there's a large dependence on the difference between the barrier height and the energy of the particle, but since the dependence on (U0 − E) is under a square root, this still has less of an effect than a.

Note: For time-independent potentials (∂U/∂t = 0), the time-dependent solution to the Schrödinger equation is just ψ(x)·e^(−iωt), where ψ is the time-independent wave function and ω = E/h. So, the time-dependent form of the solution ends up looking like:

  ψ(x,t) = A e^(i(k0 x − ωt)) + B e^(−i(k0 x + ωt))

As t increases then, for the first part of the function to remain constant x must increase, and for the second part to remain constant x must decrease. So the first portion of the equation represents a wave travelling towards increasing x (the right), and the second portion represents a wave travelling towards decreasing x (the left).

From personal notes, Modern Physics by Kenneth Krane, and (for the solution to |E|²/|A|²).

Potential Barriers and Quantum Tunneling - A Layman's Introduction

Note: This is a layman's introduction to quantum tunneling only. For a general introduction to quantum mechanics, please see Mauler's Layman's Guide to Quantum Mechanics.

Quantum tunneling is a concept from quantum mechanics, a branch of modern physics. The concept is explained using the following anecdote.

Suppose there is a hill, a real-world hill which you might walk up, if you were so inclined (no pun intended). Also suppose that three identical balls are rolling at different speeds towards the hill*. Due to this speed difference, each ball has a different energy of motion to the others. As the balls begin to roll up the hill, they also begin to slow down. The slowest ball does not have enough energy of motion to make it up the hill. It slows and slows, and eventually stops somewhere below the top for an instant in time, before rolling back down the hill. The second ball has enough energy to make it to the top of the hill, but no more. It comes to a stop on top of the hill. The last ball has more energy of motion than it actually needs to make it to the top of the hill. So when it makes it to the top, it still has some motion energy, and it rolls over the top, and down the other side. This is all perfectly normal behaviour for balls on hills - nothing new there.

However, scientists (more specifically quantum physicists) discovered earlier last century that when the balls are very very small, something very strange happens. In the world of the very very small, balls usually behave in the same well-known manner described in the anecdote above. However, sometimes they don't. Sometimes balls which DO have enough energy to roll right up that hill and keep going down the other side, don't make it up the hill. That's weird. Imagine taking a bowling ball, and hurling it with all your might up a gentle hill. You know it's got enough energy to go over the top, but you blink, and when you open your eyes again, the bowling ball is rolling back down the hill towards you.

What's even stranger though, is that in this world of the very very small (and it is the REAL world, inhabited by you and I), sometimes balls which DON'T have enough energy to get up the hill, still do so (and continue down the other side).
So it's like your bowling ball comes back out of the return chute, and you take it and roll it ever so gently up that same hill. You know it doesn't have enough energy to make it to the top, but then you blink, and when you open your eyes, there it is, rolling down the other side.

This puzzling behaviour has actually been observed to happen, many many times, by scientists. The phenomenon has been given the name "tunneling", for it is as if the ball (or 'particle' as we call it) digs a tunnel through that hill, to get to the other side. In such quantum experiments, scientists fire very small bullets at very small walls, and sometimes those bullets which do not have enough energy to break through the wall are observed a short time later on the other side (where it would seem they have no right to be!).

Regarding this strange behaviour, I stress that THIS IS A REAL PHENOMENON. It actually applies to everything in the universe, but the chance of it happening to something as large as an elephant, or even a baseball, or a marble, is very small indeed. So small in fact, that it will probably never be seen to happen by a human on this planet. The smaller a thing is, the greater the chance of quantum tunneling occurring to it. Things that you can see with the naked eye are far too big. The kinds of particles to which tunneling commonly occurs can only be seen with special microscopes**.

As a final point, please note that it is probably a good thing that quantum tunneling is almost never observed to happen to everyday objects. It would not be too much fun if that butchers' knife you just placed safely on the table suddenly tunneled through and found its way into the top of your foot. Of course it might tunnel through your foot as well, but.....well......if you ever see that happen, please let me know.

* In quantum physics, the hill is known as a 'potential barrier'

** The kind of microscopes necessary to see the particles to which tunneling routinely occurs are known as Scanning Tunneling Microscopes (S.T.M.). In an ironic twist, the technology which drives the S.T.M. itself relies on the principle of quantum tunneling to operate.
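Both writeups above mention the scanning tunneling microscope. As a rough illustration of why it is so sensitive, here is a sketch using the standard exponential estimate I ∝ exp(−2κd), with κ = √(2mφ)/ħ for a work function φ; the work function value and gap distances below are representative numbers chosen for demonstration, not figures from the text.

```python
import math

# Rough STM sensitivity estimate: tunneling current falls off as
# I ~ exp(-2 * kappa * d), with kappa = sqrt(2 * m * phi) / hbar.
HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

phi = 4.5 * EV                            # typical metal work function, ~4.5 eV
kappa = math.sqrt(2 * M_E * phi) / HBAR   # ~1.1e10 per metre, i.e. ~1.1 per angstrom

for d_angstrom in (4.0, 5.0, 6.0):
    d = d_angstrom * 1e-10
    rel = math.exp(-2 * kappa * d)        # current relative to the zero-gap value
    print(d_angstrom, rel)
# Each extra angstrom of gap suppresses the current by roughly an order of
# magnitude, which is why the STM can resolve individual atoms.
```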
Is There Anything Beyond Quantum Computing? 159 Responses to “Is There Anything Beyond Quantum Computing?” 1. rrtucci Says: So, did P=NP during inflation, as Lloyd proposed? 2. Scott Says: rrtucci: I hesitate to ask, but … where did he propose such a thing? Assuming P≠NP “now,” the answer to your question is of course no: if P≠NP is true at all then it’s a timeless mathematical truth. Logically, one could imagine that NP-complete problems were efficiently solvable during inflation (if anyone was around to solve them then 🙂 ) but not afterward, but I’ve never heard of such a speculation or of Seth Lloyd proposing it, nor can I think of a good basis for it. Sure, space might be expanding exponentially, but any one observer has a bounded causal patch and doesn’t get to exploit that fact—in fact, the inflation makes things much worse for such an observer, by inflating away most of the computer before it can return an answer! 3. rrtucci Says: Lloyd proposed this in his paper about the universe as a quantum computer. He called it the inflationary quantum computer. 4. Scott Says: rrtucci: Do you mean this paper? I just searched it and couldn’t find anything about inflation or P vs. NP. A link would be appreciated. 5. Mateus Araújo Says: That was a very pleasant read. As for myself, I like to hope that quantum gravity will give us some exponential speedup for some algorithms. Not via closed timelike curves which, as you have pointed out, seem to powerful to exist, but via a subtler effect: superposition of metrics. This could give rise to a causal structure that is genuinely different, and yet free of paradoxes. There has been some exploration of this idea (see, e.g., arXiv:0912.0195), but nothing yet that comes close to the holy grail of exponential speedup. But hey, one can dream! 6. Jon Lennox Says: I It’s seee vox.com‘s quoting you for a quantum computing explainer… It’s better than some press accounts, but still sort of meh. (It’s useful for scaling my “how well are they explaining things I don’t understand” heuristic, I suppose.) 7. Scott Says: Jon Lennox #6: Had similar thoughts. On the other hand, I’ve learned over the years that, if I try to calibrate how well journalists explain things I don’t work on by looking at how well they explain quantum computing, then I end up a raving conspiracy nut who doesn’t even believe local traffic reports. 😉 The more charitable view is that QC really is one of the hardest things on earth for non-scientists to get right, because not many can get over the hurdle that “the place you need to be to understand this developing story” is not a lab with anything you can see or touch, but a 2n-dimensional Hilbert space. 8. David Cash Says: Wearing my security pedant hat here: Heartbleed is not a buffer overflow. It’s actually sort of the opposite. 9. fred Says: Scott, you wrote “when computer scientists say “efficiently,” they mean something very specific: that is, that the amount of time and memory required for the computation grows like the size of the task raised to some fixed power, rather than exponentially.” Now I’m getting a bit confused about memory. When we talk about the amount of memory required, does it matter whether the memory is made of bits (classical) or qbits (QC)? The “size” is the same regardless? (N bits or N qbits… but N is the input size). An internal register in a QC is clearly made of qbits and in that sense it can store more information (in QM mode) than a regular classical memory.. 
it seems “unfair” to equate 8 bits of classical memory with 8 qbits (esp given how much more complicated it is to build 8 qbits rather than 8 bits, but that’s a practical consideration). Maybe I never thought of the exact definition of a QC, like is there a formal of a Quantum Turing machine? 10. Scott Says: David Cash #8: Thanks, I fixed the post! Is there a more general name for “the type of security flaw that keeps cropping up over and over and over because C doesn’t check that pointers to an object in memory are in bounds, and it’s nearly impossible for a human programmer to think of every possible way of exploiting that fact, and despite this known problem, people continue to write super-sensitive code in C, rather than in programming languages that make this particular kind of bug impossible”? 11. Scott Says: fred #9: Yes, there’s a formal definition of quantum Turing machine; it was given by Bernstein and Vazirani in 1993 (but today almost everyone uses uniform families of quantum circuits, which are formally equivalent to QTMs and less cumbersome to work with). And in an efficient classical algorithm, the number of bits should grow only polynomially with the input size, while in an efficient quantum algorithm, the number of qubits should likewise grow polynomially. In both cases, the number of bits or qubits roughly corresponds to the actual number of particles in the actual physical memory, so that’s why (along with the running time) it’s a relevant quantity to bound. And yes, of course we think you can do somewhat more with n qubits than you can do with n classical bits (given the same amount of time), by subtly exploiting the fact that n qubits take 2n complex amplitudes to describe, and that the amplitudes (unlike classical probabilities) can interfere. That’s why quantum computing is interesting in the first place! 12. Jerry Says: re: Scott’s Comment #7: It is the quantum leap from 2^n-dimensional Hilbert space to 3-D Euclidean space and the assertion that QM = QC that remain the hurdles to scale (analogy intended). Many of your blog followers are also scientists who are interested in the science hidden in QM, but you patronize those of us who don’t agree with you. If, ten years from now, it is demonstrated that QC is a myth, you are unlikely to change your field to medieval architecture. No, you will use that fact to advance computer science and complexity theory in the direction that truth takes us. 13. fred Says: That “Shakespeare paradox” is a good one. We can replace Shakespeare writing a play with a search problem, and we go back in time to supply the solution index. We’ve done the search without doing any work. “Somehow Shakespeare’s plays pop into existence without anyone going to the trouble to write them!” But the play still has to be a valid “solution” in the sense that Shakespeare has to accept it, the only stable loops are the ones where Shakespeare wouldn’t scream “That’s horse shite! no way I’m putting my name on this!”, i.e. Shakespeare brain acts as a polynomial checker for the play. But the problem is not about the play popping into existence, the problem is that if we imagine that every play Shakespeare ever wrote has been fed to him there’s an issue – Shakespeare brain itself has to pop into existence: the act of actually writing the play is modifying Shakespeare’s brain in a way that proofing a play doesn’t. So in the end the whole concept of a Shakespeare play would “dissolve”. 
The same happens with the search analogy, if you always feed back to answer, then no search algorithm has ever been implemented/designed, the very concept of a search vanishes from CS books! 14. wolfgang Says: @Scott #4 Could it be that rrtucci fell for his own April Fool’s joke? 15. Scott Says: Jerry #12: On the contrary, some might call your comments on my last post patronizing. When you take away the preening puns and irrelevant Jewish asides, what’s left of them is a large ignorance about basic facts of QM—something that, indeed, I’m happy to help people with; I spend much of my life doing that—but then, and this is the part that gets me, a wounded insistence on having your personal opinion about QC being just like bigfoot or the Easter Bunny taken seriously, despite your demonstrated ignorance of freshman facts that even relatively-“serious” QC skeptics understand (“If QM is so fragile as to mandate the No-Cloning rule, why is it so robust as to also enforce the No-Deleting rule?” / “If 2.7 K is too ‘hot’, I have made my point”). So go reread my book, read the other quantum computing primers linked from the sidebar on the right, learn about all the experiments that have already been done demonstrating the reality of complicated entangled states, wrestle with the difficulty of accounting for the already-known facts without invoking 2n-dimensional Hilbert space, and then even if you still disagree, I’ll be happy to have you participate in discussions on this blog. 16. Ben Mahala Says: Hey I just read your article and I’m not sure this statement is true: “At some point, your spaceship will become so energetic that it, too, will collapse into to a black hole.” You can’t generate a black hole by accelerating an isolated object too much. You have to have a collision where the center of mass energy is high enough and an isolated object doesn’t provide that. Practically, you would always run into CMB photons, but I don’t think that’s the point you’re trying to make. I am unsure if Unruh radiation could allow you to get around this constraint. Do you have a source on this? 17. Scott Says: Ben #16: You raise a good point, and I apologize for not being clearer about it. Yes, I think you’re correct that the “only” reason such a fast-moving spaceship would collapse to a black hole, would be its inevitable collisions with any particles that it hit. For that reason, it would’ve been better to say that simply getting the spaceship up to the requisite speed would already take exponential time, unless you used a form of fuel so dense that the fuel tank would then exceed the Schwarzschild bound and collapse to a black hole. 18. David Cash Says: @Scott: I think it’s usually called a bounds checking error, but I don’t know if there’s a more specific name for the Heartbleed type of bug. 19. Douglas Knight Says: Scott, maybe you be a raving conspiracy nut. Yes, if it were just QC, you could conclude that your field happened to be the most difficult to explain, but ask other people how the press covers their fields. They all say the same thing! But local traffic is different. Unlike the rest of the news, people listen to the traffic and the weather for practical reasons and go out and test those predictions. 20. John Sidlesbot Says: Nicolas Cage is decidable. 
Suppose further might find the act to the Kählerian state-spaces, optimization problems in the state-space.If we extend this is derives from a hugely in any other beautiful to the Penrose with reconciling of tendonitis and say “model” we believe at all too hard theorems. Scott and SPINEVOLUTION … nor Turing Machines depart pretty much confusion over a while scientific theories that the “north pole” of the capabilities enable — surely discover it seems too … No stock options, I happen to proving to decide these fundamental physics is that these singularities that Alice comes in. Witten told me that, for the broadest sense is not just as students who looks to be random, as a radical optimism for students (and moral) imperatives of this balanced gender differences … both authors who are written about lists is whether Pullman and falling in any other … but we can the above social justice will be a flow on the irresponsible mathematical abstractions, the invited me in particular the principles of photonic resources. 21. fred Says: Btw Scott, your paper is really amazing, both in clarity and breadth. I’ve learned so much since I’ve been following you. Thank you! 22. Silas Barta Says: @Scott: I know it’s incidental to your main point, but I was confused by the reference to primality testing scaling with the cube of a number’s digit-length. I thought the fastest deterministic tests were n^4? Or did you mean that the probabilistic tests use in practice scale as n^3? I couldn’t find a reference to an n^3 primality test. 23. Scott Says: Silas #22: Sorry, I was talking about the randomized algorithms, which are what everyone has used in practice since the 1970s. Deterministically, I believe the best known runtime is O(n6), due to Lenstra and Pomerance from 2005 (improving the O(n12) of AKS), but it can almost certainly be improved further. 24. fred Says: “Compared to string theory, loop quantum gravity has one feature that I find attractive as a computer scientist: it explicitly models spacetime as discrete and combinatorial on the Planck scale” One thing I don’t understand about space having a discrete structure: How to reconcile any such model with the fact that there is no absolute frame of reference in translation and the fact that there is an apparent absolute fixed frame for rotation? It seems that a discrete model would bring us back to the old idea of “aether”. Is it sufficient to say that space only exists in relation to massive objects (Mach principle)? 25. anon Says: Scott #10 well, a competent C programmer would use a competent static code analyser . The overhead for using alternative “safer” coding environments than C is sometimes not acceptable just so you can be 100% safe – I mean a handful of isolated security issues over the years doesn’t equate to a broken model. Running all SSL implementations in the world in Java or .NET for example would slow the internet (a little) and add a large carbon footprint and apparently that’s evil. fred #24 the Lorentz invariance is thought to break down at the planck scale but holds exactly for observables above this scale 26. Sandro Says: 27. Sandro Says: anon #25: Or you could program them in Ada in which these bugs can’t happen. It’s maybe 5-15% slower than C on average, with all runtime checks enabled. 28. Scott Says: fred #24: Look, that quote was from the me of 2005! 
🙂 Since then, my views have shifted a bit, as I learned that AdS/CFT (which emerged from string theory) might actually come closer than LQG to giving me the thing I really want—namely, a view of reality that makes manifest why the state of any bounded physical system can be seen as a finite collection of qubits, evolving unitarily by the ordinary rules of quantum mechanics. LQG does give you spacetime “discreteness” at the Planck scale, which I find nice (especially since it wasn’t put in explicitly). But then I was never able to get a clear explanation of how you get dynamics compatible with unitarity (and moreover, LQG pioneer Lee Smolin has advocated trying to derive QM as an approximation to something else, which is not at all the kind of approach I’m looking for). In AdS/CFT, by contrast, unitarity is emphatically upheld, and the CFT side is even well-defined—the “only” problem, then, is that you don’t seem to get any clear picture of “what spacetime looks like at the Planck scale” over on the AdS side. Anyway, though, regarding your question, I think an important point to understand is that the LQG models don’t involve anything nearly as simple as a lattice / cellular automaton like Conway’s Game of Life. Rather, they involve a superposition over different networks of loops, which superposition is supposed to have the property that it looks rotationally-invariant, Lorentz-invariant, etc. etc. as long as you examine it above the Planck scale (as anon #25 said). 29. Scott Says: anon #25: Yeah, if you really needed to use C in a super-security-critical application like this one, then at the least, I’d hope you would use static analysis or formal verification tools! On the other hand, when you consider how much code written in higher-level languages is probably run all over the world for Candy Crush, popup ads, etc. etc., it doesn’t seem like writing SSL in such a language would be such an unjustifiable waste of precious computing cycles. 🙂 30. fred Says: Scott #26, anon #25 Thanks! Regarding “closed timelike curves”, wouldn’t the ability to send a qubit back in time contradict the no-cloning argument? (that’s probably the least of our worries) 31. Scott Says: fred #30: Yes, and yes, it’s the least of our worries. 32. LK Says: Fascinating article. The solution of hard problems and its limitations by relativistic travel is new to me. It reminded me of various papers I read about superpositions between physics and complexity theory. The one that I found most striking is the one by Seth Lloyd (I guess) where he shows that if QM is even slightly non-linear, then a QC can solve NP-complete problems. Was the result like that or I misunderstood? Do you know other papers along this line? Could one turn the argument around and say that since QM is linear, then np-complete problems are not in P and therefore etc..etc..? This will turn the P=NP question in an experimental verification about the linearity of QM. Where is the mistake in this reasoning? 33. fred Says: About the complexonaut article, it’s funny how you felt left behind because you only discovered coding at 11. Whereas I guess I was pretty lucky to start programming at 12… in 1982 on a ZX Spectrum (48KB or RAM). It would have been nearly impossible to do it before that. And having access to a machine was one thing, but without the internet all I had was the default manual, so it wasn’t exactly “coding” (we would also pass around photocopies of articles). 
Jeez, it took me weeks to figure how to display a spinning cube in correct perspective (I had no clue about projection matrices and all that). 34. Scott Says: LK #32: If you read my NP-complete Problems and Physical Reality, you’ll find a whole compendium of different zany ideas along Abrams and Lloyd’s lines. As for “turning the difficulty of NP-complete problems into an experimental question”: well, the obvious issue is that if (as almost all of us expect) QM is exactly linear, then that still doesn’t prove that NP-complete problems are hard, since you’d still need to prove NP⊄BQP, which is a strengthened form of P≠NP! It’s just that QM being nonlinear would imply that NP-complete problems were easy. Also, just a point of terminology, but again, “P=NP?” is a purely mathematical question by definition. Unfortunately, there’s no generally-accepted catchy abbreviation for the different but related question that we’re talking about, of whether NP-complete problems are feasible in the physical world. 35. LK Says: Scott #34, I’ll take a look to your paper. Thanks for all the other clarifications. 36. Jason Gross Says: This might be a bit off-topic, and if it’s too off-topic, I apologize. Is there anyone studying the dependence of complexity classes on the strength of the mathematical theory you’re working in? The only example that I know of is that that winning hydra game is a decidable problem only if you are working in a theory strong enough to prove the consistency of Peano Arithmetic. (In appropriate mathematical theories, every strategy is a winning one.) Are there any problems that depend on the axiom “ZFC is consistent”? 37. Jerry Says: Scott #15 If you are as acerbic to me as you are to Stephan Wolfam, I consider it a compliment: With that stated, I merely disagree with you on the scalability of quantum computers. I very much agree with your comment #2: “…if P≠NP is true at all then it’s a timeless mathematical truth…”. This is a far better foundation to implement dialog. There is an article in Nature, “Plants perform molecular maths” also discussed on “Gödel’s Lost Letter and P = NP”: Putting all of this together: If plants can do some sort of “calculation”, even if it’s only statistics that gets them through the night, maybe there is an argument that self-replicating molecules and accreting matter in the early universe did the same thing. What does this add to the P = NP party? As far as the “preening puns”, Wolfram detests your “humor”. As far as “…demonstrated ignorance of freshman facts…”, it seems the only way to rectify that is to buy your book. $35.99 for paperback! I’m Jewish (at least the last time I looked) give me some credit. 38. Scott Says: Jason #36: It’s a good question. First of all, whether a given problem is decidable or undecidable, whether it’s in P or outside P, etc. are questions with definite answers, independent of any formal system—in just the same way that the Twin Primes Conjecture is either true or false (i.e., there either are infinitely many twin primes or there aren’t). Surely you agree that either there is a Turing machine M that halts in such-and-such number of steps on every input x∈{0,1}n or else there isn’t! The part that can depend on the formal system is which such statements are provable. 
So for example, let me define for you right now a language that’s actually in P, but for which ZFC can’t figure out whether it’s in P or EXP-complete: L = { ⟨x,y,z⟩ : x encodes a Turing machine that halts in at most y steps, and z encodes a proof of 0=1 in ZFC }. If ZFC is consistent then L is the empty language (and hence in P), while if ZFC is inconsistent then L is EXP-complete. So you can’t prove L∈P without proving Consis(ZFC). Now crucially, note that this independence from ZFC is not a property of the language L itself, but only a property of how I described L to you! If I just told you that L was, in fact, the empty language, then of course you could give a ZFC proof that L∈P. For this reason, we see that something like “the subclass of P consisting of all languages that ZFC can prove are in P” doesn’t even make sense: we can only consider the set of all descriptions of languages such that ZFC can prove that the so-described language is in P! Anyway, the one place I know of where this sort of thing was really explored in depth is a 1978 monograph by Juris Hartmanis, called Feasible Computations and Provable Complexity Properties. Moderately-related, you could also try my old survey article Is P vs. NP Formally Independent?. 39. fred Says: #37 Jerry That thing about plants doing arithmetic is pretty overblown – you can replace that “division” with a rate of consumption that is proportional to the quantity of left-over starch once daylight appears. It’s just basic control theory. 40. asdf Says: OpenSSL had already been checked with various static analysis tools, and they didn’t spot Heartbleed: http://blog.regehr.org/archives/1125 Static analysis can only catch certain classes of bugs. The basic problems are C itself, and that OpenSSL is very crufty code. 41. asdf Says: By the way, I saw the PBS article linked from another site, started reading it and got the idea of posting a link here because I figured Scott might be interested. Then I noticed he was the author… 42. TP Says: Fred #24 “there is an apparent absolute fixed frame for rotation?” I was going to start this reply by saying I don’t think there is, however by the end I changed my mind. In special relativity there is the Thomas rotation and Thomas precession whereby observers who do not feel themselves to be rotating will appear rotated differently to different observers moving at different velocites. This effect has no analog in Newtonian physics, it is a purely relativistic effect. Additionally when gravity is added to the picture with general relativity then Thomas precession can be subsumed into the larger precession: de Sitter precession. If I recall correctly I seem to remember Luboš Motl writing something to the effect that rotation of an object can be felt by the local spacetime curvature and the object can also feel itself rotating with respect to this curvature. There is also the Lense-Thirring effect. To the extent that rotation is with respect to spacetime geometry, translation in curved spacetime also moves through varying geometry so there is absolute motion in curved space for translation. Spacetime geometry allows an observer to feel the difference between all types of motion. Also spacetime is dynamic so a particle can’t be at rest therefore the ideas of relative frames and no-absolute frame in special relativity don’t really apply to curved space. 43. TP Says: Thomas rotation makes the orientation of observers look different and Thomas precession makes the rotation rate of a spinning object look different. 
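(For concreteness, the leading-order expression I remember for the Thomas precession rate is \( \omega_T \approx \frac{\gamma^2}{\gamma+1}\,\frac{a \times v}{c^2} \); I’m quoting that from memory, so treat the prefactor and sign convention with suspicion, but the key point is that it vanishes as c goes to infinity, which is why it has no Newtonian counterpart.)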
So in special relativity both rotation and translation are relative. But in general relativity both rotation and translation are absolute with respect to spacetime geometry if not to other observers. 44. Peter Nelson Says: asdf #40 Even if standard static analysis tools wouldn’t have caught the error, leaving the default malloc/free protection in glibc on would have…. Not going to argue with you that the problem is (at least in part) “C itself”. 45. TP Says: Scott #38: Is it known that the Twin Primes Conjecture is independent of the model of arithmetic that is used and has the same answer for other countable but non-standard models of arithmetic. What about non-standard models of computation if that makes sense? Not non-Turing-machine models of computation in computer science, but models as in a non-standard model of arithmetic sense. 46. TP Says: A non-standard Turing-machine! 47. Scott Says: TP #45: When we speak about the twin prime conjecture, P=NP, and so forth being “true” or “false,” it’s implicit that we mean true or false in the standard model of arithmetic—just like, if I tell you that I have a stomachache, I don’t need to specify as an extra condition that I meant a stomachache in this world, the actual physical world, rather than some fictional world of my imagination. (I’d only have to specify an extra condition if I were talking about the latter.) To put it another way, nonstandard models of arithmetic are best thought of as “artifacts of Gödel’s Theorems”: by the Completeness Theorem, their existence is telling you nothing more or less than the fact that certain statements (again, statements that are either true or false in the standard model) aren’t provable in your favorite formal systems. So, if (hypothetically) it turned out that the twin prime conjecture was unprovable in ZFC, then yes, there would exist a “nonstandard model of arithmetic” that had a largest pair of twin primes. But the existence of that model wouldn’t be telling you anything more than the fact that the twin prime conjecture was unprovable. And in the standard model—the model that we actually care about when we ask the question—the conjecture would still be either true or false (presumably true!); it’s just that we might not be able to prove it. In any case, probably the twin prime conjecture is provable after all (there was huge progress toward it just within the last few years!), in which case this entire discussion wouldn’t even be relevant, since the twin prime conjecture would be true in all models. For more on these themes, you might enjoy chapter 3 of my Quantum Computing Since Democritus book. 48. Scott Says: Jerry #37: My book is actually priced pretty low by the usual standards of Cambridge University Press. And it’s only $20 on Kindle. 😉 49. Jerry Says: Scott #48 Thanks Scott. You resolved the NP-Easy part of my response to #15. My questions involving No-Cloning, No-Deleting, and Quantum Fidelity involved the black hole information paradox. When you toss in quantum gravity, unitarity is not sacrosanct. This is what I was hoping to get your perspective on, not a “failure-to-do-my-8.01-homework” spanking. Is this a Bill O’Reilly-esque book-selling blog? 50. Darrell Burgan Says: Only a layman’s viewpoint, but from a pure programming standpoint there are a large number of intractable classical problems that would be easily solvable if one had a time machine. I deal with them all the time. Imagine a cache that knows what data needs to be used next! 
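That isn’t just hand-waving, by the way: cache people call it Belady’s MIN, or the clairvoyant algorithm, and it’s only a few lines once you grant yourself the impossible part, namely knowledge of the future request stream. A toy sketch (the names here are my own, purely for illustration):

    def evict_victim(cache, future_accesses):
        """Belady's MIN rule: evict the cached key whose next use lies
        farthest in the future (or that is never requested again)."""
        def next_use(key):
            try:
                return future_accesses.index(key)
            except ValueError:
                return float("inf")   # never requested again: the perfect victim
        return max(cache, key=next_use)

    cache = {"a", "b", "c"}
    future = ["b", "a", "b", "d", "a"]   # the part no real cache gets to see
    print(evict_victim(cache, future))   # -> 'c'

Provably optimal, and provably unimplementable without the time machine.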
Seems intuitively clear that if closed timelike curves exist, all kinds of crazy possibilities emerge. 51. Darrell Burgan Says: Anon #25: not sure I agree that a JVM or analogous approach is always slower than C. JIT compilation can perform optimizations that are impossible in static optimization through the mere fact that the JIT executes at runtime. I think we agree that nothing approaches the speed of hand-tooled assembler code. As to JVM being safer, that is also itself uncertain. There are an increasing number of JVM exploits these days. Yeah there’s not supposed to be such a thing as a rogue pointer but there are lots of other holes in the cheese. 52. Darrell Burgan Says: Scott #48: your book just arrived today. Looking forward to a nice read these coming days. 53. Scott Says: Jerry #49: Well, I’m one of the people who thinks that unitarity most likely is sacrosanct even for black holes—or rather, it seems to me that the idea that unitarity is sacrosanct has led to better, more productive ideas about black holes than the idea of abandoning unitarity. There’s a lot more to say about this topic; check out my previous posts about the “firewall” problem if you’re interested. 54. Jerry Says: Scott #53: Thanks Scott. Your Shtetl is not only optimized, it is normalized. The Hawking Radiation Firewall is the answer I was trolling for. I found an excellent summary at: Here is an excerpt: …[]Susskind had nicely laid out a set of postulates, and we were finding that they could not all be true at once. The postulates are (a) Purity: the black hole information is carried out by the Hawking radiation, (b) Effective Field Theory (EFT): semiclassical gravity is valid outside the horizon, and ( c ) No Drama: an observer falling into the black hole sees no high energy particles at the horizon[]… Perhaps you do not like my posts because my “humor” is so similar to yours. If you prefer that I do not contribute questions, opinions, and factoids, it is your blog and I will comply. 55. anon Says: asdf #40 interesting! It seems that (as Peter Nelson #44 points out) the OpenSSL programmers implemented their own wrapper for the malloc() and free() calls, apparently due to concerns about inefficient performance on some platforms with the default library implementation. This has had the wonderful effect of crippling the ability of most (all?) static analysers to find bad code like the Heartbleed vulnerability. In fact it’s almost not even a bug rather just bad programming. I’m sure these guys wouldn’t be 100% trustworthy using Java, C#/.NET, Ada or anything else. However I should point out that I’ve worked in commercial programming environments and seen much more shocking code – it’s certainly not something to blame on the free open source software model itself. BTW, I thought Scott’s beyond QC article was very interesting but didn’t have anything concrete to add (although I’m 100% convinced closed timelike curves do not exist in Nature – so that part of the discussion is fun for me philosophically but not scientifically). I hope these comments on Heartbleed are not too off topic. 56. Scott Says: Jerry #54: Your Shtetl is not only optimized, it is normalized [in reference to my belief in unitarity]. OK, while many of your jokes were lame, that one was decent. 😉 anon #55: No, it’s not off-topic at all; thanks for the insights! 
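For readers who haven’t followed the details, my understanding is that the bug boiled down to trusting a length field supplied by the other side. Schematically (this is only a Python cartoon of the pattern, not the actual OpenSSL code):

    def heartbeat_response(payload: bytes, claimed_len: int) -> bytes:
        # The vulnerable C code effectively did memcpy(response, payload, claimed_len),
        # trusting claimed_len from the wire; if claimed_len exceeds the real payload,
        # C happily reads past the buffer and echoes back whatever sat next to it in memory.
        if claimed_len > len(payload):          # the missing sanity check
            raise ValueError("claimed length exceeds actual payload")
        return payload[:claimed_len]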
It’s been a decade since I wrote any C code, but I definitely remember what a pain all those malloc()’s and free()’s were—I certainly wouldn’t be able to write C code that was secure against bounds-checking attacks, and were I forced to, I would demand some sort of automated tool that provably made that particular type of attack (though not, of course, other types) impossible. 57. Michael Dixon Says: @#55 @#56 While you are at it, check that your compilers are squeaky clean and sound as well. You probably should go all out and do the formal verification on the assembly or HDL level, though. Those basement-dwelling compiler nerds are NOT to be trusted! 58. William Hird Says: Hi Scott, One quick question concerning black holes and information loss, if a lowly little NAND gate can erase information ( loses 1.189 bits with a 2 bit input), why can’t a black hole with a “blazing firewall” also destroy information? 59. Scott Says: William #58: Well, a completely ideal NAND gate with no side-effects would violate quantum mechanics! The only reason why “NAND gates” can actually be built in reality, consistent with the known reversible laws of physics, is that in practice all gates have dissipation, and the dissipation carries away the information needed to reconstruct the original two input bits, even when the output is a 1. If black holes are to play by the same reversible, quantum-mechanical rules as everything else in the known universe, then (firewall or no firewall) the infalling information needs to dissipate out of them as well, just like it dissipates out of the NAND gate. And if black holes don’t play by the same rules as everything else—well, OK then, go ahead and create a revolutionary new framework for all of physics that does away with reversibility, explain why your framework appears to uphold reversibility in all non-black-hole situations, and give some evidence for your new framework’s truth! Fundamental laws of physics (like reversibility) aren’t the sorts of thing that tolerate exceptions: if there is an exception, then the law is not a law, and a deeper law needs to be articulated. 60. quax Says: William Hird #58, as Scott desribed above, for your NAND example entropy increases, so it’s no mystery where the information goes. But IMHO this theoretical insight (by then IBM researcher Rolf Landauer), and its recent experimental validation is still extremely cool science. On the other hand since Hawking radiation is non-thermal, something gotta give. Hey Scott, since you proposed a celebrity death match for Geordie in the previous thread, how about you against Penrose to settle this particular question 😉 61. William Hird Says: Scott#59: Not sure what you mean by a “NAND gate with no side effects would violate quantum mechanics”. What do you mean by side effects? 62. Scott Says: William #61: I mean information that’s physically generated by the operation of the gate, other than the gate’s logical output. If you had an idealized NAND gate that mapped input bits x and y to only the output bit NAND(x,y), “deleting” x and y from the universe (and leaving no record whatsoever of them, not even in the stray radiation given off as heat), then that would obviously violate the unitarity of QM. But the resolution is extremely simple: it’s that real NAND gates don’t work like that. They do give off heat, and the heat encodes the information about the input bits x and y. (Note: If you did want to compute NAND in a completely dissipation-free way, you could in principle do it using a Toffoli gate. 
I.e. you’d map x,y,z to x,y,(z XOR (x NAND y)), which is then reversible by applying the same gate a second time.) 63. William Hird Says: Scott#62: Thanks for the clarification. But: “the heat encodes the information about the input bits x and y”, I want to see the machine that can capture the heat signature of the erased bits and identify them for posterity 🙂 64. Rahul Says: It’s been a decade since I wrote any C code What language do you usually code in now? Or do you rarely code at all? Just curious. 65. Rahul Says: Quoting from the PBS article: Now, the faster you run your computer, the more cooling you need—that’s why many supercomputers are cooled using liquid nitrogen. Off topic but, do many supercomputers these days use liq N2? I worked with some on the Top500 list circa 2009 & don’t recall many doing that. Just curious. 66. Rahul Says: Thus, just as today’s scientists no longer need wind tunnels, astrolabes, and other analog computers to simulate classical physics, but instead represent airflow, planetary motions, or whatever else they want as zeroes and ones in their digital computers, Again, may be a nitpicking observation, but it doesn’t seem fair to mention wind tunnels in the same class as astrolabes. Aren’t a fair bit of components still tested extensively in expensive wind tunnels? e.g. aircraft wings, turbine blades, car bodies etc. No doubt, CFD has made huge advances but are we at the point yet where wind tunnels are obsolete? 67. A Says: anon#25: “the Lorentz invariance is thought to break down at the planck scale but holds exactly for observables above this scale”. This seems like an interesting idea, does anyone have a link to a source? Intuitively one would expect the deviation from Lorentz invariance to follow a smoother curve (so it becomes negligible as one goes away from the Planck scale, but not so that it suddenly becomes zero at some point) 68. Scott Says: Rahul, I have a request for you: please limit yourself to one question for me on this blog per day. I can’t keep up otherwise with everything you ask. Now, to pick one of your questions and answer it: Today, I usually code in an extremely high-level language called “undergrad.” 69. anon Says: A #67 ok mr pedant ( 😉 ), what I mean is that Lorentz Invariance is thought to hold exactly above planck scale distances (ie below planck scale energy) but violations below this distance scale would not contradict relativity as it is not possible to define exact observables on which a measurement could be made at such a scale (since the concept of “length” breaks down) But, as far as I understand, it would certainly be an unexpected result if Lorentz violations were detected close to the planck scale – in fact there is evidence from indirect measurements that invariance does hold even above planck energies. However I am not an expert in QG, so you can consider my statement as just an opinion. 70. jonas Says: William Hird #63: you might get to see it. Cryptographers these days are interested in side-channel attacks, which use the imperfections and side effects of computers. I’m not sure whether they do anything with heat yet, but they do derive information from noise and electric fields and stuff like that. 71. Blacksails Says: Question unrelated to the blog post: Is it possible that P or NP are not “properly defined” classes? Imagine if nobody had ever defined P, and instead defined some other class Q, which ended up being what we call P along with a little bit more. 
It would then be possible that “Part of Q”=NP, but only for that little bit extra. Basically, what would the consequences be if “Part of P”=NP (say, all problems with some particular scaling or greater, but not including “normal” things like n^2, n^3, etc.), or P=”Part of NP”? 72. Darrell Burgan Says: I’m sure I’ll get some eyes rolling on this, but in my defense the topic of the blog post *is* whether it is possible that nature would permit computing beyond QC. 🙂 It occurred to me that in the realms of string theory there are certainly purely mathematical models that explore what the world would be like if one or more of the higher dimensions were time-like instead of space-like. And if one of these admittedly wild models actually described nature, then couldn’t a second timeline be exploited for computing, such that answers in one timeline appear instantaneously based upon thousands of years of computation in another? 73. William Hird Says: Hi Jonas, yes I am aware of the “side channel attacks” but to decode the kind of thermal residue that Scott was alluding to I think would take some fancy technology, no doubt that “flux capacitors” and “Heisenberg compensators” would somehow have to be part of the circuitry! 🙂 74. Scott Says: Blacksails #71: There’s been a huge amount of research involving “variants” of P and NP—whether it’s DTIME(n^(log n)) or BPP or AlgP or NC1 or Monadic NP or MA. (You can look all of those up in the Zoo, BTW.) And that can give you lots of “cousins” of the P vs. NP question—almost all of which are open, with a separation conjectured but not proved, just like with the original question. Anyway, until you make it clearer which kinds of variants of P and NP you’re interested in, it’s hard to be more specific. 75. Scott Says: William Hird: Because of the Second Law of Thermodynamics, it’s been understood since the 1800s that reversibility of the underlying laws need not imply reversibility by any technology that we can imagine in any foreseeable future. So the right way to put the question is not whether we can actually build a machine to reverse the NAND gate or the black hole, but whether we can explain the NAND gate or the black hole without postulating fundamental physical laws that, at any point in time, ever “delete” the state of a particle out of the universe. For the NAND gate, we know the answer to that question is yes. For the black hole, many physicists conjecture that the answer is also yes, and I’m inclined to agree with that conjecture. 76. William Hird Says: You are absolutely correct; my responses show my bias towards gadgets, being a retired electronics engineer. You have the correct scientific interpretation. 77. A Says: anon #69: Ah that’s really cool, thanks for the opinion and the link : ) Sorry if it came across as pedantic, I just think that the interesting stuff is usually in the details, right along with the devil ;). 78. fred Says: Scott #59 “a completely ideal NAND gate with no side-effects would violate quantum mechanics!” “[…] real NAND gates don’t work like that. They do give off heat, and the heat encodes the information about the input bits x and y.” I was looking again into this and was quite surprised to find out that Maxwell Demon-type questions were still being explained well into the 50s (Brillouin) and the 70s (Landauer) – and those discussions actually do not involve QM, right? It’s all in terms of thermodynamics, Brownian motion, and information theory.
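Just to convince myself that the reversible trick Scott describes in #62 really works, here’s a two-minute sanity check (a Python sketch; the function name is my own):

    from itertools import product

    def gate(x, y, z):
        """The mapping from #62: (x, y, z) -> (x, y, z XOR (x NAND y))."""
        nand = 1 - (x & y)
        return x, y, z ^ nand

    for x, y, z in product((0, 1), repeat=3):
        once = gate(x, y, z)
        assert gate(*once) == (x, y, z)    # applying it twice undoes it: nothing is erased
        if z == 0:
            assert once[2] == 1 - (x & y)  # with the third wire set to 0, it outputs NAND(x, y)

So the gate is its own inverse on all 8 inputs, and you still get your NAND out of it: no bits are thrown away, hence nothing for Landauer’s principle to charge you for.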
I guess the simplest example of a non-dissipative gate would be an AND gate using billiard balls (ideas from Toffoli) http://tinyurl.com/nsw2zky It’s clear that no information is lost and you can always distinguish the cases (0,0)->0, (1,0)->0, (0,1)->0 (by looking at the 0-out and 1-out outputs). 79. Jerry Says: “Ask me anything, but only one thing”. OK. If the concept of a multiverse is correct (per Susskind), when matter (or anything else for that matter) does “disappear” into a Black Hole’s (Bekenstein-Hawking) singularity in one of many universes but reappears as a “white hole” in another universe, does this resolve the information loss paradox? BTW: Is, “Today, I usually code in an extremely high-level language called “undergrad.” in P (pun)? Math, Physics, Algorithms, and (yes, humor) can all be effective tools for teaching and learning. 80. Scott Says: Jerry #79: Your “resolution” of the information paradox—that information that falls into a black hole reappears in a baby universe spawned by the singularity—is the one actually advocated by Lee Smolin! (Susskind, on the other hand, doesn’t advocate that resolution at all; he believes in parallel universes, but not that kind of parallel universe.) Personally, I regard it as a possible but extremely strange solution—one that I would only consider if there were compelling evidence from quantum gravity that black hole singularities really do spawn these baby universes, and if their existence could be given some sort of operational meaning in this universe. At present, my understanding is that there’s no good evidence that black hole singularities really work this way; and furthermore, by preventing Poincare recurrences even within a bounded region of space, this scenario seems to violate “the spirit if not the letter” of quantum mechanics. My preference is certainly for the information to reappear in this universe! 81. Scott Says: fred #78: Yes, you can discuss the Maxwell Demon paradox purely in terms of classical thermodynamics and classical reversible computation! The laws of physics are reversible, and phase-space volume is conserved, already classically, and that’s pretty much all you need to explain both the paradox and its resolution. 82. Jerry Says: Scott #80 Thank you, Scott. Excellent answer! 83. Mike Says: Off topic: I just read “Additively efficient universal computers,” and, assuming you read it in detail, I was curious about your take on its conclusions. 84. Scott Says: Mike #83: I liked that paper and found the notion of additive universality to be interesting. Of course, what one really wants is a set of “candidate physical laws” (e.g. a cellular automaton) for which additive universality can be proved to hold, rather than just constructing a model for which it holds almost by definition. But it’s a nice start. 85. matt Says: There are indeed several basic ways in which we “know” the answer is yes classically, but it is quite subtle IMO to specify in which way the quantum situation is less satisfactory than the classical one. I’ll give some specific examples, but overall I’d say there’s still a lot to understand classically, in the same way that pseudo-random number generators are still not fully understood in computer science. 
For a computer science analogue, giving a general proof of thermalization for a large variety of classical systems would probably be on par with proving P=BPP; actually, my guess is it would be harder as the standard arguments for thermalization roughly rely on assuming that certain events in a deterministic process can be treated as if they were random so they almost assume that good PRGs exist. For example, there is a line of research that dates back to Boltzmann, in which he shows that irreversible dynamics can emerge from reversible microscopic laws. However, the original derivation makes an assumption originally called “molecular chaos” in which one assumes that pairs of particles that scatter off each other are drawn independently from the single particle distribution, giving the Boltzmann equation. This does give irreversible dynamics, but requires an assumption. Under similar assumptions, one can argue that irreversible quantum dynamics can emerge. So, perhaps we would like to better justify this assumption. There is some rigorous work by Lebowitz and collaborators showing thermalization for certain initial conditions for specific systems. This is extremely elegant work, but currently limited in which systems it can be applied to. Conversely, there are some simple quantum systems (like free fermions) in which one can (much more straightforwardly) derive thermalization for certain initial conditions to a generalized Gibbs ensemble. I’d say that currently the knowledge of the quantum setting is much less satisfactory here, but maybe there is no sharp qualitative difference in which we can say that classically this general problem is understood at the level of rigorous math and quantum mechanically it is not. There are also numerical studies. Some of these numerical studies were in fact controversial classically, where certain systems were thought not to thermalize but were later learned to thermalize at extremely long time. The quantum numerics are much worse of course, but thermalization can be seen in some of those numerics (but not all….) A final interesting point is that we know from numerical studies using Monte Carlo on classical systems that good PRGs really are needed. There are some famous cases where a poor choices of PRG led to inaccurate results in classical Monte Carlo, even though the PRG was considered good at the time. So, the relation between thermalization and pseudorandomness is subtle. 86. fred Says: Scott #84 Considering the number of papers referencing it, it’s too bad John Conway named his cellular automaton thing “Game of Life” and not “Pokemon Mystery Dungeon: Explorers of Time and Darkness”. 87. David Brown Says: “… there either are infinitely many twin primes or there aren’t …” Suppose Peano Arithmetic is logically inconsistent. Is the statement about twin primes then true? 88. Scott Says: David #87: Good question. My answer is, absolutely! In the absurdly-unlikely event that PA was found to be inconsistent, we would need to create new first-order axioms for arithmetic. In other words, there would indeed be a crisis for the formal language that we use to talk about the positive integers. But the positive integers themselves would be serenely unaffected by all the commotion: they would still “exist” (in whatever sense they’d “existed” before), and still either have infinitely many twin primes or not have them. And if expressing this view makes me a Platonist, then so be it (though as I’ve said before, I prefer the term “anti-anti-Platonist”). 89. 
wolfgang Says: >> the positive integers themselves would be serenely unaffected by all the commotion: they would still “exist” So you really are a hardcore Platonist. I think it is only fair to point out that there are other philosophies, as listed e.g. here. 90. Scott Says: wolfgang #90: No, I’m a softcore Platonist. If I were hardcore, I wouldn’t have put scare quotes around the word “exist.” 🙂 91. Dani Phye Says: “In my opinion, the crucial experiment (which has not yet been done) would be to compare the adiabatic algorithm head-on against simulated annealing and other classical heuristics.” More recently, have any experiments like this been done? 92. wolfgang Says: Well, there is still the question what is “exists”? – as I learned from this interview. 93. Jerry Says: Scott #88: See: “God Created The Integers: The Mathematical Breakthroughs that Changed History” by Stephen Hawking I also take the Platonist view, but Scott’s term “anti-anti-Platonist” sounds a bit mathagnostic; similar to an anti-positron, which is not an electron (it is a right-chiral electron that does not interact with the w-boson). I view Hilbert space as a mathematical tool, but at the end of the day it is nice to kick our shoes off in R^3. 94. Charles Says: Zookeeper Scott, I know this is off-topic, but can you tell me if \(\exists\mathbb{R}\) (the existential theory of the reals) is in the Complexity Zoo? I can’t find it, but maybe it’s just under some other name. It fits in somewhere between NP and PSPACE (I’d like to know more, hence looking in the Zoo…). 95. Dezakin Says: Scott #88 In the unlikely event that PA was found to be inconsistent, we would need to create new first order axioms for set theory. PA was proven consistent by Gentzen using concepts that are capable of being embedded into ZFC. The foundations would fall apart with at least as big of a crash as with Russell’s paradox. 96. Sniffnoy Says: Dezakin: This is a true statement, but it seems to be mostly irrelevant. That is to say, in my experience, the general feeling among people who study such things seems to be, if ZFC turns out to be inconsistent, then, well, that would be very surprising, and it would be a serious problem, but it would be a recoverable problem, because we could switch to a weaker set theory and still have pretty much all of ordinary mathematics; it would largely just be the set theorists who would have to unlearn what they know. After all, do we really have a good intuition for what should be true about enormous transfinite sets? A number of people have complained that the axioms of ZFC are too strong, and, well, maybe they’re right. Your Russell’s paradox comparison is instructive; sure, we ended up needing new foundations, but most of math outside of set theory was not affected. By contrast, if PA were found to be inconsistent, that would be truly disastrous, and it’s not at all clear how one could recover from such a thing. PA is a collection of basic statements about natural numbers that really, really, shouldn’t be false. What can you even do if PA is inconsistent? You can’t just go constructive and fall back to Heyting arithmetic; if PA is inconsistent then so is Heyting arithmetic. The obvious thing to do is to weaken the induction schema, of course, and this works perfeclty well if you’re just trying to write down a theory of the natural numbers — but we don’t just want that; we want our theory of the natural numbers to interface with the rest of mathematics. 
Which means that our new set theory (which of course we’ll need if PA turns out inconsistent) will have to have limitations that don’t allow it to prove the full induction schema, but only the limited subset we choose. And executing that may not be so easy. So, basically, your comment to me reads as basically saying “If this complete and utter disaster occurs, it will also entail this much smaller and maybe handleable disaster!” (“If the moon crashes into the earth, there won’t be any more tides!”) Yes, it’s true, but it’s probably not what you should be focusing on. (Not to mention, while Gentzen showed ZFC proves Con(PA), you don’t even need that to see that an inconsistency in PA means you’re going to need a new set theory, since ZFC proves all the actual axioms of PA (appropriately translated), and so the consistency of ZFC implies the consistency of PA. Which is weaker than ZFC proving the consistency of PA, but still enough for our purposes here.) 97. asdf Says: Is this something new? I thought the concept that time’s arrow emerged from decoherence had been around for a while. 98. TheOnion Says: asdf, thank you for calling to our attention this impressive new talent, Natalie Wolchover 99. Scott Says: Charles #94: (gasp) sorry!! The “existential theory of reals” complexity class seems to be completely missing from the Zoo. You or anyone else should feel free to add it. 100. Scott Says: asdf #97: In general, the idea that the arrow of time is related to decoherence is a very old one—Everett had the idea in the 1950s, and the founders of QM even arguably had it in the 20s and 30s. And it’s really just a quantum gloss on the thermodynamic arrow of time, which goes back to Boltzmann in the late 19th century (and in many ways, QM changes the story surprisingly little). There’s also a long history of people rediscovering the decoherence/entanglement/arrow-of-time connection, or re-expressing it in different words, and treating it as new. Anyway, the new technical content in the papers Wolchover is talking about appears to be to rigorously derive the equilibration of interacting quantum systems in certain contexts, and to compute explicit upper bounds on the equilibration time. 101. Serge Says: Asking whether there’s anything beyond quantum computing, that amounts to asking whether there’s anything beyond quantum mechanics, right? So maybe you should ask your friend Luboš about string theory… 🙂 The Heartbleed bug is yet another illustration of the fact that efficiency is always obtained at the expense of accuracy. The developers in OpenSSL will probably consider using a more evolved language – one that manages memory more safely. But at the same time, their programs will get a tiny bit slower. This phenomenon will also be at work when we ultimately have those quantum computers at hand. I’m not saying “scalable” because I don’t know what that means. Such a concept is as difficult to define as is “feasible” or “executable”. However, since quantum computers can compute faster, it will be all the more tedious to ensure the correctness of their outputs, whether at the software or hardware level. 102. Scott Says: Serge #101: Well, it’s conceivable that Nature could let us do something beyond BQP, even if quantum mechanics is exactly true (as it is in string theory). 
For example, that could happen if some aspect of the AdS/CFT correspondence or of holography allowed us to apply a “radically nonlocal” unitary transformation in polynomial time, or if the universe had an initial state that was extremely hard to prepare (bumping us up from BQP to BQP/qpoly). However, I completely agree in regarding these possibilities as extremely unlikely—just as unlikely, I’d say, as Gil Kalai’s “dual” possibility, that Nature would let us do less than BQP even though quantum mechanics was exactly true! 🙂 In comparing Heartbleed to hypothetical bugs in a quantum computer, I feel like you’re conflating two extremely different issues. As a general rule, it’s about a trillion times easier to write a correct program—whether classical or quantum!—when the program’s only purpose is to solve some fixed, well-defined math problem, rather than interfacing with a wide array of human users in a way that’s secure against any of those users who might be malicious and exploit vulnerabilities in the code! And there’s no advantage to using quantum computers for the latter purpose: we can continue to use classical computers for that, even as we switch to quantum computers for those mathematical problems for which QCs happen to provide a speedup. 103. Michael Brazier Says: Regarding the black hole information paradox: isn’t it true that measurement of an entangled quantum particle destroys information, in that it breaks the correlations of the measured particle with the particles it’s entangled with? 104. David Kagan Says: Michael Brazier #103: Information is not lost, it just moves somewhere else. When an entangled particle’s state is measured then information that was contained in the state of the entangled pair does indeed leak out, but it is preserved in the bigger system that includes the measurement apparatus (and perhaps some portion of the environment it sits in if it is not totally isolated). 105. Scott Says: Michael #103: Indeed, you could argue that any quantum measurement “destroys information” (namely, all the information that was originally in the quantum state, besides the measurement outcome)! There’s nothing specific to entangled particles here. However, according to the “Church of the Larger Hilbert Space” / Many-Worlds perspective, even a measurement is “really” just a unitary transformation U that entangles you and your environment with the quantum system being measured. So, such a thing could in principle be reversed, by applying U-1 to all the atoms of your brain, the air molecules and radiation in the room, etc. etc. So, no information was “fundamentally” destroyed. Sure, some information was destroyed “in practice,” but that’s hardly different from an egg being scrambled, a book being burned, or any other classical instance of the Second Law of Thermodynamics. 106. Luke G Says: I’d disagree with that assertion. Certainly there exists an efficient frontier of program speed versus difficulty of correct coding. But C is nowhere near that efficient frontier; I’d say it’s particularly far from it. I’m a professional programmer, and many times I’ve witnessed huge productivity differences in languages and frameworks: well-designed stuff can be more efficient for both the CPU and the programmer! For example, I believe that a modern C++ implementation of OpenSSL would be FASTER than the C version and less prone to bugs (for the same programming effort), owing to the better standard libraries and resource management techniques available in C++. 107. 
Igor Makov Says: Apparently, one Oprah Winfrey has been an adherent of the “Church of the Larger Hilbert Space”: There’s no such thing as failure. Failure is simply the universe trying to move you in the right direction. (Sorry, I could not help it 🙂 108. Michael Brazier Says: Mm. Many-Worlds perspectives face a problem, because measurement prefers pure states to mixed ones. The possible results of a measurement form a basis for the Hilbert space of the measured particle. But there’s no mathematical reason to prefer that specific basis over any other. With the spin-1/2 particle case, for instance, |spin up> and |spin down> is the basis we normally use, but mathematically |spin up + spin down> and |spin up – spin down> would make just as much sense. We don’t use the latter basis because we never see states like |spin up + spin down> when we measure a physical particle. Since in Many-Worlds perspectives the larger Hilbert space is all there is, how does one account for the apparently special role of the pure states basis in a measurement? 109. Scott Says: Michael #108: You’re talking about what’s called the “preferred-basis problem.” The usual answer is that the measurement basis is not specified a-priori but instead picked out dynamically, by the process of decoherence. In other words, if you just follow unitary evolution, letting the Schrödinger equation do its thing, you’ll find that entanglement repeatedly splits the wavefunction into branches that don’t interact with each other and that are “forever separated” for all practical purposes. In some of those branches, it will look like a spin measurement was made in the X direction, in others, it will look like a measurement was made in the Z direction, etc., just depending on the details of the measurement apparatus. I stress that this is not just a speculation or hope: you can work out examples in detail and see that this is exactly what happens. I do think there are serious objections to be leveled against MWI, and I’ve leveled some myself. But those objections take place at a different level: e.g., what do we even mean, empirically, in ascribing “reality” to the other Everett branches? Is the notion of “the quantum state of the entire universe” even useful? Could you, personally, test the truth of MWI by putting your own consciousness into coherent superposition, and then recohering it? Or is irreversible decoherence (and hence, in-principle MWI untestability) a necessary component of consciousness? What is it that accepts the invitation of the formalism to “impose” the Born rule on the decoherent branches? So yes, you can ask all those questions and more, but I don’t think you can fault MWI for overlooking some simple technical point. At a technical level, MWI is what the math wants! 110. wolfgang Says: @Scott #109 The decoherence argument to solve the preferred basis problem suffers from one issue imho: Where does the macroscopic measurement device (sitting in a classical almost flat space time) come from? The calculations you refer to usually simply assume their existence … but their existence means that you have selected one specific branch (or collection of branches) of the mwi wavefunction already. 111. wolfgang Says: I should mention that Kent and Dowker analyzed the issue I refer to and a brief but good summary is here: 112. Mike Says: ” . . . 
what do we even mean, empirically, in ascribing “reality” to the other Everett branches?” Better to sometimes employ a rationalist (rather than a strictly empiricist) view, where reason is sometimes considered to be good evidence for the truth or falsity of some propositions. 😉 Nevertheless, assuming that each branch of the wave function is as “real” as any other seems no more troubling or difficult than the reverse. “Is the notion of “the quantum state of the entire universe” even useful?” Well, I guess that depends on what you mean by useful, but some folks think that asking “[w]hat is the quantum state of the universe?” is the central question of quantum cosmology! http://arxiv.org/abs/gr-qc/0209046 OK, this would be hard, but there are proposals to try and achieve this using, for example, an AGI device. See generally for a discussing of this issue: “Or is irreversible decoherence (and hence, in-principle MWI untestability) a necessary component of consciousness?” Perhaps with meat computers 😉 but I’m curious if you think this would hold true with regard to an AGI device? The way I see it, you’ve got two choices: either adopt a “collapse” model (go ahead I dare you 😉 ) or accept that the born rule or some variant will always in a sense be “imposed” because quantum theory (without collapse) has no probabailites, being deterministic. Here is a good overview of the various arguments surrounding the issue: “At a technical level, MWI is what the math wants!” I agree!! 😉 113. Scott Says: wolfgang #110: If you wanted to know where the measurement device came from, then of course you’d need to push the story back further—telling a story of Schrödinger evolution generating entanglement and thereby giving rise to decoherent branches of the wavefunction in which one could see the formation of stars, supernova explosions, the condensation of heavy elements into planets, the evolution of life, and finally the building of spin measurement apparatuses. While I’ve obviously left many details unspecified, 🙂 I don’t know of any good argument that this entire story couldn’t be told in the language of decohering branches of a universal wavefunction, if that’s what you wanted to do. And crucially, if it can’t be, then that strikes me as equally a problem for any account of quantum mechanics, not just for MWI. 114. Jerry Says: Scott #105 & 109: A simple Friday question: If you write your name on a piece of paper and burn it, where does the information go? How can it ever be retrieved? If “irreversible decoherence [is] a necessary component of consciousness”, how can the “pieces” ever be put back together in a way that is consistent with the 2nd law of thermo? 115. wolfgang Says: >> I don’t know of any good argument that this entire story couldn’t be told Read the comment from Jesse Riedel at the link I provided, including the Kent/Dowker paper (arxiv.org/abs/gr-qc/9412067), who provide such an argument. But I assume you already read it and did not find it convincing? >> equally a problem for any account of quantum mechanics I think of mwi as a programming language without ‘garbage collection’, (no branch of the universal wavefunction gets removed by the Copenhagen reduction). Are all programming languages (interpretations) in the end equivalent? Yes, but some are much harder to use and I would argue that mwi is unnecessarily hard to use even for simple cases 😎 116. 
Scott Says: wolfgang #115: The paper by Dowker and Kent that you referenced is critiquing the consistent-histories approach, which (while I’ve never fully understood it) seems to involve some additional baggage over and above bare MWI. It’s long, but I’ll read it when and if I get a chance. 117. Scott Says: Jerry #114: The information goes into the smoke, ash, and other byproducts. That’s just standard, 19th-century physics, not anything fancy or new. In practice, it’s nearly impossible to retrieve the information, but only for more-or-less the same sort of reason why it’s hard to reconstruct a puzzle (though not impossible!) after I’ve shaken up all the pieces. In the case of the burned paper, the puzzle involves ~10^22 microscopic pieces rather than just a few hundred macroscopic ones (and many of the “pieces” consist of radiation that’s since flown away!), so the problem is many, many orders of magnitude harder. 118. fred Says: I’ve tried hard but I never got that whole “arrow of time” business. I don’t even see what the problem is… 1) time has no preferred direction, but systems evolve based on their initial conditions. i.e. it’s the initial conditions that locally determine what we call the arrow of time. Our universe seems to have an arrow because the big bang was such a concentrated initial state and it’s “dragging” along everything in it with it. But it’s not clear whether it always has to be the case, i.e. we could imagine universes/systems that are prepared in a less biased way so that different pockets can seemingly evolve in different time directions from one another. The idea of entropy captures all this, but it’s a concept based on a macro representation of possible states, e.g. there are way more configurations of the air in this room corresponding to a uniform density compared to a configuration where all the molecules are bundled together tightly in one corner (b)… but every actual configuration where the air density is average is just in itself as rare as (b). It’s like saying that you’re more likely to observe a 6 with two dice (1+5, 2+4, 3+3) than to observe a 2 (1+1), so a “double dice” system will more naturally go from a 2 to a 6 (breaking the egg) rather than the other way around (putting the egg back together), hence there is a time arrow embedded in it. 2) we get biased by our own sense of the passing of time, which is subjective. We perceive it because of the way we construct memories (i.e. the more memories the further away we are from our system’s initial conditions). But it’s all a big space/time block extending and existing instantaneously in every direction. But it’s likely that things do get more interesting with QM since in that context the notion of “indistinguishability” is so central. But given all the difficulties about what is a measurement, what is a system, … 119. Scott Says: fred #118: Yes, I think the modern version of the arrow-of-time problem is simply, “why did the universe have such a low-entropy initial state?” Not surprisingly, opinions differ as to whether this is a real problem at all, and what sorts of answers to it would be satisfactory. 120. Scott Says: wolfgang #115: OK, I just read Jess Riedel’s comment. I think Jess is absolutely right that there’s a formal circularity in the consistent-histories arguments typically offered by people like Gell-Mann and Hartle. Namely, these arguments use the assumption of “quasi-classical histories” to define observers, and then they use observers to define the quasi-classical histories.
(Precisely because of such circularities, formal arguments about decoherence have never made me that much more confident in it than I’d been before — but I was already pretty confident!) However, I would also say that what Riedel points out strikes me as a benign circularity—analogous to what Google PageRank does, in defining an “important website” as a site linked to by lots of other important websites. That is, yes, the definition might not make analytic philosophers happy, but the apparent circularity in it can be “unraveled” by giving an appropriate algorithm. In the case of Google PageRank, the circularity is unraveled by constructing the link graph of the web and then finding its principal eigenvectors. In the case of decoherence, there seems to me to be very little doubt that, given the actual evolution of the wavefunction of a universe like ours, you could write a computer program that would do an extremely good job at picking out the “decoherent branches.” Yes, there could be different ways to do that (and I’m not about to discuss how to do it in the space of a blog comment…), and yes, the different ways could give different results in some edge cases, but at least for “realistic” macroscopic situations, it’s very hard for me to imagine how the different ways could actually disagree in practice. I’d need to see an example where that happened before I acknowledged it as a serious possibility. 121. wolfgang Says: >> given the actual evolution of the wavefunction of a universe like ours But I thought this is the problem – the universal wavefunction must continue many universes which are not at all like ours (in fact “freak branches” are the overwhelming majority, which leads to another debate about Born probabilities). Once you reduce the debate to a “universe like ours”, you are reducing the universal wavefunction – then why not go all the way and use Copenhagen? 122. Jerry Says: Scott #117 I’m well on board with the 19th century physics aspects. You could (theoretically) tear a piece of paper into 10^22 pieces, throw them into the wind and you would be unlikely to retrieve any info written on it. What about the information at the quantum level? If information at a B.H. event horizon doesn’t just disappear but escapes as Hawking radiation, what about information that we do not have the technology or time to retrieve? Does the info appear as heat (entropy)? The ash, smoke, etc. would be in a different “state” if the original paper had been blank as opposed to having information on it. 123. Scott Says: Jerry #122: Yes, what you write is exactly what modern physics says happens. 124. Scott Says: wolfgang #121: Sorry, I meant a universe with a hot Big Bang and a Standard Model Hamiltonian like ours (I’m not attempting to dig any deeper than that…). At least if you measure by the Born probabilities and not in some other way, the overwhelming majority of such universes will look more-or-less like our universe: they’ll have galaxies, stars, planets, etc, obeying the same laws of physics that are familiar to us. Of course it might be that only a small fraction of the branches have intelligent life, and only a small fraction of those have humans, and only a small fraction of those have us, specifically, but those are all implications that MWI proponents enthusiastically embrace! I don’t see how you can criticize MWI on that ground; it seems like question-begging to define any universe where things didn’t turn out just like they did in our universe to be a “freak universe.” 125. 
wolfgang Says: >> I meant a universe with a hot Big Bang … if you measure by the Born probabilities … Yes, if you *assume* a classical background, the existence of macroscopic observers etc. and the Born probabilities then the interpretation problem goes away. But I thought the whole point of mwi is to *derive* those … What Jess Riedel et al. are pointing out (if I understand correctly) is that such a *derivation* requires additional assumptions beyond the unitary Schroedinger evolution. 126. Serge Says: Luke G #106: Then it will be that my own theory has bugs… and that I should have spent more time in thinking it over. Efficiency versus accuracy, as always… 🙂 One way of correcting it is to take into account the time that computer scientists took to invent the C++ language, the right frameworks, the faster computers, the larger drives that would host the new software, etc… Indeed, computer science has evolved significantly since the 1970’s. Now, getting back to my initial comparison, let’s hope the theoretical amount of time required for building a functional quantum computer isn’t infinite… 127. fred Says: Scott #123 On one hand I hear that all the fundamental physical laws are reversible, so you can’t ever hope to destroy information since everything contains the trace of whatever happened before, and if you were to reverse the time arrow, the past would seem to be caused by the future (indistinguishable at the atomic level) – so God could press rewind/fast forward all he wants on his “space/time continuum” VCR, we would never notice and the film would always be the same. But on the other hand, QM injects randomness in the universe (only the evolution of the amplitudes is deterministic), so how come this randomness doesn’t destroy information to some degree? Is it because pure randomness carries no information? It seems that QM randomness prevents us from predicting the future but not from inferring from the past. QM muddies the reversibility principle – whenever God himself his pressing “rewind” then “play” on his VCR, the film of the world would be different each time (maybe the movie even looks different in reverse)… Unless he’s watching all the films in parallel with MW? 128. Jerry Says: Scott 123: Many thanks, Scott! See, I do know my 19th century physics(and some 20th and 21st). Walter Lewin would be very proud of me and would draw a long dotted line across his blackboard. Have a nice weekend. 129. Scott Says: wolfgang #125: OK, maybe we don’t actually disagree! I agree that the frequent claims to “derive” the Born rule from unitary evolution alone are overblown, and I’ve said that before on this blog. You can’t get blood from a stone, and you can’t get objective probabilities from a deterministic evolution law without any further assumptions. What one can say, I think, is that the unitary evolution law very strongly invites you to tack on the Born rule: the Born rule just fits unitary evolution like a glove (if they were on a dating website for mathematical concepts, they’d be matched by their shared passion for the 2-norm), whereas there’s no other probability rule that similarly fits (and one can prove like 20 different theorems formalizing that intuition). But yes, saying that |ψ|2 gives you a probability distribution over subjective experiences is an additional commitment you need to make to connect QM to experience, over and above a belief in unitary evolution. You (and Riedel?) are right about that. 
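If it helps to see the “fits like a glove” point in the crudest possible numerical terms (this is only an illustration of norm preservation, not a derivation of anything), here’s a sketch:

    import numpy as np

    rng = np.random.default_rng(0)

    # A random 4-dimensional state and a random unitary (QR of a complex Gaussian matrix).
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)
    U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

    phi = U @ psi
    print((np.abs(psi) ** 2).sum())   # Born weights before: 1
    print((np.abs(phi) ** 2).sum())   # Born weights after: still 1, for any unitary
    print(np.abs(phi).sum())          # a would-be "1-norm rule" doesn't stay normalized

The 2-norm is exactly what unitaries conserve, so the |amplitude|^2 weights stay normalized for free under every unitary, whereas a 1-norm (or 3-norm) rule doesn’t.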
On the other hand, I don’t agree that if you can’t “derive” the Born rule from it, then there’s nothing to recommend MWI. An MWI proponent could say: “look, we have the only account of the evolution of the state of the universe that fits the known laws of physics, without invoking a mysterious ‘collapse’ that happens at unspecified times! yes, we might not fully understand consciousness, or observerhood, or the emergence of probabilities, but why should we let that impede our understanding of physics? if the math seems to be militating so firmly for unitary evolution holding always and everywhere, without exception, then why shouldn’t we just go with that, and treat all the observer stuff as our problem rather than Nature’s problem?” 130. wolfgang Says: @Scott #129 So what are the Born probabilities actually about? In “Copenhagen” I would say the probability that I experience M1 or M2. This is straightforward to understand. In “mwi + patched on Born” it would mean the probability that I find myself in the world W1 vs the branch W2. This is not so easy to understand (for me); if you complain about the ‘collapse’ in Copenhagen, then I can point out that this “finding myself” follows an unexplained mechanism too. I would also point out that the unitary evolution of the wavefunction, which depends on a time parameter t, is easy to understand in Copenhagen (t is associated with the classical clock I carry around with me), but not so easy to understand in mwi. How do you get (classical) clocks without reducing the universal wavefunction to those branches which contain (classical) clocks? 131. Scott Says: fred #127: From the MWI perspective, even quantum measurements are reversible in principle, because measurements never “really” happen: they’re just language used to describe unitary evolutions that entangle us with the quantum systems we’re measuring. Or to put it in your terms: yes, God is watching all the branches in parallel. And that being so, for Him to “rewind” the tape is as easy as changing the unitary that controls the branching process from U to U^(-1). Anyway, it occurs to me that I should state things more carefully. Here’s what I think is true: from Galileo to the present, no source of irreversibility—or even a hint of one—has ever been found in the microscopic laws of physics. Rather, all irreversibility that we know about seems to be tied up with decoherence, the Second Law of Thermodynamics, and stuff like that. This whole discussion started with the black hole information problem. There, the relevant point is that, if you want black holes to really, actually convert pure states into mixed states (as Hawking’s semiclassical calculation suggested), then you’d need irreversibility in the fundamental laws, rather than “just” thermodynamic irreversibility. That’s the part that many people (correctly, I think) regarded as suspicious, and these days I’d say that ideas like AdS/CFT have cast severe doubt on it. 132. wolfgang Says: @wolfgang #130 😎 >> classical clocks Actually, we can turn this into an interesting homework problem for mwi proponents: Assume that a measurement device is configured so that for outcome M1 a rocket of mass m is fired into outer space and for M2 nothing special happens. In other words, for |M1> the mass of the Earth will be reduced from M to M – m, while for |M2> the mass remains M. Therefore clocks on the surface of the Earth will tick slightly differently, according to general relativity. This is not a problem for Copenhagen, since either M1 or M2 happens.
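(To put a rough number on “slightly differently”: the fractional rate difference between the two branches should be of order \( Gm/(Rc^2) \), the change in gravitational potential at the Earth’s surface. For, say, a 10^4 kg rocket that works out to roughly 10^-30; absurdly small, but not zero, which is all the thought experiment needs.)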
But how does mwi handle this case? Does the unitary evolution of the universal wavefunction use clock time t1 or t2 or a combination of both? 133. Scott Says: wolfgang: All your questions about MWI are good ones! The irony is that I’ve often been on the opposite side of this, asking variants of the same questions you’re asking to the MWI true-believers. On the other hand, I think intellectual honesty compels one to acknowledge the severe problems on the Copenhagen side of the ledger. Indeed, one could imagine an MWI proponent gleefully asking you: “so, you believe that wavefunctions evolve by the Schrödinger equation, except that sometimes—when they get too complicated or something—they suddenly and violently ‘collapse’? Oh please, tell me more about this ‘collapse’ law. Do you have to be conscious to collapse stuff? Can a frog or a robot collapse wavefunctions? At what point does the collapse happen: the optic nerve? the visual cortex? the soul? Or does it just happen whenever you, like, entangle more than some special number of atoms? What is the magic number, then?” (To get the full effect, you need to imagine the MWIers laughing hysterically as they ask these things.) 134. wolfgang Says: @Scott #133 I think the Copenhagen interpretation ultimately works best in combination with solipsism: The reduction of the wave function happens when *I* experience something 😎 I guess this is why “shut up and calculate” is frequently the advice given to students … 135. James Gallagher Says: Jesus, (RIP) why can’t you guys just maybe think that Nature does all the collapsing and you just get to observe it? 136. Michael Brazier Says: Scott@133: The reply to the imaginary MWI proponent is simple: “I need only point out that, to date, nobody has ever seen a quantum particle in a superposition. If unitary evolution were indeed the whole story, we would have actual cases of unquestionably conscious observers being placed in a mixed state for a noticeable time and recohering afterwards – in fact it would be routine and unremarkable. In fact nothing of the sort has ever occurred; therefore there is almost certainly more to the story than unitary evolution. What that might be I cannot at present say, but that’s no reason to deny what’s before our eyes. Also, stop giggling, you look like an idiot.” IOW, the preferred-basis problem isn’t a simple technical matter that can be resolved by formal calculation. It’s a problem on the same level as the ones that give you qualms about MWI. Indeed, as the basic question is why we have to introduce the Born rule to get testable predictions out of the Schrodinger equation, the preferred-basis problem is a technical restatement of your first objection to the MWI proponents. 137. Mitchell Porter Says: @Scott #133: ‘one could imagine an MWI proponent gleefully asking you: “… What is the magic number, then?” (To get the full effect, you need to imagine the MWIers laughing hysterically as they ask these things.)’ Then they would be utter hypocrites, since they themselves are unable to give coherent answers regarding how many worlds there are, or exactly when it is that one world becomes two, or really any such details. 138. Audun Says: Re Scott #88 (anti-anti Platonist) Your emphasis on double negation being different from no negation would seem to put you in the intuitionist camp 😉 139. Scott Says: Michael #136: Sorry, that part of your response is just factually wrong. 
Decoherence (or ultimately, the low entropy of our universe’s initial state) explains perfectly well why we don’t see such things on the scale of everyday life. (Indeed, assuming a finite-dimensional Hilbert space, “macroscopic recoherences” shouldn’t start happening until the universe reaches thermal equilibrium, and there ceases to be any interesting evolution anyway. And in an infinite-dimensional Hilbert space, macroscopic recoherences need never happen.) 140. Scott Says: James #135: That’s certainly a possibility; indeed it’s the one that GRW and Roger Penrose advocate. But then the burden falls on the proponent to say: how does Nature decide when to trigger a “collapse” and when not to? And how is it that we haven’t yet noticed the effects of such “objective collapses” in experiments—even with superconducting qubits involving billions of electrons, or with molecules of hundreds of atoms traveling through a superposition of slits? What sort of experiment do you predict will reveal these collapses? 141. Utter Hypocrite Says: Mitchell Porter, Quickly scanning sources closely at hand, I’d say that loosely speaking a “world” is a complex, causally connected, partially or completely closed set of interacting sub-systems which don’t significantly interfere with other, more remote, elements in the superposition. Any complex system and its coupled environment, with a large number of internal degrees of freedom, qualifies as a world. An observer, with internal irreversible processes, counts as a complex system. In terms of the wavefunction, a world is a decohered branch of the universal wavefunction, which represents a single macrostate. The worlds all exist simultaneously in a non-interacting linear superposition. Worlds “split” upon measurement-like interactions associated with thermodynamically irreversible processes. How many “worlds” are there? The thermodynamic Planck-Boltzmann relationship, S = k*log(W), counts the branches of the wavefunction at each splitting, at the lowest, maximally refined level of Gell-Mann’s many-histories tree. This approach accepts the reality of the wave function and the QM formalism without bolting on a collapse postulate. This may not be “coherent” enough for you, and I don’t have the knowledge or background to argue the technical points, but a fair-minded person should, I think, acknowledge that this type of analysis is a reasonable, good faith effort to answer these difficult questions, and that the label “utter hypocrites” is a bit strained. 142. itai Says: Hi Scott, I thought about an anomaly that exists in the basics of probability theory and that could have some implications in QM — for Max Born’s probabilistic interpretation and the uncertainty principle — and maybe in TCS. It would also bring some set-theory-based math into physics (measure theory, the basis of modern probability, is built on set theory).
QM, like any statistical theory, presumes that the strong law of large numbers holds all the time (as n->inf, the average equals the mean). But the strong law of large numbers holds only when the expected value of the probability density function converges (in the sense of Lebesgue integration); there are many probability density functions for which either the expected value or the second moment fails this condition (they can converge under other types of integral, such as the improper Riemann or the gauge integral — see here for the status of integration definitions in math: http://www.math.vanderbilt.edu/~schectex/ccc/gauge/ ; the gauge integral also has some connection with QM path integration). So if some wave function has no first or second moment according to Lebesgue integration, then what is the SD and expected value of position, for example? And if there are no such wave functions (I don’t think that’s true, because the Cauchy/Lorentzian distribution is in use now), then such limiting conditions should be taken into account (adding, to the demand that the integral of |Psi|^2 dx = 1, the conditions that the integrals of |x|*|Psi|^2 dx and x^2*|Psi|^2 dx be finite). 143. wolfgang Says: @Utter Hypocrite #141 >> What about this ‘collapse’ law? Your questions are somewhat misguided to a Copenhagener, who is not talking about an “objective collapse”. *I* decide to reduce the wavefunction after a measurement, because it is no longer the most economic description of reality; This is a matter of convenience and therefore your questions cannot be answered as such. 144. Jerry Says: Re Scott #88 (anti-anti Platonist) Like the Grandfather Paradox, not-not-Platonist could mean Scott has a 50% chance of being a Platonist and 50% of being an anti-Platonist. 145. James Gallagher Says: Scott #141 Sorry I opened my big mouth again, especially after saying I would retire from commenting lol 🙂 It’s Easter, traditionally a time of peace, so I will tread carefully, I will just respond by saying that there is NO current experiment which indicates that collapse is not occurring – even the observations made in quantum zeno experiments are entirely consistent with a single path evolution in Hilbert Space – AS LONG AS that path is probabilistically generated! (ie every single collapse is probabilistic) However, as you know, I believe that we will start to observe performance issues with large-scale QC which will not be explainable by decoherence issues. The “scale” at which this will happen is of course the thing I should be able to predict, before opening my big mouth again. Happy Easter! 146. Utter Hypocrite Says: Look, I’m not saying that you shouldn’t adjust your thinking because you happen to come into possession of more information. But don’t tell me you are, as you seem to hint, a hardcore solipsist; if you think that not only the limited world we can observe, and the “universe” we can surmise, but everything there is or could ever be, depends on what you may happen to come up with. And this is your reason for not being able to at least try and answer my questions? Seriously, give it a try. 147. wolfgang Says: >> Seriously, give it a try. OK, one more time (I promise it is the last time). We follow the advice of the famous philosopher Rumsfeld and divide the world into these three parts: 1) the known unknown: the qubits which we can describe with a wavefunction. 2) the unknown unknown: the internal state of the environment, which includes large parts of the observer.
3) the known: my mental state (it is all I really know) Where one draws the line between 1) and 2) and 3) is up to the particular experimental situation. Perhaps you can arrange it so that nerve cell 563,783,123 in your right eye is part of 1), but in most cases it will be part of 2). I can assure you that 1) will always be smaller than 2) and therefore talk about a universal wavefunction is not economic. I can also tell you that it would not be very economic to describe 3) with a wavefunction, because I know what I know. So when the wavefunction from 1) entangles with 2) during the experiment, decoherence sets in, and when it finally reaches 3) it is reduced. But again, trying to describe this with one wavefunction would not make much sense … therefore your questions are misguided. 148. itai Says: Will you consider wave functions whose first/second moments do not converge in the Lebesgue sense (so there is no SD) to be the unknown unknowns? Or will you consider them physically impossible? 149. wolfgang Says: @itai #148 in many cases the lazy physicists imagine the experiment to happen in some sort of box (e.g. the laboratory), using appropriate boundary conditions to make sure the integral(s) over the wavefunction(s) converge. If one talks about the ‘wavefunction of the universe’ etc. your problem could become a real issue (how do you normalize the universal wavefunction?) Btw one needs to keep in mind that in real experiments we often do not know the correct physics yet – this is why we do the experiment in the first place. So we do not know the wavefunction before the experiment. How does a mwi proponent a la Tegmark describe this situation? Did the multiverse split into different branches, with e.g. different masses for the Higgs? 150. itai Says: @wolfgang #149 I know that in physics we normalize the wave function; otherwise it would not be a legal distribution function. I am talking about a different problem: the strong law of large numbers (which assures us that the average as n->inf equals the expectation) does not hold for all distributions (when the first moment does not converge), and some distributions have a first moment but infinite variance (the second moment does not converge). My question is whether a wave function can act like this. If it is physically possible, what does it mean for the uncertainty principle? If such a wave function is not physically possible, does someone take this into account, so that we can add some more constraints on legal wave functions? (I know the Cauchy and Levy distributions are in use in the physics world; I have no idea if they have any connection to wave functions. I have numerous other distribution examples.) 151. fred Says: itai #150 But in QM the probability distribution is from the square of the modulus of the amplitude. So the distribution is always positive, no? I don’t think it’s possible to get the moments you suggest with strictly positive values, no? (I could be wrong, I’m no expert) 152. itai Says: fred #151 As far as I know, the wave functions in QM range over (-inf, inf); only the probability (the absolute value of the wave function squared) is always positive or zero. So I don’t see any theoretical reason why the first or second moment of a wave function must always be absolutely integrable. Anyway, it is possible for a positive-values-only distribution to have a first moment but no second moment (meaning the variance is infinite). For example, take X=1/sqrt(U) where U is a uniform distribution on (0,1). It has density function f(x)=2*x^(-3) for x>=1, so Var(X) = infinity, but E(X) = 2. 153.
Douglas Knight Says: Scott: As a general rule, it’s about a trillion times easier to write a correct program…when the program’s only purpose is to solve some fixed, well-defined math problem, rather than interfacing with a wide array of human users in a way that’s secure against any of those users who might be malicious and exploit vulnerabilities in the code! That’s true, but presents a false dichotomy. One strategy for writing secure code is to formalize the requirements. Ideally in a DSL that generates the actual code. That doesn’t address most side channel attacks (timing, compression,…) but it does address many other attacks, such as Heartbleed and the recent validation problems. cf langsec. (All of which is completely orthogonal to your point.) 154. srp Says: I don’t know if I buy the simple reversibility argument. Suppose we were to fracture a piece of glass into two pieces by applying a specific force vector to it. I don’t really believe that it would reform into a single piece if the force vector were exactly reversed. And I don’t think the problem is one of not being able to get exact enough coordinates to get the vector reversed. There’s something going on with the way molecular bonds work so that the newly created surface is not “open” to re-bonding with the other piece in the way it was configured before. (It might be instructive to compare putty-like to rigid materials in this regard.) Obviously, there must be some threshold size of object where this phenomenon would first set in, because single-particle processes are observed to be reversible. I don’t know if it’s the atomic, the molecular, the ten-molecule, etc. level where this happens. But it might be more instructive to focus on this sort of specific material system rather than generalized abstract systems to get some intuition about the mechanisms involved in irreversibility. 155. Scott Says: srp #154: Yes, the glass would coalesce back together, and it’s not a matter of debate or opinion. It’s 18th-century physics, emphatically upheld by everything that’s come since. Keep in mind, however, that you would need to perfectly reverse not merely the motions of the glass molecules, but also the motions of all the photons that escaped, all the motions that those photons caused after getting absorbed by electrons somewhere else, etc. If you time-reverse everything (well, and also reverse left and right in weak decays, and reverse matter and antimatter, in the unlikely event that either of those is relevant 🙂 ), then you’ll certainly obtain another valid solution of the equations of physics. It will just be one with an extremely bizarre initial condition, one that has the bizarre property of causing a pristine glass to form out of fragments. 156. srp Says: I don’t see in this case how to reverse a photon’s motion exactly since it doesn’t have a classical trajectory and seems like it might come up with a different “collapse” in terms of polarization, etc. when it gets “measured” by the fracture surface (compared to its “measured” state when it flew away from the fracture surface). I thought these kinds of measurements have stochastic outcomes, but likely there is some aspect of the theory I’ve overlooked. In any case, assuming I am wrong, perhaps there is room for some careful measurement of this phenomenon with very small matter clusters, gradually increasing their size (and maybe varying their rigidity) to see where the finickiness of the initial conditions sets in. 
It’s relatively easy (there’s a wide range of initial conditions) to get two fundamental particles to reverse a decay process or a collision that led to fragmentation. It’s almost impossible to do it with a storm window. Somewhere in between is the knee of the finickiness curve. 157. Serge Says: Scott #102: Actually, this conflation was made on purpose. As Poincaré used to say, mathematics is the art of giving different things the same name. It seemed indeed interesting to me to compare an attempt to solve a fixed, well-defined math problem by quantum means, with an attempt to solve some real-world security issue by classical means. In both cases, elements of uncertainty are at work so that a plethora of malicious users could play the role of the fundamental indeterminacy of quantum mechanics. In such a view, the difficulties encountered in building a quantum computer appear as a consequence of some general principle one might call “the uncertainty of the fight against uncertainty”. It bears some resemblance to the (possible) unprovability of the unprovability of P≠NP. 158. Itai Says: The answer to your quest for a physical law that will explain those observations about computational models is the law of least action, where action = physical computational complexity. It is clear why it should be mathematical time complexity, as it counts the “actions” of a computation machine and not actual time! That the action is not always minimal can explain the problem in solving NPC problems! 159. T Says: What about a warp bubble computer? Let me be the first to say that I don’t really know what I’m asking.
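As an aside on the reversibility exchange above (comments #127, #131 and #154–156): here is a minimal numerical sketch, not from the thread itself, of the point that unitary evolution can be undone exactly by applying the inverse unitary. The dimension, the random state, and the use of a QR decomposition to build the unitary are purely illustrative assumptions.

```python
# Minimal sketch (illustrative choices, not anyone's actual code):
# build a random unitary U, apply it to a normalized state, then apply
# U^{-1} = U^dagger and recover the original state to machine precision.
import numpy as np

rng = np.random.default_rng(0)
dim = 8                                              # toy Hilbert-space dimension

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)                               # the Q factor of a QR decomposition is unitary

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)                           # normalized initial state

evolved = U @ psi                                    # "branching" evolution
rewound = U.conj().T @ evolved                       # pressing rewind: apply U^{-1}

print(np.allclose(rewound, psi))                     # True: the evolution is undone exactly
```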
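And a second aside, on the moments discussion in comments #150–152: a short simulation, with sample sizes and seed chosen only for illustration, of a distribution with a finite mean but infinite variance (the X = 1/sqrt(U) example) next to the Cauchy distribution, which has no mean at all. The strong law of large numbers only needs E|X| < ∞, so the sample mean of X settles near 2 while its sample variance never settles, and the Cauchy sample mean never settles either.

```python
# Minimal simulation sketch: X = 1/sqrt(U), U uniform on (0,1], has density
# 2*x**-3 for x >= 1, hence E[X] = 2 but E[X**2] diverges.
import numpy as np

rng = np.random.default_rng(1)
for n in (10**3, 10**5, 10**7):
    u = 1.0 - rng.random(n)               # uniform on (0, 1], avoids division by zero
    x = 1.0 / np.sqrt(u)                  # finite mean, infinite variance
    c = rng.standard_cauchy(n)            # no mean, no variance
    print(n, round(x.mean(), 3), round(x.var(), 1), round(c.mean(), 2))
# The second column hovers near 2; the third keeps drifting upward rather than
# converging; the fourth jumps around and never settles near any value.
```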
The Schrödinger equation is a partial differential equation whose solution is the wave function, whose squared modulus gives the probability density of a given particle over space. The general form of the Schrödinger equation is i \hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \hat H \Psi(\mathbf{r},t) where i is the imaginary unit, ħ is the reduced Planck constant, Ψ is the wave function, and Ĥ is the Hamiltonian operator (representing the total energy of the system). In stationary states (where the probability density does not depend on time, such as in an atomic or molecular orbital) this simplifies to E \Psi = \hat H \Psi where E is a constant. This is known as the time-independent Schrödinger equation.
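A minimal numerical illustration of the time-independent equation above (my own sketch, not part of the source text): discretize the Hamiltonian of a particle in a box with hard walls on a grid and compare the lowest eigenvalues of the resulting matrix with the textbook spectrum E_n = n^2 π^2 ħ^2 / (2 m L^2). Units with ħ = m = L = 1 and the grid size are arbitrary choices.

```python
# Finite-difference sketch of E*Psi = H*Psi for the infinite square well
# (hbar = m = L = 1 assumed; grid size chosen only for illustration).
import numpy as np

N = 500                                   # interior grid points
dx = 1.0 / (N + 1)

# Kinetic operator -1/2 d^2/dx^2 as a tridiagonal matrix; the potential is
# zero inside the box and the hard walls are enforced by the truncation.
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

numeric = np.linalg.eigvalsh(H)[:3]
exact = np.array([1, 4, 9]) * np.pi**2 / 2.0          # n^2 pi^2 / 2 in these units

print(np.round(numeric, 4))               # numerically close to the exact spectrum below
print(np.round(exact, 4))                 # [ 4.9348  19.7392  44.4132 ]
```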
Griffiths' Quantum Mechanics prerequisites 1. Sep 9, 2015 #1 I am a math major, currently in my 3rd year of undergraduate studies, majoring in measure theory / probability / mathematical statistics. I am in the dubious situation that I will be taking a course on QM while having so far only studied classical mechanics (i.e. all chapters on classical mechanics in the book by Young and Freedman, including waves). The QM course will begin two months from now, so I have some time to prepare. Since I will later be taking electromagnetism (following the book by Griffiths), I'm considering two ways of preparing: 1. the QM course will be based on the QM book by Griffiths, up to chapter 5. One option for preparation would be that I read this book in advance. 2. Or I might read Griffiths' electrodynamics before taking QM, in order to achieve "maturity" in physics. Do you think that option 2 would be a big advantage, or perhaps even a necessary condition, for studying QM? Officially, EM is not a prerequisite for taking the QM course. I would prefer to do option 1, due to lack of time. I very much hope to hear from you. Many thanks. 3. Sep 9, 2015 #2 EM is not a prerequisite for QM if you only aim to stop before the time-dependent Schrödinger equation. EDIT: Upon second thought, it may be very helpful if you have some knowledge of at least electrostatics and magnetostatics, as most quantum systems considered in that book deal with those two basic topics in EM. 4. Sep 9, 2015 #3 I too am a math graduate and took the courses you mentioned in measure theory and probability, although I sub-majored in Hilbert spaces and applications. I self-studied physics in general and QM in particular from sources more advanced than Griffiths, e.g. Dirac and von Neumann. You have sufficient preparation for Griffiths. In fact you have sufficient preparation for Ballentine - but start with Griffiths then move on to Ballentine, which will give you a much better mathematical foundation to QM than Griffiths; e.g. Ballentine introduces the important issue of rigged Hilbert spaces, which justifies the usual math found in a book like Griffiths that with your math background you will immediately recognise as a crock of the proverbial (it's the use of the dreaded Dirac delta function). As an aside, and it will also help in your probability studies, I recommend studying the following to understand Dirac delta functions etc.: From your major's viewpoint it covers the interesting Bochner's Theorem: IMHO up to chapter 5 is fine without EM. You do need a little bit - but it's basic, i.e. the inverse-square Coulomb law and its associated Hamiltonian. Last edited: Sep 9, 2015 5. Sep 9, 2015 #4 Griffiths is a good book to start your QM education with, but I'd try to study some basic quantum physics principles on your own first. I'd recommend reading the first few chapters and seeing which topics give you the most trouble, then review from there. Since you're a math student, you shouldn't have any trouble with the math. In Griffiths, the author mostly focuses on trying to develop a new intuition for working with the "different" behaviors of QM rather than going all-in with math. I took a year of somewhat basic "quantum physics" before taking intro to QM, and we covered the entirety of the Griffiths text in a semester. It was hard, but doable.
You have a stronger math background than I did and your class will only cover the first 5 chapters, so I think you will be okay! 6. Sep 10, 2015 #5 If it's been a while since you've done ODEs and PDEs, you might want to brush up, if you've been doing probability and statistics for a while. Mostly boundary value problems and differential equations. Also, eigenvectors and other linear algebra stuff. 7. Dec 30, 2015 #6 thanks a LOT for your answers. I wasn't sure if it is a "bad habit" to bump the thread by saying thank you, so I didn't. Anyway, I am now almost through the first 5 chapters, and everything has been completely fine without knowledge of EM, as you said it would be. I have decided to take part 2 of the course, in which we will study the second part of Griffiths. I now suspect, and can see from the index of the book, that EM will now be used. I have very limited time to prepare, but will be reading as much as I can in Griffiths' EM book. Do you think it will be sufficient to study the chapters on electrostatics and magnetostatics? He orders the chapters in the following way: 2. Electrostatics, 3. Potentials, 4. Electric fields in matter, 5. Magnetostatics. Is it possible to skip 3 and 4, and read only 2 and 5? Hope to hear from you.
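Since the Dirac delta function came up above (post #3): a small sketch, with my own illustrative choices of test function, widths and grid, of the one fact Griffiths relies on constantly — that a narrow normalized Gaussian δ_ε behaves like a delta function, in the sense that ∫ δ_ε(x) f(x) dx → f(0) as ε → 0.

```python
# Nascent-delta-function sketch (test function, widths and grid are arbitrary choices).
import numpy as np

def smeared_value(f, eps, span=10.0, points=200001):
    """Integrate f against a normalized Gaussian of width eps centered at 0."""
    x = np.linspace(-span, span, points)
    dx = x[1] - x[0]
    delta_eps = np.exp(-x**2 / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))
    return np.sum(delta_eps * f(x)) * dx

for eps in (1.0, 0.1, 0.01):
    print(eps, smeared_value(np.cos, eps))   # approaches cos(0) = 1 as eps shrinks
```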
Saturday, December 31, 2005 Testing E=mc2 for centuries Chad Orzel seems to disagree with my comments about the interplay between the theory and experiment in physics. That's too bad because I am convinced that a person who has at least a rudimentary knowledge about the meaning, the purpose, and the inner workings of physics should not find anything controversial in my text at all. Orzel's text is titled "Why I could never be a string theorist" but it could also be named "Why I could never be a theorist or something else that requires using the brain for extended periods of time". Note that the apparently controversial theory won't be string theory; it will be special relativity. The critics who can't swallow string theory always have other, much older and well-established theories that they can't swallow either. The previous text about the theory vs. experiment relations Recall that I was explaining a trivial fact that in science in general and physics in particular, we can predict the results of zillions of experiments without actually doing them. It's because we know general enough theories - that have been found by combining the results of the experiments in the past with a great deal of theoretical reasoning - and we know their range of validity and their accuracy. And a doable experiment of a particular kind usually fits into a class of experiments whose results are trivially known and included in these theories. This is what we mean by saying that these are the correct theories for a given class of phenomena. An experiment with a generic design is extremely unlikely to be able to push the boundaries of our knowledge. When we want to find completely new effects in various fields, we must be either pretty smart (and lucky) or we must have very powerful apparata. For example, in high-energy physics, it's necessary that we either construct accelerators that accelerate particles to high energies above 100 GeV or so - this is why we call the field high-energy physics - or we must look for some very weak new forces, for example modifications of gravity at submillimeter experiments, or new, very weakly interacting particles. (Or some new subtle observations in our telescopes.) If someone found a different, cheaper way to reveal new physics, that would be incredible; but it would be completely foolish to expect new physics to be discovered in a generic cheap experiment. Random experiments don't teach us anything It's all but guaranteed that if we construct a new low-energy experiment with the same particles that have been observed in thousands of other experiments and described by shockingly successful theories, we are extremely unlikely to learn anything new. This is a waste of taxpayers' money, especially if the experiments are very expensive. In the particular case of the recent "E=mc^2 tests", the accuracy was "10^{-7}" while we know experimentally that the relativistic relations are accurate with the accuracy "10^{-10}", see Alan Kostelecky's website for more concrete details. We just know that we can't observe new physics by this experiment. Good vs. less good experiments In other fields of experimental physics, there are other rules - but it is still true that one must design a smart enough experiment to be able to see something new or to be able to measure various things (or confirm the known physical laws) with a better accuracy than the previous physicists.
There are good experimentalists and less-good experimentalists (and interesting and not-so-interesting experiments) which is the basic hidden observation of mine that apparently drives Ms. or Mr. Orzel up the wall. Once again: What I am saying here is not just a theorist's attitude. Of course that it is also the attitude of all good experimentalists. It is very important for an experimentalist to choose the right doable experiments where something interesting and/or new may be discovered (or invented) with a nonzero probability. There is still a very large difference between the experiments that reveal interesting results or inspire new ideas and experiments that no one else finds interesting. Every good experimentalist would subscribe to the main thesis that experiments may be more or less useful, believe me. Then there are experimentalists without adjectives who want to be worshipped just for being experimentalists and who disagree with my comments; you may guess what the reason is. Of course that one may design hundreds of experiments that are just stamp-collecting - or solving a homework problem for your experimental course. I am extremely far from thinking that this is the case everywhere outside high-energy physics. There have been hundreds of absolutely fabulous experiments done in all branches of physics and dozens of such experiments are performed every week. But there have also been thousands of rather useless experiments done in all these fields. Too bad if Ms. or Mr. Orzel finds it irritating - but it is definitely not true that all experiments are created equal. Interpreting the results Another issue is that if something unexpected occurred in the experiment that was "testing E=mc^2", the interpretation would have to be completely different from the statement that "E=mc^2" has been falsified. It is a crackpot idea to imagine that one invents something - or does an experiment with an iron nucleus or a bowl of soup - that will show that Einstein was stupid and his very basic principles and insights are completely wrong. Hypothetical deviations from the Lorentz invariance are described by terms in our effective theories. Every good experimentalist first tries to figure out which of them she really measures. Neither of these potential deviations deserves the name "modification of the mass-energy relation" because even the Lorentz-breaking theories respect the fact that since 1905, we know that there only exists one conserved quantity to talk about - mass/energy - that can have various forms. We will never return to the previous situation in which the mass and energy were thought to be independent. It's just not possible. We know that one can transform energy into particles and vice versa. We can never unlearn this insight. New physics vs. crackpots' battles against Einstein Einstein was not so stupid and the principles of his theories have been well-tested. (The two parts of the previous sentence are not equivalent but they are positively correlated.) To go beyond Einstein means to know where there is room for any improvement, clarification, or deformation of his theories and for new physics, and the room is simply not in the space of ideas that "E=mc^2 is wrong" or "relativity is flawed". A good experimentalist must know something about the theory, to avoid testing his own laymen's preconceptions about physics that have nothing to do with the currently open questions in physics.
Whether an experimental physicist likes it or not, we know certain facts about the possible and impossible extensions and variations of the current theories - and a new law that "E=mc^2" will be suddenly violated by one part in ten million in a specific experiment with a nucleus is simply not the kind of modification that can be done with the physical laws as we know them. Anyone who has learned the current status of physics knows that this is not what serious 21st century physics looks like. The current science is not about disproving some dogmatic interpretations of Bohr's complementarity principle either. Chad Orzel is not the only one who completely misunderstands these basic facts. Hektor Bim writes: • Yeah, this post from Lubos blew me away, and I’ve been trained as a theorist. Well, it does not look like a too well-trained one. • As long as we are still doing physics (and not mathematics), experiment rules. Experiments may rule, but there are still reasonable (and even exciting) experiments and useless (and even stupid) experiments. Whoever thinks that the "leading role" of the experiments means that the experimentalists' often incoherent ideas about physics are gonna replace the existing theories of physics and that every experiment will be applauded even if it is silly is profoundly confused. Weak ideas will remain weak ideas regardless of the "leading role" of the experiments. • What also blew me away is that Lubos said that “There is just no way how we could design a theory in which the results will be different.” This is frankly incredible. There are an infinite number of ways that we could design the theory to take into account that the results would be different. Once again, there is no way to design a scientific theory that agrees with the other known experiments but that would predict a different result of this particular experiment. If you have a theory that agrees with the experiments in the accelerators but gives completely new physics for the iron nucleus, you may try to publish it - but don't be surprised if you're described as a kook. Of course that crackpots always see millions - and the most spectacular among them infinitely many ;-) - ways to construct their theories. The more ignorant they are about the workings of Nature, the more ways to construct the theories of the real world they see. The most sane ones only think that it is easy to construct a quantum theory of gravity using the first idea that comes to your mind; the least sane ones work on their perpetuum mobile machines. I only mentioned those whose irrationality may be found on the real axis. If we also included the cardinal numbers as a possible value of irrationality, a discussion of postmodern lit crits would be necessary. Scientific theories vs. crackpots' fantasies Of course someone could construct a "theory" in which relativity including "E=mc^2" is broken whenever the iron nuclei are observed in the state of Massachusetts - much like we can construct a "theory" in which the law of gravity is revoked whenever Jesus Christ is walking on the ocean. But these are not scientific theories. They're unjustifiable stupidities. The interaction between the theory and experiments goes in both ways It is extremely important for an experimental physicist to have a general education as well as feedback from the theorists to choose the right (and nontrivial) things to measure and to know what to expect.
It is exactly as important as it is for a theorist to know the results of the relevant experiments. Another anonymous poster writes: • What Lumo seems to argue is that somehow we can figure out world just by thinking about it. This is an extremely arrogant and short-sighted point of view, IMPO – and is precisely what got early 20th century philosophers in trouble. What I argue is that it is completely necessary for us to be thinking about the world when we construct our explanations of the real world as well as whenever we design our experiments. And thinking itself is responsible at least for one half of the big breakthroughs in the history of science. For example, Einstein had deduced both special relativity as well as general relativity more or less by pure thought, using only very general and rudimentary features of Nature known partially from the experiments - but much more deeply and reliably from the previous theories themselves. (We will discuss Einstein below.) Thinking is what the life of a theoretical physicist is mostly about - and this fact holds not only for theoretical physicists but also other professions including many seemingly non-theoretical ones. If an undereducated person finds this fact about the real world "arrogant", it is his personal psychological problem that does not change the fact that thinking and logical consistency are among the values that matter most whenever physical theories of the real world are deduced and constructed. The anonymous poster continues: • By the same reasoning the orbits of the planets must be circular – which is what early “philosophers” argued at some point. Circular orbits were an extremely useful approximation to start to develop astrophysics. We have gone through many other approximations and improvements, and we have also learned how to figure out which approximations may be modified and which cannot. Cutting-edge physics today studies neither circular orbits nor the questions whether "E=mc^2" is wrong; it studies very different questions because we know the answers to the questions I mentioned. Pure thought in the past and present A wise physicist in 2005 respects the early scientists and philosophers for what they have done in the cultural context that was less scientifically clear than the present era, but she clearly realizes their limitations and knows much more than those early philosophers. On the other hand, a bad and arrogant scientist in 2005 humiliates the heroes of the ancient science although he is much more dumb than they were, and he is asking much more stupid questions and promoting a much more rationally unjustifiable criticism of science in general than the comparably naive early philosophers could have dreamed about. Of course that in principle, one can get extremely far by pure thought, if the thought is logically coherent and based on the right principles, and many great people in the history of science indeed had gotten very far. These are the guys whom we try to follow, and the fact that there have been people who got nowhere by thinking cannot change the general strategy either. • Anthropic principle completely destroys whatever is left of the “elegance” argument, which is why it’s entertaining to see what will happen next. 
I know that some anti-scientific activists would like to destroy not only the "elegance" of science but the whole science - and join forces with the anthropic principle or anything else if necessary - but that does not yet mean that their struggle has any chance to succeed or that we should dedicate them more than this single paragraph. Another anonymous user writes: • As far as what Lubos meant, only he can answer that. But it would be obviously foolish to claim relativity could have been deduced without experimental input, and Lubos, whatever else he might be, is no fool. History of relativity as a victory of pure thought If interpreted properly, it would not be foolish; it is a historical fact. For example, I recommend you The Elegant Universe by Brian Greene, Chapter 2, for a basic description of the situation. Einstein only needed a very elementary input from the experiments - namely the invariance of physical laws under uniform motion; and the constancy of speed of light - which naturally follows from Maxwell's equations and Einstein was sure that the constancy was right long before the experiments showed that the aether wind did not exist. It is known pretty well that the Michelson-Morley experiments played a rather small role for Einstein, and for some time, it was even disputed whether Einstein knew these experiments at all back in 1905. (Yes, he did.) Some historians argue that the patented ideas about the train synchronization themselves played a more crucial role. I don't believe this either - but the small influence of the aether wind experiments on Einstein's thinking seems to be a consensus of the historians of science. Einstein had deeply theoretical reasons to be convinced about both of these two assumptions. Symmetry such as the Galilean/Lorentz symmetry or "the unity of physical explanations" are not just about some irrelevant or subjective concepts of "beauty". They are criteria that a good physicist knows how to use when he or she looks for better theories. The observation that the world is based on more concise and unified principles than what the crackpots and laymen would generally expect is an experimentally verified fact. These two observations are called the postulates of special relativity, and the whole structure of special relativity with all of its far-reaching consequences such as the equivalence of matter and energy follows logically. Needless to say, all of these effects have always been confirmed - with accuracy that currently exceeds the accuracy available to the experimentalists of Einstein's era by very many orders of magnitude. Special relativity is a genuine and true constraint on any theory describing non-gravitational phenomena in our Universe, and it is a strong constraint, indeed. Importance of relativity Whoever thinks that it is not too important and a new experiment with a low-energy nucleus may easily show that these principles are wrong, which essentially allows us to ignore special relativity, and that everything goes after all, is a crackpot. General relativity: even purer thought In a similar way, the whole structure of general relativity was derived by the same Einstein purely by knowing the previous special theory of relativity plus Newton's approximate law of gravity, including the equivalence of the inertial and gravitational mass; the latter laws were 250 years old. There was essentially no room for experiments. The first experiments came years after GR was finished, and they always confirmed Einstein's predictions. 
The known precession of Mercury's perihelion is an exception; the anomalous precession itself was known before Einstein, but Einstein only calculated it after he had completed his GR, and hence the precession could not directly influence his construction of GR. He was much more influenced and impressed by Ernst Mach, an Austrian philosopher. I don't intend to promote Mach - but my point definitely is to show that the contemporary experiments played a very small role when both theories of relativity were being developed. There were also some experiments that argued that they rejected the theory, and Einstein knew that these experiments had to be wrong because "God was subtle but not malicious". Of course that Einstein was right and the experiments were wrong. (Similar stories happened to many great theoretical physicists; an experiment of renowned experimentalists that claimed to have falsified Feynman-Gell-Mann's theory of V-A interactions was another example - and Feynman knew right away when he was reading the paper that the experimentalists were just being silly.) Our certainty today that special relativity (or the V-A nature of the weak interactions) is correct in the "simply doable" experiments is much higher than our confidence in any single particular experimentalist. You may be sad or irritated, but that's about everything that you can do against this fact. Other theories needed more experiments It would be much harder to get that far without experiments in quantum mechanics and particle physics, among many other branches of physics and science, but whoever questions the fact that there are extremely important insights and principles that have been found - and/or could be found or can be found - by "pure thought" (or that were correctly predicted long before they were observed), is simply missing some basic knowledge about science. Although I happily admit that we could not have gotten that far without many skillful (and lucky) experimentalists and their experiments, there have been many other examples beyond relativity in which important theories and frameworks were developed by pure mathematical thinking whose details were independent of experiments. The list includes, among hundreds of other examples, • Dirac's equation. Dirac had to reconcile the first-order Schrödinger equation with special relativity. As a by-product, he also predicted something completely unknown to the experimentalists, namely antiparticles. Every successful prediction may be counted as an example of theoretical work that was not driven by experiments. • Feynman's diagrams and path integral. No one ever really observed "diagrams" or "multiple trajectories simultaneously contributing to an experiment". Feynman appreciated Dirac's theoretical argument that the classical concept of the action (and the Lagrangian) should play a role in quantum mechanics, too, and he logically deduced that it must play a role because of his sum over trajectories. The whole Feynman diagram calculus for QED (generalizable to all other QFTs) followed by pure thought. Today we often say that an experiment "observes" a Feynman diagram but you should not forget about the huge amount of pure thought that was necessary for such a sentence to make any sense. • Supersymmetry and string theory. I won't provoke the readers with a description. Lorentz violations are not too interesting and they probably don't exist • If he is claiming that Lorentz invariance must be exact at all scales, then I agree that he’s being ridiculous.
But I think it is reasonable to claim that this experiment was not really testing Lorentz invariance at a level where it has not been tested before. What I am saying is that it is a misguided approach to science to think that the next big goal of physics is to find deviations from the Lorentz invariance. We won't find any deviations. Most likely, there aren't any. The hypotheses about them are not too interesting. They are not justified. They don't solve any puzzles. Even if we find the deviations and write down the corresponding corrections to our actions, we will probably not be able to deduce any deep idea from these effects. Since 1905 (or maybe the 17th century), we know that the Lorentz symmetry is as fundamental, important and natural as the rotational symmetry. The Lorentz violation is just one of many hypothetical phenomenological possibilities that can in principle be observed, but that will probably never be observed. I find it entertaining that those folks criticize me for underestimating the value of the experiments when I declare that the Lorentz symmetry is a fundamental property of the Universe that holds whenever the space is sufficiently flat. Why is it entertaining? Because my statement is supported by millions of accurate experiments while their speculation is supported by 0.0001 of a sh*t. It looks like someone is counting negative experiments as evidence that more such experiments are needed. The only reason why the Lorentz symmetry irritates so many more people than the rotational symmetry is that these people misunderstand 20th century physics. From a more enlightened perspective, the search for the Lorentz breaking is equally (un)justified as a search for the violation of the rotational symmetry. The latter has virtually no support because people find the rotational symmetry "natural" - but this difference between rotations and boosts is completely irrational as we have known since 1905. Parameterizing Lorentz violation In the context of gravity, the deviations from the Lorentz symmetry that can exist can be described as spontaneous symmetry breaking, and they always include considering the effect of gravity as in general relativity and/or the presence of matter in the background. In the non-gravitational context, these violations may be described by various effective Lorentz-breaking terms, and all of their coefficients are known to be zero with a high and ever growing degree of accuracy. Look at the papers by Glashow and Coleman, among others. Undoing science? The idea that we should "undo" the Lorentz invariance, "undo" the energy-mass equivalence, or anything like that is simply an idea to return physics 100 years into the past. It is crackpotism - and a physics counterpart of creationism. The experiments that could have been interesting in 1905 are usually no longer so interesting in 2005 because many questions have been settled and many formerly "natural" and "plausible" modifications are no longer "natural" or "plausible". The previous sentence comparing 1905 and 2005 would be obvious to everyone if it were about computer science - but in the case of physics, it is not obvious to many people simply because physics is harder to understand for the general public. But believe me, even physics has evolved since 1905, and we are solving different questions. 
The most interesting developments as of 2005 (for readers outside the Americas: 2006) are focusing on significantly different issues, and whoever describes low-energy experiments designed to find "10^{-7}" deviations from "E=mc^2" as one of the hottest questions in 2005 is either a liar or an ignoramus. It is very fine if someone is doing technologically cute experiments; but their meaning and importance should not be misinterpreted. Internet gender gap First, an off-topic answer. Celal asks me about the leap seconds - why the Earth has not already stopped rotating if there are so many leap seconds. The answer is that we are now indeed inserting a leap second in most years - which means that one year is longer by roughly 1 second than it was back in 1820 when the second was defined accurately enough. More precisely, what I want to say is that one solar day is now longer by roughly 1/365 of a second than it was in the 19th century; what matters is of course that noon stays at 12 pm. Although the process of slowing down the Earth's rotation has some irregularities, you can see that you need roughly 200 years to increase the number of the required leap seconds per year by one. In order to halve the angular velocity, you need to increase the number of leap seconds roughly by 30 million (the number of seconds per year), which means that you need 30 million times 200 years which is about 6 billion years. Indeed, at time scales comparable to the lifetime of the solar system, the length of the day may change by as much as 100 percent. 100 percent is a bit of an exaggeration because a part of the recent slowing is due to natural periodic fluctuations and aperiodic noise, not a trend. However, coral reefs indeed seem to suggest that there were about 400 days per year 0.4 billion years ago. Don't forget that the slowing down is exponential, I think, and therefore the angular velocity will never quite drop to zero (which has almost happened to our Moon). The BBC informs us that While the same percentage of men and women use the internet, they use it in very different ways & they search for very different things. Women focus on maintaining human contacts by e-mail etc. while men look for new technologies and ways to do new things in novel ways. • "This moment in internet history will be gone in a blink," said Deborah Fallows, senior research fellow at Pew who wrote the report. I just can't believe that someone who is doing similar research is simultaneously able to share such feminist misconceptions. The Internet has been around for ten years and there has never been any political or legal pressure for the men and women to do different things - the kind of pressures in the past that is often used to justify similar hypotheses about the social origin of various effects. Friday, December 30, 2005 Next target of terrorists: Indian string theorists A newspaper in Bombay informs that the terror attack at the Indian Institute of Science campus in Bangalore on Wednesday that killed a retired IIT professor has sent shockwaves through the Indian blogosphere. Blogger and researcher, Kate, wondered if Tata Institute of Fundamental Research [the prominent Indian center of string theory] would be the next target. Rashmi Bansal expressed sadness at scientists becoming the latest terror victims.
“I mean, sure, there would be some routine security checks at the gate, but who seriously believes that a bunch of scientists gathered to discuss string theory or particle physics could be of interest to the Lashkar-e-Toiba?” she wrote in her blog, Youth Curry. Ms. Bansal may change her mind if she analyzed some of the posters here - to see at least a "demo" of how the anger against the values of modern science can look. More generally, I emphasize that my warning is absolutely serious. It is not a joke, and I've erased a misleading anonymous comment that suggested that. Finally, I think that whoever thinks that a scientist cannot become a victim of terrorists is plain stupid. The Islamic extremists fight against the whole modern civilization, and the string theorists in India and elsewhere - much like the information technology experts - are textbook examples of the infiltration of the modern civilization and, indeed, the influence of the Western values - or at least something that was associated with the Western values at least for 500 years. Everyone who observes the situation and who is able to think must know that Bangalore has been on the terrorists' hit list for quite a while. If the person who signed as "Indian physicist" does not realize that and if he or she were hoping that the terrorists would treat him or her as a friend (probably because they have the same opinions about George W. Bush?), I recommend that he or she change the field because the hopes were completely absurd. I give my deepest condolences to the victim's family but I am not gonna dedicate special sorrow to the victim, Prof. Puri, just because he was a retired professor. There are many other innocent people being killed by the terrorists and I am equally sad for all of them. The death of the innocent people associated with "our" society is of course the main reason why I support the war on terror - or at least its general principles. The attack against the conference is bad, but for me it is no surprise. And the casualties of 9/11 were 3,000 times higher which should still have a certain impact on the scale of our reactions. Third string revolution predicted for physics CapitalistImperialistPig has predicted one for 2006, started by someone who is quite unexpected. It would be even better if the revolution appeared in the first paper of the year. Sidney Coleman Open Source Project Update: See the arXiv version of Sidney Coleman's QFT notes Jason Douglas Brown has been thinking about a project to transcribe the QFT notes of a great teacher into a usable open source book. I am going to use the notes in my course QFT I in Fall 2006; see the Course-notes directory. We are talking about 500 pages and about 10 people who would share the job. If you want to tell Jason that it is a bad or good idea, or join his team, send an e-mail to • jdbrown371 at Bayesian probability I See also a positive article about Bayesian inference... Two days ago, we had interesting discussions about "physical" situations where even the probabilities are unknown. Reliable quantitative values of probabilities can only be measured by the same experiment repeated many times. The measured probability is then "n/N" where "n" counts the "successful measurements" among all experiments of a certain kind whose total number is "N". This approach defines the "frequentist probability", and whenever we know the correct physical laws, we may also predict these probabilities.
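A tiny numerical aside on the n/N definition just above (my own sketch; the bias and sample sizes are arbitrary choices): simulate a biased coin with a known probability and watch the measured frequency n/N converge to it as the number of repeated experiments grows, with the usual ~1/sqrt(N) scatter.

```python
# Frequentist-probability sketch: n/N for a simulated biased coin (p chosen arbitrarily).
import numpy as np

rng = np.random.default_rng(42)
p_true = 0.3
for N in (100, 10_000, 1_000_000):
    n = int(np.sum(rng.random(N) < p_true))   # number of "successful measurements"
    print(N, n / N)                           # tends to 0.3 as N grows
```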
If you know the "mechanism" of any system in nature - which includes well-defined and calculable probabilities for all well-defined questions - you can always treat the system rationally. Unknown probabilities It is much more difficult when you are making bets about some events whose exact probabilities are unknown. Even in these cases, we often like to say a number that expresses our beliefs quantitatively. Such a notion of probability is called Bayesian probability and it does not really belong to the exact sciences. Thursday, December 29, 2005 All stem cell lines were fabricated Wednesday, December 28, 2005 Comment about the new colors: I believe that the new colors are not a registered trademark of Paul Ginsparg. Moreover, mine are better. Just a short comment about this creation of Jimbo Wales et al. I am impressed by how unexpectedly efficient Wikipedia is. Virtually all of its entries can be edited by anyone in the world, even without any kind of registration. When you realize that there are billions of not-so-smart people and hundreds of millions of active idiots living on this blue planet - and many of them have internet access - it is remarkable that Wikipedia's quality matches that of Britannica. But this kind of hypertext source of knowledge is exactly what the web was originally invented for. Moreover I am sure that Wikipedia covers many fields much more thoroughly than Britannica - and theoretical physics may be just another example. Start with the list of string theory topics, 2000+ of my contributions, or any other starting point you like. Try to look for the Landau pole, topological string theory, heterotic string, or thousands of other articles that volunteers helped to create and improve. Are you unsatisfied with some of these pages? You can always edit them. Tuesday, December 27, 2005 Hubble: cosmic string verdict by February Let me remind you that the Hubble pictures of the cosmic-string-lensing CSL-1 candidate, taken by Craig Hogan et al., should be available by February 2006. Ohio's Free Times interviews Tanmay Vachaspati who has studied cosmic strings for 20 years. (Via Rich Murray.) Monday, December 26, 2005 Evolution and the genome The editors of the Science magazine have chosen the evolution - more precisely, the direct observations of evolution through the genome - to be the scientific breakthrough of 2005. I think it is a fair choice. The analyses of the genome are likely to become a massive part of "normal science" with a lot of people working on it and a lot of successes and potential applications for the years to come. I expect many discoveries in this direction to shed light on the past lifeforms; on the explicit relationships between the currently existing species and their common ancestry; the evolutionary strategies of diseases and our strategies to fight them; and finally on the new possible improvements of the organisms that are important for our lives, and - perhaps - the human race itself. Stem cell fraud Incidentally, the breakthrough of the year 2005 for the U.S. particle physics is called "Particle physicists in the U.S. would like to forget about 2005" which may be fair, too.
However, the situation is still better than in stem cell research where some of the seemingly most impressive results in the past years - those by Hwang Woo Suk from Korea - have been identified as an indisputable fraud. Steve McIntyre points out that Hwang was one of Scientific American's 50 visionaries together with Michael Mann who, after a comparable incident (one involving the "hockey stick graph"), was not fired but instead promoted. Steve McIntyre has also written the world's most complete chronology of the scandal. Google tells you more about the sad story of the scientific consensus behind the former Korean national hero. It's amazing how this fraud that no one could apparently reproduce immediately gained 117 citations. Should we believe the Koreans - without testing them - because they are so skillful in manipulating chopsticks? Or perhaps because it is nice to see that U.S. science is falling behind - "certainly" because of George W. Bush? Have the people in that field lost their minds? Or is it really the case that the whole cloning field - or perhaps even all Bush critics in the world - are participating in a deliberate international fraud? Back to the positive story: the genetic evidence for evolution. New tools make questions solvable New scientific methods and technologies often have the capacity to transform an academic dispute whose character used to be almost religious into an obvious set of facts. Let me give you two examples. The death of hidden variables The first example is Bell's inequalities. Before they were found, it was thought that no one could ever determine whether the quantum mechanical "randomness" was just an emergent process based on some classical "hidden variables"; this debate was thought to be a philosophical one forever. After the inequalities were found and the experimental tests confirmed quantum mechanics, it became clear that the quantum mechanical "randomness" is inherent. It cannot be emergent - unless we were ready to accept that the underlying hidden variables obey non-local (and probably non-relativistic) classical laws of physics, which seems extremely unlikely. Sun's chemistry and spectroscopy My second example goes back to the 19th century. Recall that the philosopher Auguste Comte, the founder of positivism, remarked in his "Cours de philosophie positive" that the chemical composition of the Sun would forever remain a mystery. It only took seven years or so, until 1857, to show that Comte was completely wrong. Spectroscopy was discovered and it allowed us to learn the concentration of various elements in the Sun quite accurately. Unfortunately, this discovery came two years after Comte's death and therefore he could not see it. Incidentally, two more years later, in 1859, Darwin published his theory. The last we-will-never-know people Many people have been saying similar things about physics in general: physics could never determine or explain UV or XY - and all of these people have already been proved wrong except for those who argue that the parameters of the Standard Model can't be calculated with a better accuracy than what we can measure; the latter group will hopefully be proved wrong in our lifetime. Speed of evolution What do the new discoveries tell us about evolution? First of all, evolution is not fuzzy. It is "quantized", if you allow me to use physics jargon, and the evolutionary changes are directly encoded in the genes, from which they can be equally easily decoded.
A related and equally important observation is that the evolutionary changes are quite abrupt. We have never observed skeletons of bats with one wing and similar creatures - as the creationists (including those in a cheap tuxedo, using the words of pandas from level 2) have been quite correctly pointing out for decades. Indeed, it often takes only a single mutation to establish a new species. Many mutations are harmful and they immediately become a subject of natural selection. Some mutations allow the organisms to survive. All these changes were making the tree of life ramify and diversify - and they are still doing so although this process is nowadays slower than some other types of developments. Reply to Pat Buchanan Let me finally choose an article from Dembski's blog in which he reposts a text by Pat Buchanan. It is entertaining to see a text whose political part is more or less true but whose scientific part is so clearly and completely wrong. Let's clarify some errors of Buchanan's: • In his “Politically Correct Guide to Science,” Tom Bethell ... Surprisingly, the book is called "Politically Incorrect...", not "Politically Correct...". Tom Bethell is rather unlikely to be politically correct. • For generations, scientists have searched for the “missing link” between ape and man. But not only is that link still missing, no links between species have been found. That is because there are no highly refined intermediate links of the type Buchanan suggests; one mutation often makes these changes occur and the evolution is far from being a smooth, gradual, and continuous process. However, the chimps' genome has been decoded. We can not only see that chimpanzees are our closest relatives but also deduce the existence of a common ancestor. Our relationship with the chimps is no longer a matter of superficial similarity; a long sequence of bits - microscopic genetic information - reveals a much more detailed picture. • As Bethell writes, bats are the only mammals to have mastered powered flight. But even the earliest bats found in the fossil record have complex wings and built-in sonar. Where are the “half-bats” with no sonar or unworkable wings? Half-bats with unworkable wings are predicted by Darwin to die quite rapidly, so there should not be too many fossils around. Observations seem to confirm this prediction of Darwin's theory, too. Indeed, such changes must proceed quickly and today we know that a single change of the genome is capable of inducing these macroscopic changes. • Their absence does not prove — but does suggest — that they do not exist. Is it not time, after 150 years, that the Darwinists started to deliver and ceased to be taken on faith? Don't tell me that you don't think that this comment of Pat Buchanan sounds just like Peter Woit. ;-) Let me remark, in both cases, that 150 years and maybe even 30 years is probably a long enough time to start to think about the possibility that the "alternatives" to evolution or string theory can't ever work. • No one denies “micro-evolution” — i.e., species adapting to their environment. It is macro-evolution that is in trouble. First of all, it is not in trouble - it was chosen as the most spectacularly confirmed scientific paradigm by discoveries made in 2005. Second of all, the difference between "micro-evolution" and "macro-evolution" is just a quantitative one.
Most of the errors that Buchanan and other creationists do can be blamed on this particular error in their thinking: they incorrectly believe that objects in the world can be dogmatically and sharply divided to alive and not alive; intelligent and not intelligent; micro-evolution and macro-evolution. (And of course, someone would also like to divide the whole human population to believers and non-believers.) Neither of these categories can be quite sharply defined. Even though the species are defined by "discrete", "quantized" bits of information encoded in the genome, it does not mean that each species can be classified according to some old, human-invented adjectives. Science does not break down but the adjectives used in the unscientific debate - or the Bible - certainly do break down when we want to understand life (or the whole Universe, for that matter) at a deeper level. The world is full of objects whose "aliveness" is disputable - such as the viruses. The same world also offers evolutionary steps that can be safely classified neither as micro-evolution nor as macro-evolution. Finally, there are many organisms in the world that are only marginally intelligent, and I am afraid that this group would include not only chimps but maybe also some syndicated columnists. ;-) • The Darwinian thesis of “survival of the fittest” turns out to be nothing but a tautology. How do we know existing species were the fittest? Because they survived. Why did they survive? Because they were the fittest. I completely agree that the operational definition of the "fittest" is circular. It is the whole point of Darwin's notion of natural selection that "being the fittest" and "have a higher chance to survive" are equivalent. However, there is also a theoretical way to derive whether an animal is "the fittest" which can be used to predict its chances to survive. Such a derivation must, however, use the laws of nature in a very general sense - because it is the laws of nature that determine the chances to survive. Sometimes it is easy to go through the reasoning. A bird without legs in between the tigers does not have a bright future. Sometimes the conclusion is much harder to make. But the main message is that these questions can be studied scientifically and the answers have definitely influenced the composition of the species on our planet. • While clever, this tells us zip about why we have tigers. "Why we have tigers?" is not a scientifically meaningful question unless a usable definition of a tiger is added to it as an appendix. The Bible can answer such verbal, non-scientific question, by including the word "tiger" in one of the verses (and by prohibiting everyone to ask where the word and the properties of the animal came from). Science can only answer meaningful questions. For example, we may try to answer the question why the hairy mammals - beasts of prey - whose maximum speed exceeds 50 mph have evolved. • It is less a scientific theory than a notion masquerading as a fact. It is somewhat entertaining that the word "notion" is apparently supposed to have a negative meaning. Notions, concepts, and ideas are an essential part of our theories - and the word "theory" is not negative either because the best and most reliable things we know about the real world are theories based on notions and ideas. • For those seeking the source of Darwin’s “discovery,” there is an interesting coincidence. 
Those who judge the validity of a scientific theory according to the coincidences that accompanied its original discovery are the intellectual equivalents of chimpanzees, and therefore they are another piece of evidence for evolutionary biology. • As Bertrand Russell observed, Darwin’s theory is “essentially an extension to the animal and vegetable world of laissez-faire economics.” I completely agree with that. This is why both Darwin's theory and capitalism are the leading paradigms among their competitors. Many general ideas are shared by these two frameworks; other ideas are quite independent. • If it is science, why can’t scientists replicate it in microcosm in a laboratory? Of course they can replicate many particular examples in their labs. They can't replicate them exactly with the same speed as they occurred in Nature because such labs would have to cover 510 million square kilometers and they would have to work for 5 billion years. Nevertheless, the process can be sped up in many ways, at least in some particular situations. • If scientists know life came from matter and matter from non-matter, why don’t they show us how this was done, instead of asserting it was done, and calling us names for not taking their claims on faith? Let me assume that the first sentence talks about the reheating, to be specific. The reason why I probably can't show Pat Buchanan how different forms of matter or non-matter are transforming into each other according to the laws of quantum field theory or string theory - and why we know that it is the case without any religious beliefs - is that Pat Buchanan apparently does not have sufficient intelligence to understand my explanations. It's that simple. • Clearly, a continued belief in the absolute truth of Darwinist evolution is but an act of faith that fulfills a psychological need of folks who have rejected God. That may well be the case but such an ad hominem observation is completely irrelevant if there are clear proofs that the picture is correct. • Hence, if religion cannot prove its claim and Darwinists can’t prove their claims, we must fall back upon reason, which some of us believe is God’s gift to mankind. Unfortunately for Mr. Buchanan, this is not our situation because the Darwinists can prove their claims quite convincingly. By the way, the discovery of evolutionary biology is certainly one of God's big gifts to mankind, too. ;-) • And when you consider the clocklike precision of the planets in their orbits about the sun and ... The motion of the planets is exactly predictable by our theories. It is clocklike but not atomic-clock-like. Indeed, we can easily measure the irregularities in their motion - which means, among other things, that we will have to insert a leap second between 2005 and 2006 once again to counterbalance Nature's (or God's?) imperfection, so to speak. • ...the extraordinary complexity of the human eye, does that seem to you like the result of random selection or the product of intelligent design? It is the result of very sophisticated laws of Nature - physics, biology, and so on - whose important "emergent" feature responsible for much of the progress is natural selection. Natural selection is not quite random even though it can sometimes look so at short enough time scales. • Prediction: Like the Marxists, the Darwinists are going to wind up as a cult in which few believe this side of Berkeley and Harvard Square. It would be a bit nicer if only a few around Harvard Square believed in Marxism.
;-) Saturday, December 24, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere Merry Christmas Background sound (press ESC to stop): Jakub Jan Ryba's "Czech Christmas Mass" (Hey master, get up quickly); a 41:39 MP3 recording here Merry Christmas! This special season is also a great opportunity for Matias Zaldarriaga and Nima Arkani-Hamed to sing for all the victims of the anthropic principle who try to live in the bad universes (audio - sorry, the true artists have not been recorded yet): Friday, December 23, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere E=mc2: a test ... interplay between theory and experiment An experiment that is claimed to be the most accurate test of Einstein's famous identity "E=mc2" has been performed by physicists on the other side of Central Square - at MIT. Their accuracy is 55 times better than the accuracy of previous experiments. They measured the change of the mass of a nucleus associated with the emission of energy after it absorbs a neutron. I find their promotion of the experiment slightly dishonest: • "In spite of widespread acceptance of this equation as gospel, we should remember that it is a theory," said David Pritchard, a professor of physics at MIT, who along with the team reported his findings in the Dec. 22 issue of Nature. "It can be trusted only to the extent that it is tested with experiments." Thursday, December 22, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere TeX for PowerPoint: TeX4PPT Aurora is a new commercial LaTeX system for MS Office Some readers may have installed TeXpoint as an add-in to their PowerPoint. Let me now mention TeX4PPT, which is probably superior; everyone who uses TeX as well as PowerPoint should install this piece of free software. In this framework, you may create a new "text box" using the drawing toolbar. Inside the text box, you may write some $tex$. When you're finished, you right-click and choose TeXify. It will convert the text box into a nice piece of LaTeX. One internal advantage over TeXpoint is that it is directly the DVI that is being converted to Microsoft's own fonts. (TeXpoint was also generating a postscript as well as an image.) This means, among other things, that the text respects the background. The father of Bott periodicity died Via David G. Raoul Bott - a Harvard mathematician who was fighting against cancer in San Diego and who discovered, among other things, the Bott periodicity theorem in the late 1950s - died the night of December 19-20, 2005. His mother and aunts spoke Hungarian. However, his Czech stepfather did not, and therefore the principal language at home was German. At high school, on the other hand, he had to speak Slovak. His nanny was English, which helped young Bott to learn authentic English. To summarize this paragraph: one should not be surprised that Bott hated foreign languages. Blog of WWW inventor The person who invented the World Wide Web has started to write a blog. No, it is not a blog of Al Gore - Al Gore has only invented the Al Gore rhythms. The new blog belongs to Tim Berners-Lee who made his invention while at CERN, and currently lives here in Boston. Figure 1: The first web server in the world (1990) MIT talk: a theory of nothing Today, John McGreevy gave an entertaining MIT seminar mainly about the theory of nothing, a concept we will try to define later.
The talk described both the work about the topology change induced by closed string tachyon condensation as well as the more recently investigated role that the tachyons may play for a better understanding of the Big Bang singularity. Because we have discussed both of these related projects on this blog, let's try to look at everything from a slightly complementary perspective. Defining nothing First of all, what is nothing? John's Nothing is a new regime of quantum gravity where the metric tensor - or its vev - equals zero. This turns out to be a well-defined configuration in three-dimensional gravity described as Chern-Simons theory. It is also the ultimate "paradise" studied in canonical gravity and loop quantum gravity. Does "nothing" exist and is there anything to study about it? I remain somewhat sceptical. If the metric is equal to zero in a box, it just means that the proper lengths inside the box are zero, too. In other words, they are subPlanckian. The research of "nothing" therefore seems to me as nothing else from the research of the subPlanckian distances. This form of "nothing" is included in every piece of space you can think of, as long as you study it at extremely short distances. And we should not forget that the subPlanckian distances, in some operational sense, do not exist. I guess that John would disagree and he would argue that nothing is an "independent element" of existence; a phase in a phase diagram. I have some problems with this picture. Tachyons create nothing Wednesday, December 21, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere MIT talk: Susanne Reffert Yesterday we went to MIT to see the talk by Susanne Reffert who will be finishing her PhD under Dieter Lüst and who will probably continue her investigation of string theory in Amsterdam, turning down offers from the KITP and CERN. And it was a very nice talk. First of all, she uses Keynote, an Apple-based alternative for the PowerPoint which reconciles TeX and animations into a consistent whole. Moduli stabilization of F-theory flux vacua again There have been too many points in the talk to describe all of them here. They studied, among other things, all possible orientifolded and simultaneously orbifolded toroidal (T^6) vacua of type IIB string theory, their resolution, description in terms of toric geometry, flops, and especially the stabilization of the moduli. One of the unexpected insights was that one can't stabilize the Kähler moduli and the dilaton after the uplift to the de Sitter space if there are no complex structure moduli to start with; rigid stabilized anti de Sitter vacua may be found but can't be promoted to the positive cosmological constant case. Some possibilities are eliminated, some possibilities survive, if you require all moduli to be stabilized. Recall that the complex structure moduli and the dilaton superfield are normally stabilized by the Gukov-Vafa-Witten superpotential - the integral of the holomorphic 3-form wedged with a proper combination of the 3-form field strengths - while the Kähler moduli are stabilized by forces that are not necessarily supernatural but they are non-perturbative which is pretty similar. The latter nonperturbative processes used to stabilize the Kähler moduli include either D3-brane instantons or gaugino condensation in D7-branes. At this level, one obtains supersymmetric AdS4 vacua. Semirealistic dS4 vacua may be obtained by adding anti-D3-branes, but Susanne et al. do not deal with these issues. 
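For reference (added here for convenience; the formula below is the standard type IIB expression, not a quotation from the talk), the flux superpotential referred to above is the Gukov-Vafa-Witten superpotential

W_{GVW} = \int_X \Omega \wedge G_3,   with   G_3 = F_3 - \tau H_3,

where \Omega is the holomorphic 3-form of the Calabi-Yau X, F_3 and H_3 are the R-R and NS-NS 3-form field strengths, and \tau is the axio-dilaton. Its dependence on \tau and on the complex structure moduli (through \Omega) is what allows these fields to be stabilized, while the Kähler moduli do not appear in it and must be stabilized by the non-perturbative effects mentioned above.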
43rd known Mersenne prime: M30402457 One of the GIMPS computers that try to find the largest prime integers of the form • 2^p - 1 i.e. the Mersenne primes has announced a new prime which will be the 43rd known Mersenne prime. The discovery submitted on 12/16 comes 10 months after the previous Mersenne prime. It seems that the lucky winner is a member of one of the large teams. Most likely, the number still has less than 10 million digits - assuming that 9,152,052 is less than 10 million - and the winner therefore won't win one half of the $100,000 award. The Reference Frame is the only blog in the world that also informs you that the winner is Curtis Cooper and his new greatest exponent is p = 30,402,457. (Steven Boone became a co-discoverer; note added on Saturday.) You can try to search for this number on the whole internet and you won't find anything; nevertheless, on Saturday, it will be announced as the official new greatest prime integer after the verification process is finished around 1 am Eastern time. If you believe in your humble correspondent's miraculous intuition, you may want to make bets against your friends. ;-) Tuesday, December 20, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere Temperatures' autocorrelation Imagine that the Church started to control the whole society once again. A new minister of science and propaganda would be installed in his office. His name would not quite be Benedict but rather Benestad. How would they use scientific language to argue that the Bible in general and Genesis in particular literally describes the creation? They would argue that Genesis predicts water, grass, animals, the Sun, the Earth, and several other entities, and the prediction is physically sound. If anyone tried to focus on a possible discrepancy or a detail, Benestad would say that the heretics were pitching statistics against solid science. The choice of the name "Benestad" will be explained later. Do you think that the previous sentences are merely a fairy-tale? You may be wrong. First, we need to look at one scientific topic. Monday, December 19, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere Cosmological constant seesaw One of the reasons why I have little understanding for the Rube Goldberg landscape machines is that their main goal is to explain just one number, namely the cosmological constant, which could eventually have a simple rational explanation. Let me show you two explanations leading to the same estimate. Recall that the observed cosmological constant is of order (10^{-3} eV)^4, which is roughly (m_EW^2 / m_Planck)^4. This is almost exactly the same seesaw game with the scales as in the neutrino seesaw game. In the case of the neutrinos, we assume the right-handed SU(5)-neutral neutrinos to acquire GUT-scale masses - which is almost the same thing as the Planck scale above - and the unnaturally small value of the observed neutrino masses comes from the smaller eigenvalue(s) of the matrix ((m_GUT, m_EW), (m_EW, 0)). Blogs against decoherence If you're interested in a blog whose main enemy is decoherence - because they want to construct a quantum computer - see Everything new you need to know about the realization of quantum bits. LHC on schedule 2005, the international year of physics, has so far been a flawless year for the LHC. 1000 out of 1232 magnets are already at CERN; 200 magnets have already been installed. Update, September 2008: the protons start to orbit in the LHC on September 10th, 9:00 am, see the webcast.
But the collisions will only start in October 2008, before a winter break. In 2009, everything will be operating fully. Click the "lhc" category in the list below to get dozens of articles about the Large Hadron Collider. Distasteful Universe and Rube Goldberg machines A famous colleague of ours from Stanford has become very popular among the Intelligent Design bloggers. Why is it so? Because he is the unexpected prophet that suddenly revived Intelligent Design - an alternative framework for biology that almost started to disappear. How could he have done so? Well, he offered everyone two options. • Either you accept the paradigm shifting answer to Brian Greene's "Elegant Universe" - namely the answer that the Universe is not elegant but, instead, it is very ugly, unpredictable, unnatural, and resembling the Rube Goldberg machines (and you buy the book that says so) • Or you accept Intelligent Design. You may guess which of these two bad options would be picked by your humble correspondent and which of them would be chosen by most Americans. What does it mean? A rather clear victory for Intelligent Design. The creationist and nuclear physicist David Heddle writes something that makes some sense to me: • His book should be subtitled String Theory and the Possible Illusion of Intelligent Design. He has done nothing whatsoever to disprove fine-tuning. Nothing. He has only countered it with a religious speculation in scientific language, a God of the Landscape. Snatching victory from the jaws of defeat, he tells us that we should embrace the String Theory landscape, not in spite of its ugliness, but rather because of it. Physics should change its paradigm and sing praises to inelegance. Out with Occam’s razor, in with Rube Goldberg. This statement is also celebrated by Jonathan Witt, another fan of ID. Tom Magnuson, one more creationist, assures everyone that if the people are given the choice to choose between two theories with the same predictive power - and one of them includes God - be sure that they will pick the religious one. And he may be right. Well, not everyone will make the same choice. Leon Brooks won't ever accept metaphysics and Evolutionblog simply applaudes our famous Stanford colleague for disliking supernatural agents. But millions of people with the same emotions as William Dembski will make a different choice and it is rather hard to find rational arguments that their decision is wrong because this is a religious matter that can't be resolved scientifically at this point. Discussions about the issue took place at Cosmic Variance and Not Even Wrong. Intelligent design in physics Several clarifications must be added. Just like the apparent complexity of living forms supports the concept of Intelligent Design in biology (when I saw the beautiful fish today in the New England Aquarium, I had some understanding for the creationists' feelings), the apparent fine-tuning supports a similar idea in physics. A person like me who expects the parameters of the low-energy effective field theory to emerge from a deeper theory - which is not a religious speculation but a straightforward extrapolation of the developments of the 20th century physics - indeed does believe in some sort of "intelligent design". But of course its "intelligence" has nothing to do with human intelligence or the intelligence of God; it is intelligence of the underlying laws extending quantum field theory. Opposite or equivalent? 
The anthropic people and the Intelligent Design people agree with each other that their pictures of the real world are exactly opposite to one another. In my opinion, this viewpoint about their "contradiction" already means a victory for Intelligent Design and irrational thinking in general. The scientific opinion about this question - whether the two approaches are different - is of course diametrically different. According to a scientific kind of thinking, there is no material difference between • the theory that God has skillfully engineered our world, or has carefully chosen the place for His creation among very many possibilities • and the theory that there are uncontrollably many possibilities and "ours" is where we live simply because most of the other possibilities don't admit life like ours From a physics perspective, these things are simply equivalent. Both of them imply that the parameters "explained" by either of these two theories are really unexplainable. They are beyond our thinking abilities and it does not matter whether we use the word "God" to describe our ignorance about the actual justification of the parameters. Both of these two approaches may possibly be improved when we reduce the set of possibilities to make some predictions after all. For example, we can find which vacuum is the correct one. Once we do so, the questions whether some "God" is responsible for having chosen the right vacuum, or whether no "God" is necessary, becomes an unphysical question (or metaphysical question, if you prefer an euphemism). Again, the only way how this question may become physical is that we actually understand some rational selection mechanism - such as the Hartle-Hawking wavefunction paradigm - that will lead to a given conclusion. Or if we observe either God or the other Universes; these two possibilities look comparably unlikely to me. Without these observations and/or nontrivial quantitative predictions, God and the multiverse are just two different psychological frameworks. In this sense, the creationists are completely correct if they say that the multiverse is so far just another, "naturalistic" religion. As they like to say, the two pillars of the religion of "naturalism" - Freud and Marx - are dead. And Darwin is not feeling too well, they add - the only thing I disagree with. ;-) Marx and Freud are completely dead, indeed. Friday, December 16, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere Intelligent Design: answers to William Dembski William Dembski is one of the most active intellectual promoters of Intelligent Design. He also has a blog in which he tries to collect and create various arguments and pseudoarguments to support his agenda. Just like a certain one-dimensional blog where every piece of news is projected onto the one-dimensional axis "may it hurt string theory?" - and if the projection is positive, the news is published - Uncommon descent evaluates articles and sentences according to their ability to hurt mainstream biology and to support Intelligent Design. While I am among those who find all one-dimensional blogs and especially most of their readers kind of uninspiring, let me admit that in my opinion, neither of the two Gentlemen mentioned above seems to be a complete moron and many of their questions may deserve our time. Dembski vs. 
Gross and Susskind Because of the description of the blog above, it should not be surprising that Dembski celebrates and promotes both Susskind's anthropic comments indicating that many physicists have accepted opinions remotely analogous to Intelligent Design - and Gross's statement that we don't know what we're talking about. Incidentally, when Dembski quotes David Gross, he says "remember that string theory is taught in physics courses". That's a misleading remark. String theory is only taught in courses on string theory, and with the exception of Barton Zwiebach's award-winning MIT undergraduate course, all such courses are graduate courses. What the advocates of Intelligent Design classes at schools want is definitely much more than the current exposure of elementary and high school students to string theory. Although Dembski and some of his readers may find these quotations of the famous physicists relevant, they are not. Maybe we don't know what we're talking about when we study quantum Planckian cosmology, but we know what we're talking about whenever we discuss particle physics below 100 GeV, the history of our Universe after the first three minutes, and millions of other situations. What Dembski wants to modify about our picture of the Universe are not some esoteric details about the workings of the Universe at the Planck scale or the mechanisms of vacuum selection. He wants to reverse our knowledge about very low energy processes in physics and biology. That makes all his comparisons of biology with uncertainty in quantum gravity irrelevant. Scientists may be confused about cutting-edge physics but that's very different from being confused about the insights in biology that were more or less settled in the 19th century. Some scientists may think that a coincidence whose probability was 10^{-350} had to happen before our Universe was created or "chosen", but they don't need probabilities of order 10^{-10^{100}}. OK, the answers: Finally, let me answer 5 questions from Dembski's most recent blog article about microbiology: • (1) Why does biology hand us technical devices that human design engineers drool over? It is because the natural length scale of human beings is 1 meter. This is the size of humans as Nature created them. This is the length scale at which humans are very good at designing things. I claim that the human engineers are better than Mother Nature at creating virtually any object whose structure is governed by the length scale of one meter. The engineers are also better at longer distance scales - and the trip to the Moon is an example. Engineers had to develop some technology before the humans could directly affect matter at shorter distance scales than the size of our hands. We are getting better and we may get better than Mother Nature in a majority of nanotechnologies in the near future. William Dembski shows remarkable short-sightedness if he justifies his opinion by saying that Nature is superior to technology - because it is all but guaranteed that technology will be taking the lead and the strength of Dembski's position will therefore definitely decrease with time. At any rate, even the successes of engineers themselves reflect the miraculous powers of Mother Nature because engineers were created by Her, too. I am afraid that this fact is not appreciated by many advocates of Intelligent Design and many other people. • (2) Why don’t we ever see natural selection or any other unintelligent evolutionary mechanisms produce such systems?
Of course that we do. When microprocessors are produced, for example, there is a heavy competition between different companies that produce the chips. Although Intel is planning to introduce their 65 nanometer technology in 2006, AMD may be ahead because of other reasons. This competition is nothing else than the natural selection acting at a different level, with different, "non-biological" mechanisms of reproduction, and such a competition causes the chips to evolve in an analogous way like in the case of animals. (If you want to see which factors drive the decisions about the "survival of the fittest" in the case of chipmakers, open the fast comments.) Competition also works in the case of ideas, computer programs, ideologies, cultures, "memes", and other things. Indeed, we observe similar mechanisms in many contexts. The detailed technical implementation of the reproduction, mutation, and the rules that determine the survival of the fittest depend on the situation. Some of the paradigms are however universal. • (3) Why don’t we have any plausible detailed step-by-step models for how such evolutionary mechanisms could produce such systems? In some cases we do - and some of these models are really impressive - but if we don't, it reflects several facts. The first fact is that the scientists have not been given a Holy Scripture that would describe every detail how the Universe and species were created. They must determine it themselves, using the limited data that is available today, and the answers to such questions are neither unique nor canonical. The evolution of many things could have occured in many different ways. There are many possibilities what things could have evolved and even more possibilities how they could have evolved. The fact that Microsoft bought Q-DOS at one moment is a part of the history of operating systems, but this fact was not really necessary for the actual evolution of MS Windows that followed afterwards. In the same way, the species were evolved after many events that occured within billions of years - but almost neither of them was absolutely necessary for the currently seen species to be evolved. Because the available datasets about the history of the Earth are limited - which is an inevitable consequence of various laws of Nature - it is simply impossible to reconstruct the unique history in many cases. However, it is possible in many other cases and people are getting better. • (4) Why in the world should we think that such mechanisms provide the right answer? Because of many reasons. First of all, we actually observe the biological mechanisms and related mechanisms - not only in biology. They take place in the world around us. We can observe evolution "in real time". We observe mutations, we observe natural selection, we observe technological progress driven by competition, we observe all types of processes that are needed for evolution to work. Their existence is often a fact that can't really be denied. Also, we observe many universal features of the organisms, especially the DNA molecules, proteins, and many other omnipresent entities. Sometimes we even observe detailed properties of the organisms that are predicted by evolution. Moreover, the processes mentioned above seem to be sufficient to describe the evolution of life, at least in its broad patterns. Occam's razor dictates us that we should not invent new things - and miracles - unless they become necessary. Moreover, evolution of life from simple forms seems to be necessary. 
We know that the Universe has been around for 13.7 billion years and the Earth was created about 5 billion years ago. We know that this can happen. We observe the evolution of more complex forms in the case of chips and in other cases, too. According to the known physical laws and the picture of cosmology, the Earth was created without any life on it. Science must always prefer the explanations that use a minimal amount of miracles, a minimal set of arbitrary assumptions and parameters, and where the final state looks like the most likely consequence of the assumptions. This feature of science was important in most of the scientific and technological developments and we are just applying the same successful concepts to our reasoning about everything in the world, including the origin of species. In this sense, I agree with William Dembski when he says that science rejects the creation by an unaccessible and unanalyzable Creator a priori. Rejecting explanations based on miracles that can be neither analyzed nor falsified is indeed a defining feature of science, and if William Dembski finds it too materialistic, that's too bad but this is how science has worked since the first moment when the totalitarian power of the Church over science was eliminated. • (5) And why shouldn’t we think that there is real intelligent engineering involved here, way beyond anything we are capable of? Because of the very same reasons as in (4). Assuming the existence of pre-existing intelligent engineering is an unnatural and highly unlikely assumption with an extremely small explanatory power. One of the fascinating properties of science as well as the real world is that simple beginnings may evolve into impressive outcomes, and modest assumptions are sufficient for us to derive great and accurate conclusions. The idea that there was a fascinating intelligent engineer - and the result of thousands or billions of years of his or her work is an intellectually weak creationist blog - looks like the same development backwards: weak conclusions derived from very strong and unlikely assumptions; poor future evolved from a magnificent past. Such a situation is simply just the opposite of what we are looking for in science - and not only in science - which is why we consider the opinion hiding in the "question" number (5) to be an unscientific preconception. (The last word of the previous sentence has been softened.) We don't learn anything by assuming that everything has to be the way it is because of the intent of a perfect pre-engineer. We used to believe such things before the humans became capable to live with some degree of confidence and before science was born. Today, the world is very different. For billions of years, it was up to the "lower layers" of Nature to engineer progress. For millions of years, monkeys and humans were mostly passive players in this magnificent game. More recently, however, humans started to contribute to the progress themselves. Nature has found a new way how to make the progress more efficient and faster - through the humans themselves. Many details are very new but many basic principles underlying these developments remain unchanged. Science and technology is an important part of this exciting story. They can only solve their tasks if they are done properly. Rejecting sloppy thinking and unjustified preconceptions is needed to achieve these goals. Incidentally, Inquisition and censorship works 100% on "Uncommon Descent". 
Whoever will be able to post a link on Dembski's blog pointing to this article will be a winner of a small competition. ;-) Technical note: there are some problems with the Haloscan "fast comments", so please be patient. Right-clicking the window offers you to go "Back" which you may find useful. String theory is phrase #7 The non-profit organization located in San Diego, CA, has released its top word list for 2005 (news). The top words are led by "refugee" and "tsunami". Names are led by "God", "tsunami", "Katrina", and "John Paul II". Included are also musical terms and youthspeak. The top seven phrases are the following: • out of the mainstream • bird flu • politically correct • North/South divide • purple thumb • climate change and global warming • string theory You see that almost all of the words and things that The Reference Frame dislikes are above string theory. The defeat of string theory by the global warming is particularly embarassing. ;-) But the 7th place is not so bad after all. Concerning political correctness, it is just not the phrase itself that was successful. Many new political correct words were successful, too. For example, the word "failure" was replaced by "deferred success" in Great Britain. On the other hand, the politically incorrect word "refugee" - that many people wanted to replace with "evacuee" - was a winner, too. Incidentally, Jim Simons, after having discovered Chern-Simons theory and earned billions of dollars from his hedge fun(d), wants to investigate autism. Roy Spencer has a nice essay on sustainability in TCS daily. The only sustainable thing is change, he says. He also argues that if the consumption of oil or production of carbon dioxide were unsustainable, a slower rate of the same processes would be unsustainable, too. Sustainability becomes irrelevant because of technological advances in almost all cases. Spencer chooses Michael Crichton's favorite example - the unsustainable amount of horseshit in New York City 100 years ago when there were 175,000 horses in the city. Its growth looked like a looming disaster but it was stopped because of cars that suddenly appeared. Also, he notices that the employees of a British Centre for Ecology and Hydrology - that had to be abolished - were informed that the center was unsustainable which is a very entertaining explanation for these people who fought for sustainability in their concerned scientific work. Also, Spencer gives economical explanations to various social phenomena. For example, the amount of possible catastrophic links between our acts and natural events as well as the number of types of our activities that will be claimed to be "unsustainable" in the scientific literature is proportional to the amount of money we pay to this sector of science. It looks like we can run out of oil soon because the companies have no interest to look for more oil than what is needed right now - it is expensive to look for oil. That makes it almost certain that we will find much more oil than we know today. Pure heterotic MSSM As announced in October here, Braun, He, Ovrut, and Pantev have finally found an exact MSSM constructed from heterotic string theory on a specific Calabi-Yau. The model has the Standard Model group plus the U(1)B-L, three generations of quarks and leptons including the right-handed neutrino, and exactly one pair of Higgs doublets which is the right matter content to obtain gauge coupling unification. 
By choosing a better gauge bundle - with some novel tricks involving the ideal sheaves - they got rid of the second Higgs doublet. While they use the same Calabi-Yau space with h^{1,1}=h^{1,2}=3, i.e. with 6 complex geometric moduli, they now only have 13 (instead of 19) complex bundle moduli. The probability that this model describes reality is roughly 10^450 times bigger than the probability for a generic flux vacuum, for example the vacua that Prof. Susskind uses in his anthropic interview in New Scientist. ;-) Thursday, December 15, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere Something between 2 and 3 billion visitors This is how you can make a quarter of a million sound like a lot. ;-) There is a counter on the right side. If you happen to see the number 250,000, you may write your name as a comment here. The prize for the round visitor includes 3 articles that he or she can post here. The number 250,000 counts unique visitors - in the sense that every day, one IP address can only increase the number by one. The total number of hits is close to 1 million. The Reference Frame does not plan any further celebrations. ;-) Update: Robert Helling and Matt B. both claim to have grabbed 250,000, and I still have not decided who is right. Matt B. has sent me a screenshot so his case is pretty strong. It is academically possible that the number 250,000 was shown to two people - because by reloading, one can see the current "score" without adding a hit. Lisa's public lecture I just returned from a public lecture by Lisa Randall - who promoted the science of extra dimensions and her book Warped Passages - and it was a very nice and impressive experience. Not surprisingly, the room was crowded - as crowded as it was during a lecture by Steve Pinker I attended some time ago. As far as I can say today, she is a very good speaker. There was nothing in her talk that I would object to and nothing that should have been said completely differently. As you can guess, I was partially feeling like a co-coach whose athlete has already learned everything she should have learned. ;-) Nima Arkani-Hamed introduced Lisa in a very professional and entertaining way. Randall used a PowerPoint presentation, showed two minutes of a cartoon edition of Abbott's Flatland, explained the different ways to include and hide extra dimensions (with a focus on warped geometry), how they are related to some of the problems of particle physics such as the hierarchy problem, how they fit into the framework of string theory and what string theory is, and the methods with which we're possibly gonna observe them. After the talk, she answered many questions from the audience in a completely meaningful way. Wednesday, December 14, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere Coldest December My time for writing on the blog continues to be limited, so let me just offer you a short provocation. The scientists may have been right after all: global cooling is coming. ;-) This December will almost surely become one of the coldest American Decembers since the 19th century. Daily record lows have been broken in New York State (10 degrees F below the previous record), the Midwest (Illinois), Utah, Texas (classes canceled), Oklahoma, Colorado, Kansas, Pennsylvania (previous record was 1958), and elsewhere. More snow and cold is forecast. Natural gas has been propelled to record prices. You may say that it is just the U.S.
However, a severe cold wave grips North India, too, with at least 21 casualties. The capital sees the coldest day in 6 years. The same thing applies to China and the Communist Party of China helps the poor to survive the bitter winter. You may complain that I only talk about countries that host one half of the world's population. You're right: the global temperature continues to be stable, around 2.7 Kelvins. ;-) We are doing fine in Massachusetts, the temperature is -10 degrees Celsius with a windchill of -18 degrees Celsius. Tonight, it will be around 6 degrees Fahrenheit. Don't forget your sweaters and gloves. The consensus scientists may have found a sign error in their calculations. Carbon dioxide causes global cooling. This occasional sign flip is called climate change. Tuesday, December 13, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere Shut up and calculate I would not promote overly technical lecture notes, especially not about things covered in many books. But the interpretation of quantum mechanics in general and decoherence in particular - a subject that belongs both to physics and to advanced philosophy - is usually not given a sufficient amount of space in the textbooks, and some people may be interested in Lecture23.pdf. Monday, December 12, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere Riemann's hypothesis I just received a lot of interesting snail mail. The first one is from Prof. Winterberg, one of the discoverers of cold fusion. He argues against the extra dimensions, using a picture of naked fat people (actually, some of them are M2-branes) and a German letter he received from his adviser, Werner Heisenberg. Very interesting, but I apologize to Prof. Winterberg - too busy to do something with his nice mail and the attached paper. A publisher wants to sell Einstein's 1912 manuscript about special relativity. Another publisher offers books about the Manhattan project and Feynman's impressive thesis. One of the reasons I am busy now is Riemann's hypothesis. Would you believe that a proof may possibly follow from string theory? I am afraid I can't tell you details right now. It's not the first time I have been excited about a possible proof like that. After some time, I always realize how stupid I am and how other people have tried very similar things. The first time I was attracted to Riemann's hypothesis, roughly 12 years ago, I re-discovered a relation between zeta(s) and zeta(1-s). That was too elementary an insight to be anywhere close to a proof, but at least it started to be clear why the hypothesis "should be" true. The time I need to figure out that these ideas are either wrong or old and standard is increasing with every new attempt - and the attempts become increasingly similar to other attempts by mathematicians who try various methods. Will the time diverge this time? :-) Sunday, December 11, 2005 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere 52.3 percent growth What is a reasonable size for GDP growth? 10 percent like in China? 4 percent like in the U.S.? Around 1 percent like in many European countries? What if I tell you that a particular country had a GDP growth of 52.3 percent in 2004? Moreover, it is a country that is usually described as such a failure that the president of another country who more or less caused all these developments, including the number 52.3, should be hated or maybe even impeached according to hundreds of thousands of activists?
Don't you think that something is crazy about this whole situation? The country has not only a terrific growth potential but also a big potential to become an extremely civilized territory, just like it was thousands of years ago when Europe was their barbarian borderland. Whether or not these things will happen depends on the acts of many people. Especially the people in that country itself. And also the people from other places in the world, especially America. Who do you think is a better human? Someone who tries to support positive developments in the world, including the country above, or someone who dreams about a failure in that country that would confirm his or her misconceptions that the president is a bad president? I, for one, think that the members of the second group are immoral bastards. Moreover, it is pretty clear that most of them will spend the eternity at the dumping ground of history, unlike the president who will be written down as an important U.S. president in the future history textbooks. All those critics who still retain at least a flavor of some moral values: please stop your sabotage as soon as possible. Even if you achieve what you want - a failure - it will be clear to everyone that the failure is not Bush's fault but your fault.
Instabilities in low-dimensional magnetism
In the previous Section, measurements and simulations of ultrafast magnetization phenomena in three dimensions were discussed; here possibilities are considered for using the SwissFEL to investigate the quantum-fluctuating behavior of low-dimensional magnetic systems [13]. In many magnetic insulators, magnetic moments interact through an exchange of electrons between neighboring sites. Such exchange interactions are short-ranged. If these interactions are isotropic, such systems can be described by the well-known Heisenberg Hamiltonian, which is given by H = -J \sum_{\langle ij \rangle} \mathbf{S}_i \cdot \mathbf{S}_j, where J is the exchange energy, and the summation is over nearest-neighbor spins. If J is positive, the spins S_i and S_j tend to align ferromagnetically. For an ordered magnetic phase, the temperature-dependent change in the saturation magnetization can be calculated [14] in this model as M_s(0) – M_s(T) ∝ N_sw(T), where N_{sw}(T) \propto \int \frac{k^{d-1}}{\exp[\epsilon(k)/k_B T] - 1}\, dk is the density of spin-waves excited at the temperature T. For d=3 dimensions, this integral is proportional to T^{3/2} (with the quadratic spin-wave dispersion ε(k) ∝ k², the substitution x = ε(k)/k_B T makes the T^{3/2} scaling explicit), giving the well-known Bloch 3/2-law. For dimensions lower than d=3, the expression for N_sw(T) diverges, implying that fluctuations will prevent the occurrence of long-range magnetic order. This is a fundamental result, which has been rigorously proven by Mermin and Wagner [15], and which means that many types of magnetic systems do not order at any finite temperature. Some systems are disordered even at zero temperature, where thermal fluctuations are absent, due to the presence of quantum fluctuations in the ground state. This can happen if a static arrangement of magnetic moments is not an eigenstate of the Hamiltonian, causing quantum fluctuations to generate a new type of ground state. These disordered systems form ferromagnetically or antiferromagnetically-coupled spin liquids, and their quantum fluctuations, as described by the intermediate scattering function S(Q,t) (see chapter V), represent a particularly rich field of investigation for the SwissFEL.
Two-dimensional case (d=2)
As an example of the dynamics of a 2d-magnetic structure, consider the case of an infinite in-plane anisotropy (S_z=0): the so-called xy-model: H_{xy} = -J \sum_{(ij)} (S_i^x S_j^x + S_i^y S_j^y) As for the 2d-Heisenberg model, there is no magnetic order at finite temperature in the xy-model. However, it is found that spin correlations cause the formation at low temperature of a disordered array of magnetic vortices, with radii of order R. The cost in exchange energy incurred by the formation of such a vortex is πJ ln(R/a), where a is the lattice constant, and the gain in entropy represented by the varying position of the vortex center is 2k_B ln(R/a). Hence the free energy F = ln(R/a)(πJ - 2k_B T) becomes negative above the Kosterlitz-Thouless temperature T_KT = πJ/2k_B, which separates a non-vortex from a vortex phase. At low temperatures, the S=1/2 layered perovskite ferromagnet K2CuF4 is approximately described by the xy-model, going through a Kosterlitz-Thouless transition at 5.5 K. A further example of (quasi) 2d-magnetic dynamics is that of vortex core reversal in thin magnetic nanostructures (see Infobox).
One-dimensional case (d=1)
Magnetism in one dimension in the zero-temperature limit is particularly interesting, because it arises from quantum fluctuations. Consider first the isotropic J>0 Heisenberg model for a one-dimensional chain of N spins S=1/2, with periodic boundary conditions.
The (ferromagnetic) ground state can be represented as |Ψ0〉 = |↑↑↑...↑〉. In the Bethe Ansatz, the excited states of the system are built up as a superposition of states with discrete numbers of flipped spins. If we confine ourselves to single-spin (r=1) excitations: |n〉 = |↑↑↑...↑↓↑...↑〉 (here the nth spin has been reversed), we can write the excited state as

|\Psi_1 \rangle = \sum_{n=1}^{N} a(n) |n \rangle

It is then a straightforward exercise to compute from the Schrödinger equation (for convenience, written in terms of the raising and lowering operators S±) the excited-state energy E1, and one finds, for large N, that excitations exist with arbitrarily small excitation energies E1 – E0; i.e., the excitation spectrum is gapless. Higher-level excitations, involving multiple spin flips r = 2, 3, 4, ..., become increasingly cumbersome to handle, but the gapless spectrum is retained (Figure I.8a shows the analogous result for the 1d-antiferromagnetic spin-1/2 chain [16]).

Magnetic vortex core switching

The magnetic vortex is a very stable, naturally-forming magnetic configuration occurring in thin soft-magnetic nanostructures. Due to shape anisotropy, the magnetic moments in such thin-film elements lie in the film plane. The vortex configuration is characterized by the circulation of the in-plane magnetic structure around a very stable core of only a few tens of nanometers in diameter, of the order of the exchange length. A particular feature of this structure is the core of the vortex, which is perpendicularly magnetized relative to the sample plane. This results in two states: “up” or “down”. Their small size and perfect stability make vortex cores promising candidates for magnetic data storage. A study by Hertel et al. [27] based on micromagnetic simulations (LLG equation) has shown that, strikingly, the core can dynamically be switched between “up” and “down” within only a few tens of picoseconds by means of an external field. Figure I.16 below simulates the vortex core switching in a 20 nm thick Permalloy disk of 200 nm diameter after the application of a 60 ps field pulse, with a peak value of 80 mT. Using field pulses as short as 5 ps, the authors show that the core reversal unfolds first through the production of a new vortex with an oppositely oriented core, followed by the annihilation of the original vortex with a transient antivortex structure. To date, no experimental method can achieve the required temporal (a few tens of ps) and spatial (a few tens of nm) resolution to investigate this switching process. The combination of the high-energy THz pump source and circularly-polarized SwissFEL probe pulses will allow such studies.

One of the simplest ways for a material to avoid magnetic order and develop macroscopic quantum correlations is through the creation of an energy gap Eg in the excitation spectrum. Since Eg is of the order of the exchange interaction, the gap introduces a time-scale for fluctuations which is typically on the order of femtoseconds. One such phenomenon is the spin-Peierls effect. This is related to the better-known charge Peierls metal-insulator transition (see Chapter V). In the spin-Peierls effect, a uniform 1d, S=1/2 spin chain undergoes a spontaneous distortion, causing dimerization, and hence the appearance of two different exchange couplings J±δJ (see Fig. I.9). For δJ sufficiently large, S=0 singlet pairs are formed on the stronger links, implying a non-magnetic state and a finite energy gap to the excited states (see Fig. I.8b).
The Peierls state is stable if the resulting lowering of magnetic energy more than compensates for the elastic energy of the lattice distortion. Note that the distortion is a distinctive feature which is visible with hard X-ray diffraction. The spin-chain compound CuGeO3 is an inorganic solid which undergoes a spin-Peierls transition at 14 K.

A more subtle quantum effect also leads to an energy gap in the excitation spectrum of an antiferromagnetic Heisenberg chain of integral spins [17]. As conjectured by Haldane, neighboring S=1 spins can be resolved into two S=1/2 degrees of freedom, effectively forming singlet bonds (see Fig. I.10). This valence bond state is responsible for the existence of a Haldane energy gap, since long-wavelength spin excitations cannot be generated without breaking the valence bonds. A consequence of the Haldane mechanism is a spatial correlation function for magnetic excitations which decays exponentially with distance, compared with the power-law dependence in the case of gapless excitations. An inorganic material which demonstrates the Haldane phenomenon is Y2BaNiO5. The Haldane mechanism is also used to describe the dynamic behavior of finite 1d S=1 antiferromagnetic chains, as investigated in Mg-doped Y2BaNiO5 by inelastic neutron scattering [18]. The finite chains are generated by the non-magnetic Mg impurities, and the ends of the chains represent S=1/2 impurities with a strong nano-scale correlation, with the result that the Haldane gap becomes a function of chain length. In an applied magnetic field, the triplet spin excitations undergo Zeeman splitting, eventually becoming a new ground state (see Fig. I.11). Thus the Zeeman transitions are hybrid excitations with both local and cooperative properties. They therefore serve as probes of the quantum correlation functions, which are otherwise difficult to access. The temperature and field ranges for such studies vary with the material, but effects can be observed in many systems at T ~ 1 K and B ~ 1 T. Some quantum magnets show magneto-electric interactions [19], which may allow perturbation of the quantum states by electric or optical pulses. In this case, it will be possible to probe the temporal evolution of macroscopic quantum correlations in a pump-probe experiment at the SwissFEL.

Zero-dimensional case (d=0)

The extreme brightness of the SwissFEL allows measurements of magnetic phenomena on dilute samples consisting of isolated nanoparticles, with effectively zero dimension. A recent realization of such nanoparticles is the single-molecule magnet manganese acetate (Mn12Ac) shown in Fig. I.1, in which 12 magnetic Mn ions, with a total spin S=10, are held in close proximity by an organic molecular framework. Another example is the creation of magnetic nanodots by sub-monolayer deposition onto a high-index surface of a metal (see Fig. I.12). If they have a magnetic anisotropy above the superparamagnetic limit, such nanoparticles may exhibit room-temperature ferro- or antiferromagnetic order, and undergo sub-nanosecond quantum tunnelling between different magnetization directions [21]. Details of this tunnelling, including field-enhancement of the rate, are an attractive topic in ultrafast magnetization dynamics, suitable for study with the SwissFEL.
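The dimensional argument quoted earlier in this Section can be made concrete with a short numerical check. The sketch below is an illustrative addition only: it assumes a quadratic ferromagnetic spin-wave dispersion ε(k) = Dk² with D = kB = 1, introduces an artificial infrared cutoff k_min, and shows that the spin-wave integral Nsw(T) converges for d = 3 (the Bloch regime) but grows without bound as k_min → 0 for d = 2 and d = 1, which is the divergence behind the Mermin-Wagner result.

import numpy as np

def n_sw(T, d, k_min, k_max=np.pi, npts=200_000):
    """Spin-wave occupation  N_sw ~ int k^(d-1) / (exp(eps(k)/T) - 1) dk,
    with the assumed dispersion eps(k) = k**2 (D = k_B = 1, illustrative units).
    A log-spaced grid resolves the k -> 0 region, where any divergence lives."""
    k = np.logspace(np.log10(k_min), np.log10(k_max), npts)
    integrand = k**(d - 1) / np.expm1(k**2 / T)
    return np.trapz(integrand, k)

T = 0.1
for d in (3, 2, 1):
    vals = [n_sw(T, d, k_min) for k_min in (1e-2, 1e-4, 1e-6)]
    print(f"d = {d}:", ["%.3g" % v for v in vals])
# d = 3: the three values agree (the integral converges and scales roughly as T^(3/2));
# d = 2: grows like log(1/k_min); d = 1: grows like 1/k_min.

In three dimensions the result reproduces the Bloch T^{3/2} behaviour quoted above; in lower dimensions the number of thermally excited spin waves is unbounded, so long-range order is destroyed at any finite temperature.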
Section 13.7: The Coulomb Potential for the Idealized Hydrogen Atom

Now consider the radial part of the Schrödinger equation in Eq. (13.21), written as

[−(ħ2/2μ)(1/r2) d/dr (r2 d/dr) + l(l + 1)ħ2/(2μr2) + V(r)] R(r) = ER(r) .  (13.26)

We want to rewrite these terms, especially the derivative, into a more standard form. The substitution R(r) = u(r)/r simplifies the derivative term in the above equation to yield

[−(ħ2/2μ) (d2/dr2) + l(l + 1)ħ2/(2μr2) + V(r)] u(r) = Eu(r) .

When we find solutions to this equation, we must keep in mind that we are solving for u(r), not R(r), and that we must divide u(r) by r to give the true radial wave function, R(r).  Before looking at a particular V(r), we look at the general equation and interpret terms. We see what we can interpret as an effective potential:

Veff = l(l + 1)ħ2/(2μr2) + V(r) ,

where l(l + 1)ħ2/(2μr2) is the potential associated with the so-called centrifugal barrier.

Now consider the following Coulomb potential, V = −e2/r, which describes the potential energy function for an electron in the proximity of a proton: the potential responsible for the basic structure of the hydrogen atom.4  When we insert this Coulomb potential in the radial differential equation, we have a differential equation that describes the electron:

[−(ħ2/2μe) (d2/dr2) + l(l + 1)ħ2/(2μer2) − e2/r] u(r) = Eu(r) ,  (13.27)

where μe is the electron's mass and e is the charge of the electron. In Animation 1 the effective potential for the Coulomb problem, Veff = l(l + 1)ħ2/(2μer2) + V(r), is shown for l = 0, 1, 2. Notice that as l gets bigger, the centrifugal barrier increases as well. To get this equation into standard form, divide by −ħ2/2μe, which yields

[ (d2/dr2) − l(l + 1)/r2 + 2μee2/(ħ2r)] u(r) = (−2μeE/ħ2) u(r) .

We now define κ2 = (−2μeE/ħ2) (which is real since E < 0) and the dimensionless quantities ρ = κr and ρ0 = 2μee2/(κħ2). Making these substitutions yields

[ (d2/dρ2) − l(l + 1)/ρ2 + ρ0/ρ − 1] u = 0 .

We begin our analysis of the solutions of this differential equation by considering the two special limiting cases:

Case I: ρ → 0 (r → 0). In this case the centrifugal barrier dominates in Eq. (13.46):

[ (d2/dρ2) − l(l + 1)/ρ2] u = 0 .  (13.28)

We find that the general solution to this equation is u = Aρl+1 + Bρ−l and therefore the normalizable piece is just u ∝ ρl+1 ∝ rl+1.

Case II: ρ → ∞ (r → ∞). In this case the centrifugal term, l(l + 1)/ρ2, and the potential energy, ρ0/ρ, vanish from Eq. (13.28) at large ρ.  This leaves

[ (d2/dρ2) − 1] u = 0 ,  (13.48)

which for E < 0 gives the normalizable solution u = exp(−ρ) = exp(−κr).

Now that we have an idea of what the bound states should look like asymptotically, we can find the entire solution. After much algebra we first find that

E = −μee4/(2n2ħ2) = −R (1/n2) ,

where R = μee4/(2ħ2) is the Rydberg energy and equals 13.6 eV.  This result describes the energy levels for the Coulomb problem, and hence, the basic energy level structure for the hydrogen atom. We now simplify ρ. We use ρ = κr and the definition of κ to find

ρ = [μee2/(nħ2)] r = (r/na0) ,

where a0 = ħ2/(μee2) is the Bohr radius. We can make use of further substitutions, this time yielding the radial wave functions

Rnl(r) = Anl e−r/na0 [(r/na0)l+1/r] vn(r/na0) ,  (13.29)

where Anl = [(2/(na0))3 (n − l − 1)!/(2n[(n + l)!]3)]1/2 is the normalization for the radial energy eigenfunction. In addition, vn(r/na0) = L2l+1n−l−1(2r/na0) are the associated Laguerre polynomials.
The unnormalized radial wave functions, above without Anl, are shown in Animation 2.  In the animation, distances are given in terms of Bohr radii, a0. You may enter values of n and l and see the radial energy eigenfunction that results. We find that the entire energy eigenfunction, properly normalized, is simply the product of the radial and angular solutions:

ψnlm = Rnl(r) Ylm(θ,φ) .

Note that since there are n2 states per value of n, and the energy depends only on n, the solutions have an n2-fold energy degeneracy.

4 In Chapter 14 we discuss corrections to the Coulomb potential which are responsible for the remaining structure in the hydrogen spectral lines. We also generalize the Coulomb potential to include hydrogenic atoms, those with one electron and Z protons.
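A quick numerical cross-check of the radial wave functions and their normalization is sketched below. This is an illustrative addition, not part of the original text: it works in units where a0 = 1 and uses scipy's genlaguerre, whose modern Laguerre convention replaces the [(n + l)!]3 of the older convention quoted above by a simpler (n + l)! in the normalization constant.

import numpy as np
from math import factorial
from scipy.special import genlaguerre
from scipy.integrate import quad

a0 = 1.0  # work in units of the Bohr radius (illustrative choice)

def R_nl(n, l, r):
    """Normalized hydrogen radial wave function R_nl(r).
    scipy's genlaguerre follows the modern Laguerre convention, so the
    normalization constant contains (n+l)! rather than the [(n+l)!]^3
    that appears in texts using the older convention."""
    rho = 2.0 * r / (n * a0)
    A = np.sqrt((2.0 / (n * a0))**3 * factorial(n - l - 1)
                / (2.0 * n * factorial(n + l)))
    return A * np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

# Normalization check: the integral of R_nl(r)^2 r^2 dr from 0 to infinity should be 1,
# and the energy -13.6/n^2 eV depends only on n (the n^2 degeneracy noted above).
for n, l in [(1, 0), (2, 0), (2, 1), (3, 1)]:
    norm, _ = quad(lambda r: R_nl(n, l, r)**2 * r**2, 0, np.inf)
    print(f"n={n} l={l}: <R|R> = {norm:.6f},  E_n = {-13.6 / n**2:.3f} eV")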
Hamilton-Jacobi-Equation (HJE)

1. Sep 27, 2009 #1

Hi all! I was studying the HJ-formalism of classical mechanics when I came upon a modified HJE:

[tex](\nabla S)^2=\frac{1}{u^2}(\frac{\partial S}{\partial t})^2[/tex]

where [tex]dr=(dx,dy,dz)[/tex] is the position vector. (I read the derivation and it's ok.) Now, u is interpreted to be the wave velocity of the so-called 'action waves' in phase space. However, my book (Nolting, Volume 2) states that this is a wave equation, or at least a special nonlinear case of the popular wave equation

[tex]\nabla^2S=\frac{1}{u^2}\frac{\partial^2}{\partial t^2}S[/tex]

which is somehow unclear to me, as the squares in both equations mean different things. A similar statement is also made in Wikipedia (cf. Eikonal approximation and relationship to the Schrödinger equation). I hope someone of you can explain this to me :)

best regards,

3. Sep 28, 2009 #2

Hmmm, haven't found anything so far.. Any ideas left?
Deriving A Quaternion Analog to the Schrödinger Equation

The Schrödinger equation gives the kinetic energy plus the potential (a sum also known as the Hamiltonian H) of the wave function psi, which contains all the dynamical information about a system. Psi is a scalar function with complex values.

The Hamiltonian operator acting on psi = -i h bar psi dot = -h bar squared over 2 m Laplacian psi + the potential V(0, X) psi

For the time-independent case, energy is written as the operator -i h bar d/dt, and kinetic energy as the square of the momentum operator, i h bar Del, over 2m. Given the potential V(0, X) and suitable boundary conditions, solving this differential equation generates a wave function psi which contains all the properties of the system. In this section, the quaternion analog to the Schrödinger equation will be derived from first principles. What is interesting are the constraints that are required for the quaternion analog. For example, there is a factor which might serve to damp runaway terms.

The Quaternion Wave Function

The derivation starts from a curious place :-) Write out classical angular momentum with quaternions.

(0, L) = (0, R Cross P) = the odd part of (0, R) times (0, P)

What makes this "classical" are the zeroes in the scalars. Make these into complete quaternions by bringing in time to go along with the space 3-vector R, and E with the 3-vector P.

(t, R) times (E, P) = (E t - R dot P, E R + P t + R Cross P)

Define a dimensionless quaternion psi that is this product over h bar.

psi is defined to be (t, R) times (E, P) over h bar = (E t - R dot P, E R + P t + R Cross P) over h bar

The scalar part of psi is also seen in plane wave solutions of quantum mechanics. The complicated 3-vector is a new animal, but notice it is composed of all the parts seen in the scalar, just different permutations that evaluate to 3-vectors. One might argue that for completeness, all combinations of E, t, R and P should be involved in psi, as is the case here. (A short numerical check of these quaternion products is included at the end of this page.)

Any quaternion can be expressed in polar form:

q = the absolute value of q times e to the (arc cosine of the scalar over the absolute value of q) times the 3-vector over its absolute value

Express psi in polar form. To make things simpler, assume that psi is normalized, so |psi| = 1. The 3-vector of psi is quite complicated, so define one symbol to capture it:

I is defined to be E R + P t + R cross P over the absolute value of that same 3-vector

Now rewrite psi in polar form with these simplifications:

psi = e to the (E t - R dot P) times I over h bar

This is what I call the quaternion wave function. Unlike previous work with quaternionic quantum mechanics (see S. Adler's book "Quaternionic Quantum Mechanics"), I see no need to define a vector space with right-hand operator multiplication. As was shown in the section on bracket notation, the Euclidean product of psi (psi* psi) will have all the properties required to form a Hilbert space. The advantage of keeping both operators and the wave function as quaternions is that it will make sense to form an interacting field directly using a product such as psi psi'. That will not be done here. Another advantage is that all the equations will necessarily be invertible.

Changes in the Quaternion Wave Function

We cannot derive the Schrödinger equation per se, since that involves Hermitian operators acting on a complex vector space. Instead, the operators here will be anti-Hermitian quaternions acting on quaternions.
Still it will look very similar, down to the last h bar :-) All that needs to be done is to study how the quaternion wave function psi changes. Make the following assumptions.

1. Energy and momentum are conserved.

d E by d t = 0 and d P by d t = 0

2. Energy is evenly distributed in space.

The Gradient of E = 0

3. The system is isolated.

The Curl of P = 0

4. The position 3-vector X is in the same direction as the momentum 3-vector P.

X dot P over the absolute value of the two = 1

which implies

d e to the I by d t = 0 and the Curl of e to the I = 0

The implications of this last assumption are not obvious but can be computed directly by taking the appropriate derivative. Here is a verbal explanation. If energy and momentum are conserved, they will not change in time. If the position 3-vector, which does change, is always in the same direction as the momentum 3-vector, then I will remain constant in time. Since I is in the direction of X, its curl will be zero. This last constraint may initially appear too confining. Contrast this with the typical classical quantum mechanics. In that case, there is an imaginary factor i which contains no information about the system. It is a mathematical tool tossed in so that the equation has the correct properties. With quaternions, I is determined directly from E, t, P and X. It must be richer in information content. This particular constraint is a reflection of that.

Now take the time derivative of psi.

d psi by d t = E I over h bar times psi over the square root of 1 + (E t - R dot P over h bar) squared

The denominator must be at least 1, and can be greater than that. It can serve as a damper, a good thing to tame runaway terms. Unfortunately, it also makes solving explicitly for energy impossible unless E t - R dot P equals zero. Since the goal is to make a direct connection to the Schrödinger equation, make one final assumption:

E t - R dot P = 0

There are several important cases when this will be true. In a vacuum, E and P are zero. If this is used to study photons, then t = |R| and E = |P|. If this number happens to be constant in time, then this equation will apply to the wave front.

If d (E t - R dot P) by d t = 0, then E = d R by d t dot P, or d R by d t = E over P

Now with these 5 assumptions in hand, energy can be defined with an operator.

d psi by d t = E I over h bar psi

- I h bar d psi by d t = E psi or E = - I h bar d by d t

The equivalence of the energy E and this operator is called the first quantization.

Take the spatial derivative of psi under the same assumptions:

Del psi = - P I over h bar times psi over the square root of 1 + (E t - R dot P over h bar) squared

I h bar Del acting on psi = P acting on psi or P = I h bar Del

Square this operator.

P squared = (m v) squared = 2 m times (m v squared over 2) = 2 m times the kinetic energy = - h bar squared Del squared

The Hamiltonian equals the kinetic energy plus the potential energy.

The Hamiltonian acting on psi = - I h bar d psi by d t = (- h bar squared over 2 m Del squared + the potential) acting on psi

Typographically, this looks very similar to the Schrödinger equation. Capital I is a normalized 3-vector, and a very complicated one at that if you review the assumptions that got us here. psi is not a vector, but is a quaternion. This gives the equation more, not less, analytical power. With all of the constraints in place, I expect that this equation will behave exactly like the Schrödinger equation. As the constraints are removed, this proposal becomes richer. There is a damper to quench runaway terms.
The 3-vector I becomes quite the nightmare to deal with, but it should be possible, given we are dealing with a topological algebraic field. Any attempt to shift the meaning of an equation as central to modern physics must first be able to regenerate all of its results. I believe that the quaternion analog to the Schrödinger equation under the listed constraints will do the task. There is an immense amount of work needed to see, as the constraints are relaxed, whether the quaternion differential equations will behave better. My sense at this time is that first, quaternion analysis as discussed earlier must be made as mathematically solid as complex analysis. At that point, it will be worth pushing the envelope with this quaternion equation. If it stands on a foundation as robust as complex analysis, the profound problems seen in quantum field theory stand a chance of fading away into the background.
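The quaternion products used in this derivation are easy to verify numerically. The following sketch is an illustrative addition (the helper qmul and the sample numbers are mine, not part of the original text): it implements the Hamilton product for (scalar, 3-vector) pairs and checks both the product (t, R)(E, P) = (E t - R dot P, E R + P t + R Cross P) and the statement that the odd part of (0, R)(0, P) is the classical angular momentum R Cross P.

import numpy as np

def qmul(q1, q2):
    """Hamilton product of quaternions written as (scalar, 3-vector) pairs."""
    a, u = q1
    b, v = q2
    return (a * b - np.dot(u, v),
            a * v + b * u + np.cross(u, v))

# Arbitrary illustrative values for t, R and E, P (natural units).
t, R = 2.0, np.array([1.0, -0.5, 0.3])
E, P = 1.5, np.array([0.2, 0.7, -1.0])

s, vec = qmul((t, R), (E, P))
print("scalar part:", s, "=", E * t - np.dot(R, P))                  # E t - R.P
print("vector part:", vec, "=", E * R + P * t + np.cross(R, P))      # E R + P t + R x P

# Classical angular momentum: the odd (antisymmetric) part of (0,R)(0,P) is R x P.
_, v1 = qmul((0.0, R), (0.0, P))
_, v2 = qmul((0.0, P), (0.0, R))
print("odd part   :", 0.5 * (v1 - v2), "= R x P =", np.cross(R, P))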
The Elephant and the Event Horizon

26 October 2006. Exclusive from New Scientist Print Edition.

In everyday life, of course, locality is a given. You're over there, I'm over here; neither of us is anywhere else. Even in Einstein's theory of relativity, where distances and timescales can change depending on an observer's reference frame, an object's location in space-time is precisely defined. What Susskind is saying, however, is that locality in this classical sense is a myth. Nothing is what, or rather, where it seems. This is more than just a mind-bending curiosity. It tells us something new about the fundamental workings of the universe. Strange as it may sound, the fate of an elephant in a black hole has deep implications for a "theory of everything" called quantum gravity, which strives to unify quantum mechanics and general relativity, the twin pillars of modern physics. Because of their enormous gravity and other unique properties, black holes have been fertile ground for researchers developing these ideas.

It all began in the mid-1970s, when Stephen Hawking of the University of Cambridge showed theoretically that black holes are not truly black, but emit radiation. In fact they evaporate very slowly, disappearing over many billions of years. This "Hawking radiation" comes from quantum phenomena taking place just outside the event horizon, the gravitational point of no return. But, Hawking asked, if a black hole eventually disappears, what happens to all the stuff inside? It can either leak back into the universe along with the radiation, which would seem to require travelling faster than light to escape the black hole's gravitational death grip, or it can simply blink out of existence. Trouble is, the laws of physics don't allow either possibility. "We've been forced into a profound paradox that comes from the fact that every conceivable outcome we can imagine from black hole evaporation contradicts some important aspect of physics," says Steve Giddings, a theorist at the University of California, Santa Barbara.

Researchers call this the black hole information paradox. It comes about because losing information about the quantum state of an object falling into a black hole is prohibited, yet any scenario that allows information to escape also seems in violation. Physicists often talk about information rather than matter because information is thought to be more fundamental. In quantum mechanics, the information that describes the state of a particle can't slip through the cracks of the equations. If it could, it would be a mathematical nightmare. The Schrödinger equation, which describes the evolution of a quantum system in time, would be meaningless because any semblance of continuity from past to future would be shattered and predictions rendered absurd. "All of physics as we know it is conditioned on the fact that information is conserved, even if it's badly scrambled," Susskind says.

For three decades, however, Hawking was convinced that information was destroyed in black hole evaporation. He argued that the radiation was random and could not contain the information that originally fell in. In 1997, he and Kip Thorne, a physicist at the California Institute of Technology in Pasadena, made a bet with John Preskill, also at Caltech, that information loss was real. At stake was an encyclopedia - from which they agreed information could readily be retrieved.
All was quiet until July 2004, when Hawking unexpectedly showed up at a conference in Dublin, Ireland, claiming that he had been wrong all along. Black holes do not destroy information after all, he said. He presented Preskill with an encyclopedia of baseball. What inspired Hawking to change his mind? It was the work of a young theorist named Juan Maldacena of the Institute for Advanced Study in Princeton, New Jersey. Maldacena may not be a household name, but he contributed what some consider to be the most ground-breaking piece of theoretical physics in the last decade. He did it using string theory, the most popular approach to understanding quantum gravity.

In 1997, Maldacena developed a type of string theory in a universe with five large dimensions of space and a contorted space-time geometry. He showed that this theory, which includes gravity, is equivalent to an ordinary quantum field theory, without gravity, living on the four-dimensional boundary of that universe. Everything happening on the boundary is equivalent to everything happening inside: ordinary particles interacting on the surface correspond precisely to strings interacting on the interior. This is remarkable because the two worlds look so different, yet their information content is identical. The higher-dimensional strings can be thought of as a "holographic" projection of the quantum particles on the surface, similar to the way a laser creates a 3D hologram from the information contained on a 2D surface. Even though Maldacena's universe was very different from ours, the elegance of the theory suggested that our universe might be something of a grand illusion - an enormous cosmic hologram (New Scientist, 27 April 2002, p 22).

The holographic idea had been proposed previously by Susskind, one of the inventors of string theory, and by Gerard 't Hooft of the University of Utrecht in the Netherlands. Each had used the fact that the entropy of a black hole, a measure of its information content, was proportional to its surface area rather than its volume. But Maldacena showed explicitly how a holographic universe could work and, crucially, why information could not be lost in a black hole. According to his theory, a black hole, like everything else, has an alter ego living on the boundary of the universe. Black hole evaporation, it turns out, corresponds to quantum particles interacting on this boundary. Since no information loss can occur in a swarm of ordinary quantum particles, there can be no mysterious information loss in a black hole either. "The boundary theory respects the rules of quantum mechanics," says Maldacena. "It keeps track of all the information."

Of course, our universe still looks nothing like the one in Maldacena's theory. The results are so striking, though, that physicists have been willing to accept the idea, at least for now. "The opposition, including Hawking, had to give up," says Susskind. "It was so mathematically precise that for most practical purposes all theoretical physicists came to the conclusion that the holographic principle and the conservation of information would have to be true."

All well and good, but a serious problem remains: if the information isn't lost in a black hole, where is it? Researchers speculate that it is encoded in the black hole radiation (see "Black hole computers"). "The idea is that Hawking radiation is not random but contains subtle information on the matter that fell in," says Maldacena. Susskind takes it a step further.
Since the holographic principle leaves no room for information loss, he argues, no observer should ever see information disappear. That leads to a remarkable thought experiment. Which brings us back to the elephant.

Let's say Alice is watching a black hole from a safe distance, and she sees an elephant foolishly headed straight into gravity's grip. As she continues to watch, she will see it get closer and closer to the event horizon, slowing down because of the time-stretching effects of gravity in general relativity. However, she will never see it cross the horizon. Instead she sees it stop just short, where sadly Dumbo is thermalised by Hawking radiation and reduced to a pile of ashes streaming back out. From Alice's point of view, the elephant's information is contained in those ashes.

Inside or out?

There is a twist to the story. Little did Alice realise that her friend Bob was riding on the elephant's back as it plunged toward the black hole. When Bob crosses the event horizon, though, he doesn't even notice, thanks to relativity. The horizon is not a brick wall in space. It is simply the point beyond which an observer outside the black hole can't see light escaping. To Bob, who is in free fall, it looks like any other place in the universe; even the pull of gravity won't be noticeable for perhaps millions of years. Eventually as he nears the singularity, where the curvature of space-time runs amok, gravity will overpower Bob, and he and his elephant will be torn apart. Until then, he too sees information conserved.

Neither story is pretty, but which one is right? According to Alice, the elephant never crossed the horizon; she watched it approach the black hole and merge with the Hawking radiation. According to Bob, the elephant went through and floated along happily for eons until it turned into spaghetti. The laws of physics demand that both stories be true, yet they contradict one another. So where is the elephant, inside or out?

The answer Susskind has come up with is - you guessed it - both. The elephant is both inside and outside the black hole; the answer depends on who you ask. "What we've discovered is that you cannot speak of what is behind the horizon and what is in front of the horizon," Susskind says. "Quantum mechanics always involves replacing 'and' with 'or'. Light is waves or light is particles, depending on the experiment you do. An electron has a position or it has a momentum, depending on what you measure. The same is happening with black holes. Either we describe the stuff that fell into the horizon in terms of things behind the horizon, or we describe it in terms of the Hawking radiation that comes out."

Wait a minute, you might think. Maybe there are two copies of the information. Maybe when the elephant hits the horizon, a copy is made, and one version comes out as radiation while the other travels into the black hole. However, a fundamental law called the no-cloning theorem precludes that possibility. If you could duplicate information, you could circumvent the uncertainty principle, something nature forbids. As Susskind puts it, "There cannot be a quantum Xerox machine." So the same elephant must be in two places at once: alive inside the horizon and dead in a heap of radiating ashes outside.

The implications are unsettling, to say the least. Sure, quantum mechanics tells us that an object's location can't always be pinpointed. But that applies to things like electrons, not elephants, and it usually spans tiny distances, not light years.
It is the large scale that makes this so surprising, Susskind says. In principle, if the black hole is big enough, the two versions of the same elephant could be separated by billions of light years. "People always thought quantum ambiguity was a small-scale phenomenon," he adds. "We're learning that the more quantum gravity becomes important, the more huge-scale ambiguity comes into play."

All this amounts to the fact that an object's location in space-time is no longer indisputable. Susskind calls this "a new form of relativity". Einstein took factors that were thought to be invariable - an object's length and the passage of time - and showed that they were relative to the motion of an observer. The location of an object in space or in time could only be defined with respect to an observer, but its location in space-time was certain. Now that notion has been shattered, says Susskind, and an object's location in space-time depends on an observer's state of motion with respect to a horizon.

What's more, this new type of "non-locality" is not just for black holes. It occurs anywhere a boundary separates regions of the universe that can't communicate with each other. Such horizons are more common than you might think. Anything that accelerates - the Earth, the solar system, the Milky Way - creates a horizon. Even if you're out running, there are regions of space-time from which light would never reach you if you kept speeding up. Those inaccessible regions are beyond your horizon.

As researchers forge ahead in their quest to unify quantum mechanics and gravity, non-locality may help point the way. For instance, quantum gravity should obey the holographic principle. That means there might be redundant information and fewer important dimensions of space-time in the theory. "This has to be part of the understanding of quantum gravity," Giddings says. "It's likely that this black hole information paradox will lead to a revolution at least as profound as the advent of quantum mechanics."

From issue 2575 of New Scientist magazine, 26 October 2006, page 36-39

Black hole computers

According to Leonard Susskind of Stanford University, however, it makes no sense to talk about the location of information independent of an observer. To an outside observer, information never falls into the black hole in the first place. Instead, it is heated and radiated back out before ever crossing the horizon. The quantum computer model, he says, relies on the old notion of locality. "The location of a bit becomes ambiguous and observer-dependent when gravity becomes important," he says. So the idea of a black hole computer remains controversial.
Why the least action: a fact or a meaning?

1. Jun 28, 2006 #1

Have some people tried to find a meaning to the principle of least action that apparently underlies the whole of physics? I know of one attempt, but it is not convincing to me (°). A convincing attempt, even a modest one, should suggest why it occurs, what is/could be behind the scenes, and how it might lead us to new discoveries. The link from QM/Schrödinger to CM/Newton is a clear explanation for the classical least action. But the surprise is that least action can be found nearly everywhere, even as a basis for QFT (isn't it?).

(°) this is how I understood the book by Roy Frieden "Science from Fisher Information"

3. Jun 28, 2006 #2

Feynman gave a beautiful "justification" or explanation of this principle when dealing with the path integral. If you have:

[tex] \int D[\phi]e^{iS[\phi]/\hbar} [/tex]

then in the classical limit h-->0 only the points for which the integrand has a maximum or a minimum contribute to the integration; in our case the maximum or minimum is given by the equation

[tex] \delta S =0 [/tex]

which is precisely the "Principle of Least Action"... Unfortunately, following Feynman, there are no variational principles in quantum mechanics.

4. Jun 30, 2006 #3

The Schrödinger equation also has a Lagrangian and can be derived from a least action principle. Other systems surprisingly also have a Lagrangian and a least action principle: the classical damped oscillator and the diffusion equation, for example! Clearly this is an exception: this pictorial explanation for the CM least action derived from the stationary phase limit of QM. Least action is seen nearly everywhere. This is why I asked the PF if there is an explanation or a meaning behind that. Would it be possible that a very wide range of differential equations can be reformulated as a least action principle? Then the explanation would be general mathematics, and the meaning would not be much of physics. This would translate my question to something like "why is physics based on differential equations?". Or is there more to learn about physics from the LAP?

Last edited: Jun 30, 2006
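The stationary-phase argument in post #2 can be illustrated numerically in a toy model. The sketch below is an illustrative addition only: it assumes a one-dimensional "action" S(x) = (x - 1)^2, so the role of the classical path is played by the single stationary point x = 1, and shows that as hbar shrinks the full oscillatory integral is increasingly well approximated by the contribution from a small window around that point (and by the analytic Fresnel result).

[code]
import numpy as np

# Toy 1-d "path integral": integrate exp(i S(x)/hbar) over x for a
# quadratic action S(x) = (x - 1)^2, whose stationary point (delta S = 0) is x = 1.
S = lambda x: (x - 1.0)**2
x = np.linspace(-10.0, 12.0, 400001)

for hbar in (1.0, 0.1, 0.01):
    full = np.trapz(np.exp(1j * S(x) / hbar), x)
    # Same integral restricted to a small window around the stationary point.
    mask = np.abs(x - 1.0) < 5.0 * np.sqrt(hbar)
    near = np.trapz(np.exp(1j * S(x[mask]) / hbar), x[mask])
    exact = np.sqrt(1j * np.pi * hbar)   # stationary-phase (Fresnel) result
    print(f"hbar={hbar}: |full|={abs(full):.4f}  |near x=1|={abs(near):.4f}  "
          f"|exact|={abs(exact):.4f}")
[/code]

As hbar decreases, the contributions away from the stationary point oscillate rapidly and cancel, which is the content of the claim that only delta S = 0 survives in the classical limit.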