Lambert W function
From Wikipedia, the free encyclopedia
The graph of W(x) for W > −4 and x < 6. The upper branch with W ≥ −1 is the function W0 (principal branch), the lower branch with W ≤ −1 is the function W−1.
In mathematics, the Lambert W function, also called the omega function or product logarithm, is a set of functions, namely the branches of the inverse relation of the function f(z) = ze^z, where e^z is the exponential function and z is any complex number. In other words, w = W(z) holds precisely when z = we^w.
Substituting w = W(z) into z = we^w gives the defining equation for the W function (and for the W relation in general):
z = W(z) e^W(z)
for any complex number z.
Since the function ƒ is not injective, the relation W is multivalued (except at 0). If we restrict attention to real-valued W, the complex variable z is then replaced by the real variable x, and the relation is defined only for x ≥ −1/e, and is double-valued on (−1/e, 0). The additional constraint W ≥ −1 defines a single-valued function W0(x). We have W0(0) = 0 and W0(−1/e) = −1. Meanwhile, the lower branch has W ≤ −1 and is denoted W−1(x). It decreases from W−1(−1/e) = −1 to W−1(0) = −∞.
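These branch conventions can be checked numerically. The sketch below is an illustration (not part of the article); it uses SciPy's `lambertw`, whose second argument `k` selects the branch:

```python
import math
from scipy.special import lambertw  # k=0: principal branch W0, k=-1: lower branch W-1

x = -0.2  # a point in (-1/e, 0), where the real relation is double-valued
w0 = lambertw(x, 0).real    # upper branch, W0 >= -1
wm1 = lambertw(x, -1).real  # lower branch, W-1 <= -1

# Both branch values invert w*e^w = x
assert abs(w0 * math.exp(w0) - x) < 1e-12
assert abs(wm1 * math.exp(wm1) - x) < 1e-12
assert w0 >= -1 and wm1 <= -1
```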
The Lambert W relation cannot be expressed in terms of elementary functions.[1] It is useful in combinatorics, for instance in the enumeration of trees. It can be used to solve various equations involving exponentials (e.g. the maxima of the Planck, Bose–Einstein, and Fermi–Dirac distributions) and also occurs in the solution of delay differential equations, such as y'(t) = a y(t − 1). In biochemistry, and in particular enzyme kinetics, a closed-form solution for the time course kinetics analysis of Michaelis–Menten kinetics is described in terms of the Lambert W function.
Main branch of the Lambert-W function in the complex plane. Note the branch cut along the negative real axis, ending at −1/e. In this picture, the hue of a point z is determined by the argument of W(z) and the brightness by the absolute value of W(z).
The modulus of the principal branch of the Lambert W function, colored according to arg(W(z)).
The two main branches W0 and W−1.
The Lambert W function is named after Johann Heinrich Lambert. The main branch W0 is denoted by Wp in the Digital Library of Mathematical Functions and the branch W−1 is denoted by Wm there.
The notation convention chosen here (with W0 and W−1) follows the canonical reference on the Lambert W function by Corless, Gonnet, Hare, Jeffrey and Knuth.[2]
Lambert first considered the related Lambert's Transcendental Equation in 1758,[3] which led to a paper by Leonhard Euler in 1783[4] that discussed the special case of we^w.
The Lambert W function was "re-discovered" every decade or so in specialized applications.[citation needed] In 1993, when it was reported that the Lambert W function provides an exact solution to the quantum-mechanical double-well Dirac delta function model for equal charges—a fundamental problem in physics—Corless and developers of the Maple computer algebra system made a library search and found that this function was ubiquitous in nature.[2][5]
By implicit differentiation, one can show that all branches of W satisfy the differential equation
z (1 + W) dW/dz = W for z ≠ −1/e.
(W is not differentiable for z = −1/e.) As a consequence, we get the following formula for the derivative of W:
dW/dz = W(z) / ( z (1 + W(z)) ) for z ∉ {0, −1/e}.
Using the identity e^W(z) = z/W(z), we get the following equivalent formula, which holds for all z ≠ −1/e:
dW/dz = 1 / ( e^W(z) (1 + W(z)) ).
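The derivative formula can be sanity-checked against a central finite difference; this snippet (an illustration, not from the article) again relies on SciPy's `lambertw`:

```python
import math
from scipy.special import lambertw

def dW(z):
    # Derivative of the principal branch: W'(z) = W(z) / (z * (1 + W(z)))
    w = lambertw(z).real
    return w / (z * (1 + w))

z, h = 2.0, 1e-6
numeric = (lambertw(z + h).real - lambertw(z - h).real) / (2 * h)
assert abs(dW(z) - numeric) < 1e-6
```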
The function W(x), and many expressions involving W(x), can be integrated using the substitution w = W(x), i.e. x = we^w:
∫ W(x) dx = x W(x) − x + e^W(x) + C = x ( W(x) − 1 + 1/W(x) ) + C.
(The last form is more common in the literature but does not hold at x = 0.)
One consequence (using the fact that W0(e) = 1) is the identity
∫ from 0 to e of W0(x) dx = e − 1.
Asymptotic expansions
The Taylor series of W0 around 0 can be found using the Lagrange inversion theorem and is given by
W0(x) = Σ for n ≥ 1 of (−n)^(n−1)/n! x^n = x − x² + (3/2)x³ − (8/3)x⁴ + ...
The radius of convergence is 1/e, as may be seen by the ratio test. The function defined by this series can be extended to a holomorphic function defined on all complex numbers with a branch cut along the interval (−∞, −1/e]; this holomorphic function defines the principal branch of the Lambert W function.
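A partial sum of this series can be compared with a reference value; the snippet below (illustrative, with an arbitrarily chosen evaluation point) checks the first 40 terms against SciPy:

```python
import math
from scipy.special import lambertw

def W0_series(x, terms=40):
    # Partial sum of the Taylor series: sum over n >= 1 of (-n)**(n-1)/n! * x**n
    return sum((-n) ** (n - 1) / math.factorial(n) * x ** n
               for n in range(1, terms + 1))

x = 0.2  # well inside the radius of convergence 1/e
assert abs(W0_series(x) - lambertw(x).real) < 1e-10
```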
For large values of x, W0 is asymptotic to
W0(x) = L1 − L2 + L2/L1 + L2(−2 + L2)/(2 L1²) + ...,
where L1 = ln x, L2 = ln ln x, and the general coefficients involve the non-negative Stirling numbers of the first kind.[6] Keeping only the first two terms of the expansion,
W0(x) ≈ ln x − ln ln x.
The other real branch, W−1, defined in the interval [−1/e, 0), has an approximation of the same form as x approaches zero, with in this case L1 = ln(−x) and L2 = ln(−ln(−x)).
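The quality of the truncated expansion is easy to see numerically; x = 10⁶ below is an arbitrary illustrative choice:

```python
import math
from scipy.special import lambertw

x = 1e6
exact = lambertw(x).real
two_terms = math.log(x) - math.log(math.log(x))                 # L1 - L2
three_terms = two_terms + math.log(math.log(x)) / math.log(x)   # + L2/L1

assert abs(two_terms - exact) / exact < 0.05    # already within a few percent
assert abs(three_terms - exact) / exact < 1e-3  # next term tightens it further
```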
In [7] it is shown that the following bound holds for x ≥ e:
ln x − ln ln x ≤ W0(x) ≤ ln x − (1/2) ln ln x.
In [8] it was proven that the branch W−1 can be bounded as follows for u > 0:
−1 − √(2u) − u < W−1(−e^(−u−1)) < −1 − √(2u) − (2/3)u.
Integer and complex powers
Integer powers of W0 also admit simple Taylor (or Laurent) series expansions at 0:
W0(x)² = Σ for n ≥ 2 of (−2(−n)^(n−3)/(n−2)!) x^n = x² − 2x³ + 4x⁴ − ...
More generally, for integer r, the Lagrange inversion formula gives
W0(x)^r = Σ for n ≥ r of (−r(−n)^(n−r−1)/(n−r)!) x^n,
which is, in general, a Laurent series of order r. Equivalently, the latter can be written in the form of a Taylor expansion of powers of W0(x)/x:
(W0(x)/x)^r = e^(−r W0(x)) = Σ for n ≥ 0 of (r(n+r)^(n−1)/n!) (−x)^n,
which holds for any complex r and |x| < 1/e.
A few identities follow from the definition:
W0(x e^x) = x for x ≥ −1, and W−1(x e^x) = x for x ≤ −1.
Note that, since f(x) = x e^x is not injective, W(f(x)) = x does not always hold. For fixed x < 0 and x ≠ −1, the equation x e^x = y e^y has two real solutions in y, one of which is of course y = x. Then, for i = 0 and x < −1, as well as for i = −1 and x ∈ (−1, 0), Wi(x e^x) is the other solution of the equation x e^x = y e^y.
n W(x) = W( n x^n / W(x)^(n−1) ) for n, x > 0
(which can be extended to other n and x if the right branch is chosen).
From inverting f(ln x): W(x ln x) = ln x and e^W(x ln x) = x for x ≥ 1/e.
With Euler's iterated exponential h(x):
h(x) = e^(−W(−ln x)) = W(−ln x) / (−ln x) for x ≠ 1.
Special values
For any non-zero algebraic number x, W(x) is a transcendental number. Indeed, if W(x) is zero then x must be zero as well, and if W(x) is non-zero and algebraic, then by the Lindemann–Weierstrass theorem e^W(x) must be transcendental, implying that x = W(x) e^W(x) must also be transcendental.
W(1) = Ω ≈ 0.56714329... (the Omega constant)
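The Omega constant can be evaluated and verified directly (an illustrative check, not part of the article):

```python
import math
from scipy.special import lambertw

omega = lambertw(1).real  # the Omega constant, W(1)
assert abs(omega * math.exp(omega) - 1) < 1e-14   # defining property: omega*e^omega = 1
assert abs(omega - 0.5671432904097838) < 1e-12
```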
Other formulas
Definite integrals
There are several useful definite integral formulas involving the W function, including the following:
The first identity can be found by writing the Gaussian integral in polar coordinates.
The second identity can be derived by making the substitution
which gives
The third identity may be derived from the second by making the substitution and the first can also be derived from the third by the substitution .
Except for z along the branch cut (where the integral does not converge), the principal branch of the Lambert W function can be computed by the following integral:
where the two integral expressions are equivalent due to the symmetry of the integrand.
Indefinite integrals
Many equations involving exponentials can be solved using the W function. The general strategy is to move all instances of the unknown to one side of the equation and make it look like Y = Xe^X, at which point the W function provides the value of the variable in X. In other words, if Y = Xe^X then X = W(Y).
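As a concrete instance of this strategy (the equation 2^t = 5t is a hypothetical example, not one from the article): rewriting 2^t = 5t as (−t ln 2)e^(−t ln 2) = −(ln 2)/5 puts it in the Y = Xe^X form, so each real branch of W yields one solution.

```python
import math
from scipy.special import lambertw

# Solve 2**t == 5*t.  Rearranged: (-t ln 2) e^(-t ln 2) = -(ln 2)/5,
# so -t ln 2 = W(-(ln 2)/5) and t = -W(-(ln 2)/5) / ln 2.
a = -math.log(2) / 5
t0 = -lambertw(a, 0).real / math.log(2)   # solution from the principal branch
t1 = -lambertw(a, -1).real / math.log(2)  # second solution from the lower branch

assert abs(2 ** t0 - 5 * t0) < 1e-9
assert abs(2 ** t1 - 5 * t1) < 1e-9
```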
Example 1
More generally, the equation
can be transformed via the substitution
which yields the final solution
Example 2
or, equivalently,
by definition.
Example 3
taking the n-th root
let : then
Example 4
Whenever the complex infinite exponential tetration
converges, the Lambert W function provides the actual limit value as
where ln(z) denotes the principal branch of the complex log function. This can be shown by observing that
if c exists, so
which is the desired result.
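Numerically (an illustrative check with z = √2, a classic case where the tower equals 2), the closed form agrees with direct iteration of t → z^t:

```python
import math
from scipy.special import lambertw

z = math.sqrt(2)  # the infinite tower z^z^z^... converges for e^-e <= z <= e^(1/e)
limit = lambertw(-math.log(z)).real / -math.log(z)  # W(-ln z)/(-ln z)

t = 1.0
for _ in range(200):  # iterate t -> z**t toward the fixed point
    t = z ** t

assert abs(limit - 2.0) < 1e-12  # sqrt(2)^sqrt(2)^... = 2
assert abs(t - limit) < 1e-6     # the iteration approaches the same value
```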
Example 5
Solutions for
have the form[5]
Example 6
The solution for the current in a series diode/resistor circuit can also be written in terms of the Lambert W. See diode modeling.
Example 7
The delay differential equation
has characteristic equation λ = a e^(−λ), i.e. λe^λ = a, leading to λ = W_k(a) and y(t) = e^(W_k(a) t), where k is the branch index. If a ≥ 0, only W0(a) yields a real root.
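Numerically (an illustrative check; a = 1.5 is an arbitrary choice), every branch of W supplies a characteristic root s satisfying s = a e^(−s):

```python
import cmath
from scipy.special import lambertw

a = 1.5
# Roots of the characteristic equation s*e^s = a are s = W_k(a) for each branch k;
# each gives a solution y(t) = e^(s t) of the delay equation y'(t) = a*y(t-1).
for k in (0, 1, -1):
    s = complex(lambertw(a, k))
    assert abs(s - a * cmath.exp(-s)) < 1e-10  # s = a*e^(-s)
```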
Example 8
The Lambert W function was shown in 2013 to be the optimal solution for the required magnetic field of a Zeeman slower.[12]
Example 9
Granular and debris flow fronts and deposits, and the fronts of viscous fluids in natural events and in laboratory experiments, can be described by using the Lambert–Euler omega function as follows:
where H(x) is the debris flow height, x is the channel downstream position, and L is a unified model parameter combining several physical and geometrical parameters of the flow, the flow height, and the hydraulic pressure gradient.
Example 10
The Lambert W function was employed in the field of neuroimaging for linking cerebral blood flow and oxygen consumption changes within a brain voxel to the corresponding Blood Oxygenation Level Dependent (BOLD) signal.[13]
Example 11
The Lambert W function was employed in the field of chemical engineering for modelling the porous electrode film thickness in a glassy carbon based supercapacitor for electrochemical energy storage. It turned out to be the exact solution for a gas-phase thermal activation process in which growth of a carbon film and combustion of the same film compete with each other.[14][15]
Example 12
The Lambert W function was employed in the field of epitaxial film growth for the determination of the critical dislocation onset film thickness. This is the calculated thickness of an epitaxial film at which, due to thermodynamic principles, the film develops crystallographic dislocations in order to minimise the elastic energy stored in the film. Prior to the application of Lambert W to this problem, the critical thickness had to be determined by solving an implicit equation; Lambert W turns it into an explicit equation that can be handled analytically.[16]
Example 13
The Lambert W function has been employed in the field of fluid flow in porous media to model the tilt of an interface separating two gravitationally segregated fluids in a homogeneous tilted porous bed of constant dip and thickness, where the heavier fluid, injected at the bottom end, displaces the lighter fluid that is produced at the same rate from the top end. The principal branch of the solution corresponds to stable displacements, while the −1 branch applies if the displacement is unstable, with the heavier fluid running underneath the lighter fluid.[17]
Example 14
The equation (linked with the generating functions of the Bernoulli numbers and the Todd genus):
can be solved by means of the two real branches W0 and W−1:
This application demonstrates that the branch difference of the W function can be employed to solve other transcendental equations.
See: D. J. Jeffrey and J. E. Jankowski, "Branch differences and Lambert W"
Example 15
The centroid of a set of histograms, defined with respect to the symmetrized Kullback–Leibler divergence (also called the Jeffreys divergence), has a closed form using the Lambert W function.
See: F. Nielsen, "Jeffreys Centroids: A Closed-Form Expression for Positive Histograms and a Guaranteed Tight Approximation for Frequency Histograms"
Example 16
The Lambert-W function appears in a quantum-mechanical potential (see The Lambert-W step-potential) which affords the fifth – next to those of the harmonic oscillator plus centrifugal, the Coulomb plus inverse square, the Morse, and the inverse square root potential – exact solution to the stationary one-dimensional Schrödinger equation in terms of the confluent hypergeometric functions. The potential is given as
A peculiarity of the solution is that each of the two fundamental solutions that compose the general solution of the Schrödinger equation is given by a combination of two confluent hypergeometric functions of an argument proportional to .
See : A.M. Ishkhanyan, "The Lambert W-barrier – an exactly solvable confluent hypergeometric potential"
The standard Lambert W function expresses exact solutions to transcendental algebraic equations (in x) of the form:
e^(−cx) = a0 (x − r)     (1)
where a0, c and r are real constants. The solution is x = r + (1/c) W( c e^(−cr) / a0 ). Generalizations of the Lambert W function[18][19][20] include:
• An equation where the right-hand side of (1) is replaced by a quadratic polynomial in x:
e^(−cx) = a0 (x − r1)(x − r2)     (2)
where r1 and r2 are real distinct constants, the roots of the quadratic polynomial. Here, the solution is a function of the single argument x, but terms like ri and a0 are parameters of that function. In this respect, the generalization resembles the hypergeometric function and the Meijer G-function, but it belongs to a different class of functions. When r1 = r2, both sides of (2) can be factored and reduced to (1), and thus the solution reduces to that of the standard W function. Eq. (2) expresses the equation governing the dilaton field, from which is derived the metric of the R=T or lineal two-body gravity problem in 1+1 dimensions (one spatial dimension and one time dimension) for the case of unequal (rest) masses, as well as the eigenenergies of the quantum-mechanical double-well Dirac delta function model for unequal charges in one dimension.
• Analytical solutions of the eigenenergies of a special case of the quantum mechanical three-body problem, namely the (three-dimensional) hydrogen molecule-ion.[22] Here the right-hand-side of (1) (or (2)) is now a ratio of infinite order polynomials in x:
where ri and si are distinct real constants and x is a function of the eigenenergy and the internuclear distance R. Eq. (3) with its specialized cases expressed in (1) and (2) is related to a large class of delay differential equations. Hardy's notion of a "false derivative" provides exact multiple roots to special cases of (3).[23]
Applications of the Lambert "W" function in fundamental physical problems are not exhausted even for the standard case expressed in (1) as seen recently in the area of atomic, molecular, and optical physics.[24]
Numerical evaluation
The W function may be approximated using Newton's method, with successive approximations to w = W(z) (so z = we^w) being
w_{j+1} = w_j − (w_j e^(w_j) − z) / (e^(w_j) + w_j e^(w_j)).
The W function may also be approximated using Halley's method,
w_{j+1} = w_j − (w_j e^(w_j) − z) / ( e^(w_j)(w_j + 1) − ((w_j + 2)(w_j e^(w_j) − z)) / (2w_j + 2) ),
given in Corless et al.[2] to compute W.
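A direct implementation of the Halley iteration is short; the start value and iteration count below are arbitrary illustrative choices, checked against SciPy's reference implementation:

```python
import math
from scipy.special import lambertw

def halley_W0(x, w=1.0, iters=20):
    # Halley iteration for w*e^w = x (principal branch; x > 0 here)
    for _ in range(iters):
        ew = math.exp(w)
        f = w * ew - x
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

for x in (0.5, 1.0, 10.0):
    assert abs(halley_W0(x) - lambertw(x).real) < 1e-12
```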
The Lambert W function is implemented as LambertW in Maple, lambertw in GP (and glambertW in PARI), lambertw in MATLAB,[25] lambertw in Octave with the 'specfun' package, lambert_w in Maxima,[26] ProductLog (with a silent alias LambertW) in Mathematica,[27] lambertw in Python SciPy's special function package,[28] LambertW in Perl's ntheory module,[29] and as the gsl_sf_lambert_W0 and gsl_sf_lambert_Wm1 functions in the special functions section of the GNU Scientific Library (GSL).
See also
1. ^ Chow, Timothy Y. (1999), "What is a closed-form number?", American Mathematical Monthly, 106 (5): 440–448, doi:10.2307/2589148, MR 1699262 .
3. ^ Lambert JH, "Observationes variae in mathesin puram", Acta Helveticae physico-mathematico-anatomico-botanico-medica, Band III, 128–168, 1758 (facsimile)
5. ^ a b Corless, R. M.; Gonnet, G. H.; Hare, D. E. G.; Jeffrey, D. J. (1993). "Lambert's W function in Maple". The Maple Technical Newsletter. MapleTech. 9: 12–22. CiteSeerX accessible.
6. ^ Approximation of the Lambert W function and the hyperpower function, Hoorfar, Abdolhossein; Hassani, Mehdi.
8. ^ Chatzigeorgiou, I. (2013). "Bounds on the Lambert function and their Application to the Outage Analysis of User Cooperation". IEEE Communications Letters. 17 (8): 1505–1508. arXiv:1601.04895Freely accessible. doi:10.1109/LCOMM.2013.070113.130972.
11. ^ "The Lambert W Function". Ontario Research Centre.
12. ^ B Ohayon., G Ron. (2013). "New approaches in designing a Zeeman Slower". Journal of Instrumentation. 8 (02): P02016. doi:10.1088/1748-0221/8/02/P02016.
13. ^ Sotero, Roberto C.; Iturria-Medina, Yasser (2011). "From Blood oxygenation level dependent (BOLD) signals to brain temperature maps". Bull Math Biol. 73 (11): 2731–47. doi:10.1007/s11538-011-9645-5. PMID 21409512.
14. ^ Braun, Artur; Wokaun, Alexander; Hermanns, Heinz-Guenter (2003). "Analytical Solution to a Growth Problem with Two Moving Boundaries". Appl Math Model. 27 (1): 47–52. doi:10.1016/S0307-904X(02)00085-9.
15. ^ Braun, Artur; Baertsch, Martin; Schnyder, Bernhard; Koetz, Ruediger (2000). "A Model for the film growth in samples with two moving boundaries – An Application and Extension of the Unreacted-Core Model.". Chem Eng Sci. 55 (22): 5273–5282. doi:10.1016/S0009-2509(00)00143-3.
16. ^ Braun, Artur; Briggs, Keith M.; Boeni, Peter (2003). "Analytical solution to Matthews' and Blakeslee's critical dislocation formation thickness of epitaxially grown thin films". J Cryst Growth. 241 (1/2): 231–234. Bibcode:2002JCrGr.241..231B. doi:10.1016/S0022-0248(02)00941-7.
17. ^ Colla, Pietro (2014). "A New Analytical Method for the Motion of a Two-Phase Interface in a Tilted Porous Medium". PROCEEDINGS,Thirty-Eighth Workshop on Geothermal Reservoir Engineering,Stanford University. SGP-TR-202. ([1])
18. ^ Scott, T. C.; Mann, R. B.; Martinez Ii, Roberto E. (2006). "General Relativity and Quantum Mechanics: Towards a Generalization of the Lambert W Function". AAECC (Applicable Algebra in Engineering, Communication and Computing). 17 (1): 41–47. arXiv:math-ph/0607011Freely accessible. doi:10.1007/s00200-006-0196-1.
19. ^ Scott, T. C.; Fee, G.; Grotendorst, J. (2013). "Asymptotic series of Generalized Lambert W Function". SIGSAM (ACM Special Interest Group in Symbolic and Algebraic Manipulation). 47 (185): 75–83. doi:10.1145/2576802.2576804.
20. ^ Scott, T. C.; Fee, G.; Grotendorst, J.; Zhang, W.Z. (2014). "Numerics of the Generalized Lambert W Function". SIGSAM. 48 (1/2): 42–56. doi:10.1145/2644288.2644298.
21. ^ Farrugia, P. S.; Mann, R. B.; Scott, T. C. (2007). "N-body Gravity and the Schrödinger Equation". Class. Quantum Grav. 24 (18): 4647–4659. arXiv:gr-qc/0611144Freely accessible. doi:10.1088/0264-9381/24/18/006.
22. ^ Scott, T. C.; Aubert-Frécon, M.; Grotendorst, J. (2006). "New Approach for the Electronic Energies of the Hydrogen Molecular Ion". Chem. Phys. 324 (2–3): 323–338. arXiv:physics/0607081Freely accessible. doi:10.1016/j.chemphys.2005.10.031.
23. ^ Maignan, Aude; Scott, T. C. (2016). "Fleshing out the Generalized Lambert W Function". SIGSAM. 50 (2): 45–60. doi:10.1145/2992274.2992275.
24. ^ Scott, T. C.; Lüchow, A.; Bressanini, D.; Morgan, J. D. III (2007). "The Nodal Surfaces of Helium Atom Eigenfunctions". Phys. Rev. A. 75 (6): 060101. doi:10.1103/PhysRevA.75.060101.
25. ^ lambertw – MATLAB
26. ^ Maxima, a Computer Algebra System
27. ^ ProductLog at WolframAlpha
28. ^ Lambert W function in SciPy (scipy.special.lambertw)
29. ^ ntheory at MetaCPAN
External links
Erwin Schrödinger
Final Answers
© 2000-2016 Gérard P. Michon, Ph.D.
The Schrödinger Equation
"The task is not so much to see what no one has yet seen, but to think what nobody has yet thought about that which everybody sees."
Arthur Schopenhauer (1788-1860)
Related articles on this site:
Related Links (Outside this Site)
Schrödinger Picture and Heisenberg Picture (equivalent representations).
Schrödinger's Equation in 1-D by Michael Fowler | Physics Applets.
Solutions to the Schrödinger Equation (calculator, shooting method).
Electron in a Finite Square Well Potential (calculator).
The Hydrogen Atom
HyperPhysics by Carl R. (Rod) Nave : Hydrogen Atom | Spherical Well
Video: Particles and Waves (MU50) by David L. Goodstein 1 | 2 | 3 | 4
Atoms to Quarks (MU51) by David L. Goodstein 1 | 2 | 3 | 4 | 5
Justifying Schrödinger's Equation
Schroedinger's 126th Birthday
The celebrated Schrödinger equation is merely what the most ordinary wave equation becomes when the celerity of a wave (i.e., the product u = λν of its wavelength by its frequency) is somehow equated to the ratio (E/p) of the energy to the momentum of an "associated" nonrelativistic particle.
This surprising relation is essentially what the (relativistic) de Broglie principle postulates. In an introductory course, it might be more pedagogical and more economical to invoke the de Broglie principle in order to derive Schrödinger's equation...
However, it's enlightening to present how Erwin Schrödinger himself introduced the subject: Following Hamilton, he showed how the relation u = E/p can be obtained, by equating the classical principles previously stated by Fermat for waves (least time) and Maupertuis for particles (least "action").
This is an idea which made the revolutionary concepts of wave mechanics acceptable to physicists of a bygone era, including Erwin Schrödinger himself. Also, the more recent "sum over histories" formulation of quantum mechanics by Richard Feynman is arguably based on the same variational principles.
(2002-11-02) Hamilton's Analogy: Paths to the Schrödinger Equation
Equating the principles of Fermat and Maupertuis yields the celerity u.
Schrödinger took seriously an analogy attributed to William Rowan Hamilton (1805-1865) which bridges the gap between well-known features of two aspects of physical reality, classical mechanics and wave theory. Hamilton's analogy states that, whenever waves conspire to create the illusion of traveling along a definite path (like "light rays" in geometrical optics), they are analogous to a classical particle: The Fermat principle for waves may then be equated with the Maupertuis principle for particles. Equating also the velocity of a particle with the group speed of a wave, Schrödinger drew the mathematical consequences of combining it all with Planck's quantum hypothesis (E = hν).
These ideas were presented (March 5, 1928) at the Royal Institution of London, to start a course of "Four Lectures on Wave Mechanics" which Schrödinger dedicated to his late teacher, Fritz Hasenöhrl.
Maupertuis' Principle of Least "Action" (1744, 1750)
Pierre-Louis Moreau de Maupertuis (1698-1759)
Adding up the masses of all bodies multiplied by their respective
speeds and the distances they travel yields the quantity called action,
which is always the least possible in any natural motion.
Pierre-Louis Moreau de Maupertuis. "Sur les lois du mouvement " (1746).
When a point of mass m moves at a speed v in a force field described by a potential energy V (which depends on the position), its kinetic energy is T = ½ mv² (the total energy E = T+V remains constant). The actual trajectory from a point A to a point B turns out to be such as to minimize the quantity that Maupertuis (1698-1759) dubbed action, namely the integral ∫ 2T dt. (Maupertuis' Principle is thus also called the Least Action Principle.)
Now, introducing the curvilinear abscissa (s) along the trajectory, we have:
2T = mv² = m (ds/dt)² = 2(E−V)
Multiply the last two quantities by m and take their square roots to obtain an expression for m (ds/dt) , which you may plug back into the whole thing to get an interesting value for 2T:
2T = (ds/dt) √(2m(E−V)),  so the action is:  ∫ √(2m(E−V)) ds
The time variable (t) has thus disappeared from the integral to be minimized, which is now a purely static function of the spatial path from A to B.
Fermat's Principle: Least Time (c. 1655)
When some quantity φ propagates in 3 dimensions at some celerity u (also called phase speed), it verifies the well-known wave equation:
(1/u²) ∂²φ/∂t² = ∂²φ/∂x² + ∂²φ/∂y² + ∂²φ/∂z² = Δφ   [Δ is the Laplacian operator]
The speed u may depend on the properties of the medium in which the "thing" propagates, and it may thus vary from place to place. When light goes through some nonhomogeneous medium with a varying refractive index (n>1), it propagates at a speed u = c/n and will travel along a path (a "ray", in the approximation of geometrical optics) which is always such that the time (òdt) it takes to go from point A to point B is minimal [among "nearby" paths]. This is Fermat's Principle, first stated by Pierre de Fermat (1601-1665) for light in the context of geometrical optics, where it implies both the law of reflection and Snell's law for refraction. This principle applies quite generally to any type of wave, in those circumstances where some path of propagation can be defined.
If we introduce a curvilinear abscissa s for a wave that follows some path, in the same way light propagates along rays [in a smooth enough medium], we have u = ds/dt. This allows us to express the time it takes to go from A to B as an integral of ds/u. The conclusion is that a wave will [roughly speaking] take a "path" from A to B along which the following integral is minimal:
∫ ds / u
Hamilton's Analogy :
The above shows that, when a wave appears to propagate along a path, this path satisfies a condition of the same mathematical form as that obeyed by the trajectory of a particle. In both cases, a static integral along the path has to be minimized. If the same type of "mechanics" is relevant, it seems the quantities to integrate should be proportional. The coefficient of proportionality cannot depend on the position, but it may very well depend on the total energy E (which is constant in the whole discussion). In other words, the proportionality between the integrand of the principle of Maupertuis and its Fermat counterpart (1/u) implies that the following quantity is a function of the energy E alone:
f (E) = u √(2m(E−V))
Combined with Planck's formula, the next assumption implies f (E) = E ...
Schrödinger's Argument :
Schrödinger assumed that the wave equivalent of the speed v of a particle had to be the so-called group velocity, given by the following expression:
v = dν / d(ν/u)
We enter the quantum realm by postulating Planck's formula: E = hν. This proportionality of energy and frequency turns the previous equation into:
v = dE / d(E/u)
On the other hand, since ½ mv² = E−V, the following relation also holds:
v = dE / d√(2m(E−V))
Recognizing the square root as the quantity we denoted f (E) / u in the above context of Hamilton's analogy [it's actually the momentum p, if you must know], the equality of the right-hand sides of the last two equations implies that the following quantity C does not depend on E:
( f (E) − E ) / u = C = [ 1 − E / f (E) ] √(2m(E−V))
This means f (E) = E / ( 1 − C [ 2m(E−V) ]^(−1/2) ), which is, in general, a function of E alone only if C vanishes (as V depends on the space coordinates). Therefore f (E) = E, as advertised, which can be expressed by the relation:
u = E / √(2m(E−V))
Mathematically, this equation and Planck's relation (E = hn) turn the general wave equation into the stationary equation of Schrödinger, discussed below.
In 1928, Schrödinger quoted only as "worth mentioning" the fact that the above relation boils down to u = E/p, without identifying that as the nonrelativistic counterpart of the formally identical relation for the celerity u = λν obtained from the 1923 expression of a de Broglie wave's momentum (p = h/λ) using E = hν.
Nowadays, it's simpler to merely invoke de Broglie's principle to establish mathematically the formal stationary equation of Schrödinger, given below.
English translations of the 9 papers and 4 lectures that Erwin Schrödinger published about his own approach to Quantum Theory ("Wave Mechanics") between 1926 and 1928 have been compiled in: " Collected Papers on Wave Mechanics " by E. Schrödinger (Chelsea Publishing Co., NY, 1982)
Schrödinger's Stationary Equation
Δψ + (8π²m / h²) (E − V) ψ = 0
(2005-07-08) Partial Confinement in a Box by a Finite Potential
Solutions for a single dimension yield the three-dimensional solutions.
Consider a particle confined within a rectangular box by a finite potential, so that (8π²m / h²) (V − E) is −1/λ² inside the box, and 1/μ² outside of it.
Finite one-dimensional well
For a single dimension, we'd be looking at a box with boundaries at x = ±L and a bounded and continuous solution ψ of the following type:
ψ(x) = [ A cos(L/λ) − B sin(L/λ) ] exp( [L+x]/μ )   for x < −L
     = A cos(x/λ) + B sin(x/λ)                      for |x| < L
     = [ A cos(L/λ) + B sin(L/λ) ] exp( [L−x]/μ )   for x > L
The continuity of the derivative of ψ at x = ±L translates into the relations:
(A/λ) sin(L/λ) + (B/λ) cos(L/λ) = (1/μ) [ A cos(L/λ) − B sin(L/λ) ]
(−A/λ) sin(L/λ) + (B/λ) cos(L/λ) = (1/μ) [ −A cos(L/λ) − B sin(L/λ) ]
We may replace these by their sum and their difference, which boil down to:
• B = 0 or μ cos(L/λ) = −λ sin(L/λ)
• A = 0 or μ sin(L/λ) = λ cos(L/λ)
Since λμ does not vanish, either A or B does (not both). A nonzero solution is thus either even (B = 0) or odd (A = 0), with the matching condition derived from the above, which is dubbed "quantization" in the following table:
Single-Dimensional Well of Width 2L and Energy Depth V
1/λ² + 1/μ² = (8π²m / h²) V

Even solutions, ψ(−x) = ψ(x):
Quantization:  λ/μ = tan(L/λ)
ψ(x) = cos(L/λ) exp( [L+x]/μ )   for x < −L
     = cos(x/λ)                  for −L < x < L
     = cos(L/λ) exp( [L−x]/μ )   for L < x
∫ |ψ|² dx = μ cos²(L/λ) + L + ½ λ sin(2L/λ) = μ + L

Odd solutions, ψ(−x) = −ψ(x):
Quantization:  μ/λ = −tan(L/λ)
ψ(x) = −sin(L/λ) exp( [L+x]/μ )  for x < −L
     = sin(x/λ)                  for −L < x < L
     = sin(L/λ) exp( [L−x]/μ )   for L < x
∫ |ψ|² dx = μ sin²(L/λ) + L − ½ λ sin(2L/λ) = μ + L
Any solution is proportional to the function expressed in either case above. The last line of each case indicates that (because of their respective quantization conditions) the norms of both tabulated functions have a unified expression. This is just a coincidence, since we merely took a priori the simplest choices among proportional expressions... Normalized functions are thus obtained by multiplying the above expressions by ε / √(μ+L) for some complex unit ε ( |ε| = 1 ).
The probability P( |x| > L ) of finding the particle outside the box also has a unified expression, valid for either parity of the wavefunction:
P( |x| > L ) = μ³ / [ ( μ² + λ² ) ( μ + L ) ]
Wavefunctions for a 3-dimensional box of dimensions a, b, c are obtained as products of the above types of functions of x, y or z, respectively.
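The even-parity quantization condition is transcendental and must be solved numerically. The sketch below is illustrative only: it works in units ħ = m = 1 (so 1/λ² + 1/μ² = 2V rather than the 8π²m/h² convention above), and L and V are arbitrary choices.

```python
import math
from scipy.optimize import brentq

# Even-parity ground state of a finite well of half-width L and depth V,
# in units hbar = m = 1: with k = 1/lambda and kappa = 1/mu, k**2 + kappa**2 = 2*V.
L, V = 1.0, 2.0
k0 = math.sqrt(2 * V)

def quantization(k):
    # even condition tan(L/lambda) = lambda/mu, i.e. tan(k*L) = kappa/k
    return math.tan(k * L) - math.sqrt(k0 ** 2 - k ** 2) / k

# The ground state lies in (0, pi/2L), where tan is continuous and a sign change occurs
k = brentq(quantization, 1e-6, min(k0, math.pi / (2 * L)) - 1e-6)
E = k ** 2 / 2  # energy above the well bottom; a bound state means E < V
assert 0 < E < V
```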
Come back later, we're
still working on this one...
(2005-07-10) Harmonic Oscillator
Quantization of energy in a parabolic well (Hooke's law).
Come back later, we're
still working on this one...
Hermite Polynomials | Charles Hermite (1822-1901; X1842)
(2005-07-10) Angular Momentum
The angular momentum of a rotator is quantized.
Come back later, we're
still working on this one...
(2005-07-10) Coulomb Potential
Classification of the orbitals corresponding to a Coulomb potential.
Come back later, we're
still working on this one...
Legendre Polynomials | Laguerre Polynomials
(2015-11-22) The Wallis Formula for π (John Wallis, 1655).
A quantum derivation by Tamar Friedmann and C. R. Hagen (2015).
Come back later, we're
still working on this one...
Tamar Friedmann & C. R. Hagen, Journal of Mathematical Physics 56, 112101 (2015).
New derivation of pi links quantum physics and pure math (AIP, 2015-11-10).
(2016-01-16) How tough is Schrödinger's equation, really?
Any homogeneous second-order linear differential equation reduces to it.
In one dimension, any second-order linear differential equation can be transformed into a Schrödinger equation or a Riccati equation, and vice-versa.
Come back later, we're
still working on this one...
The Anderson Institute
Where history is becoming an experimental science
Quantum Tunneling
An Overview and Comparison by Dr. David Lewis Anderson
Quantum Tunneling is an evanescent wave coupling effect that occurs in quantum mechanics. The correct wavelength combined with the proper tunneling barrier makes it possible to pass signals faster than light, backwards in time.
Quantum Tunneling Time Control
In the diagram above light pulses consisting of waves of various frequencies are shot toward a 10 centimeter chamber containing cesium vapor. All information about the incoming pulse is contained in the leading edge of its waves. This information is all the cesium atoms need to replicate the pulse and send it out the other side.
At the same time it is believed an opposite wave rebounds inside the chamber cancelling out the main part of the incoming pulse as it enters the chamber. By this time the new pulse, moving faster than the speed of light, has traveled about 60 feet beyond the chamber. Essentially the pulse has left the chamber before it finished entering, traveling backwards in time.
The key characteristics of the application of quantum tunneling for time control and time travel are presented in the picture below, followed by more detail describing the phenomenon.
Quantum Tunneling Time Control and Time Travel
Wave-mechanical tunneling (also called quantum-mechanical tunneling, quantum tunneling, and the tunnel effect) is an evanescent wave coupling effect that occurs in the context of quantum mechanics because the behavior of particles is governed by Schrödinger's wave-equation. All wave equations exhibit evanescent wave coupling effects if the conditions are right. Wave coupling effects mathematically equivalent to those called "tunneling" in quantum mechanics can occur with Maxwell's wave-equation (both with light and with microwaves), and with the common non-dispersive wave-equation often applied (for example) to waves on strings and to acoustics.
For these effects to occur there must be a situation where a thin region of "medium type 2" is sandwiched between two regions of "medium type 1", and the properties of these media have to be such that the wave equation has "traveling-wave" solutions in medium type 1, but "real exponential solutions" (rising and falling) in medium type 2. In optics, medium type 1 might be glass, medium type 2 might be vacuum. In quantum mechanics, in connection with motion of a particle, medium type 1 is a region of space where the particle total energy is greater than its potential energy, medium type 2 is a region of space (known as the "barrier") where the particle total energy is less than its potential energy.
If conditions are right, amplitude from a traveling wave, incident on medium type 2 from medium type 1, can "leak through" medium type 2 and emerge as a traveling wave in the second region of medium type 1 on the far side. If the second region of medium type 1 is not present, then the traveling wave incident on medium type 2 is totally reflected, although it does penetrate into medium type 2 to some extent. Depending on the wave equation being used, the leaked amplitude is interpreted physically as traveling energy or as a traveling particle, and, numerically, the ratio of the square of the leaked amplitude to the square of the incident amplitude gives the proportion of incident energy transmitted out the far side, or (in the case of the Schrödinger equation) the probability that the particle "tunnels" through the barrier.
Quantum Tunneling Introduction
The scale on which these "tunneling-like phenomena" occur depends on the wavelength of the traveling wave. For electrons the thickness of "medium type 2" (called in this context "the tunneling barrier") is typically a few nanometers; for alpha-particles tunneling out of a nucleus the thickness is very much less; for the analogous phenomenon involving light the thickness is very much greater.
With Schrödinger's wave-equation, the characteristic that defines the two media discussed above is the kinetic energy of the particle if it is considered as an object that could be located at a point. In medium type 1 the kinetic energy would be positive, in medium type 2 the kinetic energy would be negative. There is no inconsistency in this, because particles cannot physically be located at a point: they are always spread out ("delocalized") to some extent, and the kinetic energy of the delocalized object is always positive.
What is true is that it is sometimes mathematically convenient to treat particles as behaving like points, particularly in the context of Newton's Second Law and classical mechanics generally. In the past, people thought that the success of classical mechanics meant that particles could always and in all circumstances be treated as if they were located at points. But there never was any convincing experimental evidence that this was true when very small objects and very small distances are involved, and we now know that this viewpoint was mistaken. However, because it is still traditional to teach students early in their careers that particles behave like points, it sometimes comes as a big surprise for people to discover that it is well established that traveling physical particles always physically obey a wave-equation (even when it is convenient to use the mathematics of moving points). Clearly, a hypothetical classical point particle analyzed according to Newton's Laws could not enter a region where its kinetic energy would be negative. But, a real delocalized object, that obeys a wave-equation and always has positive kinetic energy, can leak through such a region if conditions are right. An approach to tunneling that avoids mention of the concept of "negative kinetic energy" is set out below in the section on "Schrödinger equation tunneling basics".
Quantum Tunneling Effect
Reflection and tunneling of an electron wave packet directed at a potential barrier. The bright spot moving to the left is the reflected part of the wave packet. A very dim spot can be seen moving to the right of the barrier; this is the small fraction of the wave packet that tunnels through the classically forbidden barrier. Also notice the interference fringes between the incoming and reflected waves.
An electron approaching a barrier has to be represented as a wave-train. This wave-train can sometimes be quite long – electrons in some materials can be 10 to 20 nm long. This makes animations difficult. If it were legitimate to represent the electron by a short wave-train, then tunneling could be represented as in the animation alongside.
It is sometimes said that tunneling occurs only in quantum mechanics. Unfortunately, this statement is a bit of a linguistic conjuring trick. As indicated above, "tunneling-type" evanescent-wave phenomena occur in other contexts too. But, until recently, it has only been in quantum mechanics that evanescent wave coupling has been called "tunneling". (However, there is an increasing tendency to use the label "tunneling" in other contexts too, and the names "photon tunneling" and "acoustic tunneling" are now used in the research literature.)
With regards to the mathematics of tunneling, a special problem arises. For simple tunneling-barrier models, such as the rectangular barrier, the Schrödinger equation can be solved exactly to give the value of the tunneling probability (sometimes called the "transmission coefficient"). Calculations of this kind make the general physical nature of tunneling clear. One would also like to be able to calculate exact tunneling probabilities for barrier models that are physically more realistic. However, when appropriate mathematical descriptions of barriers are put into the Schrödinger equation, then the result is an awkward non-linear differential equation. Usually, the equation is of a type where it is known to be mathematically impossible in principle to solve the equation exactly in terms of the usual functions of mathematical physics, or in any other simple way. Mathematicians and mathematical physicists have been working on this problem since at least 1813, and have been able to develop special methods for solving equations of this kind approximately. In physics these are known as "semi-classical" or "quasi-classical" methods. A common semi-classical method is the so-called WKB approximation (also known as the "JWKB approximation"). The first known attempt to use such methods to solve a tunneling problem in physics was made in 1928, in the context of field electron emission. It is sometimes considered that the first people to get the mathematics of applying this kind of approximation to tunneling fully correct (and to give reasonable mathematical proof that they had done so) were N. Fröman and P.O. Fröman, in 1965. Their complex ideas have not yet made it into theoretical-physics textbooks, which tend to give simpler (but slightly more approximate) versions of the theory. An outline of one particular semi-classical method is given below.
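For the rectangular barrier mentioned above, the Schrödinger equation can indeed be solved exactly. The sketch below evaluates the standard textbook closed-form transmission probability for a particle with energy E below the barrier height V0; the electron energies and barrier dimensions chosen are purely illustrative, not taken from the text:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def transmission(E_eV, V0_eV, width_nm, m=M_E):
    """Exact transmission probability for a rectangular barrier (E < V0):
        T = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E)))
    where kappa = sqrt(2 m (V0 - E)) / hbar is the decay constant
    of the evanescent wave inside the barrier."""
    E, V0 = E_eV * EV, V0_eV * EV
    a = width_nm * 1e-9
    kappa = math.sqrt(2.0 * m * (V0 - E)) / HBAR
    return 1.0 / (1.0 + (V0**2 * math.sinh(kappa * a)**2) / (4.0 * E * (V0 - E)))

# A 5 eV electron hitting a 10 eV barrier: thicker barriers transmit
# exponentially less, in line with the nanometre scale quoted above.
for a in (0.2, 0.5, 1.0):  # barrier width in nm
    print(f"a = {a} nm  T = {transmission(5.0, 10.0, a):.3e}")
```

The rapid fall-off with width is why, for electrons, tunneling barriers are typically only a few nanometers thick.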
Three notes may be helpful. In general, students taking physics courses in quantum mechanics are presented with problems (such as the quantum mechanics of the hydrogen atom) for which exact mathematical solutions to the Schrödinger equation exist. Tunneling through a realistic barrier is a reasonably basic physical phenomenon. So it is sometimes the first problem that students encounter where it is mathematically impossible in principle to solve the Schrödinger equation exactly in any simple way. Thus, it may also be the first occasion on which they encounter the "semi-classical-method" mathematics needed to solve the Schrödinger equation approximately for such problems. Not surprisingly, this mathematics is likely to be unfamiliar, and may feel "odd". Unfortunately, it also comes in several different variants, which doesn't help.
Also, some accounts of tunneling seem to be written from a philosophical viewpoint that a particle is "really" point-like, and just has wave-like behavior. There is very little experimental evidence to support this viewpoint. A preferable philosophical viewpoint is that the particle is "really" delocalized and wave-like, and always exhibits wave-like behavior, but that in some circumstances it is convenient to use the mathematics of moving points to describe its motion. This second viewpoint is used in this section. The precise nature of this wave-like behavior is, however, a much deeper matter, beyond the scope of this article on tunneling.
Although the phenomenon under discussion here is usually called "quantum tunneling" or "quantum-mechanical tunneling", it is the wave-like aspects of particle behavior that are important in tunneling theory, rather than effects relating to the quantization of the particle's energy states. For this reason, some writers prefer to call the phenomenon "wave-mechanical tunneling".
By 1928, George Gamow had solved the theory of the alpha decay of a nucleus via tunneling. Classically, the particle is confined to the nucleus because of the high energy requirement to escape the very strong potential. Under this system, it takes an enormous amount of energy to pull apart the nucleus. In quantum mechanics, however, there is a probability the particle can tunnel through the potential and escape. Gamow solved a model potential for the nucleus and derived a relationship between the half-life of the particle and the energy of the emission.
Alpha decay via tunneling was also solved concurrently by Ronald Gurney and Edward Condon. Shortly thereafter, both groups considered whether particles could also tunnel into the nucleus.
After attending a seminar by Gamow, Max Born recognized the generality of quantum-mechanical tunneling. He realized that the tunneling phenomenon was not restricted to nuclear physics, but was a general result of quantum mechanics that applies to many different systems. Today the theory of tunneling is even applied to the early cosmology of the universe.
Quantum tunneling was later applied to other situations, such as the cold emission of electrons, and perhaps most importantly semiconductor and superconductor physics. Phenomena such as field emission, important to flash memory, are explained by quantum tunneling. Tunneling is a source of major current leakage in Very-large-scale integration (VLSI) electronics, and results in the substantial power drain and heating effects that plague high-speed and mobile technology.
Another major application is in electron-tunneling microscopes which can resolve objects that are too small to see using conventional microscopes. Electron tunneling microscopes overcome the limiting effects of conventional microscopes (optical aberrations, wavelength limitations) by scanning the surface of an object with tunneling electrons.
Quantum tunneling has been shown to be a mechanism used by enzymes to enhance reaction rates. It has been demonstrated that enzymes use tunneling to transfer both electrons and nuclei such as hydrogen and deuterium. It has even been shown, in the enzyme glucose oxidase, that oxygen nuclei can tunnel under physiological conditions. |
Why the least action: a fact or a meaning ?
1. Jun 28, 2006 #1
Have some people tried to find a meaning to the principle of least action that apparently underlies the whole physics? I know of one attempt, but not convincing to me (°). A convincing attempt, even modest, should suggest why it occurs, what is/could be behind the scene and how it might lead us to new discoveries.
The link from QM/Schroedinger to CM/Newton is a clear explanation for the classical least action. But the surprise is that least action can be found nearly everywhere, even as a basis for QFT (isn't it?).
(°) this is how I understood the book by Roy Frieden "Science from fisher information"
3. Jun 28, 2006 #2
Feynman gave a beautiful "justification" or explanation of this principle when dealing with the path integral. If you have:
[tex] \int D[\phi]e^{iS[\phi]/\hbar} [/tex]
then in the classical limit h-->0, so only the points for which the integrand has a maximum or a minimum contribute to the integration; in our case the maximum or minimum is given by the equation [tex] \delta S =0 [/tex], which is precisely the "Principle of Least Action"... Unfortunately, following Feynman, there are no variational principles in quantum mechanics.
4. Jun 30, 2006 #3
The Schrödinger equation has also a Lagrangian and can be derived from a least action principle.
Other systems surprisingly also have a Lagrangian and a least action principle:
the classical damped oscillator, and the diffusion equation, for example !
Clearly this pictorial explanation covers only one case: the CM least action derived from the stationary phase limit of QM. Yet least action is seen nearly everywhere. This is why I asked the PF whether there is an explanation or a meaning behind that.
Would it be possible that a very wide range of differential equations can be reformulated as a least action principle? Then the explanation would be general mathematics, and the meaning would not be much of physics. This would translate my question to something like "why is physics based on differential equations?".
Or is there more to learn on physics from the LAP ?
Last edited: Jun 30, 2006
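The stationarity discussed in the posts above can be illustrated numerically: for a harmonic oscillator over less than half a period, a discretized action evaluated on the exact classical path is smaller than on any nearby trial path with the same endpoints. The grid size, oscillator constants and trial perturbation below are illustrative choices, not taken from the thread:

```python
import numpy as np

def action(x, dt, m=1.0, k=1.0):
    """Discretized action S = sum [ (1/2) m v^2 - (1/2) k x^2 ] dt
    for a unit harmonic oscillator, evaluated on a sampled path x(t)."""
    v = np.diff(x) / dt                  # forward-difference velocity
    xm = 0.5 * (x[:-1] + x[1:])          # midpoint positions
    return np.sum(0.5 * m * v**2 - 0.5 * k * xm**2) * dt

n = 201
t = np.linspace(0.0, np.pi / 2.0, n)     # a quarter period (omega = 1)
dt = t[1] - t[0]
classical = np.sin(t)                    # exact solution with x(0)=0, x(T)=1

bump = np.sin(np.pi * t / t[-1])         # perturbation vanishing at both ends
s_classical = action(classical, dt)
for eps in (0.05, 0.2, 0.5):             # every perturbed path costs more
    assert action(classical + eps * bump, dt) > s_classical
print("classical action:", s_classical)
```

Over intervals shorter than half a period the classical path is a genuine minimum; over longer intervals it is only a stationary point, which is why "stationary action" is the more careful name.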
Equations of motion
From Wikipedia, the free encyclopedia
In mathematical physics, equations of motion are equations that describe the behaviour of a physical system in terms of its motion as a function of time.[1] More specifically, the equations of motion describe the behaviour of a physical system as a set of mathematical functions in terms of dynamic variables: normally spatial coordinates and time are used, but others are also possible, such as momentum components and time. The most general choice are generalized coordinates, which can be any convenient variables characteristic of the physical system.[2] The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations of motion are the solutions of the differential equations describing that dynamics.
There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term refers to the differential equations that the system satisfies (e.g., Newton's second law or Euler–Lagrange equations), and sometimes to the solutions to those equations.
However, kinematics is simpler as it concerns only variables derived from the positions of objects, and time. In circumstances of constant acceleration, these simpler equations of motion are usually referred to as the "SUVAT" equations, arising from the definitions of kinematic quantities: displacement (S), initial velocity (U), final velocity (V), acceleration (A), and time (T). (see below).
Equations of motion can therefore be grouped under these main classifiers of motion. In all cases, the main types of motion are translations, rotations, oscillations, or any combinations of these.
A differential equation of motion, usually identified as some physical law and applying definitions of physical quantities, is used to set up an equation for the problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a family of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants.
To state this formally, in general an equation of motion M is a function of the position r of the object, its velocity (the first time derivative of r, v = dr/dt), and its acceleration (the second derivative of r, a = d2r/dt2), and time t. Euclidean vectors in 3d are denoted throughout in bold. This is equivalent to saying an equation of motion in r is a second order ordinary differential equation (ODE) in r,
M\left[\mathbf{r}(t),\mathbf{\dot{r}}(t),\mathbf{\ddot{r}}(t),t\right]=0 \,,
where t is time, and each overdot denotes one time derivative. The initial conditions are given by the constant values at t = 0,
\mathbf{r}(0) \,, \quad \mathbf{\dot{r}}(0) \,.
The solution r(t) to the equation of motion, with specified initial values, describes the system for all times t after t = 0. Other dynamical variables like the momentum p of the object, or quantities derived from r and p like angular momentum, can be used in place of r as the quantity to solve for from some equation of motion, although the position of the object at time t is by far the most sought-after quantity.
Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear, and cannot be solved exactly so a variety of approximations must be used. The solutions to nonlinear equations may show chaotic behavior depending on how sensitive the system is to the initial conditions.
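When an equation of motion cannot be solved exactly, it is integrated numerically. As a generic sketch (not tied to any particular system in the article), a classical fourth-order Runge-Kutta integrator applied to the nonlinear pendulum theta'' = -(g/L) sin(theta), rewritten as a first-order system:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,     [yi + h * ki     for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Nonlinear pendulum as a first-order system y = [theta, omega].
g, L = 9.81, 1.0

def pendulum(t, y):
    theta, omega = y
    return [omega, -(g / L) * math.sin(theta)]

y, t, h = [0.1, 0.0], 0.0, 1.0e-3       # released from rest at 0.1 rad
for _ in range(5000):                    # integrate to t = 5 s
    y = rk4_step(pendulum, t, y, h)
    t += h
print(t, y)
```

A useful check on such an integration is conservation of the energy (1/2) omega^2 + (g/L)(1 - cos theta), which the exact dynamics preserves.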
Historically, equations of motion first appeared in classical mechanics to describe the motion of massive objects, a notable application was to celestial mechanics to predict the motion of the planets as if they orbit like clockwork (this was how Neptune was predicted before its discovery), and also investigate the stability of the solar system.
It is important to observe that the huge body of work involving kinematics, dynamics and the mathematical models of the universe developed in baby steps - faltering, getting up and correcting itself - over three millennia and included contributions of both known names and others who have since faded from the annals of history.
In antiquity, notwithstanding the success of priests, astrologers and astronomers in predicting solar and lunar eclipses, the solstices and equinoxes of the Sun, and the period of the Moon, there was nothing other than a set of algorithms to help them. Despite the great strides made in the development of geometry in Ancient Greece and of surveying in Rome, we were to wait another thousand years before the first equations of motion arrived.
The exposure of Europe to the works of the Greeks, the Indians and the Islamic scholars collected by the Muslims, such as Euclid's Elements, the works of Archimedes, and Al-Khwārizmī's treatises,[3] began in Spain; scholars from all over Europe went to Spain to read, copy and translate this learning into Latin. Exposure to Indo-Arabic numerals and their ease in computation encouraged first the scholars and then the merchants to learn them, and invigorated the spread of knowledge throughout Europe.
By the 13th century the universities of Oxford and Paris had been established, and scholars could now study mathematics and philosophy with fewer worries about the mundane chores of life; the fields were not as clearly demarcated as they are in modern times. Of these works, compendia and redactions of Euclid and Aristotle, such as those of Johannes Campanus, confronted scholars with ideas about infinity and the ratio theory of elements as a means of expressing relations between the various quantities involved with moving bodies. These studies led to a new body of knowledge that is now known as physics.
Of these institutes, Merton College sheltered a group of scholars devoted to natural science, mainly physics, astronomy and mathematics, similar in stature to the intellectuals at the University of Paris. Thomas Bradwardine, one of those scholars, extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested an exponential law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. The Merton school proved that the quantity of motion of a body undergoing uniformly accelerated motion is equal to the quantity of a uniform motion at the speed achieved halfway through the accelerated motion.
For writers on kinematics before Galileo, since small time intervals could not be measured, the affinity between time and motion was obscure. They used time as a function of distance, and in free fall, greater velocity as a result of greater elevation. Only Domingo de Soto, a Spanish theologian, in his commentary on Aristotle's Physics published in 1545, after defining "uniform difform" motion (which is uniformly accelerated motion; the word velocity was not used) as proportional to time, declared correctly that this kind of motion was identifiable with freely falling bodies and projectiles, without proving these propositions or suggesting a formula relating time, velocity and distance. De Soto's comments are remarkably correct regarding the definitions of acceleration (a rate of change of motion, that is velocity, in time) and the observation that during the violent motion of ascent acceleration would be negative.
Discourses such as these spread throughout Europe, definitely influenced Galileo and others, and helped in laying the foundation of kinematics.[4] Galileo deduced the equation s = \tfrac{1}{2} g t^2 in his work geometrically,[5] using Merton's rule, now known as a special case of one of the equations of kinematics. He could not use the now-familiar mathematical reasoning: the relationships between speed, distance, time and acceleration were not known at the time.
Galileo was the first to show that the path of a projectile is a parabola. Galileo had an understanding of centrifugal force and gave a correct definition of momentum. This emphasis on momentum as a fundamental quantity in dynamics is of prime importance. He measured momentum by the product of velocity and weight; mass is a later concept, developed by Huygens and Newton. In the swinging of a simple pendulum, Galileo says in Discourses[6] that "every momentum acquired in the descent along an arc is equal to that which causes the same moving body to ascend through the same arc." His analysis of projectiles indicates that Galileo had grasped the first law and the second law of motion. He did not generalize and make them applicable to bodies not subject to the earth's gravitation. That step was Newton's contribution.
The term "inertia" was used by Kepler who applied it to bodies at rest.The first law of motion is now often called the law of inertia.
Galileo did not fully grasp the third law of motion, the law of the equality of action and reaction, though he corrected some errors of Aristotle. With Stevin and others Galileo also wrote on statics. He formulated the principle of the parallelogram of forces, but he did not fully recognize its scope.
Galileo was also interested in the laws of the pendulum; his first observations were made when he was a young man. In 1583, while he was praying in the cathedral at Pisa, his attention was arrested by the motion of the great lamp, lighted and left swinging, which he timed against his own pulse. The period appeared the same to him even after the motion had greatly diminished; he had discovered the isochronism of the pendulum.
More careful experiments carried out by him later, and described in his Discourses, revealed the period of oscillation to be independent of the mass and material of the pendulum and to vary as the square root of its length.
Thus we arrive at Rene Descartes, Isaac Newton, Leibniz, et al; and the evolved forms of the equations of motion that begin to be recognized as the modern ones.
Later the Equations of Motion also appeared in electrodynamics, when describing the motion of charged particles in electric and magnetic fields, the Lorentz force is the general equation which serves as the definition of what is meant by an electric field and magnetic field. With the advent of special relativity and general relativity, the theoretical modifications to spacetime meant the classical equations of motion were also modified to account for the finite speed of light, and curvature of spacetime. In all these cases the differential equations were in terms of a function describing the particle's trajectory in terms of space and time coordinates, as influenced by forces or energy transformations.[7]
However, the equations of quantum mechanics can also be considered "equations of motion", since they are differential equations of the wavefunction, which describes how a quantum state behaves analogously using the space and time coordinates of the particles. There are analogs of equations of motion in other areas of physics, for collections of physical phenomena that can be considered waves, fluids, or fields.
Kinematic equations for one particle[edit]
Kinematic quantities[edit]
From the instantaneous position r = r(t), instantaneous meaning at an instant value of time t, the instantaneous velocity v = v(t) and acceleration a = a(t) have the general, coordinate-independent definitions;[8]
\mathbf{v} = \frac{d \mathbf{r}}{d t} \,, \quad \mathbf{a} = \frac{d \mathbf{v}}{d t} = \frac{d^2 \mathbf{r}}{d t^2} \,\!
Notice that velocity always points in the direction of motion, in other words for a curved path it is the tangent vector. Loosely speaking, first order derivatives are related to tangents of curves. Still for curved paths, the acceleration is directed towards the center of curvature of the path. Again, loosely speaking, second order derivatives are related to curvature.
The rotational analogues are the "angular vector" (angle the particle rotates about some axis) θ = θ(t), angular velocity ω = ω(t), and angular acceleration α = α(t):
\boldsymbol{\theta} = \theta \hat{\mathbf{n}} \,,\quad \boldsymbol{\omega} = \frac{d \boldsymbol{\theta}}{d t} \,, \quad \boldsymbol{\alpha}= \frac{d \boldsymbol{\omega}}{d t} \,,
where n is a unit vector in the direction of the axis of rotation, and θ is the angle the object turns through about the axis.
The following relation holds for a point-like particle, orbiting about some axis with angular velocity ω:[9]
\mathbf{v} = \boldsymbol{\omega}\times \mathbf{r} \,\!
where r is the position vector of the particle (radial from the rotation axis) and v the tangential velocity of the particle. For a rotating continuum rigid body, these relations hold for each point in the rigid body.
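The relation v = ω × r is easy to check numerically; in this sketch the axis, rotation rate and position are arbitrary illustrative values:

```python
import numpy as np

# A particle rotating about the z-axis with angular speed 2 rad/s:
omega = np.array([0.0, 0.0, 2.0])   # angular velocity vector (rad/s)
r = np.array([3.0, 0.0, 0.0])       # position, radial from the axis (m)

v = np.cross(omega, r)              # tangential velocity v = omega x r
print(v)                            # along +y, magnitude |omega||r| = 6 m/s
```

The result is perpendicular to both ω and r, as expected for circular motion about the axis.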
Uniform acceleration[edit]
The differential equation of motion for a particle of constant or uniform acceleration in a straight line is simple: the acceleration is constant, so the second derivative of the position of the object is constant. The results of this case are summarized below.
Constant translational acceleration in a straight line[edit]
These equations apply to a particle moving linearly, in three dimensions in a straight line with constant acceleration.[10] Since the position, velocity, and acceleration are collinear (parallel, and lie on the same line) - only the magnitudes of these vectors are necessary, and because the motion is along a straight line, the problem effectively reduces from three dimensions to one.
\begin{align}
v & = at+v_0 \quad [1]\\
r & = r_0 + v_0 t + \frac{{a}t^2}{2} \quad [2]\\
r & = r_0 + \left( \frac{v+v_0}{2} \right )t \quad [3]\\
v^2 & = v_0^2 + 2a\left( r - r_0 \right) \quad [4]\\
r & = r_0 + vt - \frac{{a}t^2}{2} \quad [5]\\
\end{align}
Here a is constant acceleration, or in the case of bodies moving under the influence of gravity, the standard gravity g is used. Note that each of the equations contains four of the five variables, so in this situation it is sufficient to know three out of the five variables to calculate the remaining two.
In elementary physics the same formulae are frequently written in different notation as:
\begin{align}
v & = u + at \quad [1] \\
s & = ut + \frac{1}{2} at^2 \quad [2] \\
s & = \frac{1}{2}(u + v)t \quad [3] \\
v^2 & = u^2 + 2as \quad [4] \\
s & = vt - \frac{1}{2}at^2 \quad [5] \\
\end{align}
where u has replaced v0, s replaces r, and s0 = 0. They are often referred to as the "SUVAT" equations, where "SUVAT" is an acronym from the variables: s = displacement (s0 = initial displacement), u = initial velocity, v = final velocity, a = acceleration, t = time.[11][12]
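Because each SUVAT equation contains four of the five variables, values computed from two of the equations can be cross-checked against the remaining ones. A small sketch, with illustrative values for u, a and t:

```python
def suvat(u, a, t):
    """Constant-acceleration kinematics: displacement and final velocity
    after time t, from equations [1] and [2]."""
    v = u + a * t                   # [1]
    s = u * t + 0.5 * a * t * t     # [2]
    return s, v

# Illustrative values: a ball thrown up at 5 m/s under gravity.
u, a, t = 5.0, -9.81, 1.0
s, v = suvat(u, a, t)

# Equations [3] and [4] must agree with the values from [1] and [2]:
assert abs(s - 0.5 * (u + v) * t) < 1e-9           # [3]
assert abs(v * v - (u * u + 2.0 * a * s)) < 1e-9   # [4]
print(s, v)
```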
Constant linear acceleration in any direction[edit]
Trajectory of a particle with initial position vector r0 and velocity v0, subject to constant acceleration a, all three quantities in any direction, and the position r(t) and velocity v(t) after time t.
The initial position, initial velocity, and acceleration vectors need not be collinear, and take an almost identical form. The only difference is that the square magnitudes of the velocities require the dot product. The derivations are essentially the same as in the collinear case,
\begin{align}
\mathbf{v} & = \mathbf{a}t+\mathbf{v}_0 \quad [1]\\
\mathbf{r} & = \mathbf{r}_0 + \mathbf{v}_0 t + \frac{{\mathbf{a}}t^2}{2} \quad [2]\\
\mathbf{r} & = \mathbf{r}_0 + \left( \frac{\mathbf{v}+\mathbf{v}_0}{2} \right )t \quad [3]\\
v^2 & = v_0^2 + 2\mathbf{a}\cdot\left( \mathbf{r} - \mathbf{r}_0 \right) \quad [4]\\
\mathbf{r} & = \mathbf{r}_0 + \mathbf{v}t - \frac{{\mathbf{a}}t^2}{2} \quad [5]\\
\end{align}
although the Torricelli equation [4] can be derived using the distributive property of the dot product as follows:
v^{2} = \mathbf{v}\cdot\mathbf{v} = (\mathbf{v}_0+\mathbf{a}t)\cdot(\mathbf{v}_0+\mathbf{a}t)=v_0^{2}+2t(\mathbf{a}\cdot\mathbf{v}_0)+a^{2}t^{2}
(2\mathbf{a})\cdot(\mathbf{r}-\mathbf{r}_0) = (2\mathbf{a})\cdot\left(\mathbf{v}_0t+\frac{1}{2}\mathbf{a}t^{2}\right)=2t(\mathbf{a}\cdot\mathbf{v}_0)+a^{2}t^{2} = v^{2} - v_0^{2}
\therefore v^{2} = v_0^{2} + 2(\mathbf{a}\cdot(\mathbf{r}-\mathbf{r}_0))
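The vector form of [4] can also be verified numerically for arbitrary vectors r0, v0 and a (the values below are randomly generated, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
r0 = rng.normal(size=3)   # initial position (arbitrary)
v0 = rng.normal(size=3)   # initial velocity (arbitrary)
a = rng.normal(size=3)    # constant acceleration (arbitrary)
t = 1.7

v = v0 + a * t                        # [1]
r = r0 + v0 * t + 0.5 * a * t**2      # [2]

# Torricelli's equation [4], with the dot product in place of products:
lhs = v @ v
rhs = v0 @ v0 + 2.0 * (a @ (r - r0))
assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)
```

That the identity holds for any vectors, not just collinear ones, mirrors the derivation via the distributive property of the dot product above.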
Elementary and frequent examples in kinematics involve projectiles, for example a ball thrown upwards into the air. Given initial speed u, one can calculate how high the ball will travel before it begins to fall. The acceleration is local acceleration of gravity g. At this point one must remember that while these quantities appear to be scalars, the direction of displacement, speed and acceleration is important. They could in fact be considered as uni-directional vectors. Choosing s to measure up from the ground, the acceleration a must be in fact −g, since the force of gravity acts downwards and therefore also the acceleration on the ball due to it.
At the highest point, the ball will be at rest: therefore v = 0. Using equation [4] in the set above, we have:
0 = u^2 + 2(-g)s
Substituting and cancelling minus signs gives:
s = \frac{u^2}{2g}.
Constant circular acceleration[edit]
The analogues of the above equations can be written for rotation. Again these axial vectors must all be parallel to the axis of rotation, so only the magnitudes of the vectors are necessary,
\begin{align}
\omega & = \omega_0 + \alpha t \\
\theta &= \theta_0 + \omega_0t + \tfrac12\alpha t^2 \\
\theta & = \theta_0 + \tfrac12(\omega_0 + \omega)t \\
\omega^2 & = \omega_0^2 + 2\alpha(\theta - \theta_0) \\
\theta & = \theta_0 + \omega t - \tfrac12\alpha t^2 \\
\end{align}
where α is the constant angular acceleration, ω is the angular velocity, ω0 is the initial angular velocity, θ is the angle turned through (angular displacement), θ0 is the initial angle, and t is the time taken to rotate from the initial state to the final state.
General planar motion[edit]
Main article: General planar motion
Position vector r, always points radially from the origin.
Velocity vector v, always tangent to the path of motion.
These are the kinematic equations for a particle traversing a path in a plane, described by position r = r(t).[13] They are simply the time derivatives of the position vector in plane polar coordinates using the definitions of physical quantities above for angular velocity ω and angular acceleration α.
The position, velocity and acceleration of the particle are respectively:
\begin{align}
\mathbf{r} & =\mathbf{r}\left ( r(t),\theta(t) \right ) = r \mathbf{\hat{e}}_r \\
\mathbf{v} & = \mathbf{\hat{e}}_r \frac{d r}{dt} + r \omega \mathbf{\hat{e}}_\theta \\
\mathbf{a} & =\left ( \frac{d^2 r}{dt^2} - r\omega^2\right )\mathbf{\hat{e}}_r + \left ( r \alpha + 2 \omega \frac{dr}{dt} \right )\mathbf{\hat{e}}_\theta
\end{align} \,\!
where \mathbf{\hat{e}}_r, \mathbf{\hat{e}}_\theta are the polar unit vectors. For the velocity v, dr/dt is the component of velocity in the radial direction, and rω is the additional component due to the rotation. For the acceleration a, −rω² is the centripetal acceleration and 2ω dr/dt the Coriolis acceleration, in addition to the radial acceleration d²r/dt² and the angular-acceleration term rα.
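These polar formulas can be verified against a direct finite-difference derivative of the Cartesian position; the sketch below uses the arbitrary illustrative path r(t) = t, θ(t) = t, for which the formulas give a_r = −t and a_θ = 2:

```python
# Numerical check of the polar-coordinate acceleration formulas above for
# the spiral r(t) = t, θ(t) = t (an arbitrary illustrative path).
import math

def pos(t):
    # Cartesian position of the spiral: x = t·cos t, y = t·sin t
    return (t * math.cos(t), t * math.sin(t))

def polar_accel(t):
    # a_r = r'' − rω²,  a_θ = rα + 2ω·r'  with r = t, r' = 1, ω = 1, α = 0
    return (-t, 2.0)

t, h = 1.3, 1e-4
# Central second finite difference of the Cartesian position:
ax, ay = ((p2 - 2 * p1 + p0) / h**2
          for p0, p1, p2 in zip(pos(t - h), pos(t), pos(t + h)))
# Project onto the polar unit vectors ê_r = (cosθ, sinθ), ê_θ = (−sinθ, cosθ):
er, et = (math.cos(t), math.sin(t)), (-math.sin(t), math.cos(t))
a_r = ax * er[0] + ay * er[1]
a_t = ax * et[0] + ay * et[1]
assert abs(a_r - polar_accel(t)[0]) < 1e-4
assert abs(a_t - polar_accel(t)[1]) < 1e-4
```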
Special cases of motion described by these equations are summarized qualitatively in the table below. Two have already been discussed above, in the cases that either the radial or the angular components are zero, and the non-zero component of motion describes uniform acceleration.
State of motion | Constant r | r linear in t | r quadratic in t | r non-linear in t
Constant θ | Stationary | Uniform translation (constant translational velocity) | Uniform translational acceleration | Non-uniform translation
θ linear in t | Uniform angular motion in a circle (constant angular velocity) | Uniform angular motion in a spiral, constant radial velocity | Angular motion in a spiral, constant radial acceleration | Angular motion in a spiral, varying radial acceleration
θ quadratic in t | Uniform angular acceleration in a circle | Uniform angular acceleration in a spiral, constant radial velocity | Uniform angular acceleration in a spiral, constant radial acceleration | Uniform angular acceleration in a spiral, varying radial acceleration
θ non-linear in t | Non-uniform angular acceleration in a circle | Non-uniform angular acceleration in a spiral, constant radial velocity | Non-uniform angular acceleration in a spiral, constant radial acceleration | Non-uniform angular acceleration in a spiral, varying radial acceleration
General 3d motion
In 3d space, using spherical coordinates (r, θ, ϕ) with corresponding unit vectors \mathbf{\hat{e}}_r, \mathbf{\hat{e}}_\theta, \mathbf{\hat{e}}_\phi, the position, velocity, and acceleration generalize respectively to
\begin{align}
\mathbf{r} & = \mathbf{r}(t) = r \mathbf{\hat{e}}_r \\
\mathbf{v} & = \frac{dr}{dt} \mathbf{\hat{e}}_r + r\,\frac{d\theta}{dt}\mathbf{\hat{e}}_\theta + r\,\frac{d\phi}{dt}\,\sin\theta\, \mathbf{\hat{e}}_\phi \\
\mathbf{a} & = \left( \frac{d^2 r}{dt^2} - r\left(\frac{d\theta}{dt}\right)^2 - r\left(\frac{d\phi}{dt}\right)^2\sin^2\theta \right)\mathbf{\hat{e}}_r \\
& + \left( r \frac{d^2 \theta}{dt^2} + 2\frac{dr}{dt}\frac{d\theta}{dt} - r\left(\frac{d\phi}{dt}\right)^2\sin\theta\cos\theta \right) \mathbf{\hat{e}}_\theta \\
& + \left( r\frac{d^2 \phi}{dt^2}\,\sin\theta + 2\frac{dr}{dt}\,\frac{d\phi}{dt}\,\sin\theta + 2 r\,\frac{d\theta}{dt}\,\frac{d\phi}{dt}\,\cos\theta \right) \mathbf{\hat{e}}_\phi
\end{align}
In the case of a constant ϕ this reduces to the planar equations above.
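A quick finite-difference check of the spherical velocity formula (the trajectory r(t), θ(t), ϕ(t) below is an arbitrary assumption for illustration; since the unit vectors are orthonormal, comparing speeds suffices):

```python
# Finite-difference check of the spherical-coordinate velocity formula above
# for an arbitrary illustrative trajectory r(t), θ(t), ϕ(t).
import math

def rtp(t):
    # (r, θ, ϕ) along the path: r' = 1, θ' = 0.2, ϕ' = 0.3
    return (1.0 + t, 0.7 + 0.2 * t, 0.3 * t)

def cart(t):
    # spherical → Cartesian
    r, th, ph = rtp(t)
    return (r * math.sin(th) * math.cos(ph),
            r * math.sin(th) * math.sin(ph),
            r * math.cos(th))

t, h = 0.9, 1e-6
vx, vy, vz = ((b - a) / (2 * h) for a, b in zip(cart(t - h), cart(t + h)))
r, th, ph = rtp(t)
# Formula: v = r' ê_r + r θ' ê_θ + r ϕ' sinθ ê_ϕ; the unit vectors are
# orthonormal, so the speed is the root sum of squares of the components.
speed_formula = math.sqrt(1.0**2 + (r * 0.2)**2 + (r * 0.3 * math.sin(th))**2)
speed_numeric = math.sqrt(vx**2 + vy**2 + vz**2)
assert abs(speed_formula - speed_numeric) < 1e-6
```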
Dynamic equations of motion
Newtonian mechanics
Main article: Newtonian mechanics
The first general equation of motion developed was Newton's second law of motion, which in its most general form states that the rate of change of momentum p = p(t) = mv(t) of an object equals the force F = F(x(t), v(t), t) acting on it:[14]
\mathbf{F} = \frac{d\mathbf{p}}{dt}
The force in the equation is the force acting on the object, not a force the object exerts. Replacing the momentum by mass times velocity, the law is also written more famously as
\mathbf{F} = m\mathbf{a}
since m is a constant in Newtonian mechanics.
Newton's second law applies to point-like particles, and to all points in a rigid body. It also applies to each point in a mass continuum, like deformable solids or fluids, but the motion of the system must be accounted for; see material derivative. If the mass is not constant, it is not sufficient to use the product rule for the time derivative of mass times velocity, and Newton's second law requires some modification consistent with conservation of momentum; see variable-mass system.
It may be simple to write down the equations of motion in vector form using Newton's laws of motion, but the components may vary in complicated ways with spatial coordinates and time, and solving them is not easy. Often there is an excess of variables to solve for the problem completely, so Newton's laws are not always the most efficient way to determine the motion of a system. In simple cases of rectangular geometry, Newton's laws work fine in Cartesian coordinates, but in other coordinate systems can become dramatically complex.
The momentum form is preferable since this is readily generalized to more complex systems, generalizes to special and general relativity (see four-momentum).[14] It can also be used with the momentum conservation. However, Newton's laws are not more fundamental than momentum conservation, because Newton's laws are merely consistent with the fact that zero resultant force acting on an object implies constant momentum, while a resultant force implies the momentum is not constant. Momentum conservation is always true for an isolated system not subject to resultant forces.
For a number of particles (see many body problem), the equation of motion for one particle i influenced by other particles is[8][15]
\frac{d\mathbf{p}_i}{dt} = \mathbf{F}_{E} + \sum_{i \neq j} \mathbf{F}_{ij} \,\!
where pi is the momentum of particle i, Fij is the force on particle i by particle j, and FE is the resultant external force due to any agent not part of system. Particle i does not exert a force on itself.
Euler's laws of motion are similar to Newton's laws, but they are applied specifically to the motion of rigid bodies. The Newton–Euler equations combine the forces and torques acting on a rigid body into a single equation.
Newton's second law for rotation takes a similar form to the translational case,[16]
\boldsymbol{\tau} = \frac{d\mathbf{L}}{dt} \,,
by equating the torque acting on the body to the rate of change of its angular momentum L. Analogous to mass times acceleration, the moment of inertia tensor I depends on the distribution of mass about the axis of rotation, and the angular acceleration is the rate of change of angular velocity,
\boldsymbol{\tau} = \mathbf{I} \cdot \boldsymbol{\alpha}.
Again, these equations apply to point-like particles, or to each point of a rigid body.
Likewise, for a number of particles, the equation of motion for one particle i is[17]
\frac{d\mathbf{L}_i}{dt} = \boldsymbol{\tau}_E + \sum_{i \neq j} \boldsymbol{\tau}_{ij} \,,
where Li is the angular momentum of particle i, τij the torque on particle i by particle j, and τE = resultant external torque (due to any agent not part of system). Particle i does not exert a torque on itself.
Some examples[18] of Newton's law include describing the motion of a simple pendulum,
- mg\sin\theta = m\frac{d^2 (\ell\theta)}{d t^2} \quad \Rightarrow \quad \frac{d^2 \theta}{d t^2} = - \frac{g}{\ell}\sin\theta \,,
and a damped, sinusoidally driven harmonic oscillator,
F_0 \sin(\omega t) = m\left(\frac{d^2x}{dt^2} + 2\zeta\omega_0\frac{dx}{dt} + \omega_0^2 x \right)\,.
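The pendulum equation above has no elementary closed-form solution, but it is easy to integrate numerically; in the sketch below (assumed values g = 9.81 m/s², ℓ = 1 m) the small-amplitude period approaches the familiar 2π√(ℓ/g):

```python
# Numerical integration of the pendulum equation θ'' = −(g/ℓ) sin θ.
# Parameter values are arbitrary assumptions for illustration.
import math

def pendulum_period(theta0, g=9.81, ell=1.0, dt=1e-5):
    """Period of release-from-rest oscillation with amplitude theta0 > 0."""
    theta, omega, t = theta0, 0.0, 0.0
    while True:                     # integrate until θ first crosses zero
        omega += -(g / ell) * math.sin(theta) * dt   # semi-implicit Euler
        theta += omega * dt
        t += dt
        if theta <= 0.0:
            return 4 * t            # quarter period × 4

T_small = pendulum_period(0.01)
# Small amplitude: period → 2π√(ℓ/g) ≈ 2.006 s
assert abs(T_small - 2 * math.pi * math.sqrt(1.0 / 9.81)) < 1e-2
# Large swings take longer (the sin θ nonlinearity):
assert pendulum_period(2.0) > T_small
```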
For describing the motion of masses due to gravity, Newton's law of gravity can be combined with Newton's second law. For example, a ball of mass m thrown in the air, in air currents (such as wind) described by a vector field of resistive forces R = R(r, t):
- \frac{GmM}{|\mathbf{r}|^2} \mathbf{\hat{e}}_r + \mathbf{R} = m\frac{d^2 \mathbf{r}}{d t^2} \quad \Rightarrow \quad \frac{d^2 \mathbf{r}}{d t^2} = - \frac{GM}{|\mathbf{r}|^2} \mathbf{\hat{e}}_r + \mathbf{A}
where G is the gravitational constant, M the mass of the Earth, and A = R/m is the acceleration of the projectile due to the air currents at position r and time t.
The classical N-body problem for N particles each interacting with each other due to gravity is a set of N nonlinear coupled second order ODEs,
\frac{d^2\mathbf{r}_i}{dt^2} = G\sum_{i\neq j}\frac{m_j}{|\mathbf{r}_j - \mathbf{r}_i|^3} (\mathbf{r}_j - \mathbf{r}_i)
where i = 1, 2, ..., N labels the quantities (mass, position, etc.) associated with each particle.
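As an illustration, a minimal two-body instance of these equations can be integrated directly (G = 1 and the masses and initial conditions below are arbitrary values chosen to give a circular orbit about the barycenter):

```python
# Minimal two-body integration of the N-body equations above.
# G, the masses, and the initial conditions are illustrative assumptions.
import math

G = 1.0
m = [1.0, 1.0]
r = [[-1.0, 0.0], [1.0, 0.0]]   # positions
v = [[0.0, -0.5], [0.0, 0.5]]   # speeds chosen for a circular orbit

def accel(r):
    # acceleration of body i: sum over j of G m_j (r_j - r_i)/|r_j - r_i|^3
    a = [[0.0, 0.0] for _ in r]
    for i in range(len(r)):
        for j in range(len(r)):
            if i == j:
                continue
            dx, dy = r[j][0] - r[i][0], r[j][1] - r[i][1]
            d3 = (dx * dx + dy * dy) ** 1.5
            a[i][0] += G * m[j] * dx / d3
            a[i][1] += G * m[j] * dy / d3
    return a

dt, steps = 1e-3, int(4 * math.pi / 1e-3)   # one orbital period is 4π here
for _ in range(steps):
    a = accel(r)
    for i in range(2):                       # semi-implicit Euler step
        v[i][0] += a[i][0] * dt; v[i][1] += a[i][1] * dt
        r[i][0] += v[i][0] * dt; r[i][1] += v[i][1] * dt

# After one full period each body should be near its starting point.
assert abs(r[0][0] + 1.0) < 0.05 and abs(r[0][1]) < 0.05
```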
Analytical mechanics
As the system evolves, q traces a path through configuration space (only some are shown). The path taken by the system (red) has a stationary action (δS = 0) under small changes in the configuration of the system (δq).[19]
Using all three coordinates of 3d space is unnecessary if there are constraints on the system. If the system has N degrees of freedom, then one can use a set of N generalized coordinates q(t) = [q1(t), q2(t), ..., qN(t)] to define the configuration of the system. They can be in the form of arc lengths or angles. They considerably simplify the description of motion, since they take advantage of the intrinsic constraints that limit the system's motion, and the number of coordinates is reduced to a minimum. The time derivatives of the generalized coordinates are the generalized velocities
\mathbf{\dot{q}} = d\mathbf{q}/dt \,.
The Euler–Lagrange equations are[2][20]
\frac{d}{d t} \left ( \frac{\partial L}{\partial \mathbf{\dot{q}} } \right ) = \frac{\partial L}{\partial \mathbf{q}} \,,
where the Lagrangian is a function of the configuration q and its time rate of change dq/dt (and possibly time t)
L = L\left [ \mathbf{q}(t), \mathbf{\dot{q}}(t), t \right ] \,.
Setting up the Lagrangian of the system, then substituting into the equations, evaluating the partial derivatives and simplifying, a set of N coupled second order ODEs in the coordinates is obtained.
Hamilton's equations are[2][20]
\mathbf{\dot{p}} = -\frac{\partial H}{\partial \mathbf{q}} \,, \quad \mathbf{\dot{q}} = + \frac{\partial H}{\partial \mathbf{p}} \,,
where the Hamiltonian
H = H\left [ \mathbf{q}(t), \mathbf{p}(t), t \right ] \,,
is a function of the configuration q, the conjugate "generalized" momenta
\mathbf{p} = \partial L/\partial \mathbf{\dot{q}} \,,
and possibly the time t. Here ∂/∂q = (∂/∂q1, ∂/∂q2, ..., ∂/∂qN) is a shorthand notation for a vector of partial derivatives with respect to the indicated variables (see for example matrix calculus for this denominator notation).
Setting up the Hamiltonian of the system, then substituting into the equations, evaluating the partial derivatives and simplifying, a set of 2N coupled first order ODEs in the coordinates qi and momenta pi is obtained.
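For the 1d harmonic oscillator with H = p²/2m + kq²/2, Hamilton's equations give q̇ = ∂H/∂p = p/m and ṗ = −∂H/∂q = −kq; a minimal numerical sketch (m = k = 1 are assumed values, with a semi-implicit update):

```python
# Hamilton's equations for the 1d harmonic oscillator H = p^2/2m + k q^2/2:
#   qdot = +dH/dp = p/m,   pdot = -dH/dq = -k q.
# m, k, and the initial state are illustrative assumptions.
m, k = 1.0, 1.0
q, p = 1.0, 0.0
dt = 1e-3
E0 = p * p / (2 * m) + 0.5 * k * q * q   # initial energy (= 0.5 here)

for _ in range(10_000):
    p += -k * q * dt        # pdot = -dH/dq   (semi-implicit Euler)
    q += (p / m) * dt       # qdot = +dH/dp

E = p * p / (2 * m) + 0.5 * k * q * q
assert abs(E - E0) < 1e-3   # energy is (approximately) conserved
```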
The Hamilton–Jacobi equation is[2]
- \frac{\partial S(\mathbf{q},t)}{\partial t} = H\left(\mathbf{q}, \mathbf{p}, t \right) \,,
where
S[\mathbf{q},t] = \int_{t_1}^{t_2}L(\mathbf{q}, \mathbf{\dot{q}}, t)\,dt \,,
is Hamilton's principal function, also called the classical action, a functional of L. In this case, the momenta are given by
\mathbf{p} = \partial S /\partial \mathbf{q}\,.
Although the equation has a simple general form, for a given Hamiltonian it is actually a single first order non-linear PDE, in N + 1 variables. The action S allows identification of conserved quantities for mechanical systems, even when the mechanical problem itself cannot be solved fully, because any differentiable symmetry of the action of a physical system has a corresponding conservation law, a theorem due to Emmy Noether.
All classical equations of motion can be derived from the variational principle known as Hamilton's principle of least action
\delta S = 0 \,,
stating the path the system takes through the configuration space is the one with the least action S.
In electrodynamics, the force on a charged particle of charge q is the Lorentz force:[21]
\mathbf{F} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right)
Combining with Newton's second law gives a second order differential equation of motion, in terms of the position of the particle:
m\frac{d^2 \mathbf{r}}{dt^2} = q\left(\mathbf{E} + \frac{d \mathbf{r}}{dt} \times \mathbf{B}\right) \,\!
or its momentum:
\frac{d\mathbf{p}}{dt} = q\left(\mathbf{E} + \frac{\mathbf{p} \times \mathbf{B}}{m}\right) \,\!
The same equation can be obtained using the Lagrangian (and applying Lagrange's equations above) for a charged particle of mass m and charge q:[22]
L = \tfrac{1}{2} m \dot{\mathbf{r}} \cdot \dot{\mathbf{r}} + q \dot{\mathbf{r}} \cdot \mathbf{A} - q\phi
where ϕ and A are the electromagnetic scalar and vector potential fields. The Lagrangian indicates an additional detail: the canonical momentum in Lagrangian mechanics is given by:
\mathbf{P} = \frac{\partial L}{\partial \dot{\mathbf{r}}} = m \dot{\mathbf{r}} + q \mathbf{A}
instead of just mv, implying the motion of a charged particle is fundamentally determined by the mass and charge of the particle. The Lagrangian expression was first used to derive the force equation.
Alternatively the Hamiltonian (and substituting into the equations):[20]
H = \frac{\left(\mathbf{P} - q \mathbf{A}\right)^2}{2m} + q\phi \,\!
can derive the Lorentz force equation.
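For the special case E = 0 and a uniform field B = B ẑ, the momentum equation above predicts circular (cyclotron) motion at frequency ω = qB/m; a rough numerical sketch with assumed unit values (explicit Euler, so only approximate):

```python
# Charged particle in a uniform magnetic field B = B ez with E = 0:
# dp/dt = q v × B gives circular motion. q, m, B are assumed values.
import math

q, m, B = 1.0, 1.0, 2.0
vx, vy = 1.0, 0.0
dt = 1e-4
steps = int(2 * math.pi * m / (q * B) / dt)   # one cyclotron period

for _ in range(steps):
    # F = q v × B with B along z:  Fx = q vy B,  Fy = -q vx B
    ax, ay = q * vy * B / m, -q * vx * B / m
    vx, vy = vx + ax * dt, vy + ay * dt       # explicit Euler (sketch only)

# The magnetic force does no work: |v| should be (nearly) unchanged,
# and after one period the velocity returns to its initial direction.
speed = math.hypot(vx, vy)
assert abs(speed - 1.0) < 0.01
assert abs(vx - 1.0) < 0.01 and abs(vy) < 0.01
```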
General relativity
Geodesic equation of motion
Geodesics on a sphere are arcs of great circles (yellow curve). On a 2d manifold (such as the sphere shown), the direction of the accelerating geodesic is uniquely fixed if the separation vector ξ is orthogonal to the "fiducial geodesic" (green curve). As the separation vector ξ0 changes to ξ after a distance s, the geodesics are not parallel (geodesic deviation).[23]
The above equations are valid in flat spacetime. In curved spacetime, things become mathematically more complicated: there is in general no straight line, which is generalized and replaced by a geodesic of the curved spacetime (the curve of shortest length between two points). For curved manifolds with a metric tensor g, the metric provides the notion of arc length (see line element for details); the differential arc length is given by:[24]
ds = \sqrt{g_{\alpha\beta} d x^\alpha dx^\beta}
and the geodesic equation is a second-order differential equation in the coordinates; the general solution is a family of geodesics:[25]
\frac{d^2 x^\mu}{ds^2} = - \Gamma^\mu{}_{\alpha\beta}\frac{d x^\alpha}{ds}\frac{d x^\beta}{ds}
where Γ^μ{}_{αβ} is a Christoffel symbol of the second kind, which contains the metric (with respect to the coordinate system).
Given the mass-energy distribution provided by the stress–energy tensor T^{αβ}, the Einstein field equations are a set of non-linear second-order partial differential equations in the metric, and imply the curvature of spacetime is equivalent to a gravitational field (see principle of equivalence). Mass falling in curved spacetime is equivalent to a mass falling in a gravitational field, because gravity is a fictitious force. The relative acceleration of one geodesic to another in curved spacetime is given by the geodesic deviation equation:
\frac{D^2\xi^\alpha}{ds^2} = -R^\alpha{}_{\beta\gamma\delta}\frac{dx^\beta}{ds}\xi^\gamma\frac{dx^\delta}{ds}
where ξα = (x2)α − (x1)α is the separation vector between two geodesics, D/ds (not just d/ds) is the covariant derivative, and Rαβγδ is the Riemann curvature tensor, containing the Christoffel symbols. In other words, the geodesic deviation equation is the equation of motion for masses in curved spacetime, analogous to the Lorentz force equation for charges in an electromagnetic field.[26]
For flat spacetime, the metric is a constant tensor so the Christoffel symbols vanish, and the geodesic equation has the solutions of straight lines. This is also the limiting case when masses move according to Newton's law of gravity.
Spinning objects
In general relativity, rotational motion is described by the relativistic angular momentum tensor, including the spin tensor, which enter the equations of motion under covariant derivatives with respect to proper time. The Mathisson–Papapetrou–Dixon equations describe the motion of spinning objects moving in a gravitational field.
Analogues for waves and fields
Unlike the equations of motion for describing particle mechanics, which are systems of coupled ordinary differential equations, the analogous equations governing the dynamics of waves and fields are always partial differential equations, since the waves or fields are functions of space and time. For a particular solution, boundary conditions along with initial conditions need to be specified.
Sometimes in the following contexts, the wave or field equations are also called "equations of motion".
Field equations
Equations that describe the spatial dependence and time evolution of fields are called field equations. These include
This terminology is not universal: for example although the Navier–Stokes equations govern the velocity field of a fluid, they are not usually called "field equations", since in this context they represent the momentum of the fluid and are called the "momentum equations" instead.
Wave equations
Equations of wave motion are called wave equations. The solutions to a wave equation give the time-evolution and spatial dependence of the amplitude. Boundary conditions determine if the solutions describe traveling waves or standing waves.
From the classical equations of motion and field equations, mechanical, gravitational-wave, and electromagnetic wave equations can be derived. The general linear wave equation in 3d is:
\frac{1}{v^2}\frac{\partial^2 X}{\partial t^2} = \nabla^2 X
where X = X(r, t) is any mechanical or electromagnetic field amplitude, say:[27]
and v is the phase velocity. Non-linear equations model the dependence of phase velocity on amplitude, replacing v by v(X). There are other linear and non-linear wave equations for very specific applications, see for example the Korteweg–de Vries equation.
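A standard finite-difference sketch of this wave equation in 1d (fixed ends on [0,1]; the grid sizes below are arbitrary choices) can be checked against the exact standing-wave solution sin(πx)cos(πvt):

```python
# Finite-difference sketch of the 1d linear wave equation u_tt = v^2 u_xx
# on [0,1] with fixed (Dirichlet) ends; sin(πx)cos(πvt) is an exact solution.
import math

v, nx = 1.0, 101
dx = 1.0 / (nx - 1)
dt = 0.5 * dx / v            # CFL number 0.5: stable for this scheme
x = [i * dx for i in range(nx)]

u_prev = [math.sin(math.pi * xi) for xi in x]   # u(x, 0)
u = list(u_prev)             # u_t(x,0) = 0, so u(x,dt) ≈ u(x,0) (first order)
c2 = (v * dt / dx) ** 2

t, t_end = 0.0, 0.25
while t < t_end:
    u_next = [0.0] * nx      # end points stay fixed at zero
    for i in range(1, nx - 1):
        u_next[i] = 2 * u[i] - u_prev[i] + c2 * (u[i+1] - 2 * u[i] + u[i-1])
    u_prev, u = u, u_next
    t += dt

# Compare the midpoint (x = 0.5) against the exact standing wave:
exact = math.sin(math.pi * 0.5) * math.cos(math.pi * v * t)
assert abs(u[nx // 2] - exact) < 0.05
```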
Quantum theory
In quantum theory, the wave and field concepts both appear.
In quantum mechanics, in which particles also have wave-like properties according to wave–particle duality, the analogue of the classical equations of motion (Newton's law, Euler–Lagrange equation, Hamilton–Jacobi equation, etc.) is the Schrödinger equation in its most general form:
i\hbar\frac{\partial\Psi}{\partial t} = \hat{H}\Psi \,,
where Ψ is the wavefunction of the system, \hat{H} is the quantum Hamiltonian operator, rather than a function as in classical mechanics, and ħ is the Planck constant divided by 2π. Setting up the Hamiltonian and inserting it into the equation results in a wave equation; the solution is the wavefunction as a function of space and time. The Schrödinger equation itself reduces to the Hamilton–Jacobi equation when one considers the correspondence principle, in the limit that ħ becomes zero.
Throughout all aspects of quantum theory, relativistic or non-relativistic, there are various formulations alternative to the Schrödinger equation that govern the time evolution and behavior of a quantum system, for instance:
See also
1. ^ Encyclopaedia of Physics (second Edition), R.G. Lerner, G.L. Trigg, VHC Publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1 (VHC Inc.) 0-89573-752-3
2. ^ a b c d Analytical Mechanics, L.N. Hand, J.D. Finch, Cambridge University Press, 2008, ISBN 978-0-521-57572-0
3. ^ See History of Mathematics
4. ^ The Britannica Guide to History of Mathematics, ed. Erik Gregersen
5. ^ Discourses, Galileo
6. ^ Dialogues Concerning Two New Sciences, by Galileo Galilei; translated by Henry Crew, Alfonso De Salvio
7. ^ Halliday, David; Resnick, Robert; Walker, Jearl (2004-06-16). Fundamentals of Physics (7 Sub ed.). Wiley. ISBN 0-471-23231-9.
8. ^ a b Dynamics and Relativity, J.R. Forshaw, A.G. Smith, Wiley, 2009, ISBN 978-0-470-01460-8
9. ^ M.R. Spiegel, S. Lipcshutz, D. Spellman (2009). Vector Analysis. Schaum's Outlines (2nd ed.). McGraw Hill. p. 33. ISBN 978-0-07-161545-7.
10. ^ a b Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, second Edition, 1978, John Murray, ISBN 0-7195-3382-1
11. ^ Hanrahan, Val; Porkess, R (2003). Additional Mathematics for OCR. London: Hodder & Stoughton. p. 219. ISBN 0-340-86960-7.
12. ^ Keith Johnson (2001). Physics for you: revised national curriculum edition for GCSE (4th ed.). Nelson Thornes. p. 135. ISBN 978-0-7487-6236-1. The 5 symbols are remembered by "suvat". Given any three, the other two can be found.
13. ^ 3000 Solved Problems in Physics, Schaum Series, A. Halpern, Mc Graw Hill, 1988, ISBN 978-0-07-025734-4
14. ^ a b An Introduction to Mechanics, D. Kleppner, R.J. Kolenkow, Cambridge University Press, 2010, p. 112, ISBN 978-0-521-19821-9
15. ^ Encyclopaedia of Physics (second Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (VHC Inc.) 0-89573-752-3
16. ^ "Mechanics, D. Kleppner 2010"
17. ^ "Relativity, J.R. Forshaw 2009"
18. ^ The Physics of Vibrations and Waves (3rd edition), H.J. Pain, John Wiley & Sons, 1983, ISBN 0-471-90182-2
19. ^ R. Penrose (2007). The Road to Reality. Vintage books. p. 474. ISBN 0-679-77631-1.
20. ^ a b c Classical Mechanics (second edition), T.W.B. Kibble, European Physics Series, 1973, ISBN 0-07-084018-0
21. ^ Electromagnetism (second edition), I.S. Grant, W.R. Phillips, Manchester Physics Series, 2008 ISBN 0-471-92712-0
22. ^ Classical Mechanics (second Edition), T.W.B. Kibble, European Physics Series, Mc Graw Hill (UK), 1973, ISBN 0-07-084018-0.
23. ^ Misner, Thorne, Wheeler, Gravitation
24. ^ C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (second ed.). p. 1199. ISBN 0-07-051400-3.
25. ^ C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (second ed.). p. 1200. ISBN 0-07-051400-3.
26. ^ J.A. Wheeler, C. Misner, K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 34–35. ISBN 0-7167-0344-0.
27. ^ H.D. Young, R.A. Freedman (2008). University Physics (12th ed.). Addison-Wesley (Pearson International). ISBN 0-321-50130-6.
How to model large atoms
1. Hi!
One can easily analyze the Hydrogen Atom since it is a two body problem.
But how do you apply Quantum Theory to model atoms (such as iron) which are much larger and predict their behaviour in an environment?
My guess is that you use statistical mechanics, but I only just started a course and it is basically limited to heat.
thank you
3. mfb
Staff: Mentor
The first approach is still a two-body problem. Afterwards, interactions between the electrons can be taken into account. To describe the state of the electrons and bonds in a material, this is pure quantum mechanics.
If you want to describe things like heat, you don't have to care about those details, you take the "output" of quantum mechanics (crystal structure, energy bands and so on) and apply statistical mechanics to it.
4. Hi mfb!
Thanks for your answer.
Basically I want to evaluate the effect of electric fields, magnetic fields and magnetic vector potentials on the properties of Iron in haemoglobin.
How do I go about this?
5. mfb
Staff: Mentor
I guess that will need some protein folding software if you expect effects - the fields influence the whole thing, not just a single small atom inside.
6. By the way, I think that when we analyze the hydrogen atom with quantum mechanics it is really a one-body problem, because we assume the nucleus is fixed and just provides a potential for the electron; but in real physics we should also use a wavefunction to describe the nucleus.
7. mfb
Staff: Mentor
Usually the two-body problem is reduced to a 1-body problem with a reduced mass, so both electron and nucleus are taken into account. The other degrees of freedom of the two-body system correspond to a total motion of the atom.
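To put a number on the reduced-mass reduction mentioned above, here is a quick sketch (constants rounded): with μ = m_e M/(m_e + M), hydrogen's energy levels shift by about 0.05 % relative to the fixed-nucleus approximation:

```python
# Reduced mass of the electron-proton system: mu = m_e * M / (m_e + M).
# Constants are rounded illustrative values in kg.
m_e = 9.109e-31    # electron mass
M_p = 1.673e-27    # proton mass

mu = m_e * M_p / (m_e + M_p)
correction = mu / m_e          # hydrogen energy levels scale by mu/m_e
print(round(1 - correction, 6))   # ≈ 0.000544, i.e. about 0.05 %
```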
8. cgk
cgk 485
Science Advisor
OP, real atoms (and molecules) are handled with quantum chemistry software. Such programs (e.g., Molpro, Orca) can solve the many-body Schrödinger equation in various approximations to determine the quantitative behavior of the electrons, including their response to external fields. For iron atoms, for example, you would employ approximations like Hartree-Fock and CCSD(T) (a coupled cluster method) or multi-configuration self-consistent field (MCSCF) and multi-reference configuration interaction (MRCI), depending on the application.
Understanding and using such approximations (correctly) is not easy, and normally requires some background reading in many-body quantum mechanics and quantum chemistry.
Complex potential model for low-energy neutron scattering
Fiedeldey H. ; Frahn W.E. (1961)
The optical model for low-energy neutron scattering is treated explicitly by means of a new form of complex potential which permits an exact solution of the S-wave Schrödinger equation. This potential is everywhere continuously differentiable and its imaginary part consists of both a volume and a surface absorption term which is in close agreement with recent theoretical calculations of the spatial distribution of the imaginary potential. Closed-form expressions are obtained for the logarithmic derivative of the wave function, and hence for the S-wave strength function and scattering length, from which their dependence on all potential parameters can be studied explicitly. In particular, it is shown that concentrating the absorption in the nuclear surface can serve as a remedy for a well-known discrepancy, by lowering the minima of the strength function to more realistic values. © 1961.
Qualitative Behaviour and Controllability of Partial Differential Equations / Comportement qualitatif et controlabilité des EDP
(Org: Holger Teismann, Acadia University)
DAVID AMUNDSEN, Carleton University
Resonant Solutions of the Forced KdV Equation
The forced Korteweg-de Vries (fKdV) equation provides a canonical model for the evolution of weakly nonlinear dispersive waves in the presence of additional effects such as external forcing or variable topography. While the symmetries and integrability of the underlying KdV structure facilitate extensive analysis, in this generalized setting such favourable properties no longer hold. Through physical and numerical experimentation it is known that a rich family of resonant steady solutions exists, yet qualitative analytic insight into them is limited. Based on hierarchical perturbative and matched asymptotic approaches, we present a formal mathematical framework for the construction of solutions in the small dispersion limit, in this way not only obtaining accurate analytic representations but also important a priori insight into the response of the system as it is detuned away from resonance. Specific examples and comparisons in the case of a fundamental periodic resonant mode will be presented.
Joint work with M. P. Mortell (UC Cork) and E. A. Cox (UC Dublin).
SEAN BOHUN, Penn State
The Wigner-Poisson System with an External Coulomb Field
This system of equations describes the time evolution of the quantum mechanical behaviour of a large ensemble of particles in a vacuum where the long range interactions between the particles can be taken into account. The model also facilitates the introduction of external classical effects. As tunneling effects become more pronounced in semiconductor devices, models which are able to bridge the gap between the quantum behaviour and external classical effects become increasingly relevant. The WP system is such a model.
Local existence is shown by a contraction mapping argument which is then extended to a global result using macroscopic control (conservation of probability and energy). Asymptotic behaviour of the WP system and the underlying SP system is established with a priori estimates on the spatial moments.
Finally, conditions on the energy are given which
(a) ensure that the solutions decay and
(b) ensure that the solutions do not decay.
SHAOHUA CHEN, University College of Cape Breton
Boundedness and Blowup for the Solution of an Activator-Inhibitor Model
We consider a general activator-inhibitor model
u_t = \varepsilon \Delta u - \mu u + \frac{u^p}{v^q}
v_t = D \Delta v - \nu v + \frac{u^r}{v^s}
with Neumann boundary conditions, where rq > (p−1)(s+1). We show that if r > p−1 then the solutions exist for all time for all initial values, and if r > p−1 and q < s+1 then the solutions are bounded for all initial values. However, if r < p−1 then, for some special initial values, the solutions will blow up.
STEPHEN GUSTAFSON, University of British Columbia, Mathematics Department, 1984 Mathematics Rd., Vancouver, BC V6T 1Z2
Scattering for the Gross-Pitaevskii Equation
The Gross-Pitaevskii equation, a nonlinear Schroedinger equation with non-zero boundary conditions, models superfluids and Bose-Einstein condensates. Recent mathematical work has focused on the finite-time dynamics of vortex solutions, and existence of vortex-pair traveling waves. However, little seems to be known about the long-time behaviour (eg. scattering theory, and the asymptotic stability of vortices). We address the simplest such problem-scattering around the vacuum state-which is already tricky due to the non-self-adjointness of the linearized operator, and "long-range" nonlinearity. In particular, our present methods are limited to higher dimensions. This is joint work in progress with K. Nakanishi and T.-P. Tsai.
HORST LANGE, Universitaet Köln, Weyertal 86-90, 50931 Köln, Germany
Noncontrollability of the nonlinear Hartree-Schrödinger and Gross-Pitaevskii-Schrödinger equations
We consider the bilinear control problem for the nonlinear Hartree-Schrödinger equation [HS] (which plays a prominent role in quantum chemistry), and for the Gross-Pitaevskii-Schrödinger equation [GPS] (of the theory of Bose-Einstein condensates); for both systems we study the case of a bilinear control term involving the position operator or the momentum operator. A target state u_T ∈ L²(R³) is said to be reachable from an initial state u_0 ∈ L²(R³) in time T > 0 if there exists a control such that the system allows a solution state u(t,x) with u(0,x) = u_0(x), u(T,x) = u_T(x). We prove that, for any T > 0 and any initial datum u_0 ∈ L²(R³) \ {0}, the set of non-reachable target states (in time T > 0) is relatively L²-dense in the sphere {u ∈ L²(R³) : ||u||_{L²} = ||u_0||_{L²}} (for both [HS] and [GPS]). The proof uses the Fourier transform, estimates for Riesz potentials for [HS], and estimates for the Schrödinger group associated with the Hamiltonian −Δ + x² for [GPS].
HAILIANG LI, Department of Pure and Applied Mathematics, Osaka University, Japan
On Well-posedness and Asymptotics of Multi-dimensional Quantum Hydrodynamics
In the modelling of semiconductor devices in nano-size, for instance, MOSFET's and RTD's where quantum effects (like particle tunnelling through potential barriers and built-up in quantum wells) take place, the quantum hydrodynamical equations are important and dominative in the description of the motion of electron or hole transport under the self-consistent electric field. These quantum hydrodynamic equations consist of conservation laws of mass, balance laws of momentum forced by an additional nonlinear dispersion (caused by the quantum (Bohm) potential), and self-consistent electric field.
In this talk, we shall review the recent progress on the multi-dimensional quantum hydrodynamic equations, including the mathematical modelings based on the moment method applied to the Wigner-Boltzmann equation, rigorous analysis on the well-posedness for general, nonconvex pressure-density relation and regular large initial data, long time stability of steady-state under a quantum subsonic condition, and global-in-time relaxation limit from the quantum hydrodynamic equations to the quantum drift-diffusion equations, and so on.
Joint with A. Jüngel, P. Marcati, and A. Matsumura.
DONG LIANG, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3
Analysis of the S-FDTD Method for Three-Dimensional Maxwell Equations
The finite-difference time-domain (FDTD) method for Maxwell's equations, first introduced by Yee, is a very popular numerical algorithm in computational electromagnetics. However, the traditional FDTD scheme is only conditionally stable. The computation of three-dimensional problems by the scheme requires much more computer memory or becomes extremely difficult when the spatial steps become very small. Recently, there has been considerable interest in developing efficient schemes for such problems.
In this talk, we will present a new splitting finite-difference time-domain scheme (S-FDTD) for the general three-dimensional Maxwell's equations. Unconditional stability and convergence are proved for the scheme by using the energy method. The technique of reducing perturbation error is further used to derive a high order scheme. Numerical results are given to illustrate the performance of the methods.
This research is joint work with L. P. Gao and B. Zhang.
KIRSTEN MORRIS, University of Waterloo
Controller Design for Partial Differential Equations
Many controller design problems of practical interest involve systems modelled by partial differential equations. Typically a numerical approximation is used at some stage in controller design. However, not every scheme that is suitable for simulation is suitable for controller design. Misleading results may be obtained if care is not taken in selecting a scheme. Sufficient conditions for a scheme to be suitable for linear quadratic or H∞ controller design have been obtained. Once a scheme is chosen, the resulting approximation will in general be a large system of ordinary differential equations. Standard control algorithms are only suitable for systems with model order less than 100 and special techniques are required.
KEITH PROMISLOW, Michigan State University
Nonlocal Models of Membrane Hydration in PEM Fuel Cells
Polymer electrolyte membrane (PEM) fuel cells are unique energy conversion devices, efficiently generating useful electric voltage from chemical reactants without combustion. They have recently captured public attention for automotive applications, for which they promise high performance without the pollutants associated with combustion.
From a mathematical point of view, the device is governed by coupled systems of elliptic, parabolic, and degenerate parabolic equations describing heat, mass, and ion transport through porous media and polymer electrolyte membranes. This talk will describe the overall functionality of the PEM fuel cell, presenting analysis of the slow, nonlocal propagation of hydration fronts within the polymer electrolyte membrane.
TAI-PENG TSAI, University of British Columbia, Vancouver
Boundary regularity criteria for suitable weak solutions of Navier-Stokes equations
I will present some new regularity criteria for suitable weak solutions of the Navier-Stokes equations near boundary in space dimension 3. Partial regularity is also analyzed.
This is joint work with Stephen Gustafson and Kyungkeun Kang.
Copyright © Canadian Mathematical Society - Société mathématique du Canada.
Moving gapless indirect excitons in monolayer graphene
Mahmood Mahmoodian and Matvey Entin
Nanoscale Research Letters 2012, 7:599
DOI: 10.1186/1556-276X-7-599
Received: 16 July 2012
Accepted: 11 October 2012
Published: 30 October 2012
The existence of moving indirect excitons in monolayer graphene is theoretically demonstrated in the envelope-function approximation. The excitons are formed from electrons and holes near opposite conic points. The electron-hole binding is made possible by the trigonal warping of the electron spectrum. The exciton is found to exist in some sectors of the exciton momentum space and to exhibit strong trigonal warping of its spectrum.
Keywords: Monolayer graphene; Exciton; Energy spectrum; Optical absorption; Specific heat. PACS: 71.35.-y; 73.22.Lp; 73.22.Pr; 78.67.Wj; 65.80.Ck
An exciton is a familiar two-particle state of semiconductors. The electron-hole attraction decreases the excitation energy compared to independent particles, producing bound states in the bandgap of a semiconductor. The absence of a gap makes this picture inapplicable to graphene: an immobile exciton is impossible in a zero-gap material. However, at finite total momentum a gap opens, which makes the binding of the moving pair allowable.
The purpose of the present paper is an envelope-approximation study of the possibility of Wannier-Mott exciton formation near the conic point in neutral graphene. In the present paper, we use the term 'exciton' in its direct meaning, unlike papers where the term refers to many-body ('excitonic') effects [1, 2], to an exciton insulator with full spectrum reconstruction, or to exciton-like singularities originating from saddle points (van Hove singularities) of the single-particle spectrum [3]. On the contrary, our goal is pair bound states of electrons and holes. There is a widely accepted opinion that the zero gap in graphene forbids Mott exciton states (see, e.g., [4]). This statement, which is valid in the conic approximation, proves incorrect beyond that approximation. Our aim is to demonstrate that excitons exist if one takes deviations from the conic spectrum into consideration.
We consider the envelope tight-binding Hamiltonian of monolayer graphene as follows:

H_ex = ε(p_e) + ε(p_h) + V(r_e − r_h),   (1)

where

ε(p) = γ₀ √[1 + 4 cos(a p_x/2) cos(√3 a p_y/2) + 4 cos²(a p_x/2)]   (2)

is the single-electron energy, a = 0.246 nm is the lattice constant, ℏ = 1, and V(r) = −e²/(χr) is the potential energy of the electron-hole interaction. The electron spectrum has conic points νK, ν = ±1, K = (4π/3a, 0), near which ε(p) ≈ s|p − νK|, where s = γ₀a√3/2 is the electron velocity in the conic approximation.
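As a numerical sanity check (my own sketch, not from the paper, with γ₀ = a = 1 as illustrative units), the single-electron spectrum quoted above indeed vanishes at the conic point K = (4π/3a, 0) and is linear nearby with the stated slope s = γ₀a√3/2:

```python
import math

gamma0, a = 1.0, 1.0  # hopping energy and lattice constant set to 1

def eps(px, py):
    """Single-electron tight-binding energy eps(p) quoted in the text."""
    c = math.cos(a * px / 2.0)
    f = 1.0 + 4.0 * c * math.cos(math.sqrt(3.0) * a * py / 2.0) + 4.0 * c * c
    return gamma0 * math.sqrt(max(f, 0.0))  # clip tiny negative round-off

K = (4.0 * math.pi / (3.0 * a), 0.0)   # conic point
s = math.sqrt(3.0) * gamma0 * a / 2.0  # stated electron velocity

print(eps(*K))                   # ~0: the spectrum vanishes at K
q = 1e-5
print(eps(K[0] + q, K[1]) / q)   # ~s: linear (conic) dispersion near K
```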
The electron and hole momenta p_e, p_h can be expressed via the pair momentum q = p_e + p_h and the relative momentum p = p_e − p_h. The momenta p_e, p_h can be situated near the same conic point (q = k, k ≪ 2K) or near opposite conic points (q = 2K + k, k ≪ 2K).
We assume that graphene is embedded in an insulator with a relatively large dielectric constant χ, so that the effective dimensionless interaction constant g = e²/(sχℏ) ≃ 2/χ ≪ 1 and many-body complications are inessential. In the conic approximation, a classical electron and hole with the same direction of momentum have the same velocity s. The interaction changes their momenta, but not their velocities. The two-particle Hamiltonian contains no terms quadratic in the component of the relative momentum p along k. In quantum language, such attraction does not result in binding. Thus, the problem of binding demands accounting for corrections to the conic spectrum.
Two kinds of excitons are potentially allowed in graphene: a direct exciton with k ≪ 1/a (when the pair belongs to the same extremum) and an indirect exciton with q = 2K + k. Assuming p ≪ k (this results from the smallness of g), we arrive at the quadratic Hamiltonian

H_ex = sk + p₁²/(2m₁) + p₂²/(2m₂) − e²/(χr),   (3)
where the coordinate system with basis vectors e₁ ∥ k and e₂ ⊥ e₁ is chosen, and r = (x₁, x₂). In the conic approximation we have m₂ = k/s and m₁ = ∞; thus, this approximation is not sufficient to find m₁. Beyond the conic approximation (but near the conic point), we should expand the spectrum (2) with respect to k up to the quadratic terms, which brings in the trigonal spectrum warping. As a result, we have for the indirect exciton
1/m₁ = ν (sa/4√3) cos 3φ_k,   (4)

where φ_k is the angle between k and K.
The effective masses obey m₁ ≫ m₂; this ratio is directly determined by the trigonal spectrum warping, and the large value of m₁ follows from the smallness of the warping. The sign of m₁ is determined by ν cos 3φ_k. If ν cos 3φ_k > 0, the electron and hole tend to bind; otherwise they run away from each other. Thus, the binding of an indirect pair is permitted for ν cos 3φ_k > 0. Apart from the conic point, this condition transforms to
[(1 + u + v₋) < 0 and (1 + u + v₊) < 0] ∪ [(1 + u + v₋) < 0 and (1 + v₋ + v₊) < 0] ∪ [(1 + u + v₊) < 0 and (1 + v₋ + v₊) < 0],   (5)

that is, at least two of the three quantities 1 + u + v₋, 1 + u + v₊, 1 + v₋ + v₊ are negative, where u = cos(a k_x) and v_± = cos((k_x ± √3 k_y) a/2).
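A further numerical check (again my own sketch, with γ₀ = a = 1): the deviation of the single-electron energy from the conic slope s alternates in sign every 60° around K, following cos 3φ. This is the trigonal warping that makes the longitudinal mass finite and restricts binding to alternating sectors:

```python
import math

gamma0, a = 1.0, 1.0
s = math.sqrt(3.0) * gamma0 * a / 2.0
K = (4.0 * math.pi / (3.0 * a), 0.0)

def eps(px, py):
    """Tight-binding single-electron energy from the text."""
    c = math.cos(a * px / 2.0)
    f = 1.0 + 4.0 * c * math.cos(math.sqrt(3.0) * a * py / 2.0) + 4.0 * c * c
    return gamma0 * math.sqrt(max(f, 0.0))

def warp(phi, q=1e-2):
    """Quadratic correction eps(K + q) - s*q along direction phi from K."""
    px = K[0] + q * math.cos(phi)
    py = K[1] + q * math.sin(phi)
    return eps(px, py) - s * q

for deg in (0, 60, 120, 180):
    print(deg, warp(math.radians(deg)))  # sign alternates every 60 degrees
```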
To find the indirect exciton states analytically, we solved the Schrödinger equation with the Hamiltonian (3) using the large ratio of effective masses. This parameter can be exploited in an adiabatic approximation, similar to the problem of molecular levels. Coordinates 1 and 2 play the roles of the heavy 'ion' and light 'electron' coordinates. At the first stage, the ion term in the Hamiltonian is omitted and the Schrödinger equation is solved for the electron wave function at a fixed ion position. The resulting electron terms are then used to solve the ion equation. This gives the approximate ground level of the exciton, ε(k) = sk − ε_ex(k), where the binding energy of the exciton is ε_ex(k) = π⁻¹ s k g² log²(m₁/m₂) (the coefficient 1/π here is found by a variational method).
A similar reasoning for the direct exciton gives a negative mass m₁ = −32/(k s a²(7 − cos 6φ_k)). As a result, the kinetic energy of the electron-hole relative motion for the direct exciton is not positively defined, which means that an electron cannot bind with a hole from the same conic point.
Results and discussion
Figure 1 shows the domain of indirect exciton existence in momentum space. This domain covers a small part of the Brillouin zone.
Figure 1
Relief of the single-electron spectrum. Domains where exciton states exist are bounded by a thick line.
The quantity ε_ex(k) depends essentially on the momentum via the ratio of effective masses m₁/m₂. Within the accepted assumptions, ε_ex is less than the energy sk of the unbound pair. However, at a small enough dielectric constant χ, the ratio of the two quantities is not too small. Although we have no right to treat the problem with large g in the two-particle approach, it is obvious that an increase of the parameter g can only result in growth of the binding energy.
Besides, we have studied the exciton problem numerically in the same approximation and by means of a variational approach. Figure 2 represents the dependence of the exciton binding energy on its momentum for χ = 10. Figure 3 shows radial sections of the two-dimensional plot. The characteristic exciton binding energies are of the order of 0.2 eV.
Figure 2
Relief map of indirect exciton ground-state binding energy. The map shows ε ex (in eV) as a function of the wave vector in units of reciprocal lattice constant. The exciton exists in the colored sectors.
Figure 3
Radial sections of Figure 2 at fixed angles in degrees (marked). Curves run up to the ends of the exciton spectrum.
All results for embedded graphene are applicable to a free-suspended layer if the interaction constant g is replaced with a smaller quantity g̃, renormalized by many-body effects. In this case, the exciton binding energy becomes essentially larger and comparable to the kinetic energy sk.
We now discuss the possibility of observing indirect excitons in graphene. As we saw, their energies are distributed between zero and some tenths of an eV, which smears out the exciton resonance. The large exciton momentum blocks both direct optical excitation and recombination. However, slow recombination and intervalley relaxation protect the excitons (once generated in some way) from recombination or decay. On the other hand, the absence of a low-energy threshold results in a contribution of excitons to the specific heat and the thermal conductivity even at low temperature.
It is found that the exciton contribution to the specific heat at low temperatures at the Dirac point is proportional to (gT/s)² log²(aT/s). It is essentially lower than the electron specific heat ∝ (T/s)² and the acoustic-phonon contribution ∝ (T/c)², where c is the phonon velocity. Nevertheless, the exciton contribution to the electron-hole plasma specific heat is essential for experiments with hot electrons.
In conclusion, the exciton states in graphene are gapless and possess a strong angular dependence. This behavior is consistent with the angular selectivity of the electron-hole scattering rate [5]. In our opinion, it is reasonable to look for the excitons by means of high-resolution electron energy loss spectroscopy of free-suspended graphene in vacuum. Such energy- and angle-resolving measurements could reproduce the indirect exciton spectrum.
This research has been supported in part by RFBR grants nos. 11-02-00730 and 11-02-12142.
Authors’ Affiliations
Institute of Semiconductor Physics, Siberian Branch, Russian Academy of Sciences
1. Yang L, Deslippe J, Park CH, Cohen ML, Louie SG: Excitonic effects on the optical response of graphene and bilayer graphene. Phys Rev Lett 2009, 103:186802.
2. Yang L: Excitons in intrinsic and bilayer graphene. Phys Rev B 2011, 83:085405.
3. Chae DH, Utikal T, Weisenburger S, Giessen H, von Klitzing K, Lippitz M, Smet JH: Excitonic Fano resonance in free-standing graphene. Nano Lett 2011, 11:1379. doi:10.1021/nl200040q
4. Ratnikov PV, Silin AP: Size quantization in planar graphene-based heterostructures: pseudospin splitting, interface states, and excitons. Zh Eksp Teor Fiz 2012, 141:582 [JETP 2012, 114(3):512].
5. Golub LE, Tarasenko SA, Entin MV, Magarill LI: Valley separation in graphene by polarized light. Phys Rev B 2011, 84:195408.
© Mahmoodian and Entin; licensee Springer. 2012
Buddy Can You Paradigm?
Reality Check
Victor Stenger
Skeptical Briefs Volume 10.3, September 2000
A common view is that science progresses by a series of abrupt changes in which new scientific theories replace old ones that are “proven wrong” and never again see the light of day. Unless, as John Horgan has suggested, we have reached the “end of science,” every theory now in use, such as evolution or gravity, seems destined to be overturned. If this is true, then we cannot interpret any scientific theory as a reliable representation of reality.
While this view of science originated with philosopher Karl Popper, its current widespread acceptance is usually imputed to Thomas Kuhn, whose The Structure of Scientific Revolutions (1962) was the best-selling academic book of the twentieth century, and probably also the most cited.
Kuhn alleged that science does not progress gradually but rather through a series of revolutions. He characterized these revolutions with the now famous and overworked term paradigm shifts in which the old problem-solving tools, the “paradigms” of a discipline are replaced by new ones. In between revolutions, not much is supposed to happen. And after the revolution, the old paradigms are largely forgotten.
Being a physicist by training, Kuhn focused mainly on revolutions in physics. One of the most important examples he covered was the transition from classical mechanics to quantum mechanics that occurred in the early 1900s. In quantum mechanics, the physicist calculates probabilities for particles following certain paths, rather than calculating the exact paths themselves as in classical mechanics.
True, this constitutes a different procedure. But has classical mechanics become a forgotten tool, like the slide rule? Hardly. Except for computer chips, lasers, and a few other special devices, most of today’s high-tech society is fully explicable with classical physics alone. While quantum mechanics is needed to understand basic chemistry, no special quantum effects are evident in biological mechanisms. Thus, most of what is labeled natural science in today’s world still rests on a foundation of Newtonian physics that has not changed much, in basic principles and methods, for centuries.
Nobel physicist Steven Weinberg, who was a colleague of Kuhn’s at Harvard and originally admired his work, has taken a retrospective look at Structure. In an article in the October 8, 1998, issue of the New York Review of Books called “The Revolution That Didn't Happen,” Weinberg writes:
It is not true that scientists are unable to “switch back and forth between ways of seeing,” and that after a scientific revolution they become incapable of understanding the science that went before it. One of the paradigm shifts to which Kuhn gives much attention in Structures is the replacement at the beginning of this century of Newtonian mechanics by the relativistic mechanics of Einstein. But in fact in educating new physicists the first thing that we teach them is still good old Newtonian mechanics, and they never forget how to think in Newtonian terms, even after they learn about Einstein’s theory of relativity. Kuhn himself as an instructor at Harvard must have taught Newtonian mechanics to undergraduates.
Weinberg maintains that the last “mega-paradigm shift” in physics occurred with the transition from Aristotle to Newton, which actually took several hundred years: “[N]othing that has happened in our understanding of motion since the transition from Newtonian to Einsteinian mechanics, or from classical to quantum physics fits Kuhn’s description of a ‘paradigm shift.'”
While tentative proposals often prove incorrect, I cannot think of a single case in recent times where a major physical theory that for many years has successfully described all the data within a wide domain was later found to be incorrect in the limited circumstances of that domain. Old, standby theories are generally modified, extended, often simplified with excess baggage removed, and always clarified. Rarely, if ever, are such well-established theories shown to be entirely wrong. More often the domain of applicability is refined as we gain greater knowledge or modifications are made that remain consistent with the overall principles.
This is certainly the case with Newtonian physics. The advent of relativity and quantum mechanics in the twentieth century established the precise domain for physics that had been constructed up to that point, but did not dynamite that magnificent edifice. While excess baggage such as the aether and phlogiston was cast off, the old methods still exist as smooth extrapolations of the new ones to the classical domain. The continued success and wide application of Newtonian physics must be viewed as strong evidence that it represents true aspects of reality, that it is not simply a human invention.
Furthermore, the new theories grew naturally from the old. When you look in depth at the history of quantum mechanics, you have to conclude it was not the abrupt transition from classical mechanics usually portrayed. Heisenberg retained the classical equations of motion and simply represented observables by matrices instead of real numbers. Basically, all he did was make a slight modification to the algebraic rules of mechanics by relaxing the commutative law. Quantization then arose from assumed commutation rules that were chosen based on what seemed to work. Similarly, the Schrödinger equation was derived from the classical Hamilton-Jacobi equation of motion. These were certainly major developments, but I maintain they were more evolutionary than revolutionary.
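A toy illustration (my construction, not Heisenberg's own calculation) of what "relaxing the commutative law" means: represent position and momentum by truncated harmonic-oscillator matrices. They fail to commute, and their commutator reproduces iℏ on all but the last basis state; the last entry is spoiled only by the finite truncation. Units with ℏ = m = ω = 1:

```python
import numpy as np

# Truncate the harmonic-oscillator ladder to N states.
N = 6
adag = np.diag(np.sqrt(np.arange(1, N)), -1)  # creation operator a^dagger
a = adag.T.conj()                             # annihilation operator a
# Dimensionless position and momentum built from the ladder operators:
x = (a + adag) / np.sqrt(2.0)
p = 1j * (adag - a) / np.sqrt(2.0)

comm = x @ p - p @ x                          # the commutator [x, p]
print(np.round(comm.diagonal(), 10))
# First N-1 entries equal i (i.e. [x, p] = i*hbar on those states);
# the last entry deviates purely because the matrices were truncated.
```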
Where else in the history of science to the present can we identify significant paradigm shifts? With Darwin and Mendel, certainly, in biology. But what in biology since then? Discovering the structure of DNA and decoding the genome simply add to the details of the genetic mechanism that are being gradually enlarged without any abrupt change in the basic naturalistic paradigm.
A kind of Darwinian graduated evolution characterizes the development of science and technology. That is not to say that change is slow or uniform, in biological or social systems. The growth of science and technology in recent years has been quick but not instantaneous and still represents a relatively smooth extension of what went before.
New Scientist TV:
Lego pirate proves how freak waves can sink ships
Sandrine Ceurstemont, editor, New Scientist TV
A calm sea can sometimes unleash an unexpected weapon: a sudden monster wave that engulfs a large ship. Now Amin Chabchoub from the Hamburg-Harburg Technical University in Germany and colleagues have used a Lego ship to replicate the phenomenon in a wave tank for the first time, giving insight into how it occurs.
To recreate the effect, the team produced waves based on a solution of the non-linear wave equation thought to be the most likely explanation for large freak waves. In this case, a weak oscillation propagates continuously while suddenly increasing in amplitude for a short time. "I programmed the paddle of the wave maker to generate a wave train which is modulated according to theory," says Chabchoub. "This generated small waves as predicted from the equations and we observed the formation of a giant rogue wave during this evolution." As seen in the video above, the toy boat rides along on gentle waves until suddenly a large wave appears and it capsizes.
The experiment proves that the non-linear model provides a possible explanation for the sudden formation of walls of water in the ocean. The team hopes to expand on the research to model more realistic sea conditions involving wind, water currents and two-dimensional wave trains. The results could be used to develop a short-term prediction system for monster waves.
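The article does not give the particular solution used; a standard closed-form example with exactly this behaviour is the Peregrine breather of the focusing nonlinear Schrödinger equation, in which a unit-amplitude background briefly grows to three times its height. A sketch in dimensionless units (my choice of example, not necessarily the modulation programmed into the wave maker):

```python
import math

def peregrine_amp(x, t):
    """|psi| for the Peregrine breather of the focusing NLS:
    psi(x, t) = exp(i t) * (1 - 4 (1 + 2 i t) / (1 + 4 x^2 + 4 t^2))."""
    d = 1.0 + 4.0 * x * x + 4.0 * t * t
    return math.hypot(1.0 - 4.0 / d, -8.0 * t / d)

print(peregrine_amp(0.0, 0.0))   # 3.0: the peak is three times the background
print(peregrine_amp(50.0, 0.0))  # ~1: calm background far from the event
print(peregrine_amp(0.0, 50.0))  # ~1: and long before or after it
```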
If you enjoyed this video, see how a toy boat was used to recreate the dead water effect or check out a water-bouncing ball that mimics skipping stones.
I don't understand. Where are the non-linearities? What was the condition that caused the freak wave? How is it possible to have lots and lots of little waves and then a sudden freak wave? What was the deterministic principle that lead to this freak wave?
I suggest that from this day, every single new theory must be 'proven,' or at least represented in a model, using legos.
So the investigator programmed the wave maker to make a big wave, and the big wave sunk the toy ship.
Who is paying this guy for this research? I hope they're not using taxpayer money. Why is this nonsense being published in New Scientist?
No, the wave maker produces small regular oscillations. The evolution of these small oscillations in water is described by the Nonlinear Schrödinger equation. If you follow the wiki-link you can read about how this can result in a high-amplitude wave of short duration (aka a 'rogue wave').
The large amplitude pulse creation is interesting, but the Lego boat, while cute for some, distracts from the importance of this. Put something in there with a realistic CM and righting moment. Or better, show the side view near the creation point of the rogue wave and let us watch it happen.
A recent TV show commented how "experts", using a linear wave model, could not explain freak/monster waves. Does anyone understand that the real oceans are not "linear"? They are an almost infinite series of "linear" axes at each compass heading. Electronic engineers have used "heterodyning" for over a century to create sum and difference frequencies in radio (through cellphone) designs. It seems to me these freak/monster waves are the result of constructive/destructive interference between the multitude of multi-axis "linear" waves.
David: The Hamburg-Harburg Technical University is in Germany. I think it's safe to say your tax money was not involved.
Besides that, I'd bet there is much more to this researcher's work than this video. Odd that someone prone to jumping to conclusions who derides a sliver of research as "nonesense" reads a scientific journal at all.
I have seen a TV show where space based radar found wave fronts of 30 meters (100 feet) out in the ocean. It seems the sailors were telling the truth when asked what sunk their vessels; they were not lying to cover any supposed incompetence.
wow.... you owe me 48 seconds of my life. plus the 30 seconds it took to load.
The brief period of low-amplitude waves both before and after the freak waves is interesting.
The scientific paper can be downloaded (free) from
Super Rogue Waves: Observation of a Higher-Order Breather in Water Waves
A. Chabchoub, N. Hoffmann, M. Onorato, and N. Akhmediev
Phys. Rev. X 2, 011015 (2012)
© Copyright Reed Business Information Ltd. |
Researchers in the US have created the first artificial samples of graphene with electronic properties that can be controlled in a way not possible in the natural form of the material. The samples can be used to study the properties of so-called Dirac fermions, which give graphene many of its unique electronic properties. The work may also lead to the creation of a new generation of quantum materials and devices with exotic behaviour.
Graphene is a single layer of carbon atoms organized in a honeycomb lattice. Physicists know that particles, such as electrons, moving through such a structure behave as though they have no mass and travel through the material at near light speeds. These particles are called massless Dirac fermions and their behaviour could be exploited in a host of applications, including transistors that are faster than any that exist today.
The new "molecular" graphene, as it has been dubbed, is similar to natural graphene except that its fundamental electronic properties can be tuned much more easily. It was made using a low-temperature scanning tunnelling microscope with a tip – made of iridium atoms – that can be used to individually position carbon-monoxide molecules on a perfectly smooth, conducting copper substrate. The carbon monoxide repels the freely moving electrons on the copper surface and "forces" them into a honeycomb pattern, where they then behave like massless graphene electrons, explains team leader Hari Manoharan of Stanford University.
Described by Dirac
"We confirmed that the graphene electrons are massless Dirac fermions by measuring the conductance spectrum of the electrons travelling in our material," says Manoharan. "We showed that the results match the two-dimensional Dirac equation for massless particles moving at the speed of light rather than the conventional Schrödinger equation for massive electrons."
The researchers then succeeded in tuning the properties of the electrons in the molecular graphene by moving the positions of the carbon-monoxide molecules on the copper surface. This has the effect of distorting the lattice structure so that it looks as though it has been squeezed along several axes – something that makes the electrons behave as though they have been exposed to a strong magnetic or electric field, although no actual such field has been applied. The team was also able to tune the density of the electrons on the copper surface by introducing defects or impurities into the system.
"Studying such artificial lattices in this way may certainly lead to technological applications, but they also provide a new level of control over Dirac fermions and allow us to experimentally access a set of phenomena that could only be investigated using theoretical calculations until now," adds Manoharan. "Introducing tunable interactions between the electrons could allow us to make spin liquids in graphene, for instance, and observe the spin quantum Hall effect if we can succeed in introducing spin-orbit interactions between the electrons."
He adds that molecular graphene is just the first of this type of "designer" quantum structure and hopes to make other nanoscale materials with such exotic topological properties using similar bottom-up techniques.
The work is reported in Nature 483, 306.
Quantum Mechanics/Waves and Modes
Many misconceptions about quantum mechanics may be avoided if some concepts of field theory and quantum field theory like "normal mode" and "occupation" are introduced right from the start. They are needed for understanding the deepest and most interesting ideas of quantum mechanics anyway. Questions about this approach are welcome on the talk page.
Waves and modes
A wave is a propagating disturbance in a continuous medium or a physical field. By adding waves or multiplying their amplitudes by a scale factor, superpositions of waves are formed. Waves must satisfy the superposition principle, which states that they can go through each other without disturbing each other. It is as if there were two superimposed realities, each carrying only one wave and not knowing of the other (this is what is assumed when the superposition principle is used mathematically in the wave equations).
Examples are acoustic waves and electromagnetic waves (light), but also electronic orbitals, as explained below.
A standing wave is considered a one-dimensional concept by many students, because of the examples (waves on a spring or on a string) usually provided. In reality, a standing wave is a synchronous oscillation of all parts of an extended object at a definite frequency, in which the oscillation profile (in particular the nodes and the points of maximal oscillation amplitude) doesn't change. This is also called a normal mode of oscillation. The profile can be made visible in Chladni's figures and in vibrational holography. In unconfined systems, i.e. systems without reflecting walls or attractive potentials, traveling waves may also be chosen as normal modes of oscillation (see boundary conditions).
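The point that a standing wave is a synchronous oscillation with fixed nodes follows from the identity sin(kx − ωt) + sin(kx + ωt) = 2 sin(kx) cos(ωt): two counter-propagating travelling waves superpose into a fixed spatial profile times a global oscillation. A quick numerical check (arbitrary k and ω):

```python
import math

k, w = 2.0, 5.0  # arbitrary wavenumber and angular frequency

def superposed(x, t):
    """Two counter-propagating travelling waves added pointwise."""
    return math.sin(k * x - w * t) + math.sin(k * x + w * t)

def standing(x, t):
    """The standing-wave form: a fixed profile times a global oscillation."""
    return 2.0 * math.sin(k * x) * math.cos(w * t)

x_node = math.pi / k  # a zero of sin(k x)
print(superposed(x_node, 0.4))                    # ~0: the node never moves
print(superposed(0.7, 1.3), standing(0.7, 1.3))   # the two forms agree
```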
A phase shift of a normal mode of oscillation is a time shift scaled as an angle in terms of the oscillation period; e.g., phase shifts by 90° and 180° (or π/2 and π) are time shifts by a quarter and half of the oscillation period, respectively. This operation is introduced as another operation allowed in forming superpositions of waves (mathematically, it is covered by the phase factors of the complex numbers scaling the waves).
• Helmholtz ran an experiment which clearly showed the physical reality of resonances in a box. (He predicted and detected the eigenfrequencies.)
• experiments with standing and propagating waves
Electromagnetic and electronic modes
Max Planck, one of the fathers of quantum mechanics.
Planck was the first to suggest that the electromagnetic modes are not excited continuously but discretely, by energy quanta hν proportional to the frequency ν. By this assumption, he could explain why the high-frequency modes remain unexcited in a thermal light source: the thermal exchange energy kT is just too small to provide an energy quantum hν if ν is too large. Classical physics predicts that all modes of oscillation (2 degrees of freedom each) — regardless of their frequency — carry the average energy kT, which amounts to an infinite total energy (called the ultraviolet catastrophe). This idea of energy quanta was the historical basis for the concept of occupations of modes, designated as light quanta by Einstein, also denoted as photons since the introduction of this term in 1926 by Gilbert N. Lewis.
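Planck's argument can be made quantitative with the mean thermal energy of a single mode, ⟨E⟩ = hν / (exp(hν/kT) − 1), compared with the classical equipartition value kT. A sketch of the standard formula (my illustration, in units where kT sets the scale):

```python
import math

def mode_energy(hnu, kT):
    """Mean thermal energy of one mode: hnu / (exp(hnu/kT) - 1)."""
    return hnu / math.expm1(hnu / kT)

kT = 1.0
for hnu in (0.01, 1.0, 10.0, 50.0):
    print(hnu, mode_energy(hnu, kT))
# hnu << kT recovers the classical equipartition value ~kT, while
# hnu >> kT is exponentially suppressed: high-frequency modes stay
# unexcited, removing the ultraviolet catastrophe.
```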
An electron beam (accelerated in a cathode ray tube similar to TV) is diffracted in a crystal and diffraction patterns analogous to the diffraction of monochromatic light by a diffraction grating or of X-rays on crystals are observed on the screen. This observation proved de Broglie's idea that not only light, but also electrons propagate and get diffracted like waves. In the attracting potential of the nucleus, this wave is confined like the acoustic wave in a guitar corpus. That's why in both cases a standing wave (= a normal mode of oscillation) forms. An electron is an occupation of such a mode.
An optical cavity.
An electronic orbital is a normal mode of oscillation of the electronic quantum field, very similar to a light mode in an optical cavity being a normal mode of oscillation of the electromagnetic field. The electron is said to be an occupation of an orbital. This is the main new idea in quantum mechanics, and it is forced upon us by observations of the states of electrons in multielectron atoms. Certain fields, like the electronic quantum field, are observed to allow each of their normal modes of oscillation to be excited only once at a given time; such fields are called fermionic. If you have more occupations to place in such a quantum field, you must choose other modes (the spin degree of freedom is included in the modes), as is the case in a carbon atom, for example. Usually the lower-energy (= lower-frequency) modes are favoured; if they are already occupied, higher-energy modes must be chosen. In the case of light, the idea that a photon is an occupation of an electromagnetic mode was found much earlier by Planck and Einstein, as described above.
Processes and particles
All processes in nature can be reduced to the isolated time evolution of modes and to (superpositions of) reshufflings of occupations, as described in the Feynman diagrams. (Since the isolated time evolution of decoupled modes is trivial, it is sometimes eliminated by a mathematical redefinition which in turn creates a time dependence in the reshuffling operations; this is called Dirac's interaction picture, in which all processes are reduced to (redefined) reshufflings of occupations.) For example, in the emission of a photon by an electron changing its state, the occupation of one electronic mode is moved to another electronic mode of lower frequency, and an occupation of an electromagnetic mode (whose frequency is the difference between the frequencies of the two electronic modes) is created.
Electrons and photons become very similar in quantum theory, but one main difference remains: electronic modes cannot be excited/occupied more than once (= Pauli exclusion principle) while photonic/electromagnetic modes can and even prefer to do so (= stimulated emission).
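The statement that electronic modes cannot be occupied more than once has a minimal mathematical model: a single fermionic mode has only two states, empty and occupied, so its creation operator can be written as a 2×2 matrix whose square is exactly zero. A toy sketch in plain Python (the matrix representation is an illustration, not notation from the text above):

```python
def matmul(a, b):
    """Multiply two small square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Single fermionic mode, basis ordered as [empty, occupied]:
# the creation operator maps |empty> to |occupied> and |occupied> to zero.
create = [[0, 0],
          [1, 0]]

twice = matmul(create, create)
print(twice)  # [[0, 0], [0, 0]]: occupying the mode twice has zero amplitude
```

Trying to create a second occupation of the same mode annihilates the state entirely, which is the Pauli exclusion principle in this picture.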
This property of electronic modes and photonic modes is called fermionic and bosonic, respectively. Two photons are indistinguishable and two electrons are also indistinguishable, because in both cases, they are only occupations of modes: all that matters is which modes are occupied. The order of the occupations is irrelevant except for the fact that in odd permutations of fermionic occupations, a negative sign is introduced in the amplitude.
Of course, there are other differences between electrons and photons:
• The electron carries an electric charge and a rest mass while the photon doesn't.
• In physical processes (see the Feynman diagrams), a single photon may be created while an electron may not be created without at the same time removing some other fermionic particle or creating some fermionic antiparticle. This is due to the conservation of charge.
Mode numbers, observables and eigenmodes
The system of modes to describe the waves can be chosen at will. Any arbitrary wave can be decomposed into contributions from each mode in the chosen system. For the mathematically inclined: the situation is analogous to a vector being decomposed into components in a chosen coordinate system. Decoupled modes or, as an approximation, weakly coupled modes are particularly convenient if you want to describe the evolution of the system in time, because each mode evolves independently of the others and you can just add up the time evolutions. In many situations, it is sufficient to consider less complicated weakly coupled modes and describe the weak coupling as a perturbation.
In every system of modes, you must choose some (continuous or discrete) numbering (called "quantum numbers") for the modes in the system. In Chladni's figures, you can just count the number of nodal lines of the standing waves in the different space directions in order to get a numbering, as long as it is unique. For decoupled modes, the energy or, equivalently, the frequency might be a good idea, but usually you need further numbers to distinguish different modes having the same energy/frequency (this is the situation referred to as degenerate energy levels). Usually these additional numbers refer to the symmetry of the modes.

Plane waves, for example — they are decoupled in spatially homogeneous situations — can be characterized by the fact that the only result of shifting (translating) them spatially is a phase shift in their oscillation. Obviously, the phase shifts corresponding to unit translations in the three space directions provide a good numbering for these modes. They are called the wavevector or, equivalently, the momentum of the mode. Spherical waves with an angular dependence according to the spherical harmonics functions — they are decoupled in spherically symmetric situations — are similarly characterized by the fact that the only result of rotating them around the z-axis is a phase shift in their oscillation. Obviously, the phase shift corresponding to a rotation by a unit angle is part of a good numbering for these modes; it is called the magnetic quantum number m (it must be an integer, because a rotation by 360° mustn't have any effect) or, equivalently, the z-component of the orbital angular momentum.

If you consider sharp wavepackets as a system of modes, the position of the wavepacket is a good numbering for the system. In crystallography, the modes are usually numbered by their transformation behaviour (called group representation) in symmetry operations of the crystal, see also symmetry group, crystal system.
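The two characterizations above are easy to verify directly: translating a plane wave exp(ikx) by a distance a only multiplies it by the phase factor exp(ika), and rotating a mode with azimuthal dependence exp(imφ) by an angle α only multiplies it by exp(imα). A small numerical check (the values of k, m and the shifts are arbitrary sample choices):

```python
import cmath

def plane_wave(k, x):
    return cmath.exp(1j * k * x)

def azimuthal_mode(m, phi):
    return cmath.exp(1j * m * phi)

k, a = 2.5, 0.7    # wavevector and translation (sample values)
m, alpha = 3, 0.4  # magnetic quantum number and rotation angle (sample values)

for x in (0.0, 1.1, -2.3):
    shifted = plane_wave(k, x + a)
    rephased = cmath.exp(1j * k * a) * plane_wave(k, x)
    assert abs(shifted - rephased) < 1e-12  # translation = pure phase shift

for phi in (0.0, 0.9, 2.2):
    rotated = azimuthal_mode(m, phi + alpha)
    rephased = cmath.exp(1j * m * alpha) * azimuthal_mode(m, phi)
    assert abs(rotated - rephased) < 1e-12  # rotation = pure phase shift

# m must be an integer: a full turn (alpha = 2*pi) must change nothing.
assert abs(azimuthal_mode(m, 2 * cmath.pi) - azimuthal_mode(m, 0.0)) < 1e-12
```

The phase per unit shift (k, or m) is exactly the quantity used to number the modes.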
The mode numbers thus often refer to physical quantities, called observables, characterizing the modes. For each mode number, you can introduce a mathematical operation, called an operator, that just multiplies a given mode by the mode number value of this mode. This is possible as long as you have chosen a mode system that actually uses and is characterized by the mode number of the operator. Such a system is called a system of eigenmodes, or eigenstates: sharp wavepackets are not eigenmodes of the momentum operator, they are eigenmodes of the position operator. Spherical harmonics are eigenmodes of the magnetic quantum number, decoupled modes are eigenmodes of the energy operator, etc. If you have a superposition of several modes, you just apply the operator to each contribution and add up the results. If you choose a different mode system that doesn't use the mode number corresponding to the operator, you just decompose the given modes into eigenmodes and again add up the results of the operator acting on the contributions.

So if you have a superposition of several eigenmodes, say, a superposition of modes with different frequencies, then you have contributions of different values of the observable, in this case the energy. The superposition is then said to have an indefinite value for the observable; for example, in the tone of a piano note, there is a superposition of the fundamental frequency and the higher harmonics, which are multiples of the fundamental frequency. The contributions in the superposition are usually not equally large, e.g. in the piano note the very high harmonics don't contribute much. Quantitatively, this is characterized by the amplitudes of the individual contributions. If there are only contributions of a single mode number value, the superposition is said to have a definite or sharp value.
If you do a position measurement, the result is the occupation of a very sharp wavepacket, which is an eigenmode of the position operator. These sharp wavepackets look like pointlike objects; they are strongly coupled to each other, which means that they spread out quickly.
In measurements of such a mode number in a given situation, the result is an eigenmode of the mode number, the eigenmode being chosen at random from the contributions in the given superposition. All the other contributions are supposedly eradicated in the measurement — this is called the wave function collapse, and some features of this process are questionable and disputed. The probability of a certain eigenmode being chosen is equal to the absolute square of its amplitude; this is called Born's probability law. This is the reason why the amplitudes of modes in a superposition are called "probability amplitudes" in quantum mechanics. The mode number value of the resulting eigenmode is the result of the measurement of the observable. Of course, if you have a sharp value for the observable before the measurement, nothing is changed by the measurement and the result is certain. This picture is called the Copenhagen interpretation. A different explanation of the measurement process is given by Everett's many-worlds theory; it doesn't involve any wave function collapse. Instead, a superposition of combinations of a mode of the measured system and a mode of the measuring apparatus (an entangled state) is formed, and the further time evolutions of these superposition components are independent of each other (this is called "many worlds").
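Born's probability law as stated here is simple to apply: normalize the superposition, then the probability of obtaining a given eigenmode is the absolute square of its amplitude. A minimal sketch with made-up amplitudes:

```python
import math

# Hypothetical superposition: amplitudes of three eigenmodes of some observable.
amplitudes = [0.6, 0.8j, 0.0]

norm = math.sqrt(sum(abs(c) ** 2 for c in amplitudes))
probs = [abs(c / norm) ** 2 for c in amplitudes]

print(probs)  # approximately [0.36, 0.64, 0.0]
assert abs(sum(probs) - 1.0) < 1e-12

# A mode that does not contribute to the superposition (zero amplitude)
# can never be the outcome of the measurement.
assert probs[2] == 0.0
```

Note that a complex phase on an amplitude (here the factor j) changes nothing about the probabilities; only the magnitudes matter for Born's law.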
As an example: a sharp wavepacket is an eigenmode of the position observable. Thus the result of measurements of the position of such a wavepacket is certain. On the other hand, if you decompose such a wavepacket into contributions of plane waves, i.e. eigenmodes of the wavevector or momentum observable, you get all kinds of contributions of modes with many different momenta, and the results of momentum measurements will scatter accordingly. Intuitively, this can be understood by taking a closer look at a sharp or very narrow wavepacket: since there are only a few spatial oscillations in the wavepacket, only a very imprecise value for the wavevector can be read off (for the mathematically inclined reader: this is a common behaviour of Fourier transforms, the amplitudes of the superposition in the momentum mode system being the Fourier transform of the amplitudes of the superposition in the position mode system). So in such a state of definite position, the momentum is very indefinite. The same is true the other way round: the more definite the momentum is in your chosen superposition, the less sharp the position will be. This is Heisenberg's uncertainty relation.
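The Fourier-transform remark can be checked numerically: for Gaussian wavepackets of different widths, the spread in position times the spread in wavevector stays at the minimum value 1/2, so squeezing the packet in position necessarily broadens its momentum spectrum. A self-contained sketch using a naive discrete Fourier transform (grid size and packet widths are arbitrary choices):

```python
import cmath
import math

def gaussian_packet(sigma, xs):
    """Real-space amplitudes of a Gaussian wavepacket of width sigma."""
    return [math.exp(-x * x / (4 * sigma * sigma)) for x in xs]

def dft(amps):
    """Naive discrete Fourier transform (fine for small grids)."""
    n = len(amps)
    return [sum(amps[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def spread(values, amps):
    """Standard deviation of `values`, weighted by |amplitude|^2."""
    w = [abs(a) ** 2 for a in amps]
    total = sum(w)
    mean = sum(v * p for v, p in zip(values, w)) / total
    var = sum((v - mean) ** 2 * p for v, p in zip(values, w)) / total
    return math.sqrt(var)

n, dx = 256, 0.25
xs = [(j - n // 2) * dx for j in range(n)]
# Discrete wavevector grid, with the upper half mapped to negative frequencies.
ks = [2 * math.pi * (k if k < n // 2 else k - n) / (n * dx) for k in range(n)]

products = []
for sigma in (0.5, 1.0, 2.0):
    psi_x = gaussian_packet(sigma, xs)
    psi_k = dft(psi_x)
    products.append(spread(xs, psi_x) * spread(ks, psi_k))

print([round(p, 2) for p in products])  # each product stays near 0.5
```

Narrower packets give a larger wavevector spread, and vice versa; the product of the two spreads is pinned at the Gaussian minimum of the uncertainty relation.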
Two different mode numbers (and the corresponding operators and observables) that both occur as characteristic features in the same mode system, e.g. the number of nodal lines in one of Chladni's figures in x direction and the number of nodal lines in y-direction or the different position components in a position eigenmode system, are said to commute or be compatible with each other (mathematically, this means that the order of the product of the two corresponding operators doesn't matter, they may be commuted). The position and the momentum are non-commuting mode numbers, because you cannot attribute a definite momentum to a position eigenmode, as stated above. So there is no mode system where both the position and the momentum (referring to the same space direction) are used as mode numbers.
The Schrödinger equation, the Dirac equation etc.
As in the case of acoustics, where the direction of vibration (called polarization), the speed of sound, and the wave impedance of the media in which the sound propagates are important for calculating the frequency and appearance of modes (as seen in Chladni's figures), the same is true for electronic or photonic/electromagnetic modes: in order to calculate the modes (and their frequencies or time evolution) exposed to potentials that attract or repulse the waves or, equivalently, exposed to a change in refractive index and wave impedance, or exposed to magnetic fields, there are several equations depending on the polarization features of the modes:
• Electronic modes (their polarization features are described by Spin 1/2) are calculated by the Dirac equation, or, to a very good approximation in cases where the theory of relativity is irrelevant, by the Schrödinger equation and the Pauli equation.
• Photonic/electromagnetic modes (polarization: Spin 1) are calculated by Maxwell's equations. (You see, the 19th century had already found the first quantum-mechanical equation! That's why it's so much easier to step from electromagnetic theory to quantum mechanics than from point mechanics.)
• Modes of Spin 0 would be calculated by the Klein-Gordon equation.
It is much easier and much more physical to imagine the electron in the atom to be not some tiny point jumping from place to place or orbiting around (there are no orbits, there are orbitals), but to imagine the electron being an occupation of an extended orbital and an orbital being a vibrating wave confined to the neighbourhood of the nucleus by its attracting force. That's why Chladni's figures of acoustics and the normal modes of electromagnetic waves in a resonator are such a good analogy for the orbital pictures in quantum physics. Quantum mechanics is a lot less weird if you see this analogy. The step from electromagnetic theory (or acoustics) to quantum theory is much easier than the step from point mechanics to quantum theory, because in electromagnetics you already deal with waves and modes of oscillation and solve eigenvalue equations in order to find the modes. You just have to treat a single electron like a wave, just in the same way as light is treated in classical electromagnetics.
In this picture, the only difference between classical physics and quantum physics is that in classical physics you can excite the modes of oscillation to a continuous degree, called the classical amplitude, while in quantum physics, the modes are "occupied" discretely: fermionic modes can be occupied only once at a given time, while bosonic modes can be occupied several times at once. Particles are just occupations of modes, no more, no less. As there are superpositions of modes in classical physics, you get in quantum mechanics quantum superpositions of occupations of modes, and the scaling and phase-shifting factors are called (quantum) amplitudes. In a carbon atom, for example, you have a combination of occupations of 6 electronic modes of low energy (i.e. frequency). Entangled states are just superpositions of combinations of occupations of modes. Even the states of quantum fields can be completely described in this way (except for hypothetical topological defects).
As you can choose different kinds of modes in acoustics and electromagnetics, for example plane waves, spherical harmonics or small wave packets, you can do so in quantum mechanics. The modes chosen will not always be decoupled: for example, if you choose plane waves as the system of acoustic modes in the resonance body of a guitar, reflections at the walls will scatter modes into different modes, i.e. you have coupled oscillators and you have to solve a coupled system of linear equations in order to describe the system. The same is done in quantum mechanics: different systems of eigenfunctions are just a new name for the same concept. Energy eigenfunctions are decoupled modes, while eigenfunctions of the position operator (delta-like wavepackets) or eigenfunctions of the angular momentum operator in a non-spherically symmetric system are usually strongly coupled.
What happens in a measurement depends on the interpretation: In the Copenhagen interpretation you need to postulate a collapse of the wavefunction to some eigenmode of the measurement operator, while in Everett's Many-worlds theory an entangled state, i.e. a superposition of occupations of modes of the observed system and the observing measurement apparatus, is formed.
The formalism of quantum mechanics and quantum field theory
In Dirac's formalism, superpositions of occupations of modes are designated as state vectors or states, written as |ψ⟩ (ψ being the name of the superposition), the single occupation of the mode k by |1_k⟩ or just |k⟩. The vacuum state, i.e. the situation devoid of any occupations of modes, is written as |0⟩. Since the superposition is a linear operation, i.e. it only involves multiplication by complex numbers and addition, as in

|ψ⟩ = c₁|k₁⟩ + c₂|k₂⟩

(a superposition of the single occupations of mode k₁ and mode k₂ with the amplitudes c₁ and c₂, respectively), the states form a vector space (i.e. they are analogous to vectors in Cartesian coordinate systems). The operation of creating an occupation of a mode k is written as a generator a†_k (for photons) and b†_k (for electrons), and the destruction of the same occupation as a destructor a_k and b_k, respectively. A sequence of such operations is written from right to left (the order matters): In a†_k b†_2 b_1 |ψ⟩ an occupation of the electronic mode 1 is moved to the electronic mode 2 and a new occupation of the electromagnetic mode k is created — obviously, this reshuffling formula represents the emission of a photon by an electron changing its state. c₁ a†_k b†_2 b_1 |ψ⟩ + c₂ a†_k′ b†_2 b_1 |ψ⟩ is the superposition of two such processes differing in the final mode of the photon (k versus k′) with the amplitudes c₁ and c₂, respectively.
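The remark that the order of such operations matters can be illustrated with a truncated matrix representation of a single bosonic mode (occupation numbers 0 to 3 only; this finite truncation and the variable names are illustrative choices, not the full infinite-dimensional operators):

```python
import math

DIM = 4  # truncated Fock space: occupation numbers 0..3 (illustration only)

def matmul(a, b):
    """Multiply two DIM x DIM matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(DIM)) for j in range(DIM)]
            for i in range(DIM)]

# Generator (creation operator): <n+1| a_dagger |n> = sqrt(n+1).
create = [[math.sqrt(j + 1) if i == j + 1 else 0.0 for j in range(DIM)]
          for i in range(DIM)]
# Destructor (annihilation operator) is its transpose: <n-1| a |n> = sqrt(n).
destroy = [[create[j][i] for j in range(DIM)] for i in range(DIM)]

number = matmul(create, destroy)          # a_dagger a: counts the occupations
reversed_order = matmul(destroy, create)  # a a_dagger: a different operator

print([round(number[i][i], 6) for i in range(DIM)])  # [0.0, 1.0, 2.0, 3.0]
assert number != reversed_order  # the order of the operations matters
```

Reading the product from right to left, a†a first destroys and then recreates an occupation, which yields the occupation count n on each state |n⟩; the reversed order gives a different operator, which is why sequences of generators and destructors must not be reordered freely.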
If the mode numbers are more complex — e.g. in order to describe an electronic mode of a Hydrogen atom (i.e. an orbital) you need the 4 mode numbers n, l, m, s — the occupation of such a mode is written as |n,l,m,s⟩ or a†_{n,l,m,s}|0⟩ (in words: the situation after creating an occupation of mode (n, l, m, s) in the vacuum). If you have two occupations of different orbitals, you might write

|n,l,m,s; n′,l′,m′,s′⟩ or a†_{n,l,m,s} a†_{n′,l′,m′,s′}|0⟩.
It is important to distinguish such a double occupation of two modes from a superposition of single occupations of the same two modes, which is written as
c₁|n,l,m,s⟩ + c₂|n′,l′,m′,s′⟩ or (c₁ a†_{n,l,m,s} + c₂ a†_{n′,l′,m′,s′})|0⟩.
But superpositions of multiple occupations are also possible, even superpositions of situations with different numbers or different kinds of particles.
Tim Maudlin
The Metaphysics Within Physics
Tim Maudlin, The Metaphysics Within Physics, Oxford University Press, 2007, 197pp., $49.95 (hbk), ISBN 9780199218219.
Reviewed by Richard Healey, University of Arizona
This brief but fertile volume develops and defends the basic idea that "metaphysics, in so far as it is concerned with the natural world, can do no better than to reflect on physics." It consists of six essays sandwiched between an introduction and an epilogue. Though written independently over more than fifteen years, in combination they offer a unified blueprint for the construction of a metaphysics based on physics. Maudlin proposes to build on a foundation in which laws of nature and a directed time are assumed as primitives which generate the cosmic pattern of events -- observable or not. Physical modality follows readily, but (he argues) physics does not itself employ a notion of causation. So causal and counterfactual locutions are fit candidates for an analysis that will supplement physical law with pragmatic factors, while metaphysical possibility is suspect beyond the bounds of physical possibility.
In the first essay, Maudlin advocates the view that laws of nature should be taken as primitive, and then uses them both to analyze many counterfactual locutions and to ground the fundamental dynamical explanations so prized in science. He defends the superiority of his view over rival proposals of David Lewis and Bas van Fraassen, among others. Lewis analyzed natural laws as those generalizations that figure in all theoretical systematizations of empirical truths that best combine strength and simplicity. Maudlin objects that this analysis rides roughshod over the intuition that some such generalizations could fail to be laws in worlds that we should follow scientists in deeming physically possible. Van Fraassen argued that laws of nature are of no philosophical significance, and may be eliminated in favor of models in a satisfactory analysis of science. Maudlin counters that this deprives one of the resources to say how cutting down its class of models can enhance a theory's explanatory power, a phenomenon that is readily accounted for when one takes a theory's model class as well as its explanatory power to derive from its constituent laws.
Laws of Temporal Evolution (LOTEs) are of special philosophical significance for Maudlin. Besides grounding dynamical explanations (as well as some laws of coexistence), they figure prominently in his accounts of propensities, counterfactuals and causation. He distinguishes some laws of temporal evolution as fundamental (modeled on Newton's second law and the Schrödinger equation) from other special laws that hold only in the absence of interference (such as laws of population biology). Fundamental Laws of Time Evolution (FLOTEs) are involved in a 3-step procedure for the evaluation of many types of counterfactuals. First, one selects a relevant time (technically, a Cauchy surface); then one responds to a command implicit in the antecedent to alter the state of the world at that time in more or less specific ways; finally, one applies FLOTEs to determine a second state of the world at another (usually later) time: the counterfactual is evaluated positively (as true or otherwise acceptable) if and only if the consequent is true in the second state of the world. It is because this procedure involves pragmatic factors and background knowledge in addition to the FLOTEs that its results may be uncertain or even indeterminate.
If a relevant FLOTE is stochastic rather than deterministic, multiple second states may emerge at the final step. Maudlin introduces a notion of infection to handle these. Roughly, a second state is infected iff the modifications at step one induce alterations in how FLOTEs produce that state. He suggests that evaluation of a counterfactual ignores uninfected second states that differ from the actual world only through differing in the outcome of a stochastic FLOTE (SLOTE), while acknowledging that this suggestion flouts some people's intuitions. As for propensities, he takes these not to ground stochastic laws, but to follow from them: a propensity for a certain outcome exists just in case a SLOTE delivers an appropriately converging sequence of probabilities as one applies it at times closer and closer to the time at which that outcome might occur.
Chapter two questions the motivation behind Lewis's influential doctrine of Humean supervenience, according to which the laws of nature, along with everything else, supervene on the local distribution of basic qualities. Maudlin decomposes the doctrine into two subdoctrines he calls Separability and Physical Statism. Separability maintains that the complete physical state of the world is determined by the intrinsic physical state at each spacetime point and the spatio-temporal relations between those points: according to Physical Statism, all facts about the world, including modal and nomological facts, are determined by its total physical state. Physics, not metaphysics, decides the fate of Separability. Maudlin argues (pace Einstein as well as Lewis) that the support it received from classical physics has been decisively withdrawn by quantum mechanics, with the entanglement of systems that Schrödinger called the characteristic trait of that theory.
Humean supervenience requires that modal properties, law, causal connections and chance all supervene on the total physical state of the world. Does this much Physical Statism derive support from physics? Not according to Maudlin. He maintains that the total physical state of the world provides a promising supervenience base for physical possibility, counterfactuals, causal connections and chances (insofar as each of these is objective), given the physical laws. But, he argues, while it accords with actual scientific practice to regard them so, it flies in the face of scientific practice to take the laws themselves to be determined by the total physical state of the world. This argument parallels a similar argument from the first essay: they are both subject to the same objection.
Here's the argument. Assume that every model of a set of laws represents a possible way for a world governed by those laws to be. Then each of two incompatible sets of laws may have a model that represents the same total physical state of the world as possible. (Indeed, two incompatible stochastic theories may have identical sets of models, agreeing on every possible total physical state of the world, disagreeing only on their constituent probabilities.) Now it is impossible for a single world to be governed by incompatible laws. Symmetry therefore suggests that a world deemed possible by incompatible laws be governed by neither set. But how can one maintain that laws cannot obtain in a world that is a model of those laws, and hence allowed by them? To avoid this threatened reductio, one must admit that which laws obtain at a world is not determined by the total physical state of that world.
A defender of Physical Statism has a natural reply. By assumption, any laws supervene on the total physical state of some world W. A world W* deemed possible by the laws of W is one whose total physical state determines no regularities that conflict with those laws. But these regularities need not be laws of W*: W*'s laws supervene on its total physical state, not on the total state of W. The metaphor of "governance" is inappropriate: a world deemed possible by laws need not be a world where these are laws, though it must be a world where they "obtain" in the weak sense that their underlying regularities are there respected.
Doubtless Maudlin would object that this reply flouts scientific practice. A physicist must abstract the laws from data provided by the actual world, but, once abstracted, regards them as "floating free" of that world, and so holding by fiat in each situation they deem possible. But this attitude may be squared with Physical Statism. For a scientific interest in physical possibility is limited to applications of laws to the actual world. The Schwarzschild solution represents a scientifically interesting possible General Relativistic world because it can be used approximately to model a system like a planet, star, or other local feature of the actual world. In such employment, of course the system's behavior will be "governed" by the laws of general relativity, insofar as these are assumed to hold in the actual world. If asked whether an infinite, empty Minkowski spacetime is "governed" by the laws of Special or General Relativity (or perhaps some other theory), the practicing scientist should decline to answer, on pain of turning metaphysician.
In chapter five, Maudlin uses hypothetical FLOTEs to sketch a novel approach to causation, in opposition to counterfactual analyses. He constructs two test cases to argue that knowledge that C caused E need neither yield nor require knowledge that if C had not occurred, then E would not have happened (or other more complex candidates for a counterfactual analysis of causation). Then he sketches an account of how laws enter into the evaluation of causal claims.
The key to this account is a basic division between quasi-Newtonian LOTEs and the rest. LOTEs are quasi-Newtonian iff they both prescribe undisturbed behavior and specify how disturbances perturb such behavior: such disturbances then count as the causes of the perturbed behavior. If the applicable laws admit no natural division between disturbed and undisturbed behavior, then we must fall back on a notion of a complete cause -- an earlier state of the world sufficient to prescribe (perhaps stochastically) the subsequent development of a system.
For Maudlin, lawlike generalizations of the special sciences apply to systems only by virtue of, and to the extent permitted by, physical laws. But the basic division applies to all LOTEs. "Those special sciences that manage to employ taxonomies with quasi-Newtonian lawlike generalizations can be expected to support particularly robust judgments about causes." But Maudlin uses an example of McDermott to argue that when we can carve up a situation in different ways to apply alternative quasi-Newtonian lawlike generalizations our causal judgments are likely to waver, even though each partition licenses the same counterfactuals. And he despairs of any adequate analysis of remote causation, where nothing less than complete causes could play the role of antecedents to reliable lawlike generalizations. Whether the world has a rich causal structure at the fundamental level depends on whether the laws of physics take quasi-Newtonian form. But the physical laws need not fulfill a metaphysician's yearning for causes.
In chapter four, Maudlin argues that time passes: along with primitive physical laws, time's passage completes what he calls his anti-Humean metaphysical package. For him the passage of time is neither a mere psychological phenomenon nor an a priori metaphysical truth. Rather, we should believe that time passes because that's what ordinary experience suggests the physical world is like, and nothing in our best physics currently tells us otherwise. But what does this belief amount to?
Maudlin tells us that the passage of time is an intrinsic asymmetry in the temporal structure of the world with no spatial counterpart. Given a classical space-time theory (Newtonian or relativistic), one can represent such an asymmetry by assuming a primitive temporal orientation -- a partition of the time-like vectors at each space-time point into two disjoint sets in a way that varies smoothly from point to point (at least locally), together with a designation of one set as future-directed, the other as past-directed. This assumption is consistent with the metaphysics of a B-theorist who believes in a "block universe" (as Maudlin says he does). Metaphysical proponents of a "dynamical" time would likely refuse to accept it as an expression of the robust sense of passage to which they are committed. (Some A-theorists may even have trouble stating the assumption, given their ontological qualms about future events.) And Maudlin seems to join company with them when he writes that "the passage of time connotes more than just an intrinsic asymmetry: not just any asymmetry would produce passing"; and "The passage of time underwrites claims about one state 'coming out of' or 'being produced from' another". But he admits that time flows only in a metaphorical sense, while seemingly committed to the literal truth of time's passage. The subtlety of this distinction has this reviewer scratching his head!
Maudlin sets out to refute logical, physical and epistemological objections to the view that time passes, culling many of these from Huw Price's influential Time's Arrow and Archimedes' Point. While he scores a few points in the ensuing philosophical brawl, I would call the contest at best a tie; at worst, it is marred by persistent confusion as to what exactly is being fought over. He then presents a case in favor of time's passage. Even here, the case is partly negative. Where Gödel denied that time could pass in a space-time with no foliation by spacelike hypersurfaces, Maudlin counters that the passage of time entails only a preferred temporal orientation. He objects to attempts to analyze change without the passage of time because they cannot account for the directionality of change: attempts to ground this in entropy increase fail.
Besides highlighting the time-asymmetry physicists acknowledge in the laws applicable to esoteric weak-interaction phenomena, Maudlin does offer one interesting physics-based argument for time's passage. Statistical physics explains pervasive asymmetries in our world by postulating an early state that is macroscopically atypical but microscopically typical. Only by supposing that later states are produced by such a state can one explain why later microscopic states are atypical, as statistical physics requires. But for a Humean opponent, it is a contingent aspect of the Humean mosaic that it permits such temporally asymmetric explanations, and another contingent fact that it features creatures like us able to exploit them to good physical (but bad metaphysical!) ends. Still, for Maudlin, arguments from physics remain secondary to what he takes to be our manifest experience of the objective passage of time. Doubtless we all experience world history as one damn thing after another: but this seems an unlikely premise on which to base a significant metaphysical conclusion.
In chapter three, Maudlin locates suggestions for deep metaphysics in the gauge theories of contemporary science. He argues that while a metaphysics of substance and universals may arise as a natural projection of the structure of language onto the world, theories such as the chromodynamics that high energy physicists use to treat the strong interactions among quarks favor a rival, novel ontology suggested by the way in which they apply the mathematics of fiber bundles. Maudlin first argues that not even spatiotemporal relations (arguably the best candidates for external relations) are what he calls 'metaphysically pure' (which I take to be a synonym of the -- equally tricky -- term 'intrinsic'). The argument is that what distance relations obtain between a pair of points depends on the existence and nature of the continuous paths that link them through other points. Next he uses the example of a plane non-Euclidean geometry modeled by the surface of a sphere to argue that whether a pair of vectors ('arrows') attached at different places point in the same direction depends on how one thinks of transporting one vector to the location of the other along some continuous curve linking the two places. The conclusion -- that pointing in the same direction is not a metaphysically pure internal relation -- is then extended to the abstract vectors that contemporary gauge theories use to represent the matter fields associated with quarks and other leptons. He concludes that to refer to a quark as red (as physicists applying chromodynamics are whimsically wont to do) is not to say that it bears a relation of color similarity to other red quarks, since the theory posits no such metaphysically pure relation. Whether two quarks will count as having the same color depends on what space-time path one chooses to connect the space-time locations associated with them. What physicists call color charge is simply not an intrinsic property of quarks, or anything else. 
"Fiber bundles provide new mathematical structures for representing physical states, and hence a new way to understand physical ontology."
I heartily endorse Maudlin's declaration that "Empirical science has produced more astonishing suggestions about the fundamental structure of the world than philosophers have been able to invent, and we must attend to those suggestions." But if contemporary gauge theories do have any clear suggestions for ontology, Maudlin's is not among them -- or so I have argued in my Gauging What's Real (Oxford: 2007). First, it is not clear how to reconcile the quantum field theories of quantum chromodynamics with a fundamental ontology that includes the quarks whose behavior physicists take them to describe (a point to which some of Maudlin's remarks suggest he is sensitive). More importantly, taking a gauge field such as the (quantized) electromagnetic field to be a connection on a fiber bundle is more than just a category mistake of the kind that Maudlin warns us against in chapter five: it is to ignore the element of conventionality involved in choosing one out of a continuum of gauge-equivalent connections, each grounding a different path-dependent notion of color-similarity. Classical gauge theories, at least, suggest an ontology in which properties are ascribed to extended loops, in violation of Separability, but still in conformity to a substance/universal ontology, though one of a radically unfamiliar kind.
This is an elegantly written and enormously stimulating book. It is full of original, provocative, philosophical argumentation. Maudlin shows by example what it is to do the best kind of naturalized metaphysics: one based on thorough acquaintance with real science, but unwilling to accept a superficial analysis of how it bears on deep philosophical problems. Every metaphysician should read it and emulate Maudlin's method, even when disagreeing with his conclusions. |
Copenhagen interpretation
From Wikiquote
The Copenhagen interpretation is a loosely-knit informal collection of axioms or doctrines that attempt to express in everyday language the mathematical formalism of quantum mechanics. The interpretation was largely devised in the years 1925–1927 by Niels Bohr and Werner Heisenberg.
• The Copenhagen interpretation is a very ambiguous term. Some people use it just to mean the sort of practical quantum mechanics that you can do — like you can ride a bicycle without really knowing what you're doing. It's the rules for using quantum mechanics and the experience that we have in using it. […] Then there's another side to the Copenhagen interpretation, which is a philosophy of the whole thing. It tries to be very deep and tell you that these ambiguities, which you worry about, are somehow irreducible. It says that ambiguities are in the nature of things. We, the observers, are also part of nature. It's impossible for us to have any sharp conception of what is going on, because we, the observers, are involved. And so there is this philosophy, which was designed to reconcile people to the muddle: you shouldn't strive for clarity — that's naive.
• Bohr’s principle of complementarity – the heart of the Copenhagen philosophy – implies that quantum phenomena can only be described by pairs of partial, mutually exclusive, or ‘complementary’ perspectives. Though simultaneously inapplicable, both perspectives are necessary for the exhaustive description of phenomena. Bohr aspired to generalize complementarity into all fields of knowledge, maintaining that new epistemological insights are obtained by adjoining contrary, seemingly incompatible, viewpoints.
[...] The value of Bohr’s philosophy for the advancement of physics is controversial. His followers consider complementarity a profound insight into the nature of the quantum realm. Others consider complementarity an illuminating but superfluous addendum to quantum theory. More severe is the opinion that Bohr’s philosophy is an obscure ‘web of words’ and mute on crucial foundational issues.
• Mara Beller, "Bohr, Niels (1885-1962)", Routledge Encyclopedia of Philosophy
• In recent years the debate on these ideas has reopened, and there are some who question what they call "the Copenhagen interpretation of quantum mechanics"—as if there existed more than one possible interpretation of the theory.
• Rudolf Peierls, Surprises in Theoretical Physics (1979), Ch. 1. General Quantum Mechanics
• If one follows the great difficulty which even eminent scientists like Einstein had in understanding and accepting the Copenhagen interpretation... one can trace the roots... to the Cartesian partition... It will take a long time for it [this partition] to be replaced by a really different attitude toward the problem of reality.
• Maxel, you know I love you and nothing can change that. But I do need to give you once a thorough head washing. So stand still. The impudence with which you assert time and again that the Copenhagen interpretation is practically universally accepted, assert it without reservation, even before an audience of the laity—who are completely at your mercy—it’s at the limit of the estimable […]. Have you no anxiety about the verdict of history? Are you so convinced that the human race will succumb before long to your own folly?
• Erwin Schrödinger, Letter to Max Born (October 10, 1960), quoted in Walter John Moore, A Life of Erwin Schrödinger (1994), p. 342
• As Bohr acknowledged, in the Copenhagen interpretation a measurement changes the state of a system in a way that cannot itself be described by quantum mechanics. […] In quantum mechanics the evolution of the state vector described by the time-dependent Schrödinger equation is deterministic. If the time-dependent Schrödinger equation described the measurement process, then whatever the details of the process, the end result would be some definite state, not a number of possibilities with different probabilities. This is clearly unsatisfactory. If quantum mechanics applies to everything, then it must apply to a physicist’s measurement apparatus, and to physicists themselves. On the other hand, if quantum mechanics does not apply to everything, then we need to know where to draw the boundary of its area of validity. Does it apply only to systems that are not too large? Does it apply if a measurement is made by some automatic apparatus, and no human reads the result?
• Steven Weinberg, Lectures on Quantum Mechanics (2012), Ch. 3 : General Principles of Quantum Mechanics
• I have always felt bitter about the way how Bohr’s authority together with Pauli’s sarcasm killed any discussion about the fundamental problems of the quantum. [...] I expect that the Copenhagen interpretation will some time be called the greatest sophism in the history of science, but I would consider it a terrible injustice if—when some day a solution should be found—some people claim that ‘this is of course what Bohr always meant’, only because he was sufficiently vague.
Time-independent Schrödinger equation
What is time independent Schrodinger equation?
The time-independent Schrödinger equation in one dimension is of the form −(ħ²/2m) d²ψ/dx² + U(x)ψ(x) = Eψ(x), where U(x) is the potential energy and E represents the system energy. It has a number of important physical applications in quantum mechanics.
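To make this concrete, the equation can be solved numerically: discretizing the second derivative on a grid turns it into a matrix eigenvalue problem. The sketch below is an illustration added here (a standard finite-difference method, not part of the original answer), using the harmonic-oscillator potential U(x) = x²/2 with ħ = m = 1, for which the exact energies are E_n = n + 1/2:

```python
import numpy as np

# Solve -(1/2) psi'' + U(x) psi = E psi on a grid (units with hbar = m = 1).
# Central differences for psi'' give a symmetric tridiagonal Hamiltonian
# whose lowest eigenvalues approximate the bound-state energies.

N = 1000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

U = 0.5 * x**2  # harmonic oscillator potential

H = (
    np.diag(1.0 / dx**2 + U)                          # diagonal: kinetic + potential
    + np.diag(-0.5 / dx**2 * np.ones(N - 1), k=1)     # off-diagonal kinetic terms
    + np.diag(-0.5 / dx**2 * np.ones(N - 1), k=-1)
)

E = np.linalg.eigvalsh(H)
print(E[:3])  # close to the exact values 0.5, 1.5, 2.5
```

Any potential can be tried by changing the U array; accuracy improves as the grid is refined and the box widened.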
What is M in Schrodinger equation?
…where m is the mass of the particle, V(x,t) is the potential energy function of the system, i again represents the square root of −1, and the constant ħ is defined as ħ = h/2π (equation (2.4)). Equation (2.3) is known as the time-dependent Schrödinger (wave) equation.
What is Schrodinger equation in chemistry?
The Schrödinger equation, sometimes called the Schrödinger wave equation, is a partial differential equation. It uses the concept of energy conservation (Kinetic Energy + Potential Energy = Total Energy) to obtain information about the behavior of an electron bound to a nucleus.
Why is there an I in the Schrodinger equation?
The imaginary constant i appears in the original Schroedinger article (I) for positive values of the energy, which therefore are discarded by Schrödinger, who wants real eigenvalues and requires negative energy.
What is Schrodinger’s law?
In Schrodinger’s imaginary experiment, you place a cat in a box with a tiny bit of radioactive substance. Now, the decay of the radioactive substance is governed by the laws of quantum mechanics. This means that the atom starts in a combined state of “going to decay” and “not going to decay”.
What are the applications of Schrodinger equation?
Schrödinger’s equation offers a simple way to find the previous Zeeman–Lorentz triplet. This proves once more the broad range of applications of this equation for the correct interpretation of various physical phenomena such as the Zeeman effect.
What is de Broglie equation?
In 1924, French scientist Louis de Broglie (1892–1987) derived an equation that described the wave nature of any particle. Particularly, the wavelength (λ) of any moving object is given by: λ = h/(mv). In this equation, h is Planck’s constant, m is the mass of the particle in kg, and v is the velocity of the particle in m/s.
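For a sense of scale, the relation can be evaluated directly. The short sketch below (an illustrative calculation, not from the original text) finds the de Broglie wavelength of an electron moving at 10⁶ m/s:

```python
h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron mass, kg
v = 1.0e6        # speed, m/s

lam = h / (m_e * v)  # de Broglie wavelength, in meters
print(lam)           # about 7.3e-10 m, i.e. roughly 0.7 nm
```

This is comparable to atomic spacings, which is why electron beams diffract from crystals while everyday objects show no measurable wave behavior.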
Can Schrodinger equation be derived?
It is not possible to derive it from anything you know. It came out of the mind of Schrödinger. The foundation of the equation is structured to be a linear differential equation based on classical energy conservation, and consistent with the De Broglie relations.
Is the cat alive or dead?
In the thought experiment, until the box is opened the cat is described as being in a superposition of both states, alive and dead at once; only when an observation is made does the situation resolve into one definite outcome.
What is the equation for quantum physics?
The Schrödinger equation is the fundamental equation of physics for describing quantum mechanical behavior. It is also often called the Schrödinger wave equation, and is a partial differential equation that describes how the wavefunction of a physical system evolves over time.
What is the formula of wave function?
Schrödinger saw that for an object with E = hν (the Planck relation, where E equals energy and h is Planck’s constant), and λ = h/p (the de Broglie wavelength, where p is momentum), the classical wave expression can be rewritten as a quantum wave function, ψ(x, t) = A e^(i(px − Et)/ħ). This is the quantum wave function.
Dedication is a more important sign of integrity than enthusiasm. It is necessary to have faith in a pathway and clear away doubts to ascertain if they are realistic or merely forms of resistance. A seeker should have the security and support of inner certainty and firm conviction that are consequent to study, personal research, and investigation. Thus, a pathway should be intrinsically reconfirming by discovery and inner experience. A true pathway unfolds, is self-revelatory, and is subject to experiential reconfirmation.
Daily Reflections from Dr. David R. Hawkins: 365 Contemplations on Surrender, Healing, and Consciousness, pg. 23.
The source of pain is not the belief system itself but one’s attachment to it and the inflation of its imaginary value. The inner processing of attachments is dependent on the exercise of the will, which alone has the power to undo the mechanism of attachment by the process of surrender. This may be subjectively experienced or contextualized as sacrifice, although it is actually a liberation. The emotional pain of loss arises from the attachment itself and not from the “what” that has been lost.
Daily Reflections from Dr. David R. Hawkins, pg. 123
This latest book can be purchased through Amazon.
Care Instead of Fear
Each of us has within us a certain reservoir of suppressed and repressed fear. This quantity of fear spills into all areas of our life, colors all of our experience, decreases our joy in life, and reflects itself in the musculature of the face so as to affect our physical appearance, our physical strength, and the condition of health in all of the organs in the body. Sustained and chronic fear gradually suppresses the body’s immune system. … Although we know that it is totally damaging to our relationships, health, and happiness, we still hang on to fear. Why is that?
We have the unconscious fantasy that fear is keeping us alive; this is because fear is associated with our whole set of survival mechanisms. We have the idea that if we were to let go of fear, our main defense mechanism, we would become vulnerable in some way. In Reality, the truth is just the opposite. Fear is what blinds us to the real dangers of life. In fact, fear itself is the greatest danger that the human body faces. It is fear and guilt that bring about disease and failure in every area of our lives.
We could take the same protective actions out of love rather than out of fear. Can we not care for our bodies because we appreciate and value them, rather than out of fear of disease and dying? Can we not be of service to others in our life out of love, rather than out of fear of losing them? Can we not be polite and courteous to strangers because we care for our fellow human beings, rather than because we fear losing their good opinion of us? … Can we not perform our job well because we care about the recipients of our services, rather than just the fear of losing our jobs or pursuing our own ambition? Can we not accomplish more by cooperation, rather than fearful competition? …On a Spiritual level, isn’t it more effective if, out of compassion and identification with our fellow human beings, we care for them, rather than trying to love them out of fear of God’s punishment if we don’t?
Letting Go: The Pathway of Surrender, Ch. 6, pg. 99-100
Truth is Non-Predictive
Just on the physical level, we saw from the Heisenberg principle that the state of the universe, as it is now, which we can define by the Schrödinger equation, is changed by merely observing it. Because what happens is you collapse the wave function from potentiality to actuality. You now have a new reality. In fact, you have to use different mathematical formulas, like the Dirac equation. So, you’ve gone from potential into actuality. That transformation does not occur without interjection of consciousness. Consequently, a thing could stand as a potentiality for thousands of years. Along comes somebody who looks at it differently, and bang, it becomes an actuality. So the unmanifest then becomes the manifest as the consequence of creation. Therefore, predicting the future is impossible because you would have to know the mind of God, because creation is the unfolding of potentiality, depending on local conditions and intention. You have no idea what intention is. Intention can change one second from now. If the future was predictable, there would be no point to human existence because there would be no karmic benefit, no gain or capacity to undo that which is negative. It would be confined to what is called predestination. Predestination and predictions of the future miss the whole purpose of existence and jump the whole understanding of the evolution of consciousness. There would be no karmic merit nor demerit. There would be no salvation. There would be no heavens. There would be no stratifications of levels of consciousness. We would all just emerge perfectly in a perfect realm. And therefore, there would be no purpose to this life at all.
The Wisdom of Dr. David R. Hawkins, Ch. 6, pg. 102
Note: This book is available through Amazon (The Wisdom of Dr. David R. Hawkins: Classic Teachings on Spiritual Truth and Enlightenment, ISBN 9781401964979) or through Hay House, Inc.
Greater Freedom
Spiritual reality is a greater source of pleasure and satisfaction than the world can supply. It is endless and always available in the present instead of the future. It is actually more exciting because one learns to live on the crest of the current moment, instead of on the back of the wave (which is the past) or on the front of the wave (which is the future). There is greater freedom from living on the exciting knife-edge of the moment than being a prisoner of the past or having expectations of the future.
Lie-algebraic discretization of differential equations
Yu. F. Smirnov and Alexander V. Turbiner, "Lie-algebraic discretization of differential equations", Modern Physics Letters A
A certain representation for the Heisenberg algebra in finite difference operators is established. The Lie algebraic procedure of discretization of differential equations with isospectral property is proposed. Using sl2-algebra based approach, (quasi)-exactly-solvable finite difference equations are described. It is shown that the operators having the Hahn, Charlier and Meissner polynomials as the eigenfunctions are reproduced in the present approach as some particular cases. A discrete version…
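The starting point, a finite-difference realization of the Heisenberg relation [P, M] = 1, can be checked numerically. The sketch below uses a standard umbral-calculus-style realization (an assumption for illustration; the paper's exact representation may differ), with P the forward-difference operator and M f(x) = x f(x − d):

```python
import numpy as np

# Finite-difference realization of the Heisenberg algebra [P, M] = 1:
#   P f(x) = (f(x + d) - f(x)) / d   (forward difference)
#   M f(x) = x f(x - d)              (multiply by x after a backward shift)
# On the uniform lattice x_j = j*d these become matrices, and the
# commutator P M - M P equals the identity away from the grid edge.

d = 0.1
n = 50
x = d * np.arange(n)

shift_up = np.eye(n, k=1)          # T f(x) = f(x + d)
P = (shift_up - np.eye(n)) / d     # forward difference
M = np.diag(x) @ np.eye(n, k=-1)   # M f(x) = x f(x - d)

C = P @ M - M @ P
assert np.allclose(C[:-1, :-1], np.eye(n - 1))  # [P, M] = 1 on the lattice
print("commutator equals the identity on the interior")
```

Only the last row of C deviates, because the shift runs off the finite grid; on a semi-infinite lattice, or acting on polynomials, the relation holds exactly.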
Umbral calculus, difference equations and the discrete Schrödinger equation
In this paper, we discuss umbral calculus as a method of systematically discretizing linear differential equations while preserving their point symmetries as well as generalized symmetries. The…
Discretization of nonlinear evolution equations over associative function algebras
A general approach is proposed for discretizing nonlinear dynamical systems and field theories on suitable functional spaces, defined over a regular lattice of points, in such a way that…
Linear operators with invariant polynomial space and graded algebra
The irreducible, finite-dimensional representations of the graded algebras osp(j,2) (j=1,2,3) are expressed in terms of differential operators. Some quantum deformations of these algebras are shown…
Bethe ansatz solutions to quasi exactly solvable difference equations
Bethe ansatz formulation is presented for several explicit examples of quasi exactly solvable difference equations of one degree of freedom which are introduced recently by one of the present…
Discrete Differential Geometry and Lattice Field Theory
We develop a difference calculus analogous to differential geometry by translating the forms and exterior derivatives to similar expressions with difference operators, and apply the results to…
Dolan–Grady relations and noncommutative quasi-exactly solvable systems
We investigate a U(1) gauge invariant quantum mechanical system on a 2D noncommutative space with coordinates generating a generalized deformed oscillator algebra. The Hamiltonian is taken as a…
Canonical commutation relation preserving maps
We study maps preserving the Heisenberg commutation relation ab - ba = 1. We find a one-parameter deformation of the standard realization of the above algebra in terms of a coordinate and its dual…
Quasi-Exactly Solvable Hamiltonians related to Root Spaces
An sl(2)-Quasi-Exactly-Solvable (QES) generalization of the rational A_n, BC_n, G_2, F_4, E_{6,7,8} Olshanetsky-Perelomov Hamiltonians, including the many-body Calogero Hamiltonian, is found. This…
Heisenberg algebra, umbral calculus and orthogonal polynomials
Umbral calculus can be viewed as an abstract theory of the Heisenberg commutation relation [P,M]=1. In ordinary quantum mechanics, P is the derivative and M the coordinate operator. Here, we shall…
A certain notion of canonical equivalence in quantum mechanics is proposed. It is used to relate quantal systems with discrete ones. Discrete systems canonically equivalent to the celebrated harmonic…
Quasi-exactly-solvable problems andsl(2) algebra
Recently discovered quasi-exactly-solvable problems of quantum mechanics are shown to be related to the existence of the finite-dimensional representations of the group SL(2,Q), where Q = R, C. It is…
Lie-algebras and linear operators with invariant subspaces
A general classification of linear differential and finite-difference operators possessing a finite-dimensional invariant subspace with a polynomial basis (the generalized Bochner problem) is given.
Classical Orthogonal Polynomials of a Discrete Variable
The basic properties of the polynomials p_n(x) that satisfy the orthogonality relations $$ \int_a^b p_n(x)\,p_m(x)\,\rho(x)\,dx = 0 \quad (m \ne n) $$ (2.0.1) hold also for the polynomials…
Turbiner, “Quasi-exactly-solvable problems and sl(2,R) algebra”
• Comm. Math. Phys.; Journ. Phys. A
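The discrete orthogonality underlying these constructions can be verified directly for the Charlier polynomials mentioned in the abstract. The sketch below uses standard facts from the classical theory (the hypergeometric series C_n(x; a) = 2F0(−n, −x; ; −1/a) and the Poisson-weight orthogonality), not formulas taken from the paper itself:

```python
import math

def charlier(n, x, a):
    """Charlier polynomial C_n(x; a) = 2F0(-n, -x; ; -1/a), as a finite sum."""
    total = 0.0
    for k in range(n + 1):
        pn = math.prod(range(-n, -n + k))  # rising factorial (-n)_k
        px = math.prod(range(-x, -x + k))  # rising factorial (-x)_k
        total += pn * px / math.factorial(k) * (-1.0 / a) ** k
    return total

a = 1.0
X = 60  # truncation of the infinite lattice sum; Poisson weights decay fast

def inner(m, n):
    """Discrete inner product sum_x (a^x / x!) C_m(x; a) C_n(x; a)."""
    return sum(a**x / math.factorial(x) * charlier(m, x, a) * charlier(n, x, a)
               for x in range(X))

# Orthogonality: <C_m, C_n> = 0 for m != n; squared norm is a^(-n) e^a n!
print(inner(0, 1), inner(1, 2))  # both essentially zero
print(inner(2, 2), math.exp(a) * math.factorial(2) / a**2)  # both about 5.437
```

The same check, with the appropriate weights, applies to the Hahn and Meixner families that the paper recovers as particular cases.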