f19c12e6c74dfc0c
Nanoscience meets Gulliver’s Travels   27 February, 2015

Photo from 20th Century Fox: Gulliver’s Travels from 2010, with Jack Black in the leading role, here in Lilliput-land.

Wars fought over eggs: In Jonathan Swift’s famous book from 1726, and in the 2010 movie, Gulliver is shipwrecked on the island of Lilliput, where very tiny people live. It turns out that these Lilliputians are at war over something that seems absolutely absurd: the correct end at which to break a boiled egg. One population believes that the correct end is the larger one, and is therefore at war with the other population, which believes that all eggs must be broken at the smaller end. Gulliver is flabbergasted, as these eggs are so tiny that he can’t even see the difference. Moreover, for the taste it wouldn’t matter at which end you break the egg, would it? Swift used this egg quarrel as a metaphor to criticize the political conflicts between Catholics and Protestants. But how absurd is this story?

Knowledge on the nanoscale: Through our research in the applied theoretical chemistry group at NTNU we have, for the first time, developed an accurate method based on both molecular simulations and theory. The method can predict heat and mass transfer through curved interfaces at the nanoscale. Whereas curvature is mostly insignificant for these processes in macroscopic objects, at the nanoscale things are completely different. The interfaces of nanobubbles and nanodroplets conduct heat in an uneven way depending on their shape and size. We have calculated how the heat conductance is distributed at the interface of such nano-objects. Our work, which was recently published in the prestigious Physical Review Letters, shows examples like an oblate spheroidal droplet, a prolate spheroidal bubble, and a toroidal bubble; things that respectively look like a nanosized M&M candy, a rugby ball, and a doughnut. This knowledge is of fundamental importance since it will, for instance, help to understand the shape of snowflakes formed from tiny ice seeds, or how gas bubbles grow, which often start out from nanosized cavities. In addition, the theory will also be useful for designing the next generation of nanodevices with specific properties. For instance, the method could be useful in the development of heat diodes and heat transistors.

Size does matter: Now, coming back to the story of Gulliver and the Lilliputians. Was the reason for this war really that absurd? Gulliver’s reasoning that it doesn’t matter for the taste is certainly true for our macroscopic eggs, but given the small size of the eggs in Lilliput-land, we might expect a different heat current at the two ends when the egg is being boiled. According to our method, the small end of the egg will be hard boiled while the larger end will be soft boiled. Hence, although the average taste of the egg is independent of the end at which you open it, it will matter for the first bite. And let’s be frank: although the average taste of a dinner will not change if you start with dessert and end with the main course, the gastronomical experience will be different. Still not enough to start a war, but the whole egg quarrel is maybe not as absurd as Gulliver thought. Blog text written by Titus van Erp and Øivind Wilhelmsen

Meet Nanno, the very important algae   19 February, 2015

In my PhD project, I study the microalga Nannochloropsis oceanica.
My goal is to get a better understanding of how the genetic regulation of lipid production in the microalga is affected by different environmental conditions. You cannot really see Nannochloropsis or other microalgae with the naked eye (they range in size from 2–100 µm). Therefore, I chose to illustrate the algae species I am working with as a cartoon, represented by a little friend of mine: Nanno. Have you ever seen a really green ocean? That greenish color you can see in the water is partly because of billions of algae that are growing there. You can see those algal communities on satellite pictures. What is special about algae, and worth remembering, is that they can get the energy for reproduction and growth from sunlight, using photosynthesis! While growing, algae capture carbon dioxide (CO2). In fact, algae fix more than 40% of the world’s carbon (CO2). Why are microalgae so interesting to humans? Not only do they produce components that are industrially interesting, but algae can also be used in a way no other plant/eukaryote can. They can be grown in bioreactors or in ponds in the desert or on other marginal land, and they don’t need freshwater. Therefore algae don’t compete with agriculture or with human food sources such as crops. What components do humans get from algae, and how? For example, in the long run the idea is to use their lipids as an alternative source of bioenergy. Some algae species can also be a source of the healthy omega-3 fatty acids, like those you can find in fish. I hope you enjoyed this insight into the life story of Nannochloropsis and why it matters to humans!

How to choose a research area?   13 February, 2015

Being a scientific researcher gives you the freedom to pursue answers to questions that you, and hopefully many others, find interesting. But with freedom comes the necessity to choose: with so many topics to pick from, how do you select which research field to devote yourself to? Once this question has been posed, a number of other ones immediately rise to the surface.
• Do I choose a research direction which is hot right now or one that has been a long-standing unsolved problem?
• Should I select a topic which is attracting much interest in my own university/country or one where the leading research groups belong to other countries?
• Is it better to choose a research area that will more easily generate funding from grants or one which I personally find more interesting even if funding will be more difficult?
Not an easy task. I’d say it is quite the balancing act, frankly, because you have at least three aspects to consider. First of all, you probably want to do research on something that you are personally fascinated by. Otherwise, it will be tough to find the motivation over time. Secondly, personal interest alone is not necessarily the only guideline that should be taken into account – it also seems reasonable to consider topics that will lead to a real and useful advance in knowledge. For instance, I would argue that it is potentially of higher importance to identify materials that become superconducting at higher temperatures than is possible today than it is to compute an analytical expression for the 10th order correction to the energy eigenvalues of the Schrödinger equation for an anharmonic oscillator, even if you happen to be absolutely fascinated by doing perturbation theory. Thirdly, you have to consider what will be best in order to build your scientific career.
Some research topics are simply much more strategic than others when it comes to your chances of getting funding and applying for grants. To illustrate this point, think of research problems categorized by their risk and their potential gain. Let me give two examples.

Low risk – low gain: Find an exact solution of the Lagrange equations for a classical particle moving in a potential which has no realization in nature and which does not require any new or interesting mathematical techniques.

High risk – high gain: Develop a model for the normal state of the high-Tc superconducting cuprates and the underlying microscopic mechanism that generates superconductivity. Many would argue that this is one of the most important unsolved problems in condensed matter physics. Several brilliant minds have dedicated themselves to this topic over the last 30 years, yet a solution remains elusive. Certainly a very high risk problem, as a solution is not guaranteed by any means – but the potential gain is equally high, no doubt worthy of a Nobel Prize in Physics.

What should you go for then? Well, it seems clear to me that the research field you devote yourself to should have a potentially high gain in order for it to be worthwhile. A high risk associated with it could make it more suitable for the most prestigious grants such as ERC funding, although it still has to be realistic. So there you have it – ideally, you should pick something that you find very interesting, something which will clearly move the research front forward and contribute to an expansion of useful knowledge compared to what was known previously, and something that will put you in a good position to apply for research funding and grants. If you can find something matching all these criteria, you have an excellent starting point. Another aspect that will influence what kind of research direction you choose is which career stage you are in. As a master's student, the emphasis is on learning new physics and techniques, and so a low risk – low gain project is perfectly viable. Proceeding to a Ph.D. degree, the risk-taking has to be higher, since you are now supposed to make a real contribution to the research community, and so it is no longer a good idea to play things completely safe. You get the idea: the importance of moving toward high risk – high gain projects increases as you continue along your career trajectory. Some food for thought, hopefully. Stepping outside of the comfort zone and embarking on a research journey whose outcome you don't yet know can be scary, but the reward can make it very worthwhile. And as with many other things in life, the journey will be rewarding in itself.
c7444331e89fc758
My Book …at the Publisher’s website. Chapter 1: Probability: Basic concepts and theorems [PDF]. An invaluable supplement to standard textbooks on quantum mechanics, this unique introduction to the general theoretical framework of contemporary physics focuses on conceptual, epistemological, and ontological issues. The theory is developed by pursuing the question: what does it take to have material objects that neither collapse nor explode as soon as they are formed? The stability of matter thus emerges as the chief reason why the laws of physics have the particular form that they do. The first of the book’s three parts familiarizes the reader with the basics through a brief historical survey and by following Feynman’s route to the Schrödinger equation. The necessary mathematics, including the special theory of relativity, is introduced along the way, to the point that all relevant theoretical concepts can be adequately grasped. Part II takes a closer look. As the theory takes shape, it is applied to various experimental arrangements. Several of these are central to the discussion in the final part, which aims at making epistemological and ontological sense of the theory. Pivotal to this task is an understanding of the special status that quantum mechanics attributes to measurements — without dragging in “the consciousness of the observer.” Key to this understanding is a rigorous definition of “macroscopic” which, while rarely even attempted, is provided in this book.
54f1e43e14654650
Forgot your password? Earth Science Scientific Cruise Meets Perfect Storm, Inspires Extreme Wave Research 107 Posted by Unknown Lamer from the creative-punishment-for-copyright-infringers-discovered dept. An anonymous reader writes "The oceanographers aboard RRS Discovery were expecting the winter weather on their North Atlantic research cruise to be bad, but they didn't expect to have to negotiate the highest waves ever recorded in the open ocean. Wave heights were measured by the vessel's Shipborne Wave Recorder, which allowed scientists from the National Oceanography Centre to produce a paper titled 'Were extreme waves in the Rockall Trough the largest ever recorded?' It's that paper, in combination with the first confirmed measurement of a rogue wave (at the Draupner platform in the North Sea), that led to 'a surge of interest in extreme and rogue waves, and a renewed emphasis on protecting ships and offshore structures from their destructive power.'" Scientific Cruise Meets Perfect Storm, Inspires Extreme Wave Research Comments Filter: • by Anonymous Coward on Tuesday April 17, 2012 @12:06AM (#39707379) This scientific cruise also proved that the only kind of cruise where nobody gets laid is a "scientific cruise" • by cplusplus (782679) on Tuesday April 17, 2012 @12:15AM (#39707423) Journal I only RTFAs to find out how high the waves were - it turns out they were up to 29.1 meters (95.5 feet). • Rogue waves (Score:3, Funny) by gstrickler (920733) on Tuesday April 17, 2012 @12:23AM (#39707453) Outlaw them and put out a bounty (or a Bounty?) • 2006 (Score:5, Informative) by Anonymous Coward on Tuesday April 17, 2012 @12:32AM (#39707491) The article was published in 2006. How is this 'new?' • The article was published in 2006. How is this 'new?' I guess it's some sort of tie in with the 100th anniversary of the Titanic making it almost all the way across the Atlantic. • The wave was so high that the ship did a loopty-loop, causing a rift in time where they just ended up here. The same phenomenon can be seen if you can swing high enough on a swingset to go around once • by jlehtira (655619) Well, I agree with your point. But six years is a good time to let scientific papers simmer. Less than that is not enough time for other scientists to evaluate the correctness and value of some paper. • by Anonymous Coward Many researchers were lost during the peer-review of this paper. • by dreemernj (859414) 2006? Wasn't that around the time a rogue wave was recorded on The Deadliest Catch? • Data collected in 2000. Paper published in 2006. Reported in /. in 2012. The pace of good science is slow and deliberate. • by Anonymous Coward on Tuesday April 17, 2012 @12:43AM (#39707553) look up Schrodinger wave equations and apply them to ocean waves. You will get 30+ meter tall waves with a trough next to the "wall" of water, (the wave is tall and narrow - like a wall). This trough adds to the great difficulty in surviving one of these waves. Ships that are designed to withstand forces of 10 tons/m2 have to content with 10 times that force. I believe there was a study in which someone, (don't remember her name :( ) mapped the entire earth over a two week period and found something on the order of 20 of these waves. Fascinating stuff. • by phantomfive (622387) on Tuesday April 17, 2012 @03:21AM (#39708089) Journal Oh yeah, just found it [bbc.co.uk]. They found about 10 giant waves. • by Anonymous Coward FYI the Schrodinger wave equation does not describe ocean waves. 
Water waves are described by the Navier-Stokes (N-S) equations. Turbulence models fall out of N-S, however only electrons sometimes fall out from Schrodinger :) • There is a non-relativistic version of the Schrödinger equation. Some theories attempt to explain rogue waves in the open sea using these non-linear equations as a model, because the distribution of wave heights that would result from the linear model substantially underpredicts the occurrence and size of rogue waves. • by Anonymous Coward The nonlinear Schordinger equation is one of the many various equations that can be used to describe the behaviour of water waves in various regimes, with a tiny bit about it on Wikipedia here [wikipedia.org]. Although the NLS is mostly used for behaviour of the envelope of deep water waves, which means you can show soliton based rouge wave like behaviour, but not say much about trough to peak steepening as in the grandparent post. The set of equations and theories used to model nonlinear water waves is quite diverse, wit • by WaffleMonster (969671) on Tuesday April 17, 2012 @12:52AM (#39707605) For those looking for more details about this voyage http://eprints.soton.ac.uk/294/ [soton.ac.uk] • Specifically in 1998, a 120ft wave off the east coast of tasmania http://www.swellnet.com.au/news/124-a-short-history-of-tasman-lows [swellnet.com.au] • Since extreme waves were not the subject of their expedition, they had not read all the prior literature. • by TapeCutter (624760) on Tuesday April 17, 2012 @02:48AM (#39708017) Journal The Tasman sea is notorious for rouge waves. Many moons ago I worked a fishing trawler in Bass Straight, I never saw anything like 120ft but the regular waves were tall enough that the radar was blocked by the peaks when the boat was in a trough, I'm guessing the radar mast was about 30ft above the water line. A lot like riding in a giant roller coaster carriage really, slowly climb up one wave, crest, then race down the other side and watch the bow dig under the next one, throw the water over the wheel house as the bow pops up to the surface, and starts the next climb. From what I've heard, the problem with rouge waves is not so much their height but the fact that they are too steep to climb. • Wow, that is incredibly exciting. • I detect a hint of sarcasm but to be honest it was downright fucking scary the first trip but after a few trips it became as exciting to me as an old fashioned roller coaster is to the guy who stands up on it all day operating the brake. Although a stingray the size of a family dinner table flapping about on an 8X12 deck was never boring. • No sarcasm at all. If the human lifespan weren't so short I would definitely consider going down and trying it out for a few years. I don't know about that stingray thing, though. I know people who go ocean kayaking but that's nothing in comparison. • by tlhIngan (30335) Waves are never boring, especially big ones. The key is to cut through them - if you let them hit the side, you risk capsizing. The only way to do this is engine power (run • by serbanp (139486) Does this mean that the "the Perfect Storm" depiction of how the Andea Gail sunk was technically inaccurate? In that film, the ship went with its bow straight into the freak wave but could not reach the top and fell over. • Yep, it's a lot like a plane, if engine is fucked, gravity takes over and you basically fall of the wave.. • That article claims 42.5m is 120 feet - it's actually 140 feet. 
The wave was probably recorded as 120 feet and someone mangled the conversion rather than the other way round. • by Sarten-X (1102295) on Tuesday April 17, 2012 @12:53AM (#39707611) Homepage Rogue waves: Demonstrating yet again that reality is a fascinatingly weird place. • by iamhassi (659463) And we don't understand our planet as much as we think. We are always focused on exploring strange new worlds, to seek out new life and new civilizations, to boldly... um, you get the idea, but look, there's new things happening on our own planet. How can we understand new planets when we don't understand the one we are on? Not saying never explore space, just saying maybe we should focus on what we have. • by Anonymous Coward How can we understand this planet when we have nothing to compare it to? Rethorical questions only caters to peoples emotional response but they don't make much of an argument. • by Sarten-X (1102295) Reminds me of the TV show seaQuest... for almost a whole season, they had interesting episodes based around real weirdness in the oceans. What fascinates me even more is the emergent behavior observable in simple systems, such as growing crystals, diffusing liquids, convection currents... all of those delightfully complex results from simple principles. There's beauty in the result, and simplicity in the process. • by Anonymous Coward Although the paper might have spurred interest in rogue waves, the wave in the paper linked in the summary wouldn't really be considered a rogue wave. Usually a cut-off is arbitrarily picked at 2 times the significant wave height (the average of the highest third of waves). In this case, the wave was about 1.5 times the significant wave height. Statistically speaking, you would expect about 1 in a 100 waves to be 1.5 times the wave height, just from the mixing and constructive interference of waves, whil • Big waves (Score:4, Interesting) by MarkRose (820682) on Tuesday April 17, 2012 @01:06AM (#39707665) Homepage Waves over 20 m (60 ft) tall are actually pretty common in some places. My dad is senior keeper at Triple Island Lightstation [fogwhistle.ca], located just off the BC coast. In severe winter storms, the waves will often crest over the square part of the building, which is about 20 m above sea level. This January, one such wave blew in a storm window on the top floor -- several tons of water will sometimes do that. The building stays up because it's constructed with 2 ft thick rebar concrete walls. • Re:Big waves (Score:5, Informative) by tirerim (1108567) on Tuesday April 17, 2012 @01:39AM (#39707811) TFA is talking about waves in the open ocean, though. Waves get higher when they reach shallower water, so the 20 m waves you're talking about would have been significantly smaller in the open ocean -- which makes 29 m open ocean waves that much more impressive. • Nice traditional exterior, but sad to see the drop ceiling [lighthousememories.ca] on the interior. At least the wood floor is original. • Interesting link but some of the text is reminiscent of Julian and Sandy (http://en.wikipedia.org/wiki/Julian_and_Sandy) from "Round the Horne", I mean, "The Triple Island light was built to guide mariners through the rocky waters of Brown Passage, on their way to the port of Prince Rupert.", I ask ya! • It's interesting how often myth and legend end up being scientific fact. There has been talk since sailors took to the sea of rogue waves that reached a 100' or more. Science has been confirming these myths in recent years. 
Most myths have an element of truth in them. On the practical side it's a serious concern since surviving a 100' rogue wave is not something all sea worthy ships can do yet they can face them without warning. I read years ago the theoretical limit was twice what has been recorded so the • The paper is from 2006, and describes a wave observed in 2000. Satellite-based radar altimeters produce a lot of data about wave height world wide, but they don't, apparently, have quite enough resolution yet to see this kind of thing. A view of such waves from above, over a few minutes, would tell us a lot. Is it an intersection of two or more waves? How far does it travel? How long does it persist? The U.S. Navy has put considerable effort into answering questions like that. • bad statistics (Score:4, Interesting) by Tom (822) on Tuesday April 17, 2012 @04:24AM (#39708283) Homepage Journal What has fascinated me about freak/rogue waves is that sailors have known about them for decades if not centuries, but scientists were telling them it can't be. And the reason is badly understood statistics. I've recently read Black Swan, and that gave me a few new concepts to work with, but the basic idea is exactly that: We don't really have a good understanding of statistics and probabilities, especially about extremely low probabilities in big numbers. Or, as Tim Minchin put it: One-in-a-million things happen all the time. And it's not just in the oceans. The entire financial crisis was caused by the people in charge taking huge (but low probability) risks, ignoring that once enough people have taken enough of those "low probability" risk, they become very likely to actually happen. Freak waves are cool because they are in the gray area between the normal distribution and the really freaky - thus they happen often enough that they are rare, but not bigfoot-rare. We can actually study them. • Re: (Score:3, Interesting) by edxwelch (600979) There's an interesting article about that, here: http://www.bbc.co.uk/science/horizon/2002/freakwave.shtml [bbc.co.uk] Apparently, there are two scientific models, linear, which says freak waves are impossible and Quantum physics which says they are possible. • by Tom (822) The problem is that a gaussian approach to the numbers assumes that random fluctuations will even out. But the equations used in quantum physics allow for waves to combine, and that's what is happening - interference, just not between 2 waves as in the double-slit experiment, but between dozens or maybe hundreds of waves. This article here: http://dev.physicslab.org/Document.aspx?doctype=3&filename=PhysicalOptics_InterferenceDiffraction.xml [physicslab.org] shows towards the bottom how massive peaks you can get with mult • by Anonymous Coward Linear wave theory allows for interference and combining of waves (that is kind of actually one of the major properties of linear theories in a lot of situations). The statistics on linear theory waves (which ends up being a Rayleigh distribution, not a Gaussian) is what says that waves much larger than those around it are very unlikely. What nonlinear theories add is not just overlapping like interference, but soliton like solutions, where a single wave or small wave train much larger than neighboring wa • by Tom (822) Thanks, AC. In 12+ years of /. this was one of the most informative AC comments I've come across. • We have bigger waves in Texas! • I've never understood that particular idiocy. Texans know they don't live in the biggest US state, right? 
Texas is less than half the size of Alaska. • by dtmos (447842) * on Tuesday April 17, 2012 @07:09AM (#39708623) My uncle retired as a US Navy Captain. For many years he had two photographs displayed in his house, which he ascribed to Admiral "Bull" Halsey's "second" typhoon [navy.mil], in June 1945. At that time my uncle was an ensign, assigned to a destroyer, and on his first sea voyage. The two photographs were of a sister destroyer. In the first photograph, all one sees is a giant wave, with the bow of the destroyer sticking out of one side, and the stern sticking out of the other. The middle of the ship, including the masts and superstructure, is submerged and not visible. In the second photo, taken a few seconds later, the middle of the ship is now visible, but both the bow and stern are now submerged in the wave train. And as a kid, the part that fascinated me the most: You could see an air gap below the middle of the ship, between the ship's keel and the wave trough below. • I'm surprised I can't get for my boat (or raft) a platform with accelerometers that operates a hydraulic piston to compensate for wave action. It might need some lateral actuator too, as wave motion is circular. But it might not, if the light floats slide along the surface as the piston pushes down on them keeping the heavy inertial payload in place. Just accelerometers, hydraulic pistons, and DSP. Big bonus points for a device that harvests that energy moving through the site to power the hydraulics.
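As a closing note on the wave statistics quoted in the comments above: the "about 1 in 100 waves reaches 1.5 times the significant wave height" figure is roughly what the linear narrow-band model predicts, since in that model individual wave heights are approximately Rayleigh distributed with P(H > a·Hs) ≈ exp(-2 a^2), where Hs is the significant wave height (mean of the highest third of waves). A quick sketch of the check in Mathematica:

(* exceedance probability for the Rayleigh model of individual wave heights *)
exceedance[a_] := Exp[-2 a^2]
N[{1/exceedance[3/2], 1/exceedance[2]}]
(* -> {90.0171, 2980.96}: roughly 1 in 90 waves exceeds 1.5 Hs, while only
   about 1 in 3000 exceeds the usual 2 Hs rogue-wave cutoff mentioned above *)

This is only the linear estimate; as the thread notes, nonlinear effects make genuinely rogue waves more common than this model suggests.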
ddea25f9cf6ea0ef
Wolfram Blog Michael Trott Even More Formulas… for Everything—From Filled Algebraic Curves to the Twitter Bird, the American Flag, Chocolate Easter Bunnies, and the Superman Solid August 15, 2013 — Michael Trott, Chief Scientist This blog post is the continuation of my last two posts (1, 2) about formulas for curves. So far, we have discussed how to make plane curves that are sketches of animals, faces, fictional characters, and more. In this post, we will discuss the constructions of some filled curves (laminae). Here are some of the non-mathematical laminae that Wolfram|Alpha knows closed-form equations for: shape lamina Assume we want a filled curve instead of just the curve itself. For closed curves, say the James Bond logo, we could just take the curve parametrizations and fill the curves. As a graphics object, filling a curve is easy to realize by using the FilledCurve function. James Bond curve For the original curves, we had constructed closed-form Fourier series-based parametrizations. While the FilledCurve function yields a visually filled curve, it does not give us a closed-form mathematical formula for the region enclosed by the curves. We could write down contour integrals along the segment boundaries in the spirit of Cauchy’s theorem to differentiate the inside from the outside, but this also does not result in “nice” closed forms. So, for filled curves, we will use another approach, which brings us to the construction of laminae for various shapes. The method we will use is based on constructive solid geometry. We will build the laminae from simple shaped regions that we first connect with operations such as AND or OR. In a second step, we will convert the logical operations by mathematical functions to obtain formulas of the form f(x, y) > 0 for the region that we want to describe. The method of conversion from the logical formula to an arithmetic function is based on Rvachev’s R-function theory. Let’s now construct a geometrically simple shape using the just-described method: a Mitsubishi logo-like lamina, here shown as a reminder of how it looks. Mitsubishi logo-like lamina As this sign is obviously made from three rhombi, we define a function polygonToInequality that describes the interior of a single convex polygon. A point is an interior point if it lies on the inner side of all the line segments that are the boundaries of the polygon. We test the property of being inside by forming the scalar product of the normals of the line segments with the vector from a line segment’s end point to the given point. It is simple to write down the vertices of the three rhombi, and so a logical formula for the whole logo. The last equation can be considerably simplified. The translation of the logical formula into a single inequality is quite simple: we first write all inequalities with a right-hand side of zero and then translate the Or function to the Max function and the And function to the Min function. This is the central point of the Rvachev R-function theory. By using more complicated translations, we could build right-hand sides of a higher degree of smoothness, but for our purposes Min and Max are sufficient. The points where the right-hand side of the resulting inequality is greater than zero we consider part of the lamina, otherwise the points are outside. In addition to just looking nicer and more compact, the single expression, as compared to the logical formula, evaluates to a real number everywhere. 
This means, in addition to just a yes/no membership of a point {x,y}, we have actual function values f(x, y) available. This is an advantage, as it allows for plotting f(x, y) over an extended region. It also allows for more efficient plotting than the logical formula because function values around f(x, y) = 0 can be interpolated. So we obtain the following quite simple right-hand side for the inequality that characterizes the Mitsubishi logo. (threeRhombiiImplicit = toRvachevRForm[Simplify[threeRhombi]]) // TraditionalForm And the resulting image looks identical to the one from the logical formula. Plotting the right-hand side for the inequality as a bivariate function in 3D shows how the parts of the inequality that are positive emerge from the overall function values. Now, this type of construction of a region of the plane through logical formulas of elementary regions can be applied to more regions and to regions of different shapes, not necessarily polygonal ones. In general, if we have n elementary building block regions, we can construct as many compound regions as there are logical functions in n variables. The function BooleanFunction enumerates all these 2^2^n possibilities. The following interactive demonstration allows us to view all 65,536 configurations for the case of four ellipses. We display the logical formula (and some equivalent forms), the 2D regions described by the formulas, the corresponding Rvachev functions, and the 3D plot of the Rvachev R-function. The region selected is colored yellow. Cutting out a region from not just four circles, but seven, we can obtain the Twitter bird. Here is the Wolfram|Alpha formula for the Twitter bird. (Worth a tweet?) By drawing the zero-curves of all of the bivariate quadratic polynomials that appear in the Twitter bird inequality as arguments of max and min, the disks of various radii that were used in the construction become obvious. The total bird consists of points from seven different disks. Some more disks are needed to restrict the parts used from these disks. Show[{twitterRegionPlot, ContourPlot Here are two 3D versions of the Twitter bird. As the left-hand side of the Rvachev R-equation evaluates to a number, we use this number as the z value (possibly modified) in the plots. We can also use the closed-form equation of the Twitter bird to mint a Twitter coin. The boundary of the laminae described by Rvachev R-functions has the form f(x, y) = 0. Generalizing this to f(x, y) = g(z) naturally extrudes the 2D shape into 3D, and by using a function that increases with |z|, we obtain closed 3D surfaces. Here this is done with g(z) ~ z^2 for the Twitter bird (we also add some color and add a cage to confine the bird). The use of g(z) ~ z^2 means z ~ ±f(x, y)^(1/2) at the boundaries, and the infinite slope of the square root function gives a smooth bird surface near the z = 0 plane. SeedRandom["twitter"]; ContourPlot3D Now we will apply the above-outlined construction idea to a slightly more complicated example: we will construct an equation for the United States’ flag. The most complicated-looking part of the construction is a single copy of the star. Using the above function polygonToInequality for the five triangle parts of the pentagram and the central pentagon, we obtain after some simplification the following form for a logical function describing a pentagram. Here is the pentagram shown, as well as the five lines that occur implicitly in the defining expression of the pentagram.
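The definitions of polygonToInequality and toRvachevRForm themselves are not reproduced in this excerpt, so the following is only a minimal sketch of how such helpers might look (an assumed reconstruction, not the post's actual code), demonstrated on a single rhombus rather than the pentagram:

(* interior of a convex polygon with counterclockwise vertices pts: a point is inside
   if it lies on the inner side of every edge, tested via the scalar product of the
   inward (left) edge normal with the vector from an edge endpoint to the point *)
polygonToInequality[pts_List][{x_, y_}] :=
 Apply[And,
  MapThread[
   Function[{p1, p2}, ({-1, 1} Reverse[p2 - p1]) . ({x, y} - p1) >= 0],
   {pts, RotateLeft[pts]}]]

(* translate a logical combination of inequalities into a single Rvachev-style
   function: rewrite every inequality as f >= 0, then And -> Min and Or -> Max *)
toRvachevRForm[expr_] :=
 expr /. {GreaterEqual[a_, b_] :> a - b, Greater[a_, b_] :> a - b,
          LessEqual[a_, b_] :> b - a, Less[a_, b_] :> b - a} /. {And -> Min, Or -> Max}

(* example: one rhombus and its Rvachev form f(x, y), which is positive inside *)
rhombus = polygonToInequality[{{1, 0}, {0, 1}, {-1, 0}, {0, -1}}][{x, y}];
f = toRvachevRForm[rhombus];
RegionPlot[f > 0, {x, -1.5, 1.5}, {y, -1.5, 1.5}]

Three such rhombi combined with Or, and then run through the same translation, give exactly the kind of Min/Max expression shown above for the Mitsubishi-like logo.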
The detailed relative sizes of the stars and stripes are specified in Executive Order 10834 (“Proportions and Sizes of Flags and Position of Stars”) of the United States government. Using the data from this document and assuming a flag of height 1, it is straightforward to encode the non-white parts of the US flag in the following manner. For the parallel horizontal stripes we use a sin(α y) construction (with appropriately chosen α). The grid of stars in the upper left corner of the flag is made from two square grids, one shifted against the other (a 2D version of an fcc lattice). The Mod function allows us to easily model the lattice arrays. This gives the following closed-form formula for the US flag. Taking the visual complexity of the flag into account, this is quite a compact description. inUSFlagImplicit = toRvachevRForm[inUSFlagInequalities[{x, y}]] American flag formula (parts A, B, C) Making a plot of this formula gives—by construction—the American flag. flagUS = RegionPlot We can apply a nonlinear coordinate transformation to the inequality to let the flag flow in the wind. SeedRandom[100]; RegionPlot And using a more quickly varying map, we can construct a visual equivalent of Jimi Hendrix‘s “Star-Spangled Banner” from the Rainbow Bridge album. SeedRandom[140]; RegionPlot As laminae describe regions in 2D, we can identify the plane with the complex plane and carry out conformal maps on the complex plane, such as the square root function or the square. {WolframAlpha["sqrt(z)", {{"ComplexMap", 1}, "Content"}], WolframAlpha["z^2", {{"ComplexMap", 1}, "Content"}]} Here are the four maps that we will apply to the flag. And these are the conformally mapped flags. GraphicsGrid[Partition[Function[{map, mapC}, Show[flagUS /. The next interactive demonstration applies a general power function z -> (shift + scale z)^α to the plane containing the flag. (For some parameter values, the branch cut of the power function can lead to folded-over polygons.) Manipulate[Show[flagUS /. So far we have used circles and polygons as the elementary building blocks for our laminae. It is straightforward to use more complicated shapes. Let’s model a region of the plane that approximates the logo of the largest US company—the apple from Apple. As this is a more complicated shape, calculating an equation that describes it will need a bit more effort (and code). Here is an image of the shape to be approximated. So, how could we describe a shape like an apple? For instance, one could use osculating circles and splines (see this blog entry). Here we will go another route. Algebraic curves can take a large variety of shapes; here are some examples. That polynomial curves are flexible enough for this is related to the Stone–Weierstrass theorem, which guarantees that any continuous function can be approximated by polynomials. We look for an algebraic curve that will approximate the central apple shape. To do so, we first extract again the points that form the boundary of the apple. (To do this, we reuse the function pointListToLines from the first blog post of this series, mentioned previously.) pointListToLines A, pointListToLines B {dx, dy} = ImageDimensions[appleImage] We assume that the core apple shape is left-right symmetric and select points from the left side (meaning the side that does not contain the bite). The following Manipulate allows us to quickly locate all points on the left side of the apple.
Manipulate appleLikeLogoLines To find a polynomial p(x, y) = 0 that describes the core apple, we first use polar coordinates (with the origin at the apple’s center) and find a Fourier series approximation of the apple’s boundary in the form r(φ) = a0 + a1 cos(φ) + … + a8 cos(8φ). The use of only the cosine terms guarantees the left-right symmetry of the resulting apple. φrData = 1. fit = Fit[φrData, {1, Cos[φ], Cos[2 φ], Cos[3 φ], Cos[4 φ], Cos[5 φ], Cos[6 φ], Cos[7 φ], Cos[8 φ]}, φ] We rationalize the resulting approximation and find the corresponding bivariate polynomial in Cartesian coordinates using GroebnerBasis. After expressing the cos(kφ) terms in terms of just cos(φ) and sin(φ), we use the identity cos(φ)^2 + sin(φ)^2 = 1 to eliminate cos(φ) and sin(φ) and obtain a single polynomial equation in x and y. {X, Y} = Rationalize[fit {Sin[φ], Cos[φ]}, 10^-3] gb = GroebnerBasis[ Append[TrigExpand[{x, y} - {X, Y}], Cos[φ]^2 + Sin[φ]^2 - 1], {}, {Cos[φ], Sin[φ]}] // Factor As we rounded the coefficients, we can safely ignore the last digits in the integer coefficients of the resulting polynomial and so shorten the result. Slightly simplified version Here is the resulting apple as an algebraic curve. RegionPlot[Evaluate[N[appleCore < 0 /. Now we need to take a bite on the right-hand side and add the leaf on top. For both of these shapes, we will just use circles as the geometric shapes. The following interactive Manipulate allows the positioning and sizing of the circles, so that they agree with the Apple logo. The initial values are chosen so that the circles match the original image boundaries. (We see that the imported image is not exactly left-right symmetric.) Manipulate with circles So, we finally arrive at the following inequality describing the Apple logo. appleImplicit = toRvachevRForm[ The next shape is an Easter bunny lamina, with its equation obtained from Wolfram|Alpha: bunnyEquation = WolframAlpha Easter bunny lamina (ComputableData) We can easily extract the polygons from this 2D graphic and construct a twisted bunny in 3D. Twisted bunny in 3D The Rvachev R-form allows us to immediately make a 3D Easter bunny made from milk chocolate and strawberry-flavored chocolate. By applying the logarithm function, the parts where the defining function is negative are not shown in the 3D plot, as they give rise to complex-valued function values. Easter bunny made from milk chocolate and strawberry-flavored chocolate We can also make the Easter bunny age within seconds, meaning his skin gets more and more wrinkles as he ages. We carry out this aging process by taking the polygons that form the lamina and letting them undergo a Brownian motion in the plane. Aging Easter bunny Let’s now play with some car logo-like laminae. We take a Yamaha-like shape; here are the corresponding region and 3D plot. yamahaEquation = WolframAlpha GraphicsRow[{yamahaRegionPlot = RegionPlot We could, for instance, take the Yamaha lamina and place 3D cones in it. The corresponding Volkswagen-like lamina is obtained in the same way: volkswagenEquation = WolframAlpha volkswagenRegionPlot = RegionPlot By forming a weighted mixture between the Yamaha equation and the Volkswagen equation, we can form the shapes of Yamawagen and Volksmaha.
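The actual Yamaha and Volkswagen equations come from Wolfram|Alpha and are not reproduced in this excerpt, but the weighted-mixture idea is easy to try with any two Rvachev-style functions. Here is a minimal sketch using two toy laminae, a unit disk and a square, standing in for the real logos:

(* two toy laminae, each positive inside its region *)
disk[{x_, y_}]   := 1 - x^2 - y^2;                 (* unit disk *)
square[{x_, y_}] := Min[1 - Abs[x], 1 - Abs[y]];   (* square, via the Min/And rule *)

(* weighted mixture: t = 0 gives the disk, t = 1 the square, values in between morph *)
mix[t_][{x_, y_}] := (1 - t) disk[{x, y}] + t square[{x, y}];

GraphicsRow[
 Table[RegionPlot[mix[t][{x, y}] > 0, {x, -1.2, 1.2}, {y, -1.2, 1.2}], {t, 0, 1, 1/2}]]

Because each lamina is encoded as a single real-valued function rather than a logical formula, blending the shapes reduces to a simple linear interpolation of function values.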
Next we want to construct another, more complicated, 3D object from a 2D lamina. We take the Superman insignia. Superman lamina The function superman[{x, y}] returns True if a point in the x, y plane is inside the insignia. superman[{x_, y_}] = WolframAlpha RegionPlot superman And here is the object we could call the Superman solid. (Or, if made from the conjectured new supersolid state of matter, a super(man)solid.) It is straightforwardly defined through the function superman. The Superman solid engraves the shape of the Superman logo in the x, y plane as well as in the x, z plane of the resulting solid. supermanSolid[{x_, y_, z_}] := superman[{x, y}] && superman[{x, z}] Viewed from the front as well as from the side, the projection is the Superman insignia. Superman solid (GraphicsRow) To stay with the topic of Superman, we could take the Bizarro curve and roll it around to form a Bizarro-Superman cake where Superman and Bizarro face each other as cake cross sections. bizarro[{x_, y_}] = WolframAlpha bizarroCake = Module This cake we can then refine by adding some kryptonite crystals, here realized through elongated triangular dipyramid polyhedra. kryptoniteCrystal = Map Show[{bizarroCake, kryptoniteCrystals}] Next, let’s use a Batman insignia-shaped lamina and make a quantum Batman out of it. Batman lamina We will solve the time-dependent Schrödinger equation for a quantum particle in a 2D box with the Batman insignia as the initial condition. More concretely, assume the initial wave function is 1 within the Batman insignia and 0 outside. So, the first step is the calculation of the 2D Fourier coefficients. batman[{x_, y_}] = WolframAlpha Numerically integrating a highly oscillating function over a domain with sharp boundaries can be challenging. The shape of the Batman insignia suggests that we first integrate with respect to y and then with respect to x. The lamina can be conveniently broken up into the following subdomains. gcd = GenericCylindricalDecomposition[ batman[28 {x, y} - {14, 7}], {x, y}][[1]]; integralsWRTy = Monitor All of the integrals over y can be calculated in closed form. Here is one of the integrals. To calculate the integrals over x, we need to multiply integralsWRTy by sin(kπx) and then integrate. Because k is the only parameter that is changing, we use the (new in Version 9) function ParametricNDSolveValue. Do[{x1, x2} = {integralsWRTy[[j]][[1, 1, 2, 1]] We calculate 200^2 Fourier coefficients. This relatively large number is needed to obtain a good solution of the Schrödinger equation. (Due to the discontinuous nature of the initial conditions, for an accurate solution, even more modes would be needed.) With[{kmMax = 200}, Monitor Using again the function xyArray from above, here is how the Batman logo would look if it were to quantum-mechanically evolve. quantumBatmanArray = xyArray We will now slowly end our brief overview on how to equationalize shapes through laminae. As a final example, we unite the Fourier series approach for curves discussed in the first blog post of this series with the Rvachev R-function approach and build an apple where the bite has the form of the silhouette of Steve Jobs, the Apple founder who suggested the name Mathematica. The last terms of the following inequality result from the Fourier series of Jobs’ facial profile. appleWithSteveJobsSilhouetteInequality = WolframAlpha Apple with Steve Jobs bite equation (ComputableData) One comment, posted by Lucian on August 22, 2013 at 2:48 am: “Plotting and scheming… ;-)”
2c0ed94a76498f5f
Journal of Mathematical Chemistry, Volume 44, Issue 1, pp 142–171. Principles for determining mechanistic pathways from observable quantum control data. Original Paper. Hamiltonian encoding (HE) methods have been used to understand mechanism in computational studies of laser-controlled quantum systems. This work studies the principles for extending such methods to extract control mechanisms from laboratory data. In an experimental setting, observables replace the utilization of wavefunctions in computational HE. With laboratory data, HE gives rise to a set of quadratic equations for the interfering transition amplitudes, and the solution to the equations reveals the mechanistic pathways. The extraction of the mechanism from the system of quadratic equations raises questions of uniqueness and solvability, even in the ideal case without noise. Symmetries are shown to exist in the quadratic system of equations, which is generally overdetermined. Therefore, the mechanism is likely to be unique up to these symmetries. Numerical simulations demonstrate the concepts on simple model systems. Keywords: Schrödinger equation, quantum control, control mechanism, Hamiltonian encoding, quantum theory. Copyright information © Springer Science+Business Media, LLC 2007. Authors and Affiliations: 1. Mathematical Sciences, Carnegie Mellon University, Pittsburgh, USA; 2. Department of Chemistry, Princeton University, Princeton, USA.
00f60e631b902b38
Thursday, February 27, 2014 Science, Religion and Magick A savvy and intelligent occultist and ritual magician would never attempt to rewrite scientific knowledge nor would he or she attempt to prove that religious doctrine is somehow objectively scientific. A wise occultist knows that there are very important boundaries between what science has determined and what religion has determined. Magick, as a practical spiritual discipline, occupies that nether region or undefined domain that lies somewhere in between science and religion. A smart ritual magician wouldn’t make assumptions that oppose scientific thought, such as denying physical evolution or climate change on one hand, nor would he or she deny that religions are based on subjective spiritual truths that can’t be proven or measured, only experienced. As I have maintained previously in my articles and in my books, science has its domain and religion has yet another domain entirely. Perhaps they might intersect here and there, but they remain exclusive to the objective and subjective worlds in which they occur. Where problems arise is when science denies the possibility of subjective spiritual truths or when religion attempts to make its tenets objective or as absolute truths. Sacred writ is allegorical and based on subjective spiritual truths and science is governed by the scientific method. One perspective cannot over-rule the other, and they should, therefore, live happily side by side. This is especially true when you consider that science has done a pretty good job in defining the objective physical world, and religion has been good (more or less) at defining the subjective spiritual reality. Yet we live in a secular society so that our very freedoms to believe, worship or practice our faith as we choose is not dictated by the government. So, living in a secular nation, using the fruits of technology as driven by science, and choosing to believe and practice our spiritual faith in whatever manner we choose is the foundation for a diverse, harmonious and democratic society.  Unfortunately, we human beings live in a world of consciousness that intersects all of these realities, and at times, due to the nature of our subjective experience, they can seem to blend or merge into each other, even they are and should remain distinct. While dreams, imagination and intuition play very important roles in both religion and magick (and I might add, even science), they need to be disciplined so that they won’t lead us to make erroneous assumptions about ourselves and the world that we live in. Magick, much more so than religion, can be a great source of personal and subjective misinformation, especially when we take those experiences that we had while undergoing conscious transformation out of the context of our subjective experience. Even if other individuals who were either with us when we underwent this magical experience or who performed the same rite at a different time might make what is experienced by the individual more objective if it is corroborated by a group, it doesn’t make it an objective physical fact that can be tested by the scientific method. This particular issue with the nature of subjectivity represents one of the critical areas where magick can cause delusions or supply us with erroneous assumptions.    
Problems arise when someone takes religious doctrines (or occult tenets) and attempts to understand them as an objective physical reality, or correspondingly when science attempts to explain away or deride the paradoxes and inexplicable events that lie within the subjective experience of the observer. What we get when these domain boundary incursions occur is either pseudo-science, aggressive atheism or religious literalism, all of which are fraught with erroneous assumptions and overly simplistic explanations. As magicians who work with the shadowy fringes between the objective physical world and the subjective spiritual world it is our responsibility to carefully judge and understand the basis ofo both worlds. Simply put, it means that we should not attempt to overly objectify our subjective occult experiences, since in doing so we will open the door to personal delusion and adopt a propensity for mythic thinking. We should also be able to recognize hyperbole when it emerges in the declarations of scientists, religious leaders or particularly other occultists.  We saw this contrast between world views on display when Bill Nye recently debated physical evolution vs. creationism with Ken Ham, who happens to own and operate the Creationist Museum in Petersburg, KY. To declare that the world is not older than 6,000 years based on the chronology of the Old Testament Bible is to open oneself to ridicule and derision, since there are trees alive today that have been measured as being older than 6,000 years. Even Pat Robertson, who has publically maintained some pretty ridiculous notions rejects “Young Earth Creationism,” saying that it gives secularists and atheists grounds to ridicule and reject Protestant Christian theology. What wasn’t in evidence at the time of this debate was a corresponding agreement in either Catholic or Jewish circles, who seem to have an altogether different perspective for interpreting their sacred literature. I never heard of any Jewish Rabbi or Catholic theologian argue for interpreting the Bible literally, and I think that this is a key point that they seem to understand and that fundamentalist Christians don’t, and these different groups are reading the same texts! In my opinion, and it would seem that it is an accepted fact in many religious organizations, sacred writ consists of allegories disguised as spiritual truths, but these truths were never meant to contradict or replace the theories as generated from the scientific method. To force these religious truths into direct contradiction of established scientific theories is to foster a kind of pseudo-science that is more mythic than factual. That is how I would classify such hypotheses as Creationism or Intelligent Design. Correspondingly, there has lately been a lot of public buzz about a new hypothesis called Biocentrism that attempts to explain away the possibility of an objective physical world governed by the objective although relative phenomena of space and time. What might not seem obvious to the layperson is that Biocentrism, Creationism and Intelligent Design are all unprovable religious-based doctrines, as we shall see. When unqualified individuals attempt to re-explain or abrogate scientific theories without any evidence or for that matter, any counter theory, it ends up promoting the worst kind of pseudo-science. This is the crux of the problem facing our post-modern age. 
We seem to be haunted by the profound absolutist beliefs and propositions of our pre-scientific past and are unable or unwilling to embrace what modern science has been able to aptly prove. While it is true that we don’t have all of the answers and that the scientific community continues to mature and develop its understanding along with the tools it uses to demonstrate and prove its scientific hypotheses, there is more than enough of a foundation to be able to determine the boundary between the objective and subjective worlds. I believe that it is important for occultists and ritual magicians to understand this important boundary between the objective and subjective domains of their experiential worlds and to ensure that their beliefs and perspectives are scientifically “neutral.” What occultists and ritual magicians want to avoid is attempting to reinvent the metaphorical wheel when the “wheel” has been so well established and is an integral part of the world that we live in. To attempt such an exercise in futility is to invite ridicule and mockery from those not so aligned to occult and metaphysical truths. Let’s face the reality that someone else’s hypocrisy or hyperbole is quite amusing to us, but it isn’t so amusing when we are the butt of the same kinds of jokes. If we proceed in a manner that respects the accepted theories of science and also the tenets of religion and occultism, and thereby avoid making unwarranted claims on the objective world, then we will experience our occultism and magical phenomena in a sane and rational manner. What we experience subjectively must be kept within its individual context, but it still should be subject to the scrutiny of peer review. Such a regimen is not only optimal, but it should be pursued to the exclusion of other, more dubious approaches. This brings me to the topic of my article, and that is to deliver some pointed opinions about Biocentrism. There’s an old adage that if something sounds too good to be true then it probably isn’t true. This could be an important truism as to why Biocentrism has become popular amongst New Age types but is not accepted by the scientific community. When the biggest name of those either agreeing with or promoting your scientific counter-theory is Deepak Chopra, then your hypothesis is already in trouble. I say this without any disrespect to Deepak Chopra, whose writings I have found interesting and useful. However, he is hardly the objective judge of a new approach to interpreting Quantum Mechanics, which, by the way, has spawned a lot of hopeful imaginings from occultists and metaphysicians who have grasped onto an overly simplified version of this discipline in order to explain metaphysical or spiritual phenomena. Still, I digress from the topic. I have read the book “Biocentrism: How Life and Consciousness are the Keys to Understanding the True Nature of the Universe” written by Robert Lanza, MD with Bob Berman, and I must admit that it did indeed intrigue me at first. Biocentrism is a hypothesis that defines the material world as having arisen due to the impact of consciousness acting as the “observer,” so that the universe, which was in an indeterminate state of probability, became resolved into a material universe predisposed to supporting life. This is a case of the tail wagging the dog in order to explain what might otherwise be a minor paradox.
Thus, Robert Lanza has reversed the basic scientific assumption that living conscious beings evolved out of an essentially lifeless and soul-less material universe. Those who are occult students will immediately see that this is the Mind Before Matter perspective that has been prevalent in occult metaphysics and is a staple of the religious theology of the west. So, according to Dr. Lanza’s hypothesis, consciousness was the driver for the materialization of the universe. This would mean that this field of consciousness would have had to exist, or perhaps pre-exist, from the very beginning of the universe. It would almost seem that Dr. Lanza is proposing some kind of intelligent design by an unknown conscious entity or by a field of unified consciousness, but he doesn’t quite go that far. Instead, he just doesn’t define what that disembodied consciousness actually is. In fact, he fails altogether to define what consciousness is, and this is a subtle but fatal flaw in his entire hypothesis. At least in the occult metaphysics of the Mind Before Matter theme, the mind that existed prior to the manifestation of the universe is obviously either a Deity or represented by the Neoplatonic definition of the One. I admit that I was intrigued by Lanza’s speculation since it turns our notions about the nature of the universe and the power of human subjectivity into the reverse of what scientists have been saying since the 19th century. Such a hypothesis, if it were true, would put the subjective mind into the driver’s seat for all material creation, and that would shine a whole new light on such occult practices as magick. After getting excited by this new perspective, I decided that the prudent thing for me to do was to browse the web and read what other scientists thought of his hypothesis. What I found is that the scientific community has flatly rejected what Dr. Lanza has written, and I might add, for some really good reasons. This debate between the New Age proponents of Biocentrism and the adherents of accepted physical sciences, such as those who are on the cutting edge of Quantum Mechanics, reminded me of the debate about physical evolution between Nye and Ham. While Dr. Lanza does at least understand the science of Quantum Mechanics from the perspective of the layman, he is not an accredited physicist. He is, in fact, a medical researcher/engineer of some renown. However, despite his disdain of New Age fads and beliefs (as stated clearly in his book), Lanza has become something of a darling amongst the New Age populous, and his ideas about Biocentrism are already being mutated to propose concepts and ideas that he never states in his book. You can find a website here that pretty much demonstrates what his backers think and how they are using Quantum Mechanics to bolster all kinds of metaphysical truths. There is even a video on that web page that has captured Lanza’s basic public presentation of Biocentrism, just in case you want to get the gist of his hypothesis without having to read his book. Unavoidably, the scientific community has also responded to Biocentrism, and I might add, they have more or less excoriated his hypothesis quite completely. It would seem that Dr. Lanza’s popular book has taken some basic Quantum Mechanics experiments and given them an interpretation that they were never intended to support. 
You can find a good critique of Robert Lanza's Biocentrism, and also of Deepak Chopra, on an interesting website managed by Indian scientists (Nirmukta) who know all too well the metaphysical sources of this hypothesis. As I have stated previously, the argument about a conscious universe is a staple of the occult Mind Before Matter perspective, and it is a tenet that has not been scientifically proven so far. In fact, the scientific evidence still supports the Matter Before Mind perspective, and I doubt that will change in the future no matter what shape or form science takes. As occultists, we can appreciate the scientific perspective on this issue, and we can also see the spiritual truth associated with the occult and metaphysical perspective. Both represent the greater truths about the universe and the nature of the human spirit, yet one is materially objective and the other is spiritually subjective. Some of Dr. Lanza's seven principles of Biocentrism are very popular and in evidence today amongst those who are engaged in occultism and magick. Lon Milo Duquette has stated that the reality of occultism is all in our minds; it's just that our minds are a lot bigger than we think. This might presuppose that Mr. Duquette is aware of Dr. Lanza's hypothesis, but the similarity is rather compelling whatever the truth. However, the fifth principle of Biocentrism is the main argument, and in it you can see an interesting bias appear. That principle is basically stated as, "The universe is fine-tuned for life, which makes perfect sense as life creates the universe, not the other way around. The 'universe' is simply the complete spatiotemporal logic of the self." Lanza's premise that "life creates the universe, not the other way around" is pretty much taken from the annals of ancient metaphysics and western occultism in general. It is also the underlying theme for most religious cosmogony, and it is the crux of the debate between Creationism and Evolution. We can find it variously promoted in the New Thought religious paradigms of the turn of the 20th century, and it is also evident in the New Age. Robert Lanza has come up with this startling perspective because he has followed the Quantum Mechanics maxim that it is the power of the observer that determines what is perceived when performing and measuring the results of the behavior of sub-atomic particles. In other words, Dr. Lanza is interpreting this classical QM theory to mean that all matter requires a conscious observer in order to collapse the probability field and determine an outcome. Of course, if we take this to its ultimate conclusion, then we would expect the mind to formulate all reality and that physical reality doesn't really exist. It reminds me of the New Age maxim that we make our own reality, although the opposite is a more powerful but obvious truth: that reality shapes and makes us who we are. The error in this logic is that the physical world does have an objective and verifiable quality despite the fact that how we perceive it is derived from our sense organs. For instance, light still has objective and measurable qualities that determine the spectrum of colors that we observe, since these are represented by different wavelengths as detected by the nerve cells in our eyes and processed by our brains. (We don't just make up the colors in our head as an interpreting factor.) This is also true of time and space: while they might be relative, they are still objective facts.
Our minds are indeed powerful, but they are bounded and limited by physical laws that uniformly affect all of us. If this were not true, then our world would be run by mentally projected magical powers and not by technology that is based on physical laws. Robert Lanza has made pronouncements that have overly simplified and distorted the actual science of Quantum Mechanics. He disregards the most important question that his hypothesis leaves unanswered, namely: if conscious life shaped the universe, then why is the origin of life so recent compared to the age of the universe? Consciousness is localized to living beings with a brain, yet somehow it is expanded to encapsulate cultures and languages, thus making it a shared phenomenon. This is part of the mystery of consciousness that has yet to be tackled in a satisfactory manner by scientists. Then there is the question: is consciousness energy, and if so, what kind of energy? While energy itself might be indestructible, the death of the brain causes all body-centered consciousness to cease. Consciousness seems to be beyond and prior to the material brain (according to metaphysicians), but there is no objective truth, so far, that would lead scientists to presume that this is indeed a fact. Dr. Lanza's hypothesis also says that space and time aren't objectively real, but in fact they are real, just not absolute. In this, I believe that science has the better explanations, even though there are still some questions not yet answered. We can see Biocentrism (like other creationist-based beliefs) functioning as a typical domain distortion between a religiously based philosophy and empirical science, and as such, it should be taken with a grain of salt. Occult philosophy declares the principle that the Mind originated before Matter, and Science has declared the opposite theory, that Matter precipitated Mind. Both perspectives present an element of truth, but the context for either one is completely different. One is a materialistic and objective perspective, and the other is a spiritual and metaphysical perspective. Thus, both are true as long as they retain their appropriate boundaries. It is foolish for occultists to meddle with science and engage in pseudoscience, and it's probably foolish for scientists to attempt to define or debunk spiritual beliefs based on physical laws. A quote from the website Nirmukta seems to define the nature of the problem of Biocentrism and why it is a seductive but erroneous proposition. According to the latest cutting-edge scientific perspective on Quantum Mechanics, the nature of the "observer" has been redefined in a significant way, and there is experimental evidence to back up that hypothesis. "A different resolution to the problem of interfacing the microscopic quantum description of reality with macroscopic classical reality is offered by what has been called ‘quantum Darwinism.’ This formalism does not require the existence of an observer as a witness of what occurs in the universe. Instead, the environment is the witness. A selective witness at that, rather like natural selection in Darwin's theory of evolution. The environment determines which quantum properties are the fittest to survive (and be observed, for example, by humans). Many copies of the fitter quantum property get created in the entire environment (‘redundancy’).
When humans make a measurement, there is a much greater chance that they would all observe and measure the fittest solution of the Schrödinger equation, to the exclusion (or near exclusion) of other possible outcomes of the measurement experiment." This new hypothesis in QM pretty much destroys Lanza's hypothesis, since the environment itself can be the witness, and since there are multiple copies of the quantum property of which the "fittest" is the one that gets measured. So it would appear that consciousness (i.e., a direct observer) is not required to collapse a probability field into an observable material manifestation. After fully examining what the scientists had to say, I found that the hypothesis that they provided, which debunks Biocentrism, is far more interesting and compelling. I have always found science to be intriguing and fascinating because it establishes the nature of the objective universe that I live in. I also know that science is limited in what it can study and prove, and I am at peace and comfortable with the ambiguities, the realm of the unknown, and the mysteries of mind and spirit, as well as the established scientific facts. Will science ever be able to define consciousness or determine the nature of the human soul? While the nature of the physical phenomenon of consciousness that is associated with the human brain is something that can be measured and could be experimentally proven, I suspect that the human soul and even the nature of Spirit itself will be far beyond the boundaries of science, at least for the foreseeable future. Frater Barrabbas
The Nature of Things
Presidential Address given to the British Society for the Philosophy of Science by J.R. Lucas on June 7th, 1993
It would be improper for a President to play safe. After two years of curbing my tongue and not making all sorts of observations that have sprung to my mind, in order to let you have an opportunity of having your say, I am now off the leash. And whereas mostly in academic life it is appropriate to adopt a prudential strategy, and not say anything that might be wrong, I owe it to you on this occasion to play a maximax strategy, to speak out and say what I really think, being willing to run the risk of being wrong in order not to forgo the chance of actually being right in an area of the philosophy of science which must, I think for ever, be largely a matter of metaphysical speculation. I stand before you a failed materialist. Like Lucretius, from whom I have borrowed my title, I should have liked to be able to explain the nature of things in terms of ultimate thing-like entities of which everything was made, and in terms of which all phenomena could be explained. I have always been a would-be materialist. I remember, when I was six, telling my brother, who was only two, in the corner of a garden in Guildford, that everything was made of electricity and believing that electrons were hard knobbly sparks, and later pondering whether position was a sort of quality, and deciding that it was, absolutely considered, but that relative positions, that is to say patterns, as seen in the constellations in the sky, were only in the eye of the beholder. I am still impelled to a very thing-like view of reality, and would like to explain electricity in terms of whirring wheels, and subatomic entities as absolutely indivisible point-particles, each always remaining its individual self, and possessed of all the qualities that really signified. I find it painful to be dragged into the twentieth century, and though my rational self is forced to acknowledge that things aren't what they used to be, I find it hard to come to terms with their not being what I instinctively feel they have got to be, and am still liable to scream that the world-view we are being forced to adopt cannot be true, and that somehow it must be fitted back into the Procrustean bed of our inherited prejudices. But I am not going to ask you to listen to my screams. Rather, I shall share with you my attempts to overcome them, and work out new categories for thinking about the nature of the world, and a correspondingly less rigid paradigm of possible explanation. It has taken me in two different directions. On the one hand reality is much softer and squodgier than I used to think. It is not only that the knobbliness is less impenetrable, as quantum tunnelling takes over, nor that it is fuzzier, without the sharp outlines of yestercentury, but, more difficult to comprehend, the very concept of haecceitas, as Duns Scotus called it, this-i-ness, or transcendental individuality, in Michael Redhead's terminology, 1 has disappeared from the categories of ultimate reality. On the other hand, reason has become much wider and more hospitable to new insights from various disciplines. The two changes are connected.
Our concept of a thing, in order to be more truly a thing, has been developed into that of a substance, and substances have come to need to have more and more perfections, and we have therefore come to identify as substances more sophisticated combinations of more recherché features; and with this change in what we regard as a thing has come also a corresponding change in our canons of explanation. It will be my chief aim this evening to show how our changed apprehension of reality has opened up new vistas of rationality, and how the wider concept of rationality we have been led to adopt has in turn altered our view of what constitute real substances. The corpuscularian philosophy posited the ultimate constituents of the universe as qualitatively identical but numerically distinct, possessing only the properties of spatial position and its time-derivatives, and developing according to some deterministic law. In the beginning, on this view, God created atoms and the void. The atoms, or corpuscles, or point-particles, were thing-like entities persisting over time, each for ever distinct from every other one, each always remaining the same, each capable of changing its position in space while retaining its individual identity. Spatial position constituted the changeable parameter which explained change without altering the corpuscle's own identity. Space was the realm of mostly unactualised possibility, of changes that might, but mostly did not, occur. But space also performed the logical function of both distinguishing between qualitatively identical corpuscles---two thing-like entities cannot be in the same place at the same time---and providing, in spatio-temporal continuity, a criterion of identity over time. It was thus possible for each point-particle to be like every other one, but to be a different particular individual, and this particularity affected the corpuscularians' ideal of explanation, articulated by Laplace, and much refined in our own time by Hempel. Scientists seek generality, and eschew the contingent and the coincidental. In the Hempelian paradigm, the focus of interest is on the covering law, which is general, and not on the initial conditions, which just happen to be what they are, and can only themselves be explained by the way earlier states of the universe happened to be. Boundary conditions, being the particular positions and velocities of particular point-particles, are too particular to constitute the sort of causes that scientists, in their search for generality, are willing to take seriously as genuinely explanatory. The corpuscularian philosophy had many merits. It reflected our experience of things: stable objects that persist over time, clearly individuated by exclusive space-occupancy, capable of change without losing their identity. As a metaphysical system it had great economy and power. All macroscopic things, all events and phenomena, were to be explained in terms of the positions and movements of these ultimate entities. There was a clear ontology, a clear canon of explanation, and a clear demarcation between physically necessary laws and purely contingent initial conditions. Of course, there were also grave demerits. 
From my own point of view---though I have failed to persuade Robin Le Poidevin of this 2---time is essentially tensed, and it counts against the corpuscularian scheme that it did not account for the direction of time or the uniqueness of the present: more influential in the history of science was the account of space, and the difficulty in formulating a plausible account of how corpuscles could interact with one another, which in due course led us to replace corpuscularian by field theories, as being better able to account for the propagation of causal influence. The vacuum, though adequate for giving things room to exist and move in, was too thin to let them interact with one another, and Voltaire has had to return from London to Paris. But it was not only space that proved too thin to do its job. The ultimate thing-like entities not only failed to accommodate the things of our actual experience, but have turned out not to be thing-like at all. Although the atoms of modern chemistry and physics are moderately thing-like, subatomic entities are not. We do not obtain predictions borne out by observation if we count as different the case of this electron being here and that there and the case of that being here and this there. Instead of thinking of the word `electron' as being a substantive referring to a substantial, identifiable thing, we do better to think of it as an adjective, with some sense of `negatively charged hereabouts'. We do not feel tempted to distinguish two pictures, one of which is red here and red there, and the other of which is red there and red here; the qualities referred to by adjectives lack haecceitas, this-i-ness, and are real only in so far as they are instantiated. We are forced to deny this-i-ness to electrons and other sub-atomic entities in order to accommodate empirical observations, but it is not just a brute fact, but rather the reflection of the probabilistic structure of quantum mechanics. The loss of determinateness in our ultimate ontology is the concomitant of our abandoning determinism in our basic scheme of explanation. Probabilities attach naturally not to specific singular propositions, but to general propositional functions, or, as Colin Howson puts it, 3 generic events, or, in Donald Gillies' terminology, 4 repeatable situations. Although you can intelligibly ask what the probability is of my dying in the next twelve months, the answer is nearly always only an estimate, extrapolated from the probabilities of Englishmen, males, oldie academics, non-smokers, non-diabetics, and other relevant general types, not dying within the subsequent year. Calculations of probabilities depend on the law of large numbers, assumptions of equiprobability, or Bayes' Theorem, which all ascribe probabilities to propositional functions dealing with general properties rather than to singular propositions asserting ultimate particularities. If we accept the probabilistic view of the world, we can no longer picture the universe as made up of particular thing-like entities that Newton could have asked Adam to name, but as a featured something, whose underlying propensities could be characterized in quantum-mechanical terms, and whose features could be calculated up to a point, and found to be borne out in experience. The loss of particularity legitimises a paradigm shift in our canon of explanation.
In his Presidential Address, Professor Redhead noted the shift from a Nineteenth Century ideal, in which we could deduce the occurrence of events granted a unified theory together with certain boundary conditions, to a Twentieth Century schema, which, although less demanding, in as much as it is not deterministic, is more demanding, in that it seeks to explain the boundary conditions too. 5 Outside physics that has always been the case---and often within physics too. It is one of the chief objections to the Hempelian canon, an objection expressed by many of those present here tonight---Nancy Cartwright, John Worrall, Peter Lipton---that it fails to accommodate the types of explanation scientists actually put forward. 6 It depends on the science concerned what patterns of law-like association, to use a phrase of David Papineau's, count as causes. 7 Different sciences count different patterns of law-like associations as causes because they ask different questions and therefore need to have different answers explaining differently with different becauses. The fact that different sciences ask different questions is of crucial importance. Once we distinguish questions from answers, we can resolve ancient quarrels between different disciplines. 8 The biologists have long felt threatened by reductionism, and felt that there was something amiss with the claim that it was all in the Schrödinger equation, or as Francis Crick put it, ``the ultimate aim of the modern movement in biology is in fact to explain all biology in terms of physics and chemistry''. 9 But their claim that there was something else, not in the realm understood by physicists, smacked of vitalism, and was rejected out of hand by all practising physicists. Vitalism made out that answers were in principle unavailable, whereas what is really at issue is not a shortage of answers but an abundance of questions. It was not a case of biologists asking straightforward physicists' questions and claiming to get non-physicists' answers, but of their asking non-physicists' questions, to which the physicists' answers were germane, but could not, in the nature of the enquiry, constitute an exhaustive answer to what was being asked. Biologists differ from physicists in what they are interested in---no hint of vitalism in pointing out that the life sciences investigate the phenomenon of life---and in pursuing their enquiries pick on features which are significant according to their canons of interest, not the physicists'. What is at issue is not whether there is some physical causal process of which the physicists know nothing, but whether there are principles of classification outside the purview of physics. It is a question of concepts rather than causality. My favourite, excessively simpliste example is that of the series of bagatelle balls running down through a set of evenly spaced pins and being collected in separate slots at the bottom: we cannot predict into which slot any particular ball will go, but we can say that after a fair number have run down through the pins, the number of balls in each slot will approximate to a Gaussian distribution. There is nothing vitalist about a Gaussian distribution, but it is a probabilistic concept, unknown to Newtonian mechanics. In order to recognise it, we have to move from strict corpuscularian individualism to a set, an ensemble, or a Kollectiv of similar instances, and consider the properties of the whole lot. 
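To make the bagatelle example concrete, here is a small simulation, offered purely as an illustrative aside (it is not part of the original address, and the numbers of balls and pins are arbitrary): no individual ball's slot can be predicted, yet the aggregate counts settle into the familiar bell-shaped, approximately Gaussian profile.

# Bagatelle (Galton-board) sketch: each ball is deflected left or right at each of
# `rows` evenly spaced pins; its final slot is the number of rightward bounces.
# Individual outcomes are unpredictable, but the histogram over many balls
# approximates a binomial, and hence nearly Gaussian, distribution.
import random
from collections import Counter

def drop_balls(n_balls=10000, rows=12, seed=1):
    rng = random.Random(seed)
    return Counter(sum(rng.random() < 0.5 for _ in range(rows)) for _ in range(n_balls))

slots = drop_balls()
for slot in range(13):
    count = slots.get(slot, 0)
    print(f"slot {slot:2d}: {'#' * (count // 50)} ({count})")

Nothing in the simulation refers to any particular ball; the Gaussian shape is a property of the ensemble, which is exactly the shift of attention the example is meant to illustrate.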
More professionally, all the insights of thermodynamics depend on not following through the position and momentum of each molecule, but viewing the ensemble in a more coarse-grained way, and considering only the mean momentum of those molecules impinging on a wall, or the mean kinetic energy of all the molecules in the vessel. Equally the chemist and the biologist are not concerned with the life histories of any particular atoms or molecules, and reckon one hydrogen ion as good as another, and one molecule of oxygen absorbed in the lungs of a blackbird as good as another. 10 The chemist is concerned with the reaction as a whole, the biologist with the organism in relation to its environment and other members of its species. A biologist is not interested in the precise accounting for the exact position and momentum of every atom, even if that were feasible. Such a wealth of information would only be noise, drowning the signal he was anxious to discern, namely the activities and functioning of organisms, and their interactions with one another and with their ecological environment. It is the song of Mr Blackbird as he tries to attract the attention of Mrs Blackbird that concerns the ethologist. He is not concerned with exactly which oxygen molecules are in the blackbird's lungs or blood stream, but with the notes that he trills as dawn breaks, and their significance for his potential mate. If he were presented with a complete Laplacian picture, his first task would be to try and discern the relevant patterns of interacting carbon, oxygen, hydrogen and nitrogen atoms that constituted continuing organisms, and to pick out the wood from the trees. In this change of focus the precise detail becomes irrelevant. He is not, in Professor Watkins' terminology, a methodological individualist. What interests him is not the life history of particular molecules of oxygen, but the metabolic state of the organism, which will be the same in either case. Different disciplines, because they concentrate on different questions, abstract from irrelevant detail, in order to adduce the information that is relevant to their concerns. In practice scientists have long recognised that in order to see the wood they must often turn their attention away from the trees. But whereas that shift was to be defended simply as a matter of choice on their part, now it is legitimised by our new understanding of the logical status of the boundary conditions we are interested in. If our ultimate theory of everything can talk only in general terms, and cannot assign positions and velocities to particular atoms, it follows that it is no criticism of other theories that they can talk only in general terms too. Hitherto there has been a sense of information being thrown away, information which was there and ultimately important, so that we were, in some profound way, being given less than the whole truth. There was a Laplacian Theory of Everything which was in principle knowable and in principle held the key to all ologies. Every other discipline was only a partial apprehension of ultimate truth, useful perhaps because more accessible for our imperfect minds, but conveying only imperfect information none the less. Just as we rely on journalists to reduce the welter of information about the Balkans or South America to manageable size, so chemists and biologists seemed to select and distil from total truth to tell us things in a form we were capable of taking in. Compared with the high priests of total truth, they were mere popularisers.
I may discern Gaussian patterns in long runs of bagatelle balls, but they are patterns only in the eye of an ill-informed beholder: better informed, I should see why each ball went into the slot that it did, and be aware of the occasions when a non-Gaussian distribution emerged. My Gaussian discernment would seem a rough and ready approximation, like describing France as hexagonal, which is fair enough for some purposes, but falls far short of being fully true. Even though the things we pick on as worthy of note and in need of explanation---the shape of the Gaussian curve, the significance of bird-song---lie outside the compass of the limited concepts and explanation of a Theory of Everything, the possession of perfect information trumps curiosity. The case is altered if there is no fully particularised ultimate reality, and no complete theory of it. We cannot claim that ultimately there are trees which exist in their own right, whereas the woods are only convenient shorthand indications of there being trees there: we cannot trump the different, admittedly partial, explanations put forward by different disciplines by a paradigm one that claims to be complete, nor can we suppose that there is some bottom line that establishes a final reckoning to which all other explanations must be held accountable. All natural sciences concern themselves with general features of the universe, and there is no reason to discountenance any science because it selects some general features rather than others. Questions about boundary conditions cannot, then, be faulted on grounds of their being general, and not ultimately particular. The answers, too, are to be assessed differently, once the mirage of a complete Laplacian explanation is dispelled. Not only is it irrelevant to the ethologist's purposes, which particular mate the blackbird seeks to attract, or which oxygen molecules are in the blackbird's lungs or blood stream, it is, in its precise detail, causally irrelevant too. The blackbird's song is not addressed to a particular Mrs Blackbird in all her individuality, but to potential Mrs Blackbirds in general, and if one mate proves hard to win, another will do. Much more so at lower levels of existence: if one worm escapes the early bird, another will be equally succulent; if one molecule of oxygen is not absorbed by his haemoglobin, another will. Explanations are inherently universalisable, and if the physical universe is one of qualitatively identical features that cannot, even in principle, be numerically distinguished, then the explanations offered by other disciplines are ones that cannot, even in principle, be improved upon by a fuller physical explanation. Indistinguishability and indeterminism imply a looseness of fit on the part of physical explanation which takes away its Procrustean character. The new world-view makes room for there being different sciences which are autonomous without invoking any mysterious causal powers beyond the reach of physical investigation. The autonomy I am arguing for is, in the words of Bechner, 11 theory autonomy rather than a process autonomy: we use new concepts to ask new questions, rather than find that old questions have suddenly acquired surprising new answers. But this distinction between questions and answers offers a solution to the problem of reductionism only if there is some further fundamental difference between the concepts involved in framing the questions asked by different sciences.
Otherwise, they might still be vulnerable to a take-over bid on the part of physics. A reductionist programme whereby every concept of chemistry and biology is exhaustively defined in terms of physical concepts alone might still be mounted. Thus far I have only cited examples---Gaussian curves, temperature, blackbird song---where reductive analysis seems out of the question. But the unavailability of reductive analyses is much wider than that. Tony Dale bowled me out recently, when I had overlooked the fact that the concept of a finite number cannot be expressed in first-order logic. The very concept of a set, and more generally of a relational structure, is a holistic one. But rather than multiply examples, let me cite an in-principle argument. Tarski's theorem shows that the concept of truth cannot be defined within a logistic calculus: roughly, although we can teach a computer many tricks, we cannot program it to use the term `true' in just the way we do. It therefore seems reasonable to hold that other concepts, too, are irreducible, and the failure of the reductionist programme is due not to some mysterious forms of causality but to our endless capacity to form new concepts and in terms of them to ask new questions and seek new types of explanation. The new world-view we are being forced to adopt not only permits us to concern ourselves, qua scientists, with general features, but impels us to do so. Even the corpuscularian philosophy gave somewhat short shrift to the things of ordinary experience. Most configurations of atoms were transitory. Even rocks were subject to the attrition of time, and the mountains, far from being eternal, were being eroded by the wind and the rain. Processes could in principle withstand the ravages of time, and at first glance Liouville's theorem seemed to suggest that point-particles whose initial conditions were close to one another would end up close still. But although, indeed, there was a one-one correlation between initial and final conditions, the correlation was much less stable than at first sight appeared. True, the volume in phase-space remains constant, but its shape does not, and may become spreadeagled with the elapse of time, so that the very smallest difference in initial conditions can lead to a wide difference in outcome. Poincaré pointed out the logic of the roulette wheel, 12 and we now regularly hear of the damage done by irresponsible butterflies on the other side of the universe destroying the reliability of met office forecasts. No longer can Newton number the ultimate things among the (kumaton anerithmon gelasma), the innumerable laughter of quantum waves, but if he wants atoms, must raise his sights to those stable solutions of the Schrödinger time-independent equation, which, one way or another, will be realised. And although some solid objects are likely to remain substantially the same over time, most collocations of atoms are evanescent. If we seek stability amid the flux of molecular movement, we are likely to find it at a higher level of generality where chaos theory can indicate the recurrence of relatively stable patterns. In the Heraclitean swirl eddies may last long enough to be identified. Flames are processes, but possess the thingly property of subsisting and sometimes of being identified and individuated. So if we want permanence, we shall be led to focus on certain general features, certain types of boundary condition, which can persist over reasonable stretches of time.
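The point made above about phase-space volumes becoming spreadeagled, and about butterflies upsetting forecasts, can be seen numerically in a few lines; the following sketch is again only an illustrative aside of my own, using the standard chaotic logistic map rather than any system discussed in the address. Two trajectories whose starting points differ by one part in ten billion part company within a few dozen steps.

# Sensitivity to initial conditions: iterate the chaotic logistic map x -> r*x*(1-x)
# from two almost identical starting points and watch the separation grow.
def logistic_orbit(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)   # a difference far below any realistic measurement
for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.3e}")

By around the fortieth step the two runs bear no useful resemblance to one another, even though the rule generating them is strictly deterministic.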
Just as chemists look to the time-independent Schrödinger equation to show them what stable atomic configurations there are, and would like to be able to work out in detail what molecules are stable too, so at a much higher level, biologists take note of organisms and species of organisms, which are the basic things of their discipline. Organisms are homeostatic, self-perpetuating and self-replicating. They are processes, like flames, but longer lasting and with greater adaptability in the face of adventitious change. They react to adverse changes in the environment so as to keep some variables the same, which together constitute the same organism that survives over time in the face of alterations in the environment. There is thus an essential difference between organism and environment which differentiates all the life sciences from the physical ones. Thinghood has become modal as well as diachronic. It is not enough to continue to be the same over time: organisms need to be able to change in some respects in order to remain the same in other, more important, ones. Even if I were to alter the environment by watering the garden, moving the bird table, replacing the coconut with peanuts, the flora and fauna, though responding in various ways to the altered situation, would mostly persist as the self-same organisms as if I had made no alterations. This invariance under a limited range of altered circumstance is more like the invariance of operation of natural laws than the continuance of atomic matter, but goes further; laws of nature would operate even if initial conditions were different, but do not characteristically alter their mode of operation so as to restore some antecedent condition, whereas biological organisms typically do, provided the alteration of initial conditions is not too drastic. Homeostasis is a familiar concept in science---but logically a treacherous one. A homeostatic system tends to maintain the same state, and sameness can easily shift without our noticing it. The simple negative feedback of a flame or an eddy or a thunderstorm results in the process not being interrupted by every adventitious alteration of circumstance, but the persistence is short-lived none the less. Living organisms last longer, and are better able to withstand the attrition of time, because they react to counter the effect of a wider variety of circumstances. The requirement of persistence alters what we count as the substance that persists, and per contra as the concept of substance develops, so also does our idea of what counts as survival, and more generally what goals the substance seeks to secure and maintain. We begin to recognise as important explanatory schemata not only the survival of the organism, but the survival of the species, and now, even, the survival of the biosphere. And we begin to see not only the individual's maximising its own advantage as a rational goal, but the value of co-operative action, if we are to escape from the Prisoners' Dilemma and not be driven by individual selfishness into collective sub-optimality. Beyond that, I find it difficult to peer, but still hope dimly to discern the lineaments of what, if I may borrow a suggestive phrase from Nicholas Maxwell, 13 we might describe as an aim-oriented rationality. The concept of homeostasis is borrowed from control engineering. It leads on naturally into information theory, and information theory provides the key concepts for understanding genetics.
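Since homeostasis is here being borrowed from control engineering, a toy negative-feedback loop may help fix ideas; the sketch below is an illustrative aside with made-up constants, not a model of any real organism. The state is continually pushed about by the environment, but a corrective response proportional to the gap between the state and its set point pulls it back, so the regulated variable moves far less than the environment does.

# Minimal negative-feedback ("thermostat") sketch: the internal temperature is dragged
# toward the ambient temperature, but a correction proportional to the error keeps it
# near the set point; this is the logical skeleton of homeostasis.
SET_POINT = 37.0   # desired internal temperature (arbitrary units)
GAIN = 0.5         # strength of the corrective response (made-up constant)
LEAK = 0.1         # how strongly the environment drags the state toward itself

def regulate(ambient_temps, start=37.0):
    state = start
    history = []
    for ambient in ambient_temps:
        disturbance = LEAK * (ambient - state)     # environment pushes the state
        correction = GAIN * (SET_POINT - state)    # feedback pulls it back
        state = state + disturbance + correction
        history.append(state)
    return history

# A cold spell followed by a heat wave: the ambient swings by 35 units,
# while the regulated state moves by only a few.
ambient = [10.0] * 20 + [45.0] * 20
for step, temperature in enumerate(regulate(ambient)):
    if step % 5 == 0:
        print(f"step {step:2d}: internal temperature = {temperature:5.2f}")

Setting GAIN to zero leaves the state simply tracking the environment; the stronger and more varied the corrective responses, the longer-lasting the "self" that is maintained, which is the contrast the paragraph above draws between a flame and a living organism.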
As self-perpetuation gives rise to self-replication, there is a greater need for the exact specification of the self, and the chromosome needs to be understood not only biochemically as a complicated molecule of DNA, but as a genetic code specifying what the new organism is to be like. Once again, the change of emphasis from the particular physical configuration to the general boundary condition, and the looseness of fit between the probabilistic explanations of the underlying physics and the quite different explanations of the emergent discipline allow us to accommodate the new insights without falling into obscurantist obfuscation. 14 Homeostasis also implies sensitivity. If an organism is to be independent of its environment, it must respond to it so as to counteract the changes which the changes in the environment would otherwise bring about within the organism itself: if I am to maintain a constant body temperature, I must sweat when it is hot outside and shiver when it is cold. Even plants must respond to light and to the earth's gravitational field. The greater the independence and the more marked the distinction between the self and the non-self, the greater the awareness the self needs to have of the non-self, and the more it needs to register, so as to be able to offset, untoward changes in the world around it. We are still in the dark as to what exactly consciousness is or how it evolved, but can see in outline why it is needed. A windowless monad cannot survive the changes and chances of this fleeting life---sensitivity to clouds on the horizon no bigger than a man's hand is the price of not being destroyed by unanticipated storms. My interest lies in the end of this line of development. We can give a general characterization of what it is for a system to be able to represent within itself some other system, and so can think of organisms in terms not of biochemistry or evolutionary biology but of information theory and formal logic. And from this point of view we can consider not only consciousness but self-consciousness, and a system that can represent within itself not just some other system but itself as well. There are a whole series of self-reflexive arguments. Popper, a former President of our Society, has devoted much energy to arguing from them to an open universe; in particular, he argues from the impossibility of self-prediction. MacKay argues similarly---other people may predict what I am going to do, but I cannot. 15 Many people, Haldane, Joseph, Malcolm, Mascall, Popper, Price, Wick and others, have been concerned about rationality, and have argued that if determinism or materialism were true, we could not be rationally convinced of it. 16 Reductive metaphysics, which reduces rationality to something else---the movement of physical particles, for example---cannot leave room for the rational arguments which alone could establish its truth. I myself found these arguments intriguing, and indeed, compelling, but extraordinarily difficult to formulate in a cast-iron way. Eventually I came up with an argument based on Gödel's theorem, which is indeed a version of these arguments, and is intended to show in one swoop the failure of any reductionist programme as regards reason. 
I have received much stick for using Gödel's theorem to show that the mind is not a Turing machine, but I am quite impenitent on that score, and believe that the argument goes much further, and shows not only the impossibility of reducing reason to the mere following of rules, but the essential creativity of reason. We can never formalise reason completely or tie it down to any set of canonical forms, for we can always step outside and view all that has been thus far settled from a fresh standpoint. In particular we can find fresh features that seem significant, and seek fresh sorts of explanation of them. It does, I believe, establish the essential openness of the universe, granted only that there is at least one rational agent. If there be rational agents, since we are rational agents, it follows that the course of events in the universe cannot be reduced to a system of things evolving according to a determinate algorithm, but that there are always new opportunities and further possible exercises of rationality. The interplay between things and explanations is illuminating. Instead of starting with things, we are able to identify things only at higher levels of organization, and the higher we go the more thingly properties we find. Atoms have stability (usually), but are qualitatively identical with many others. Organisms have more individuality, and are less commonly clones, but still view their environment if not in terms of chemical similarity nevertheless in terms of fungibility, readily replacing one food supply by another. Nor is it only the environment that organisms regard fungibly: although some birds are faithfully monogamous, many are not, and if one Mrs Blackbird fails to respond to the musical blandishments of her would-be mate, another will serve his reproductive purposes just as well. Human love likewise is not uniformly faithful to the individual ideal, but with human beings we can see this as a derogation from humanity, and can construct a coherent concept of unique individuality, according to which this person is irreducibly himself, and essentially different from anybody else. 17 Our idea of thinghood leads us from the utterly simple and essentially similar atoms of the corpuscularians to infinitely complex and unique persons, each necessarily different from every other. The different ideals of thinghood support different paradigms of explanation. Since different sorts of feature characterize things at different levels, and the features that characterize at the higher levels cannot be completely defined in terms of those that play a part in lower-level explanations, the higher-level explanations cannot be reduced to lower-level ones. As we have seen, a Gaussian curve cannot be defined in terms of a Laplacian explanation, for it essentially involves the notion of an ensemble or Kollectiv. Higher-level systems are not derivable from some fundamental system, but are, instead, autonomous. We cannot predict the exact position or velocity of a sub-atomic entity, but by means of the time-independent Schrödinger equation we can say what properties a hydrogen atom would have if it existed, and we can have good reason for supposing that many such atoms will exist, since they are stable configurations of quantum-mechanical systems. The explanations sought by a chemist are in terms of energy levels and the valency bonds they generate: those sought by the biologist are in terms of the maintenance of life and the continuation of the species. 
And as these explanations differ, so also do the things they are explanations about. Explanations influence what is to count as a thing, and ideas of what it truly is to be a thing influence what questions we ask, and what explanations we seek to discover. 18 We can see this, if we like, as a form of emergent evolutionary development: new levels of being evolve from lower, chemical elements from the flux after the Big Bang, molecules, organisms, consciousness, and self-consciousness, in the fullness of time; but we can also see it in terms of a hierarchy of Platonic forms and explanations, each going beyond the limits of its predecessors, and at the higher levels reaching out to ever new kinds of creative rationality. To summarise, then. The new scientific world-view differs from traditional corpuscularianism in not postulating some ultimate thing-like entities whose motions determine completely the state of the world not only at that time but at all subsequent ones too. Instead of there being particular point-particles, there are only general features, and instead of a rigid determinist law, there are only probabilities, which are, indeed, enough to enable us to make reliable predictions about many aspects of the world, but do not foreclose the possibility of other types of explanation being the best available. Other types of explanation are answers to other types of questions, and it is because we ask different questions that the different sciences are different. These different questions pick on different general features, often different types of boundary condition; and once we acknowledge that there is no metaphysical reason to reduce the generic characterization of boundary conditions typical of other sciences to the paradigm physical terms of Laplacian corpuscularianism, we can accept these other sciences as sciences in their own right, since, metaphysics apart, we have good reason to resist reductionism as applied to questions rather than answers. The abolition of ultimate things thus opens the way to our acknowledging the autonomy of the various sciences. At the same time, the notion of a thing leads us to pick out various types of boundary condition as instantiating, to a greater and greater degree, certain characteristic features of being a thing---permanence, stability, ability to survive adventitious alterations in the environment, and the like. As we follow these through, we find a natural hierarchy of the sciences in which we ask questions about more and more complicated entities, possessing more and more thing-like perfections. Things have gone up market. By an almost Hegelian dialectic our notion of a thing becomes transmuted into that of a substance, and in so far as we remain pluralists at all, we move from the minimal qualitatively identical, though numerically distinct, atoms of the corpuscularians to the infinitely complex, though windowed, monads of a latter-day Leibniz. Whether Lucretius would have been pleased at this outcome of the complex interplay between ontological intimations of existence and rationalist requirements of explicability, I do not know. 
But he could hardly complain at my taking this as my theme, here at an address to the British Society for the Philosophy of Science taking place in the London School of Economics, whose motto is taken from Virgil's description of him, and also expresses the common sentiment of all our members:
Felix qui potuit rerum cognoscere causas
Happy he who understands the explanations of things
Footnotes
1. Michael Redhead, ``A Philosopher Looks at Quantum Field Theory'', in Harvey Brown and Rom Harré, eds., Philosophical Foundations of Quantum Mechanics, Oxford, 1988, p.10.
2. Robin Le Poidevin, Change, Cause and Contradiction, London, 1991, esp. ch.8.
3. C.A.Howson and P.Urbach, Scientific Reasoning: the Bayesian Approach, La Salle, Illinois, USA, 1989, p.19.
4. D.A.Gillies, Objective Probability, Cambridge, 1973, esp. ch.5.
5. Reprinted in S.French and H.Kamminga, eds., Correspondence, Invariance and Heuristics, (Kluwer Academic Publishers, Holland), 1993, p.329.
6. Nancy Cartwright, How the Laws of Physics Lie, Oxford, 1983, ch.2, esp. pp.44-46; Peter Lipton, Inference to the Best Explanation, London, 1991, esp. ch.3; John Worrall, ``The Value of a Fixed Methodology'', British Journal for the Philosophy of Science, 39, 1988.
7. David Papineau, British Journal for the Philosophy of Science, 47, 1991, p.399.
8. I owe this point to H.C.Longuet-Higgins, The Nature of Mind, Edinburgh, 1972, ch.2, pp.16-21, esp. p.19; reprinted in H.C.Longuet-Higgins, Mental Processes, Cambridge, Mass., 1987, ch.2, pp.13-18, esp. p.16. I am also particularly indebted to C.F.A.Pantin, The Relations between the Sciences, Cambridge, 1968; and to A.R.Peacocke, God and the New Biology, London, 1986, and Theology for a Scientific Age, Oxford, 1990. Michael Polanyi emphasized the importance of boundary conditions and their relevance to the different sorts of explanation sought by different disciplines. In his ``Tacit Knowing'', Reviews of Modern Physics, October, 1962, pp.257-259, he cites the example of a steam engine, which, although entirely subject to the laws of chemistry and physics, cannot be explained in terms of those disciplines alone, but must be explained in terms of the function it is capable, in view of its construction, of performing. What is interesting about the steam engine is not the laws of chemistry and physics, but the boundary conditions, which, in view of those laws, make it capable of transforming heat into mechanical energy; it is the province of engineering science, not physics. The example of the steam engine is illuminating in that no question of vitalism arises. See also Michael Polanyi, ``Life Transcending Physics and Chemistry'', Chemical and Engineering News, August 21, 1967, pp.54-66; and ``Life's Irreducible Structure'', Science, 160, 1968, pp.1308-1312.
9. F.H.C.Crick, Of Molecules and Man, University of Washington Press, Seattle and London, 1966, p.10.
10. That the biologist is primarily concerned with boundary conditions of a special type is pointed out by Bernd-Olaf Küppers, Information and the Origin of Life, M.I.T. Press, Cambridge, Mass., U.S.A., 1990, p.163.
11. Compare the distinction drawn by M.Bechner between theory autonomy and process autonomy in his ``Reduction, Hierarchies and Organicism'' in F.J.Ayala and T.Dobzhansky, Studies in the Philosophy of Biology: Reduction and Related Problems, London, 1974, p.170; cited by A.R.Peacocke, God and the New Biology, London, 1986, p.9.
12. Henri Poincaré, Science and Method, tr. F.Maitland, London, 1914, p.68.
13. Nicholas Maxwell, From Knowledge to Wisdom, Oxford, 1984, esp. ch.4.
15. D.M.MacKay, ``On the Logical Indeterminacy of a Free Choice'', Mind, LXIX, 1960, pp.31-40.
16. See K.R.Popper, The Open Universe, ed. W.W.Bartley, III, London, 1982, ch.III, §§ 23, 24. Popper traces the argument back to Descartes and St Augustine. A further list is given in J.R.Lucas, The Freedom of the Will, Oxford, 1970, p.174. Further arguments and fuller references may be found in Behavioral Sciences, 1990, 13, 4.
17. I argue this in my ``A Mind of One's Own'', Philosophy, October, 1993.
18. Compare A.R.Peacocke, Theology for a Scientific Age, Oxford, 1990, p.41: ``Because of widely pervasive reductionist presuppositions, there has been a tendency to regard the level of atoms and molecules as alone `real'. However, there are good grounds for not affirming any special priority to the physical and chemical levels of description and for believing that what is real is what the various levels of description actually refer to. There is no sense in which subatomic particles are to be graded as `more real' than, say, a bacterial cell, a human person or a social fact. Each level has to be regarded as a slice through the totality of reality, in the sense that we have to take account of its mode of operation at that level.''
October 15, 2006
A breakthrough method for determining the behavior of electrons in atoms and molecules
A goal in chemistry has been to use a pair of electrons to represent any number of electrons accurately. A new method allows the use of pairs or trios of electrons to approximate any number of electrons. Since 1993, we have had approximations that were 71-96% accurate. The new method is 95-100% accurate. University of Chicago chemist David Mazziotti has created an improved solution for the contracted Schrödinger equation. The contracted Schrödinger equation may soon become solvable with a package of computer software, according to Mazziotti. Further reading: a 2003 article about recent work on the contracted Schrödinger equation, info on David Mazziotti, and the Mazziotti Group home page. Motivated by the contracted Schrödinger equation, we have also recently developed variational two-electron methods with systematic, nontrivial N-representability conditions. This second class of two-electron methods directly computes the effective two-electron probability distribution of a many-electron atom or molecule without any higher-electron probability distributions. Variational optimization of the ground-state energy in terms of only two effective electrons is treatable by a class of optimization techniques known as semidefinite programming. The variational two-electron method has been accurately applied to generating potential energy surfaces of molecules, including the difficult-to-predict dissociation curve for N2, where wavefunction methods fail to give physically meaningful results. While two-electron approaches are still in their early stages, the direct determination of chemical properties by mapping any atom or molecule onto an effective two-electron problem offers a new level of accuracy and efficiency for electronic structure calculations.
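To give a concrete feel for what "semidefinite programming" means here, the following toy sketch (an illustration only: a generic two-by-two problem, not Mazziotti's actual variational two-electron method or his N-representability conditions) minimizes an energy-like objective trace(H·D) over matrices D constrained to be positive semidefinite with unit trace, using the cvxpy optimization library. The answer it returns is simply the lowest eigenvalue of H, with the optimizer being the projector onto the corresponding eigenvector.

import numpy as np
import cvxpy as cp

# A small symmetric "Hamiltonian" with made-up entries, purely for illustration.
H = np.array([[1.0, 0.3],
              [0.3, -0.5]])

# Semidefinite program: minimize Tr(H D) over positive semidefinite D with Tr(D) = 1.
# Real two-electron (2-RDM) methods add many further N-representability constraints.
D = cp.Variable((2, 2), PSD=True)
problem = cp.Problem(cp.Minimize(cp.trace(H @ D)), [cp.trace(D) == 1])
problem.solve()

print("minimum of Tr(H D):", problem.value)   # equals the smallest eigenvalue of H
print("optimal D:")
print(D.value)

The point of the sketch is only the shape of the calculation: the optimization runs over a matrix that is forced to be physically admissible (positive semidefinite and correctly normalized) rather than over a many-electron wavefunction.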
MaplePrimes Announcement
Reporting from Amsterdam, it's a pleasure to report on day one of the 2014 Maple T.A. User Summit. Being our first Maple T.A. User Summit, we wanted to ensure attendees were not only given the opportunity to sit in on keynote presentations from various university or college professors, high school teachers and Maplesoft staff, but to also engage in active discussions with each other on how they have implemented Maple T.A. at their institution.
Featured Posts
Last week the Physics package was presented in a talk at the Perimeter Institute for Theoretical Physics and in a combined Applied Mathematics and Physics Seminar at the University of Waterloo. The presentation at the Perimeter Institute got recorded. It was a nice opportunity to surprise people with the recent advances in the package. Below follows the presentation with its sections closed, and at the end there is a link to a pdf with the sections open and to the related worksheet, used to run the computations in real time during the presentation.
Generally speaking, physicists still find that computing with paper and pencil is in most cases simpler than computing on a Computer Algebra worksheet. On the other hand, recent developments in the Maple system implemented most of the mathematical objects and mathematics used in theoretical physics computations, and brought the notation used in the computer dramatically closer to the one used with paper and pencil, diminishing the learning gap and computer-syntax distraction to a strict minimum. In this connection, the talk presents the Physics project at Maplesoft and illustrates the resulting Physics package tackling problems in classical and quantum mechanics, general relativity and field theory. In addition to the 10 a.m. lecture, there will be a hands-on workshop at 1 p.m. in the Alice Room. ...
Why computers?
- We can concentrate more on the ideas instead of on the algebraic manipulations
- We can extend results with ease
- We can explore the mathematics surrounding a problem
- We can share results in a reproducible way
Representation issues that were preventing the use of computer algebra in Physics
Notation and related mathematical methods that were missing:
- coordinate-free representations for vectors and vectorial differential operators
- covariant tensors distinguished from contravariant tensors
- functional differentiation, relativity differential operators and the sum rule for tensor contracted (repeated) indices
- Bras, Kets, projectors and everything related to Dirac's notation in Quantum Mechanics
Inert representations of operations, mathematical functions, and related typesetting were missing:
- inert versus active representations for mathematical operations
- the ability to move from inert to active representations of computations and vice versa as necessary
- a hand-like style for entering computations and textbook-like notation for displaying results
Key elements of the computational domain of theoretical physics were missing:
- the ability to handle products and derivatives involving commutative, anticommutative and noncommutative variables and functions
- the ability to perform computations taking into account custom-defined algebra rules of different kinds (problem-related commutator, anticommutator, bracket, etc. rules)
- vector and tensor notation in mechanics, electrodynamics and relativity
- Dirac's notation in quantum mechanics
Computer algebra systems were not originally designed to work with this compact notation, with such dense mathematical content attached, active and inert representations of operations, a noncommutative and customizable algebraic computational domain, and the related mathematical methods, all of this typically present in computations in theoretical physics. This situation has changed. The notation and related mathematical methods are now implemented.
Tackling examples with the Physics package
Classical Mechanics: Inertia tensor for a triatomic molecule
Problem: Determine the inertia tensor of a triatomic molecule that has the form of an isosceles triangle, with two masses m[1] at the extremes of the base and mass m[2] at the third vertex. The distance between the two masses m[1] is equal to a, and the height of the triangle is equal to h.
Quantum Mechanics: Quantization of the energy of a particle in a magnetic field
Show that the energy of a particle in a constant magnetic field oriented along the z axis can be written as H = ℏ ω_c (a† a + 1/2), where a† and a are creation and annihilation operators. The quantum operator components of the angular momentum L satisfy the commutation relations [L_j, L_k] = i ε_{j,k,m} L_m.
Unitary Operators in Quantum Mechanics (with Pascal Szriftgiser, from Laboratoire PhLAM, Université Lille 1, France)
A linear operator U is unitary if U^(-1) = U†, in which case U U† = U† U = 1. Unitary operators are used to change the basis inside a Hilbert space, which physically means changing the point of view of the considered problem, but not the underlying physics. Examples: translations, rotations and the parity operator.
1) Eigenvalues of a unitary operator and the exponential of Hermitian operators
2) Properties of unitary operators
3) The Schrödinger equation and unitary transforms
4) Translation operators
Classical Field Theory: The field equations for a quantum system of identical particles
Problem: derive the field equation describing the ground state of a quantum system of identical particles (bosons), that is, the Gross-Pitaevskii equation (GPE). This equation is particularly useful to describe Bose-Einstein condensates (BEC).
The field equations for the lambda*Phi^4 model Maxwell equations departing from the 4-dimensional Action for Electrodynamics General Relativity Given the spacetime metric, g[mu, nu] = (Matrix(4, 4, {(1, 1) = -exp(lambda(r)), (1, 2) = 0, (1, 3) = 0, (1, 4) = 0, (2, 1) = 0, (2, 2) = -r^2, (2, 3) = 0, (2, 4) = 0, (3, 1) = 0, (3, 2) = 0, (3, 3) = -r^2*sin(theta)^2, (3, 4) = 0, (4, 1) = 0, (4, 2) = 0, (4, 3) = 0, (4, 4) = exp(nu(r))})) a) Compute the trace of "Z[alpha]^(beta)=Phi R[alpha]^(beta)+`&Dscr;`[alpha]`&Dscr;`[]^(beta) Phi+T[alpha]^(beta)" where `&equiv;`(Phi, Phi(r)) is some function of the radial coordinate, R[alpha, `~beta`] is the Ricci tensor, `&Dscr;`[alpha] is the covariant derivative operator and T[alpha, `~beta`] is the stress-energy tensor T[alpha, beta] = (Matrix(4, 4, {(1, 1) = 8*exp(lambda(r))*Pi, (1, 2) = 0, (1, 3) = 0, (1, 4) = 0, (2, 1) = 0, (2, 2) = 8*r^2*Pi, (2, 3) = 0, (2, 4) = 0, (3, 1) = 0, (3, 2) = 0, (3, 3) = 8*r^2*sin(theta)^2*Pi, (3, 4) = 0, (4, 1) = 0, (4, 2) = 0, (4, 3) = 0, (4, 4) = 8*exp(nu(r))*Pi*epsilon})) b) Compute the components of "W[alpha]^(beta)"" &equiv;"the traceless part of  "Z[alpha]^(beta)" of item a) c) Compute an exact solution to the nonlinear system of differential equations conformed by the components of  "W[alpha]^(beta)" obtained in b) Background: paper from February/2013, "Withholding Potentials, Absence of Ghosts and Relationship between Minimal Dilatonic Gravity and f(R) Theories", by P. Fiziev. c) An exact solution for the nonlinear system of differential equations conformed by the components of  "W[alpha]^(beta)" The Physics Project "Physics" is a software project at Maplesoft that started in 2006. The idea is to develop a computational symbolic/numeric environment specifically for Physics, targeting educational and research needs in equal footing, and resembling as much as possible the flexible style of computations used with paper and pencil. The main reference for the project is the Landau and Lifshitz Course of Theoretical Physics. A first version of "Physics" with basic functionality appeared in 2007. Since then the package has been growing every year, including now, among other things, a searcheable database of solutions to Einstein equations and a new dedicated programming language for Physics. Since August/2013, weekly updates of the Physics package are distributed on the web, including the new developments related to our plan as well as related to people's feedback. Edgardo S. Cheb-Terrab Physics, Differential Equations and Mathematical Functions, Maplesoft Someone asked on about plotting x*y*z=1 and, while it's easy enough to handle it with implicitplot3d it raised the question of how to get nice constained axes in the case that the x- or y-range is much less than the z-range. Here's what WolframAlpha gives. (Mathematica handles it straight an an plot of the explict z=1/(x*y), which is interesting although I'm more interested here in axes scaling than in discontinuous 3D plots) Here is the result of a call to implicitplot3d with default scaling=unconstrained. The axes appear like in a cube, each of equal "length". Here is the same plot, with scaling=constrained. This is not pretty, because the x- and y-range are much smalled than the z-range. How can we control the axes scaling? Resizing the inlined plot window with the mouse just affects the window. The plot itself remains  rendered in a cube. Using right-click menus to rescale just makes all axes grow or shrink together. 
One unattractive approach is to force a small z-view on a plot of a much larger z-range, for a piecewise or procedure that is undefined outside a specific range.

    plots:-implicitplot3d( proc(x,y,z)
                             if abs(z) > 200 then undefined;
                             else x*y*z - 1; end if;
                           end proc,
                           -1..1, -1..1, -200..200,
                           view=[-1..1, -1..1, -400..400],
                           style=surfacecontour, grid=[30,30,30] );

Another approach is to scale the x and y variables, scale their ranges, and then force scaled tickmark values. Here is a rough procedure to automate such a thing. The basic idea is for it to accept the same kinds of arguments as implicitplot3d does, with two extra options for scaling the axis x-relative-to-z, and axis y-relative-to-z.

    implplot3d := proc( expr,
                        rng1::name=range, rng2::name=range, rng3::name=range,
                        {scalex::numeric:=1, scaley::numeric:=1} )
      local d1, d2, dz, n1, n2, r1, r2, scx, scy;
      uses plotfn = plots:-implicitplot3d;
      (n1,n2) := lhs(rng1), lhs(rng2);
      dz := rhs(rhs(rng3)) - lhs(rhs(rng3));
      # scale factors for x and y, measured relative to the z-range
      (scx,scy) := scalex*dz/(rhs(rhs(rng1))-lhs(rhs(rng1))),
                   scaley*dz/(rhs(rhs(rng2))-lhs(rhs(rng2)));
      (r1,r2) := map(`*`, rhs(rng1), scx), map(`*`, rhs(rng2), scy);
      (d1,d2) := rhs(r1)-lhs(r1), rhs(r2)-lhs(r2);
      plotfn( subs([n1=n1/scx, n2=n2/scy], expr),
              n1=r1, n2=r2, rng3, _rest[],
              # relabel the rescaled x- and y-axes with the original values
              ':-axis[1]'=[':-tickmarks'=[seq(evalf(t)=sprintf("%.3g",t/scx), t=r1, d1/4)]],
              ':-axis[2]'=[':-tickmarks'=[seq(evalf(t)=sprintf("%.3g",t/scy), t=r2, d2/4)]],
              ':-scaling'=':-constrained' );
    end proc:

The above could be better. It could also detect user-supplied custom x- or y-tickmarks and then scale those instead of forming new ones. Here is an example of using it,

    implplot3d( x*y*z=1, x=-1..1, y=-1..1, z=-200..200, grid=[30,30,30],
                style=surfacecontour, shading=xy, orientation=[-60,60,0],
                scalex=1.618, scaley=1.618 );

Here is another example,

    implplot3d( x*y*z=1, x=-5..13, y=-11..5, z=-200..200, grid=[30,30,30],
                style=surfacecontour, orientation=[-50,55,0], scaley=0.5 );

Ideally I would like to see the GUI handle all this, with say (two or three) additional (scalar) axis scaling properties in a PLOT3D structure. Barring that, one might ask whether a post-processing routine could use plots:-transform (or friend) and also force the tickmarks. For that I believe that picking off the effective x-, y-, and z-ranges is needed. That's not too hard for the result of a single call to the plot3d command. Where it could get difficult is in handling the result of plots:-display when fed a mix of several spacecurves, 3D implicit plots, and surfaces. Have I overlooked something much easier?
Bringing physics back to Quantum Physics: classical quantum theory
The quantum of time and distance
Original post:
In my previous post, I introduced the elementary wavefunction of a particle with zero rest mass in free space (i.e. the particle also has zero potential). I wrote that wavefunction as e^i(kx − ωt) = e^i(x/2 − t/2) = cos[(x−t)/2] + i∙sin[(x−t)/2], and we can represent that function as follows: If the real and imaginary axis in the image above are the y- and z-axis respectively, then the x-axis here is time, so here we'd be looking at the shape of the wavefunction at some fixed point in space. Now, we could also look at its shape at some fixed point in time, so the x-axis would then represent the spatial dimension. Better still, we could animate the illustration to incorporate both the temporal as well as the spatial dimension. The following animation does the trick quite well: Please do note that space is one-dimensional here: the y- and z-axis represent the real and imaginary part of the wavefunction, not the y- or z-dimension in space.

You've seen this animation before, of course: I took it from Wikipedia, and it actually represents the electric field vector (E) for a circularly polarized electromagnetic wave. To get a complete picture of the electromagnetic wave, we should add the magnetic field vector (B), which is not shown here. We'll come back to that later. Let's first look at our zero-mass particle denuded of all properties, so that's not an electromagnetic wave—read: a photon. No. We don't want to talk charges here.

OK. So far so good. A zero-mass particle in free space. So we got that e^i(x/2 − t/2) = cos[(x−t)/2] + i∙sin[(x−t)/2] wavefunction. We got that function assuming the following:

1. Time and distance are measured in equivalent units, so c = 1. Hence, the classical velocity (v) of our zero-mass particle is equal to 1, and we also find that the energy (E), mass (m) and momentum (p) of our particle are numerically the same. We wrote: E = m = p, using the p = m·v (for v = c) and the E = m·c² formulas.

2. We also assumed that the quantum of energy (and, hence, the quantum of mass, and the quantum of momentum) was equal to ħ/2, rather than ħ. The de Broglie relations (k = p/ħ and ω = E/ħ) then gave us the rather particular argument of our wavefunction: kx − ωt = x/2 − t/2.

The latter hypothesis (E = m = p = ħ/2) is somewhat strange at first but, as I showed in that post of mine, it avoids an apparent contradiction: if we'd use ħ, then we would find two different values for the phase and group velocity of our wavefunction. To be precise, we'd find v for the group velocity, but v/2 for the phase velocity. Using ħ/2 solves that problem. In addition, using ħ/2 is consistent with the Uncertainty Principle, which tells us that ΔxΔp = ΔEΔt = ħ/2.

OK. Take a deep breath. Here I need to say something about dimensions. If we're saying that we're measuring time and distance in equivalent units – say, in meter, or in seconds – then we are not saying that they're the same. The dimension of time and space is fundamentally different, as evidenced by the fact that, for example, time flows in one direction only, as opposed to x. To be precise, we assumed that x and t become countable variables themselves at some point in time. However, if we're at t = 0, then we'd count time as t = 1, 2, etcetera only. In contrast, at the point x = 0, we can go to x = +1, +2, etcetera but we may also go to x = −1, −2, etc.
I have to stress this point, because what follows will require some mental flexibility. In fact, we often talk about natural units, such as Planck units, which we get from equating fundamental constants, such as c, or ħ, to 1, but then we often struggle to interpret those units, because we fail to grasp what it means to write = 1, or ħ = 1. For example, writing = 1 implies we can measure distance in seconds, or time in meter, but it does not imply that distance becomes time, or vice versa. We still need to keep track of whether or not we’re talking a second in time, or a second in space, i.e. c meter, or, conversely, whether we’re talking a meter in space, or a meter in time, i.e. 1/c seconds. We can make the distinction in various ways. For example, we could mention the dimension of each equation between brackets, so we’d write: t = 1×10−15 s [t] ≈ 299.8×10−9 m [t]. Alternatively, we could put a little subscript (like t, or d), so as to make sure it’s clear our meter is a a ‘light-meter’, so we’d write: t = 1×10−15 s ≈ 299.8×10−9 mt. Likewise, we could add a little subscript when measuring distance in light-seconds, so we’d write x = 3×10m ≈ 1 sd, rather than x = 3×10m [x] ≈ 1 s [x]. If you wish, we could refer to the ‘light-meter’ as a ‘time-meter’ (or a meter of time), and to the light-second as a ‘distance-second’ (or a second of distance). It doesn’t matter what you call it, or how you denote it. In fact, you will never hear of a meter of time, nor will you ever see those subscripts or brackets. But that’s because physicists always keep track of the dimensions of an equation, and so they know. They know, for example, that the dimension of energy combines the dimensions of both force as well as distance, so we write: [energy] = [force]·[distance]. Read: energy amounts to applying a force over a distance. Likewise, momentum amounts to applying some force over some time, so we write: [momentum] = [force]·[time]. Using the usual symbols for energy, momentum, force, distance and time respectively, we can write this as [E] = [F]·[x] and [p] = [F]·[t]. Using the units you know, i.e. joulenewton, meter and seconds, we can also write this as: 1 J = 1 N·m and 1… Hey! Wait a minute! What’s that N·s unit for momentum? Momentum is mass times velocity, isn’t it? It is. But it amounts to the same. Remember that mass is a measure for the inertia of an object, and so mass is measured with reference to some force (F) and some acceleration (a): F = m·⇔ m = F/a. Hence, [m] = kg = [F/a] = N/(m/s2) = N·s2/m. [Note that the m in the brackets is symbol for mass but the other m is a meter!] So the unit of momentum is (N·s2/m)·(m/s) = N·s = newton·second. Now, the dimension of Planck’s constant is the dimension of action, which combines all dimensions: force, time and distance. We write: ħ ≈ 1.0545718×10−34 N·m·s (newton·meter·second). That’s great, and I’ll show why in a moment. But, at this point, you should just note that when we write that E = m = p = ħ/2, we’re just saying they are numerically the same. The dimensions of E, m and p are not the same. So what we’re really saying is the following: 1. The quantum of energy is ħ/2 newton·meter ≈ 0.527286×10−34 N·m. 2. The quantum of momentum is ħ/2 newton·second ≈ 0.527286×10−34 N·s. What’s the quantum of mass? That’s where the equivalent units come in. We wrote: 1 kg = 1 N·s2/m. So we could substitute the distance unit in this equation (m) by sd/= sd/(3×108). So we get: 1 kg = 3×108 N·s2/sd. 
Can we scrap both ‘seconds’ and say that the quantum of mass (ħ/2) is equal to the quantum of momentum? Think about it. The answer is… Yes and no—but much more no than yes! The two sides of the equation are only numerically equal, but we’re talking a different dimension here. If we’d write that 1 kg = 0.527286×10−34 N·s2/sd = 0.527286×10−34 N·s, you’d be equating two dimensions that are fundamentally different: space versus time. To reinforce the point, think of it the other way: think of substituting the second (s) for 3×10m. Again, you’d make a mistake. You’d have to write 0.527286×10−34 N·(mt)2/m, and you should not assume that a time-meter is equal to a distance-meter. They’re equivalent units, and so you can use them to get some number right, but they’re not equal: what they measure, is fundamentally different. A time-meter measures time, while a distance-meter measure distance. It’s as simple as that. So what is it then? Well… What we can do is remember Einstein’s energy-mass equivalence relation once more: E = m·c2 (and m is the mass here). Just check the dimensions once more: [m]·[c2] = (N·s2/m)·(m2/s2) = N·m. So we should think of the quantum of mass as the quantum of energy, as energy and mass are equivalent, really. Back to the wavefunction The beauty of the construct of the wavefunction resides in several mathematical properties of this construct. The first is its argument: θ = kx − ωt, with k = p/ħ and ω = E/ħ Its dimension is the dimension of an angle: we express in it in radians. What’s a radian? You might think that a radian is a distance unit because… Well… Look at how we measure an angle in radians below: But you’re wrong. An angle’s measurement in radians is numerically equal to the length of the corresponding arc of the unit circle but… Well… Numerically only. 🙂 Just do a dimensional analysis of θ = kx − ωt = (p/ħ)·x − (E/ħ)·t. The dimension of p/ħ is (N·s)/(N·m·s) = 1/m = m−1, so we get some quantity expressed per meter, which we then multiply by x, so we get a pure number. No dimension whatsoever! Likewise, the dimension of E/ħ is (N·m)/(N·m·s) = 1/s = s−1, which we then multiply by t, so we get another pure number, which we then add to get our argument θ. Hence, Planck’s quantum of action (ħ) does two things for us: 1. It expresses p and E in units of ħ. 2. It sorts out the dimensions, ensuring our argument is a dimensionless number indeed. In fact, I’d say the ħ in the (p/ħ)·x term in the argument is a different ħ than the ħ in the (E/ħ)·t term. Huh? What? Yes. Think of the distinction I made between s and sd, or between m and mt. Both were numerically the same: they captured a magnitude, but they measured different things. We’ve got the same thing here: 1. The meter (m) in ħ ≈ 1.0545718×10−34 N·m·s in (p/ħ)·x is the dimension of x, and so it gets rid of the distance dimension. So the m in ħ ≈ 1.0545718×10−34 m·s goes, and what’s left measures p in terms of units equal to 1.0545718×10−34 N·s, so we get a pure number indeed. 2. Likewise, the second (s) in ħ ≈ 1.0545718×10−34 N·m·s in (E/ħ)·t is the dimension of t, and so it gets rid of the time dimension. So the s in ħ ≈ 1.0545718×10−34 N·m·s goes, and what’s left measures E in terms of units equal to 1.0545718×10−34 N·m, so we get another pure number. 3. Adding both gives us the argument θ: a pure number that measures some angle. 
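If you want a computer to do this dimensional bookkeeping for you, here's a minimal sketch using the pint units library (the choice of pint and the variable names are mine—any units package would do):

    import pint

    ureg = pint.UnitRegistry()

    hbar = 1.0545718e-34 * ureg.newton * ureg.meter * ureg.second
    E = 0.527286e-34 * ureg.newton * ureg.meter    # quantum of energy (hbar/2)
    p = 0.527286e-34 * ureg.newton * ureg.second   # quantum of momentum (hbar/2)

    x = 1.0 * ureg.meter
    t = 1.0 * ureg.second

    theta = (p / hbar) * x - (E / hbar) * t
    print(theta.to_base_units())   # a pure number: all the units have cancelled
    print(theta.dimensionless)     # True

The N·s and the N·m in ħ cancel against the dimensions of p·x and E·t respectively, and what's left is a dimensionless angle—which is just what we said above.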
That's why you need to watch out when writing θ = (p/ħ)·x − (E/ħ)·t as θ = (p·x − E·t)/ħ or – in the case of our elementary wavefunction for the zero-mass particle – as θ = (x/2 − t/2) = (x − t)/2. You can do it – in fact, you should do so when trying to calculate something – but you need to be aware that you're making abstraction of the dimensions. That's quite OK, as you're just calculating something—but don't forget the physics behind! You'll immediately ask: what are the physics behind here? Well… I don't know. Perhaps nobody knows. As Feynman once famously said: "I think I can safely say that nobody understands quantum mechanics." But then he never wrote that, and I am sure he didn't really mean that. And then he said that back in 1964, which is 50 years ago now. 🙂 So let's try to understand it at least. 🙂

Planck's quantum of action – 1.0545718×10⁻³⁴ N·m·s – comes to us as a mysterious quantity. A quantity is more than a number. A number is something like π or e, for example. It might be a complex number, like e^iθ, but that's still a number. In contrast, a quantity has some dimension, or some combination of dimensions. A quantity may be a scalar quantity (like distance), or a vector quantity (like a field vector). In this particular case (Planck's ħ or h), we've got a physical constant combining three dimensions: force, time and distance—or space, if you want. It's a quantum, so it comes as a blob—or a lump, if you prefer that word. However, as I see it, we can sort of project it in space as well as in time. In fact, if this blob is going to move in spacetime, then it will move in space as well as in time: t will go from 0 to 1, and x goes from 0 to ±1, depending on what direction we're going. So when I write that E = p = ħ/2—which, let me remind you, are two numerical equations, really—I sort of split Planck's quantum over E = m and p respectively. You'll say: what kind of projection or split is that? When projecting some vector, we'll usually have some sine and cosine, or a 1/√2 factor—or whatever, but not a clean 1/2 factor. Well… I have no answer to that, except that this split fits our mathematical construct. Or… Well… I should say: my mathematical construct. Because what I want to find is this clean Schrödinger equation:

∂ψ/∂t = i·(ħ/2m)·∇²ψ = i·∇²ψ for m = ħ/2

Now I can only get this equation if (1) E = m = p and (2) if m = ħ/2 (which amounts to writing that E = p = m = ħ/2). There's also the Uncertainty Principle. If we are going to consider the quantum vacuum, i.e. if we're going to look at space (or distance) and time as count variables, then Δx and Δt in the ΔxΔp = ΔEΔt = ħ/2 equations are ±1 and, therefore, Δp and ΔE must be ±ħ/2. In any case, I am not going to try to justify my particular projection here. Let's see what comes out of it.

The quantum vacuum

Schrödinger's equation for my zero-mass particle (with energy E = m = p = ħ/2) amounts to writing:

∂ψ/∂t = i·∇²ψ

Now that reminds us of the propagation mechanism for the electromagnetic wave, which we wrote as ∂B/∂t = –∇×E and ∂E/∂t = ∇×B, also assuming we measure time and distance in equivalent units. However, we'll come back to that later. Let's first study the equation we have, i.e. ∂ψ/∂t = i·∇²ψ.

Let's think some more. What is that e^i(x/2 − t/2) function? It's subject to conceiving time and distance as countable variables, right? I am tempted to say: as discrete variables, but I won't go that far—not now—because the countability may be related to a particular interpretation of quantum physics. So I need to think about that.
In any case… The point is that x can only take on values like 0, 1, 2, etcetera. And the same goes for t. To make things easy, we'll not consider negative values for x right now (and, obviously, not for t either). But you can easily check it doesn't make a difference: if you think of the propagation mechanism – which is what we're trying to model here – then x is always positive, because we're moving away from some source that caused the wave. In any case, we've got an infinite set of points like:

• e^i(0/2 − 0/2) = e^i(0) = cos(0) + i∙sin(0)
• e^i(1/2 − 0/2) = e^i(1/2) = cos(1/2) + i∙sin(1/2)
• e^i(0/2 − 1/2) = e^i(−1/2) = cos(−1/2) + i∙sin(−1/2)
• e^i(1/2 − 1/2) = e^i(0) = cos(0) + i∙sin(0)

In my previous post, I calculated the real and imaginary part of this wavefunction for x going from 0 to 14 (as mentioned, in steps of 1) and for t doing the same (also in steps of 1), and what we got looked pretty good:

[Graphs: the real part (cosine) and the imaginary part (sine) of the wavefunction over the x–t grid — see the short numerical aside further down, which reproduces them.]

I also said that, if you wonder what the quantum vacuum could possibly look like, you should probably think of these discrete spacetime points, and some complex-valued wave that travels as illustrated above. In case you wonder what's being illustrated here: the right-hand graph is the cosine value for all possible x = 0, 1, 2,… and t = 0, 1, 2,… combinations, and the left-hand graph depicts the sine values, so that's the imaginary part of our wavefunction. Taking the absolute square of both gives 1 for all combinations. So it's obvious we'd need to normalize and, more importantly, we'd have to localize the particle by adding several of these waves with the appropriate contributions. But so that's not our worry right now. I want to check whether those discrete time and distance units actually make sense. What's their size? Is it anything like the Planck length (for distance) and/or the Planck time? Let's see.

What are the implications of our model?

The question here is: if ħ/2 is the quantum of energy, and the quantum of momentum, what's the quantum of force, and the quantum of time and/or distance? Huh? Yep. We treated distance and time as countable variables above, but now we'd like to express the difference between x = 0 and x = 1 and between t = 0 and t = 1 in the units we know, that is, in meter and in seconds. So how do we go about that? Do we have enough equations here? Not sure. Let's see…

We obviously need to keep track of the various dimensions here, so let's refer to those discrete time and distance units as tP and lP respectively. The subscript (P) refers to Planck, and the l refers to a length, but we're likely to find something else than Planck units. I just need placeholder symbols here. To be clear: tP and lP are expressed in seconds and meter respectively, just like the actual Planck time and distance, which are equal to 5.391×10⁻⁴⁴ s (more or less) and 1.6162×10⁻³⁵ m (more or less) respectively. As I mentioned above, we get these Planck units by equating fundamental physical constants to 1. Just check it: (1.6162×10⁻³⁵ m)/(5.391×10⁻⁴⁴ s) ≈ 3×10⁸ m/s. So the following relation must be true: lP = c·tP, or lP/tP = c.

Now, as mentioned above, there must be some quantum of force as well, which we'll write as FP, and which is – obviously – expressed in newton (N). So we have:

1. E = ħ/2 ⇒ 0.527286×10⁻³⁴ N·m = FP·lP N·m
2. p = ħ/2 ⇒ 0.527286×10⁻³⁴ N·s = FP·tP N·s

Let's try to divide both formulas: E/p = (FP·lP N·m)/(FP·tP N·s) = lP/tP m/s = c m/s. That's consistent with the E/p = c equation. Hmm… We found what we knew already.
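A quick numerical aside: the two graphs above are easy to reproduce. Here's a minimal sketch (assuming NumPy and Matplotlib; the 0-to-14 grid just mirrors what I used in the previous post):

    import numpy as np
    import matplotlib.pyplot as plt

    # countable space and time points: x, t = 0, 1, ..., 14
    x = np.arange(15)
    t = np.arange(15)
    X, T = np.meshgrid(x, t)

    theta = (X - T) / 2        # the argument of the elementary wavefunction
    re = np.cos(theta)         # real part
    im = np.sin(theta)         # imaginary part

    fig, (ax1, ax2) = plt.subplots(1, 2, subplot_kw={"projection": "3d"})
    ax1.plot_surface(X, T, im)   # left-hand graph: the sine (imaginary part)
    ax2.plot_surface(X, T, re)   # right-hand graph: the cosine (real part)
    ax1.set(xlabel="x", ylabel="t", title="Im ψ = sin[(x−t)/2]")
    ax2.set(xlabel="x", ylabel="t", title="Re ψ = cos[(x−t)/2]")
    plt.show()

    # the absolute square is 1 everywhere, as noted above
    assert np.allclose(re**2 + im**2, 1.0)

Now, back to the question of units.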
My model is not fully determined, it seems. 😦 What about the following simplistic approach? E is numerically equal to 0.527286×10⁻³⁴, and its dimension is [E] = [F]·[x], so we write: E = 0.527286×10⁻³⁴·[E] = 0.527286×10⁻³⁴·[F]·[x]. Hence, [x] = [E]/[F] = (N·m)/N = m. That just confirms what we already know: the quantum of distance (i.e. our fundamental unit of distance) can be expressed in meter. But our model does not give that fundamental unit. It only gives us its dimension (meter), which is stuff we knew from the start. 😦

Let's try something else. Let's just accept the Planck length and time, so we write:

• lP = 1.6162×10⁻³⁵ m
• tP = 5.391×10⁻⁴⁴ s

Now, if the quantum of action is equal to ħ N·m·s = FP·lP·tP N·m·s = 1.0545718×10⁻³⁴ N·m·s, and if the two definitions of lP and tP above hold, then 1.0545718×10⁻³⁴ N·m·s = (FP N)×(1.6162×10⁻³⁵ m)×(5.391×10⁻⁴⁴ s) ≈ FP × 8.713×10⁻⁷⁹ N·m·s ⇔ FP ≈ 1.21×10⁴⁴ N. Does that make sense? It does according to Wikipedia, but how do we relate this to our E = p = m = ħ/2 equations? Let's try this:

1. EP = (1.0545718×10⁻³⁴ N·m·s)/(5.391×10⁻⁴⁴ s) = 1.956×10⁹ J. That corresponds to the regular Planck energy.
2. pP = (1.0545718×10⁻³⁴ N·m·s)/(1.6162×10⁻³⁵ m) = 6.525 N·s. That corresponds to the regular Planck momentum.

Is EP = pP? Let's substitute: 1.956×10⁹ N·m = 1.956×10⁹ N·(s/c) = 1.956×10⁹/2.998×10⁸ N·s = 6.524 N·s. So, yes, it comes out alright. In fact, I omitted the 1/2 factor in the calculations, but it doesn't matter: it does come out alright. So I did not prove that the difference between my x = 0 and x = 1 points (or my t = 0 and t = 1 points) is equal to the Planck length (or the Planck time unit), but I did show my theory is, at the very least, compatible with those units. That's more than enough for now. And I'll surely come back to it in my next post. 🙂

Post Scriptum: One must solve the following equations to get the fundamental Planck units:

[Table: the defining equations of the Planck units in terms of the fundamental constants c, G, ħ, kB and ε0.]

We have five fundamental equations for five fundamental quantities: tP, lP, FP, mP, and EP respectively, so that's OK: it's a fully determined system alright! But where do the expressions with G, kB (the Boltzmann constant) and ε0 come from? What does it mean to equate those constants to 1? Well… I need to think about that, and I'll get back to you on it. 🙂
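As a quick cross-check of the numbers above, here's a minimal sketch using the textbook expressions for the Planck units in terms of ħ, G and c (so I am assuming those defining equations here—this is exactly what the post scriptum alludes to):

    import math

    hbar = 1.0545718e-34   # N·m·s
    G    = 6.674e-11       # N·m²/kg²
    c    = 2.998e8         # m/s

    t_P = math.sqrt(hbar * G / c**5)   # Planck time     ≈ 5.39×10⁻⁴⁴ s
    l_P = math.sqrt(hbar * G / c**3)   # Planck length   ≈ 1.62×10⁻³⁵ m
    m_P = math.sqrt(hbar * c / G)      # Planck mass     ≈ 2.18×10⁻⁸ kg
    E_P = m_P * c**2                   # Planck energy   ≈ 1.96×10⁹ J
    F_P = c**4 / G                     # Planck force    ≈ 1.21×10⁴⁴ N
    p_P = m_P * c                      # Planck momentum ≈ 6.52 N·s

    print(t_P, l_P, m_P, E_P, F_P, p_P)
    # consistency checks: E_P/p_P should give c, and F_P·l_P·t_P should give hbar
    print(E_P / p_P, F_P * l_P * t_P)

The printed values reproduce the Planck time, length, force, energy and momentum quoted above, and the two consistency checks return c and ħ respectively.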
Autumn 2003 • Vol. 1 No. 2

Calculation and Prediction of Molecular Structures and Reaction Paths

The atomic arrangement of molecules and the changes that take place when molecules react is a central problem in chemistry, molecular and condensed matter physics, and molecular biology. It is not surprising that enormous effort has been expended to calculate these quantities and supercomputers have played a prominent role.

Geometrical arrangements and reaction paths could be predicted if one could calculate the total energy of a system for all atomic positions. Stable structures correspond to local minima in the energy "surface", and the energy required to perform the transition from one structure to another (the energy "barrier") determines the rate at which the reaction can proceed. Accurate solutions of the Schrödinger equation for the many-electron wave function would lead, in principle, to this information and much more, but this approach is restricted to systems containing few atoms. The description of the forces in the system by adjusting the parameters of a potential function (a "force field") provides an alternative, but the results are sensitive to the way the parameters are fixed.

Experience over the past 20 years or so has shown that the density functional formalism can provide energy differences that have predictive value. The range of applications has been overwhelming: reactions at surfaces, the structure and magnetic properties of metallic multilayers, catalytic reactions in bulk systems, etc. The formalism states that the energy (and many other properties) can be determined from a knowledge of the electronic density. It results in equations that have a form that is similar to that of the Schrödinger equation, but much simpler to solve numerically. Approximations are still unavoidable for one contribution to the energy (the "exchange-correlation energy"), but there has been significant progress on this front as well.

A recent application of density functional methods is provided by the reaction of water with adenosine 5'-triphosphate (ATP). ATP is the most important energy carrier in cellular metabolism, and each human being produces its own weight in ATP every day. The ATP molecule is shown in Figure 1, where carbon atoms are grey, hydrogen white, oxygen red, nitrogen blue, and phosphorus orange. The situation in biological systems is complicated by the presence of water and many other molecules and ions, and we show in Figure 2 the model system for which we have studied two reactions in an aqueous environment: an associative reaction involving the attack of one water molecule, and a dissociative reaction involving the scission of the terminal bridging P-O bond. Magnesium ions (green) are catalysts in these processes.

Figure 1: ATP molecule
Figure 2: Chemical reactions of ATP in an aqueous environment

The calculations were performed with the CPMD program (J. Hutter et al., IBM Zürich and MPI für Festkörperforschung, Stuttgart), which uses a plane wave basis and makes extensive use of the fast Fourier transform (FFT). The program runs on many platforms, and it is used by research groups in many parts of the world. There are, of course, implementations of the density functional approach in many other program packages.

Robert O. Jones, J. Akola — Institut für Festkörperforschung (IFF), Forschungszentrum Jülich GmbH
In addition to the standard time evolution, QuantumOptics.jl also features various possibilities to treat stochastic problems. In general, the usage of the Stochastics module is very similar to the standard time evolution. The main difference is, that additional terms that are proportional to the noise in the equation have to be passed to the corresponding functions. tout, ψt = stochastic.schroedinger(T, ψ0, H, Hs; dt=1e-2) tout, ρt = stochastic.master(T, ψ0, H, J, Js; dt=1e-1) Note, that we need to set the keyword dt here, since the default algorithm is a fixed time step method (see below). Like the Time-evolution module, the stochastic solvers are built around DifferentialEquations.jl using its stochastic module StochasticDiffEq. Many of the options available for stochastic problems treated with DifferentialEquations.jl like, for example, the choice of algorithm can be used seamlessly within QuantumOptics.jl. Default algorithm and noise The default algorithm is a basic Euler-Maruyama method with fixed step size. This choice has been made, since this algorithm is versatile yet easy to understand. Note, that this means that by default, stochastic problems are solved in the Ito sense. To override the default algorithm, simply set the alg keyword argument with one of the solvers you found here, e.g. tout, ψt = stochastic.schroedinger(T, ψ0, H, Hs; alg=StochasticDiffEq.EulerHeun(), dt=1e-2) Note, that the switch to the EulerHeun method solves the problem in the Stratonovich sense. The default noise is uncorrelated (white noise). Furthermore, since most equations involving quantum noise feature Hermitian noise operators, the noise is chosen to be real. For example, tout, ψt = stochastic.schroedinger(T, ψ0, H, Hs; noise=StochasticDiffEq.RealWienerProcess(0.0, 0.0), dt=1e-1) corresponds to the default for a single noise term in the Schrödinger equation. Note, that the default is complex noise for semiclassical stochastic equations, where only classical noise is included (for details see stochastic semiclassical systems) For details on the available algorithms and further control over the solvers, we refer to the documentation of DifferentialEquations.jl.
I need to solve the Schrödinger equation with a time dependent Hamiltonian $$i\hbar \frac{\partial}{\partial t} \Psi = \left[-\frac{\hbar^2}{2m}\nabla^2 +\frac{1}{2} k(t)(x^2+y^2) + V(r)\right]\Psi $$ Can anybody recommend me an efficient numerical method or software package for solving the problem. EDIT: This is a suitable form of the equation for numerically solving(which we discussed in the one of the below answers with @davidhigh). \begin{align*} i\frac{\partial}{\partial t}u_{\ell}(r,t) = \Bigg(-\frac{1}{2} \frac{\partial ^2}{\partial r^2} + \frac{\ell(\ell+1)}{2r^2} + V(r)\Bigg)u_{\ell}(r,t)\quad \quad \quad \quad \quad \quad \quad \nonumber\\ + \frac{2}{3}k(t)r^2 \sum_{\ell' = {max}(\ell-2,0)}^{{min}(\ell+2,L_{{max}})} \Bigg(\delta_{\ell,\ell'} - \sqrt{ \frac{4\pi}{5}} \alpha(\ell, \ell') \Bigg) u_{\ell'}(r,t) \end{align*} where the coefficient $\alpha(\ell, \ell')$ is given by \begin{align*} \alpha(\ell, \ell') = \int Y^*_{\ell0}(\Omega)Y_{20}(\Omega)Y_{\ell'0}(\Omega) d\Omega \end{align*} which the integral can be solved by Wigner-3j coefficients (see here ). Can anyone suggest an efficient numerically method for solving the equation? • $\begingroup$ Is using Mathematica an option? $\endgroup$ – user21 Jun 26 '17 at 16:16 • $\begingroup$ Due to type of potential, I don't think Mathematica is an appropriate method. The potential is kind of central potential like Yukawa and Lennard Jones. $\endgroup$ – kelasmadin Jun 27 '17 at 8:38 • $\begingroup$ Can you add what k(t) and V(r) look like? $\endgroup$ – user21 Jun 27 '17 at 20:11 • $\begingroup$ For example, Yukawa potiential $V(r) = -V_0 \frac{e^{-\alpha r}}{r}$ and $k(t) = k_0 - F_0 \cos(\omega t)$ $\endgroup$ – kelasmadin Jun 28 '17 at 6:12 • $\begingroup$ Could you provide more information regarding the number of particles you wish to use, domain, dimensions, etc? $\endgroup$ – knl Jun 28 '17 at 11:14 I'd suggest to try it on your own. Do an expansion of your wavefunction in terms of spherical harmonics, $$ \psi(\mathbf r) \ = \ \sum_{\ell} R_\ell(r,t) \, Y_{\ell 0} (\theta,\phi)\,. $$ Note that I've set the index $m$ in $Y_{\ell m}$ to zero, in order to account for the symmetry of your Hamiltonian with respect to rotations around the $x-y$ plane. This makes your problem essentially two-dimensional. You can also write the spherical harmonic in terms of Legendre polynomials, $$ Y_{\ell0}(\theta,\varphi) = \sqrt{\frac{2\ell+1}{4\pi}} P_{\ell}(\cos\theta). $$ Insertion into your Schrödinger equation and project on the spherical hamonics, and you'll end up with a coupled equation for the radial functions $R_l(r,t)$. Here, "coupled" means coupled in $\ell$, i.e. the function $R_\ell(r,t)$ depends on all the other quantum numbers $0\leq \ell \leq L_{\max}$. Solve that with standard finite differences, it's not that hard. In the insertion-and-projection step above, the only problem appears in evaluating the matrix elements with $x^2+y^2$. If you write it as $r^2 Y_{10}$, it boils down to an integral over three spherical harmonics, which is related to the Clebsch-Gordan or Wigner-3j coefficients. But It's an easy one, for which analytical formulae exist (--just google for the buzzwords in the previous sentence). If you arrived at the working formula and need further assistance, let me know. EDIT: summarizing our lengthy discussion in the comment section, here is the final equation which is about to be solved numerically. 
$$ i\frac{\partial}{\partial t} u_\ell(r,t) = \left(-\frac{1}{2} \frac{\partial^2}{\partial r^2} + \frac{\ell(\ell+1)}{2r^2} + V(r)\right) u_\ell(r,t) \\ \qquad \qquad\qquad\qquad\qquad\quad+ \frac{2}3 \,k(t)\, r^2 \; \sum_{\ell^\prime=\max(\ell-2,0)}^{\min(\ell+2,L_\max)} \left( \delta_{\ell,\ell^\prime} - \sqrt{\frac{4\pi}5} \alpha(\ell,\ell^\prime) \right)\,u_{\ell^\prime}(r,t) $$ Here the coefficient $\alpha(\ell,\ell^\prime)$ which you introduced is given by $$ \alpha(\ell,\ell^\prime) \ = \ \int Y^\ast_{\ell 0}(\Omega) Y_{20}(\Omega) Y_{\ell^\prime 0}(\Omega)\ d\Omega $$ (you can also express that more in standard terms such as Wigner3j symbols, see e.g. here). Note the restriction in the summation indices which come from the fact that $0\leq \ell^\prime \leq L_\max$ (where $L_\max$ is the maximum angular quantum number chosen in the numerical representation). • 1 $\begingroup$ I didn't completely catch your mean.As your mentioned, I inserted the expansion into the equation.\begin{align*}\sum_{\ell}\Big[ \frac{1}{R_{\ell}} \frac{d}{dr}(r^2 \frac{dR_{\ell}}{dr})-\frac{2mr^2}{\hbar^2}\big(\frac{1}{2}k(t)(x^2 + y^2)+V(r) \big)+\nonumber\\ \frac{2mr^2}{\hbar}\frac{1}{R_{\ell}} \frac{\partial R_{\ell}}{\partial t}\Big]+\sum_{\ell} \frac{1}{Y_{\ell 0}} \Big( \frac{1}{\sin\theta} \frac{\partial}{\partial \theta}\big(\sin \theta \frac{\partial Y_{\ell 0}}{\partial \theta}\big) \Big)=0.\end{align*}we have two terms separated.Should I solve numerically the first term? $\endgroup$ – kelasmadin Jul 1 '17 at 18:10 • $\begingroup$ You project out the radial part, and by this you should notice that the spherical harmonics are eigenfunctions of the radial part of the Laplace operator. See here, for example. But my bad, instead of $R_l(r)$ use $\frac{R_l(r)}{r}$ -- it's more appropriate for the radial part of the Laplace operator. $\endgroup$ – davidhigh Jul 1 '17 at 20:48 • $\begingroup$ thanks to you. I was wondering what were your mean of $x^2 + y^2 \sim r^2Y_{20}$? Can I insert it in the equation for numerically solving?($u=rR$) \begin{align*} \frac{d^2u(r,t)}{dr^2} - \frac{2mr^2}{\hbar^2}\Big(\frac{1}{2}k(t)r^2Y_{20} +\frac{\hbar^2 \ell (\ell +1)}{2mr^2}+V(r) \Big)u(r,t)+\frac{2imr^2}{\hbar}\frac{\partial u(r,t)}{\partial t} = 0. \end{align*} $\endgroup$ – kelasmadin Jul 2 '17 at 14:40 • $\begingroup$ The idea is: express $x^2+y^2$ in terms of $r^2$ and some appropriate combination of spherical harmonics. For that, go here, and figure out which ones to take (my attempt above is wrong)... once you have it, insert it into the Schrödinger equation ... and only thereafter, you do the projection onto spherical harmonics. $\endgroup$ – davidhigh Jul 2 '17 at 15:46 • $\begingroup$ All other terms remain as in the previous link, and the only difficulty is in evaluating the $x^2+y^2$ term. For that, you get terms like $\langle Y_{lm} | Y_{20} | Y_{\bar l \bar m} \rangle$ and similar (where the inner SH comes from your expansion of $x^2+y^2$. You need to evaluate these integrals (again, there closely related to the Wigner3j symbol). In your final equation, there should be no spherical harmonics anymore, it should depend on $\ell$ and $r$ only. $\endgroup$ – davidhigh Jul 2 '17 at 15:48 FEniCS users have solved this problem before, but keep in mind that FEniCS does not natively support complex numbers right now in its code. Therefore you have to make a workaround. 
See:
https://fenicsproject.org/qa/9209/how-to-use-complex-numbers-iterative-solvers
https://fenicsproject.org/qa/10671/mixed-function-spaces-for-complex-valued-problems-in-2016-0
• $\begingroup$ I will try to solve it with your suggestion and let you know if it works. $\endgroup$ – kelasmadin Jun 29 '17 at 12:36
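Since the main practical question above is how to handle the angular coupling, here is a small sketch (my own illustration, not taken from the answers) that tabulates the coefficients $\alpha(\ell,\ell')$ with SymPy's Gaunt coefficients and builds the coupling matrix that multiplies $\frac{2}{3}k(t)r^2$ in the radial equations:

    import numpy as np
    from sympy.physics.wigner import gaunt

    L_max = 6

    def alpha(l, lp):
        # integral of Y*_{l,0} Y_{2,0} Y_{lp,0} over the sphere;
        # for m = 0 the spherical harmonics are real, so conjugation is harmless
        return float(gaunt(l, 2, lp, 0, 0, 0))

    # coupling matrix delta_{l,lp} - sqrt(4*pi/5) * alpha(l, lp), which comes from
    # x^2 + y^2 = (2/3) r^2 (1 - sqrt(4*pi/5) Y_{2,0})
    C = np.zeros((L_max + 1, L_max + 1))
    for l in range(L_max + 1):
        for lp in range(L_max + 1):
            C[l, lp] = (l == lp) - np.sqrt(4 * np.pi / 5) * alpha(l, lp)

    # alpha vanishes unless |l - lp| <= 2 and l + lp is even, so each u_l
    # couples only to u_{l-2}, u_l and u_{l+2}
    print(np.round(C, 4))

Propagating the radial functions themselves (for instance with Crank-Nicolson in $r$ and a split-step treatment of the time-dependent coupling) can then reuse this matrix at every time step.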
I start by outlining the little I know about the basics of quantum field theory. The simplest relativistic field theory is described by the Klein-Gordon equation of motion for a scalar field $\phi(\vec{x},t)$: $$\frac{\partial^2\phi}{\partial t^2}-\nabla^2\phi+m^2\phi=0.$$ We can decouple the degrees of freedom from each other by taking the Fourier transform: $$\phi(\vec{x},t)=\int \frac{d^3p}{(2\pi)^3}e^{i\vec{p}\cdot \vec{x}}\phi(\vec{p},t).$$ Substituting back into the Klein-Gordon equation we find that $\phi(\vec{p},t)$ satisfies the simple harmonic equation of motion $$\frac{\partial^2\phi}{\partial t^2}=-(\vec{p}^2+m^2)\phi.$$ Therefore, for each value of $\vec{p}$, $\phi(\vec{p},t)$ solves the equation of a harmonic oscillator vibrating at frequency $$\omega_\vec{p}=+\sqrt{\vec{p}^2+m^2}.$$ Thus the general solution to the Klein-Gordon equation is a linear superposition of simple harmonic oscillators with frequency $\omega_\vec{p}$. When these harmonic oscillators are quantized we find that each has a set of discrete positive energy levels given by $$E^p_n=\hbar\omega_\vec{p}(n+\frac{1}{2})$$ for $n=0,1,2\ldots$ where $n$ is interpreted as the number of particles with momentum $\vec{p}$. My question is what about the harmonic oscillator solutions that vibrate at negative frequency $$\bar{\omega}_\vec{p}=-\sqrt{\vec{p}^2+m^2}?$$ When these harmonic oscillators are quantized we get a set of discrete negative energy levels given by $$\bar{E}^p_n=\hbar\bar{\omega}_\vec{p}(n+\frac{1}{2})$$ for $n=0,1,2\ldots$ where $n$ can now be interpreted as the number of antiparticles with momentum $\vec{p}$. If this is correct then the total energy of the ground state, per momentum $\vec{p}$, is given by \begin{eqnarray} T^p_0 &=& E^p_0+\bar{E}^p_0\\ &=& \frac{\hbar\sqrt{\vec{p}^2+m^2}}{2} + \frac{-\hbar\sqrt{\vec{p}^2+m^2}}{2}\\ &=& 0. \end{eqnarray} Thus the total ground state energy, $T_0$, is zero; there is no zero-point energy. Does this interpretation of the negative frequency solutions make sense? No, this doesn't make any sense. There are no negative momentum oscillators here. In momentum space, the Hamiltonian of a free real scalar field $\phi$ is $$ H = \int \left(\frac{1}{2} \lvert \Pi(\vec p) \rvert^2 + \frac{\omega_p^2}{2} \lvert \phi(\vec p) \rvert^2 \right)\frac{\mathrm{d}^3 p}{(2\pi)^3},$$ where $\omega_p = \sqrt{\vec p^2 + m^2}$. There is no sign ambiguity: $\omega_p$ is always positive, and the free scalar field can be seen as a collection of such oscillators with positive frequency, one for each momentum $\vec p$. The "negative frequency solutions" you have likely heard about are something different: In the mode expansion for the field in position space, we have $$ \phi( x) = \int \frac{\mathrm{d}^3 p}{(2\pi)^3}\frac{1}{\sqrt{2\omega_p}} \left( a(\vec p) \exp(\mathrm{i}px) + a^\dagger(\vec p)\exp(-\mathrm{i}px)\right)$$ and the pre-quantum field theoretic interpretation of $\phi(x)$ as a wavefunction would now identify $a(\vec p)\exp(\mathrm{i}px)$ as a "negative frequency solution" since a Hamiltonian eigenstate evolves as $\exp(-\mathrm{i}\omega_pt)$ but this contains the term $\exp(\mathrm{i}\omega_p t)$. Since quantum field theory does not identify $\phi(x)$ as a solution to the Schrödinger equation, there is no problem with this term here. There are no negative energy levels. The energy levels belonging to negative frequencies are positive as well. 
The Noether energy is proportional to $\omega^2$ divided by the square of a norm proportional to $\sqrt{\omega}$, so it is positive definite. The angular frequency can be positive or negative, but its sign determines the sign of the charge.
• 2 $\begingroup$ Why is the energy proportional to $\omega^2$? Where did you get this from? And how does that imply that it is positive-definite? $A=-\omega^2$ is also proportional to $\omega^2$, but it is negative-definite. $\endgroup$ – AccidentalFourierTransform Aug 21 '18 at 19:24
• $\begingroup$ @accidentalfouriertransform You are entirely correct: $-E$ is negative definite. The point is that changing the sign of $\omega$ does not change the sign of the energy, only that of the charge. $\endgroup$ – my2cts Aug 22 '18 at 13:28
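For completeness, the dispersion relation that underlies all of the above is easy to verify symbolically. Here is a tiny sketch (a 1+1-dimensional check with SymPy; the variable names are mine):

    import sympy as sp

    x, t = sp.symbols('x t', real=True)
    p, m = sp.symbols('p m', positive=True)

    omega = sp.sqrt(p**2 + m**2)                  # omega_p, always positive
    phi_pos = sp.exp(sp.I * (p * x - omega * t))  # "positive frequency" mode
    phi_neg = sp.exp(sp.I * (p * x + omega * t))  # "negative frequency" mode

    # Klein-Gordon operator in 1+1 dimensions: d^2/dt^2 - d^2/dx^2 + m^2
    kg = lambda f: sp.diff(f, t, 2) - sp.diff(f, x, 2) + m**2 * f

    print(sp.simplify(kg(phi_pos)))   # 0: this mode solves the equation
    print(sp.simplify(kg(phi_neg)))   # 0: so does the other sign of the frequency

Both signs of the frequency solve the classical field equation; as the answers above stress, what those signs end up labelling in the quantum theory (particle versus antiparticle, or the sign of the charge) is a separate matter from the sign of the energy.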
Are Parallel Universes Unscientific Nonsense? If you’re a multiverse skeptic, you should know that there are many potential weaknesses in the case for parallel universes, and I hope you’ll find my cataloging of these weaknesses below useful. To identify these weaknesses in the pro-multiverse arguments, we first need to review what the arguments are. via Are Parallel Universes Unscientific Nonsense? Insider Tips for Criticizing the Multiverse | Guest Blog, Scientific American Blog Network. Max Tegmark seems to be everywhere these days.  This is an interesting piece exploring the arguments for, and problems with, the various multiverse theories. I have to admit that I’m a multiverse skeptic.  I can appreciate that if a successful theory predicts something, we should take that prediction seriously.  However, when that prediction is extraordinary, we should be cautious until we have empirical support.  As Carl Sagan said, extraordinary claims require extraordinary evidence. But Tegmark omits what I think is the biggest flaw in the multiverse, namely that there isn’t just one multiverse prediction.  There are several.  Each of the theories he mentions predicts a different type of multiverse.  It’d be one thing if all of these theories converged on the same multiverse concept.  I think I’d find that compelling.  But they don’t. I’m not closed minded about the multiverse.  If someone found evidence that the multiverse was the simplest explanation for, I’d be much more willing to accept it.  Until then, while it’s fascinating speculation, I’m going to keep firmly in mind that it is speculation. This entry was posted in Zeitgeist and tagged , , , , , , , . Bookmark the permalink. 38 Responses to Are Parallel Universes Unscientific Nonsense? 1. Steve Morris says: A very interesting article, but I would pick on a few issues. His Level I parallel universes aren’t parallel universes as I understand them. It’s just one very large (possibly infinite) universe. His Level III universes aren’t dependent on the Schrödinger equation, but on a particular interpretation of the Schrödinger equation. He’s being naughty here. His Level IV universes are complete fantasy. I know that he says that “fantasy” is a a vague and unscientific criticism, but his level IV universes are precisely fantastic, so I feel justified in using the word. He is saying, as I understand it, that any universe we can imagine is just as real as the universe we live in. Worse than that, he is associating criticism of this idea with criticisms of the Level I-III multiverse theories, in an attempt to tar critics of his theory with the same brush as critics of inflation, etc. He even implies that critics of his fantasy universes are in the same boat as people who criticise the predictions of general relativity. That’s very naughty. In fact, it’s a very naughty article all told. Is naughty a scientific word? • Hi Steve, It’s a short article. Read his book. He’s a lot more humble and fothcoming than you might think. He very very clearly distinguishes what is mainstream from what is controversial, and in particular emphasises that his Level IV multiverse is extremely so, to the extent that he was advised not to pursue it for fear that it might destroy his career. The Many Worlds Interpretation is the only one which takes the Schrodinger equation at face value. The others generally do not, either positing hidden variables or objective wavefunction collapse in order to justify a unique universe. 
I do think it is not fair to call the Level IV multiverse a fantasy. There are very good reasons to suspect it is true. Indeed I became convinced it was true even before I heard of Max Tegmark. I explain why on my blog. I would genuinely be very excited if you could point out any particular flaw in my reasoning. • Steve Morris says: I will take a look at your blog with pleasure, but at this point I don’t see any way even in principle of demonstrating that parallel universes exist. Of course they *could* exist – anything *could* exist outside our universe, although that is stretching the meaning of the word “exist.” • Thanks Steve. You comment clarified some things for me. I agree on the Level I universe. It doesn’t even seem like the term “parallel” applies. I think for Level IV, I would use the term “speculative” rather than “fantasy”, since fantasy to me implies something that we know to be impossible. I don’t know if “naughty” is a scientific word, but it is a much friendlier one than others you might have used 🙂 2. Hi SAP, I disagree that the multiverse prediction is extraordinary. It’s an extremely simple idea that explains a lot. Indeed it would seem to me to be weirder if only one universe were possible. To me, it would be like if there were only one planet in the universe. I have a post on my blog which explains why I think an infinite multitude is actually simpler than a single member of a set. Even apart from the fact that multiverses are predicted from various theories, I think they have immense explanatory power when considered from the point of view of the anthropic principle. I don’t see a problem with the idea that there are different kinds of multiverses. All of these multiverses are complementary. In fact, in his book Tegmark presents an interesting argument which unifies the Level I (Distant Space) and Level 3 (Many Worlds Interpretation) multiverses into one consistent whole. Overall, I think the existence of the Level I (Distant Space), Level III (Many Worlds) and Level IV (MUH) multiverses is less extraordinary and simpler than their absence. I’m much less sure about the Level II (Endless Inflation) multiverse. The existence of the Level I multiverse is essentially beyond doubt, I think. All that is required for this to exist is for space to extend beyond our cosmic horizon, which it certainly does and in fact seems to do for a very great (potentially infinite) distance based on measurements of the flatness of space. The Level II multiverse is possibly correct, being predicted by some models of inflation. This is a scientific hypothesis because it could potentially be confirmed or falsified not by direct observation but by testing these various inflation models based on what we can glean about the early history of the universe. The Level III multiverse is simply the most parsimonious straightforward interpretation of quantum mechanics there is. The traditional Copenhagen interpretation postulates the collapse of the wavefunction, without having any test for this or prediction of how it might work. It’s becoming increasingly obvious that this is a desperate post-hoc rationalisation which seeks to preserve our intuition of a single universe from what the equations actually predict. 
If quantum computation ever becomes a practical reality, Tegmark argues that this would be direct evidence of the multiverse, because it can be interpreted as having all the copies of a quantum computer performing the same computation with different variables in parallel, thereby covering a massive amount of ground in the time it takes for a single iteration. It’s hard to see how this could work with the other interpretations of QM. I agree though that this is perhaps not strictly science, because since all of the interpretations agree on the mathematical fundamentals of QM, it’s hard to see how one interpretation could ever be proven. The Level IV multiverse is not scientific but neither is it nonsense, as you know I have argued on my blog. I very much recommend that you read Tegmark’s book. I’m about halfway through it currently, and it manages to be engaging, personal and informative without ever beating you over the head with his particular world-view. Even apart from the multiverse stuff, it’s a great way to brush up on your cosmology and physics in general. • Hi DM, I’m familiar with the standard arguments for why the multiverse is simpler than one universe, but it just feels like just-so reasoning to me, and attempts to unify them sound almost like apologetics. It just goes to show that complexity and simplicity are matters of judgment, and some of us are going to have different intuitions about it. Whose intuition is right? Only time (maybe) will tell. I have to admit that I don’t really understand how quantum computing is supposed to work, and I’ve been meaning to do some reading on it. It never made sense to me that, even if a qubit can be in superposition, how we can ever have access to more than one of those states? Wouldn’t the very act of accessing it cause a universe fork, wave function collapse, or whatever your favorite interpretation is? And if we can only ever have access to one, it seems like any work done by the other is lost, even if the many worlds interpretation is true. But I’m completely open that I may be missing some key fact here. I may take a look at Tegmark’s book, but my current reading list is pretty long (and getting longer), so it may be a while. It might well move up on my list if it gets a lot of acclaim in the reviews. • Steve Morris says: Quantum computing is based on the idea that we keep the qubits in superposition until the calculation is complete. We don’t fork the universe / collapse the wavefunction until the work is done. It’s a bit like juggling – you have to keep the balls in the air until the performance is over – if you drop one, you’ll have to start again! • Thanks, but I’m still left wondering how we access the results of the work done on the side that doesn’t “win” on measurement. For that matter, how do we work with the qubit, manipulate it, without causing decoherence / wave function collapse / universe forking? • Steve Morris says: The mechanics of accessing the results are complicated. Quantum computing algorithms are not like classical ones. Quantum calculations are reversible for one thing. Algorithm’s like Shor’s algorithm can be used to perform specific calculations. Keeping the system in a coherent quantum state until the calculation is done is one of the big challenges, which is why you don’t have a quantum computer sitting on your desk. But room temperature calculations are looking promising, I believe. That’s pretty much the limit of my expertise on the subject! • Thanks for the explanation! 
• Hi SAP, Tegmark’s attempt to unify the Level I and Level III multiverse is interesting, but ultimately it doesn’t matter. They don’t need to be unified. I disagree that complexity and simplicity are matters of judgement. There are mathematical ways to look at it, such as Shannon entropy, where the amount of information in a sequence is related to how much “surprise” we feel on seeing each bit. Let me explain. I think we intuitively feel that two universes are more complex than one, and three are more complex than two. Our intuition is correct. However it is not correct to extrapolate this to assume that unlimited universes are more complex than one. The maximum amount of entropy (complexity) is actually If each possible universe has an equal chance of existing or not existing with no rule to say which is the case. This is like tossing a fair coin, where there is one bit of information per universe to specify if it exists or not. The only way to describe this situation is to list out all the possible universes and say whether they exist or not. As the probability moves away from 50%, the entropy goes down. If your coin is only 5% likely to show a heads, you will typically be able to express a sequence in much fewer bits than listing the whole thing out because you can just indicate the rare cases where the heads came up. The case where only one universe is impossible is exactly as complex as the case where only one universe is possible, and just as abritrary and prima facie unlikely. The case where no universes exist and the case where all universes exist are the simplest cases. And as it turns out, both of these are compatible with the MUH. The MUH says that that the concept of existence as applied to universes doesn’t really work. They exist from one point of view but not from another. The real point is that the distinction between those that exist and those that don’t is simply incoherent. As both of the simplest cases show no such distinction, they are both compatible with the MUH. • Thanks for the explanation. I can see an infinite number of universes being no more complex than 2 universes, but 2 through n universes still seems more complicated to me than the one we can observe. Of course, there are aspects to reality that we can’t observe yet (or possibly ever), and those aspects might include additional universes. My understanding is that we don’t have empirical evidence to rule them out, but we also don’t have evidence to make them mandatory. (I know a lot of people feel like there’s enough in the mathematics to justify certitude. Personally, I need more than mathematical manipulations to give me certitude.) • Hi SAP, I’d be interested to know why you think an infinite number of universes is no more complex than two but more complex than one. I would also like to know which of 2, 1,000,000 or infinite universes is more complex/unlikely. I agree that the MUH in particular is not science. But to me the philosophical and mathematical argument is convincing. From my point of view it can only be false if my reason has failed me in some way. And this is possible, but I see no problem in being very confident that it has not, for the same reasons that we feel confident about our capacity to reason in other contexts. Lack of empirical evidence does not mean we ought to be squarely on the fence, as you seem to be (or perhaps leaning towards MUH-denial). 
We have no direct evidence for the proposition that God does not exist, but I don’t think either of us would judge the probability of God’s existence at 50%. • Steve Morris says: Disagreeable Me, I left a comment on your blog about this. I questioned your assumption of mathematical platonism (the idea that mathematical objects have their own “existence” outside our universe. I put forward an argument for why I don’t think this can be true. Could you respond to that? • Steve Morris says: Sorry, please ignore my latest comment. I see that you have left a detailed reply on your blog. • Hi DM, Well, to add a second universe requires positing a new dimension, a new parameter of existence. Once that dimension or parameter is there, once we have a second universe, then a third, fourth, etc, seems much more plausible. Actually, now that I think of it, once you have a second universe, stopping at a certain number would be an additional complication. But the whole realm of two or more universes is more complicated than just the one due to that additional dimension, parameters, or set of parameters. The God comparison is interesting. A deistic god cannot be proven or disproven. However, the only thing it really has going for it is that a lot of people would like to see it exist. There’s nothing that makes its existence mandatory, so you’re right, I don’t regard the probability as 50%. It can’t be ruled out, but the number of possible concepts that can’t be ruled out is vast, but the number of concepts that actually exists is far smaller. The chances that one of our cherished concepts falls within the exists category is minuscule (although a small number will occasionally). • Steve, do you have any follow up to the discussion on my blog? Hi SAP, I see where you are coming from now as you posit a new dimension to allow a second universe. You seem to conceptualise the current universe by analogy to a 3D Cartesian space, and the introduction of further universes as the extension of this to 4D. Points in a 3D space are defined by positional paramters (X,Y,Z), but now we need an extra parameter (W) to locate ourselves in this larger space. Our universe would then be the set of all points for which W=0, say. However I don’t see it like that at all. Unlike Cartesian points, universes in the MUH are not defined by their relative positions in some kind of space but by their laws of physics. In the Level IV multiverse, our universe is not above, under, before, after or beside any other. Everything required to “locate” it is there in its physics. This universe thus already has its “position” in the set of possible universes and no new dimension or parameter is required. It seems to me that it is the case of a limited number of universes that requires another parameter. In addition to all the laws of physics, we must include a non-mathematical parameter to represent whether the universe exists or not. The only way this might not be true of a single universe scenario is if there is only one possible universe, one specific way the laws of physics could have been arranged (which could in principle be derived from the armchair). This seems highly improbable to me, to put it mildly. • Hi DM, You read in detail that I actually tried to avoid in my comment. It’s why I said “dimension or parameter”. However you construct it, it seems like you’re adding an extra characteristic to reality. 
An added attribute, dimension, parameter, characteristic, or whatever you want to call it, that by its very addition adds complexity, at least in my mind. On your last point, in my mind, the complexity hierarchy, from low to high, is:
1. a single universe
2. infinite universes (requires the extra (generic) dimension I mention above)
3. a finite number of universes (requires the extra dimension of 2. plus an additional constraint to limit the iterations)
Hope this makes sense.

• Disagreeable Me says: Hi SAP, Sorry for banging on about this, and sorry for apparently misinterpreting what you meant by parameter or dimension, but then I am lost as to what you do mean. What is this new parameter that is needed for two universes and not for one, and what kind of value could it have? I’m still left with the opinion that one universe requires an extra parameter that infinite universes do not. We presumably agree that there are effectively infinite possible universes, so if only one of those is actual, then does there not need to be a parameter to indicate which one of those it is? I wonder how you feel about the question of whether it would be more surprising if it turned out that life had only arisen once in the universe or if there were countless instances of it. I personally would find the former scenario to be much more strange (infinitely more so if there is only one universe!). Do you also feel that two instances of life arising need an extra parameter, or is this different from the universe case? If so, why? Or we could make it more abstract. Suppose we found a very strange unique object of some kind floating in interstellar space (but that we had little reason to suspect that it was artificial). Perhaps it’s a lump of rock made of antimatter, for instance. Such a discovery would lead me to suspect that there were many other such objects in the universe. By your reasoning, this would seem to be unlikely, as it would mean the introduction of a new parameter or dimension to distinguish between them. This seems absurd to me, so I assume that this is not in fact your position, but that’s how it comes across to me.

• Hi DM, I’m struggling to think of a way to explain the new parameter without repeating myself. Maybe this: Suppose on one hand you have X. Then suppose on the other you have a series of Xs. Isn’t the single X a simpler phenomenon than the series of Xs? On life elsewhere, when we conclude it’s probable that there’s life elsewhere in the universe, it’s because we observe life here, we observe that it arises from the laws of nature, we observe that the laws of nature seem to exist everywhere, and we observe the immensity of the universe. There’s still a logical step in our conclusion, but it seems far less tenuous. We haven’t observed another universe. We’re invoking multiverses to explain (excuse) the fact that we don’t understand this one. Again, I see the multiverse as a possible explanation, but can’t go past that until we see something that makes it mandatory, or at least compelling. Here’s a question in the “turtles all the way down” tradition. If there were a multiverse, then why not a multi-multi-verse? Or a multi-multi-multi-verse? Where do we draw the line?

• Hi SAP, I still don’t see the extra parameter in your series of X’s. Is one X simpler than a series of X’s? Perhaps. It depends on questions such as whether X has any other properties or on whether the series is infinite.
If X is an arbitrary integer, chosen with each integer having equal probability, then it is likely to be very large (as most numbers are), effectively infinite. As such, it would require an effectively infinite number of bits to precisely specify a single X. An infinite series of Xs however can be defined with only the axioms needed to define the integers. On the analogy to life, I’m not exactly arguing that we ought to have precisely the same level of confidence that there is life elsewhere than that there are other universes, I’m just trying to understand how you perceive the relative complexity of the different scenarios. Perhaps you think that multiple origins for life is indeed more complex but that the evidence we have to bolster it still makes it the more reasonable position. Do we observe that life arises from the laws of nature or do we infer that it does? Do we observe that the laws of nature are the same everywhere or do we infer that they are? To paraphrase your comments about life, I would say that we observe that this universe exists, we infer that there is some explanation for how it is that the universe exists, and we infer that this same explanation could account for other universes. I would say it is also virtually certain that the set of logically possible universes is much vaster than what we directly observe. But where these inferences may be less straightforward than those you make about life, I think they are bolstered by a pretty robust argument that the MUH must be true given naturalism, computationalism and Platonism. How would you feel about a phenomenon for which we have no understanding? Such as a lump of antimatter? Would you feel an inference that there must be many such objects to be tenuous? Can you explain how your idea of the extra parameter or dimension fits in here, or indeed give any example where it would apart from universes? Well, in a way that’s what the MUH proposes, since the MUH encompasses lesser multiverses on multiple levels (e.g. Tegmark’s Levels I-III). But it’s not turtles all the way down, because the MUH is so fundamentally simple. It is at core the proposition that all possible universes exist. By definition, any possible universe therefore exists inside the MUH multiverse. There can be no greater multiverse. 3. john zande says: Excellent book on this: “Why is there anything,” by fellow blogger, quantum physicist, and Many Worlds Theory advocate, Matthew Rave. It’s an easy read, a dialogue had between two characters, and details the argument quite convincingly. 4. Steve Morris says: Disagreeable Me, when you write this: I cannot help but notice that this idea of bits used to represent X sounds like X can only exist in a physical universe (that contains bits), not in an abstract sense. In other words, numbers and other mathematical objects are instantiated. • Hi Steve, Bits are a mathematical concept, not a physical concept, although they do have significance for certain physical concepts such as entropy and the Holographic principle etc. Bits are relevant here because most definitions of complexity hold structures with more information (measured in bits) to be more complex. 5. Steve Morris says: I’m no mathematician, but I’m comfortable with the idea of the infinite. If X = anything you like Then X needs no information to constrain it. It can be infinite. In fact it is. But if X = 1 then I need to write an equation to constrain it that way. So an infinite number of objects is simpler than 1 or some other integer. 
But no objects at all is the simplest possibility. Another example. An empty page contains no information. This is the simplest configuration of the page. It contains no stories. If I write on the page, “Once upon a time a King lived in a forest,” then I have constrained the possible stories. But this is not yet an interesting story. The more information I add, the more constraints there are, until eventually I arrive at a single story. And this story will be rich, detailed and interesting. Our universe seems to me like that. So a single universe seems to be quite an unlikely occurrence. Most likely of all is the blank page. So I can understand why (all possible universes) could be simpler than (our universe). But I don’t understand why we have any universes at all, if that is the logic. As I say, I’m no mathematician. This is probably complete garbage. • Hi Steve, I would say that no objects is exactly as simple as all possible objects existing. No objects: For all possible objects x; x does not exist. All objects: For all possible objects x; x does exist. The reason that all mathematical objects exist and not no mathematical objects is that it is logically impossible for no mathematical objects to exist. On a Platonist view of existence, a mathematical object exists if it can be defined consistently. It is possible to define mathematical objects therefore they exist. The laws of logic are not contingent, they are necessary, so it could not have been otherwise. The most apt comparison to nothing existing is not the blank page, because the set of all possible states of the page includes the blank page (and there is an analogous mathematical structure, the empty set). The comparison to absolutely nothing existing would be the view that it is impossible for the page to be in a non-blank state (i.e. to compose a block of text containing more than zero characters), which is patently absurd. By the way, I’d love to know if I have given you any reason to reconsider Platonism at all, or if you still feel comfortable dismissing Tegmark as a naughty fantasist. • Steve Morris says: Yes, you have answered my questions well, and although I cannot say that I really believe in MUH or Platonism, I cannot say that they are clearly not true. I think that by definition, Tegmark is a naughty fantasist, even if what he says turns out to be true! He conflates ideas that should not be conflated. Your way of reasoning is not naughty at all, but equally fantastic (fantastic can be a good or bad word, depending on your outlook). 6. DM, On the antimatter question, my initial thoughts would be that there may be more in the universe, but given the properties of antimatter and its reaction to matter, I would judge that it would have to be rare. But you’re asking me about my thoughts of it existing within the universe, which is vast and observable, and equating that with the existence of other universes. If we could observe a vast multiverse and you then asked me if there were antimatter universes out there, I would give it a much higher probability than I can now, not having observed any other universes. I also have to admit to being skeptical about Platonism, at least in any ontological sense. • Again though, how would you use your thoughts about the complexity parameter or dimension to treat the antimatter question? Are you beginning to think that maybe that’s not the right way to think about such questions? Yes, we can observe the universe and see that it is vast. 
For me, this is just like appreciating that there are many other ways the universe could be, even if we are only modestly tweaking the fundamental constants (or do you entertain the notion that there is something logically necessary about the way the constants are?). If there are many ways the universe could have been, it seems to me to be very strange that only one of those should be realised, like having only one sizeable lump of antimatter in a massive universe. Rejecting the MUH on the grounds of not accepting Platonism makes a lot more sense to me than your arguments from complexity, although I still think it is wrong to reject Platonism!

• Hmmm. I thought I had addressed that question. Asking about the complexity of one or multiple lumps of antimatter in the universe and asking about the complexity of one or more universes aren’t equivalent questions. That said, would it be more or less complex if there were only one lump of antimatter in existence? I would think only one instance of an antimatter lump in the universe (assuming we had some way to know that) would be more complicated (probably related to our way of knowing about it). On fundamental constants, I don’t know the reason why they are what they are. Multiverses are one possibility. Another is logical constraints of some kind. (Of course, then we’d be asking why those logical constraints? Turtles again.) Until we have reasons to isolate one particular explanation, I don’t think we should stop looking.

• Hi SAP, If they’re not equivalent questions, that’s OK. Could you give any other example of a case where your reasoning about parameters or dimensions would apply?

• Hi DM, Another name for what I’m talking about is Occam’s razor. Any situation where we multiply the assumptions beyond necessity would apply. So a light in the night sky is either an airplane or an alien from outer space. I’m sure you’d agree that we should assume it’s an airplane or some other mundane thing before we assume aliens, spirits, etc. Now, I do know many people insist that multiple universes are simpler than one. But I don’t think we know enough about how this universe works yet to make firm conclusions. Maybe it is simpler, or maybe we just don’t yet understand why the laws and constants are the way they are.

• Sure, Occam’s razor is why I like the MUH. We have this concept of existence, and we have this concept of mathematical consistency. The MUH says that these are the same concept, or alternatively that we can do away with the concept of physical existence because it is meaningless.

7. Disagreeable Me, I think that’s one of the jumps I can’t make yet. Dismissing the concept of existence strikes me as violating Einstein’s advice that, “Things should be as simple as possible, but no simpler.” Our experience seems to show that existence or non-existence matters. This seems like a brute fact that we need more justification to dismiss.

8. s7hummel says: There are many great books that should be read. There are many great theories, and although they may sometimes seem ridiculous, you cannot reject them before at least trying to understand them, so you need something to read on this topic. However, in the case of this theory, one can completely abandon attempts to understand. It is a theory completely detached from reality. The problem is not that this theory doesn’t make sense. Simply put, there can be no such reality. Also very funny is the evidence adduced to prove the existence of p.u; multiverse.
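Editor’s note: the following is a minimal numerical sketch, not part of the original discussion, of the entropy point made earlier in this thread: the information needed per possible universe is maximal when existence and non-existence are equally likely, and it falls as the probability moves away from 50%. The probabilities below are illustrative choices.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (in bits) of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a fully determined outcome carries no information
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

for p in (0.5, 0.25, 0.05):
    print(f"p = {p:.2f}: {entropy_bits(p):.3f} bits per outcome")
# p = 0.50 gives the maximum of 1 bit; 0.25 gives about 0.811 bits;
# 0.05 gives about 0.286 bits, which is why rare events can be described
# compactly by just listing where they occur.
```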
What is Loop Quantum Gravity?

15 Mar

Loop Quantum Gravity (also known as Canonical Quantum General Relativity) is a quantization of General Relativity (GR) including its conventional matter coupling. It merges General Relativity and Quantum Mechanics without extra speculative assumptions (e.g., no extra dimensions, just 4 dimensions; no strings; no assumption that space is formed by individual discrete points). LQG has no ambition to unify the forces, to add more than 4 spacetime dimensions, or to introduce supersymmetry [1]. In this sense, LQG has a less ambitious research program than String Theory and is its biggest competitor.

General Relativity envisages spacetime and the gravitational field as the same entity, "spacetime" itself, which, in many ways, can be seen as a physical object analogous to the electromagnetic field. Quantum Mechanics (QM), by contrast, was formulated by means of an external time variable $t$, as it appears in the Schrödinger equation $i\hbar\,\frac{\partial \Psi}{\partial t}=\hat{H}\Psi$. Fig. 1 below shows the solutions of the Schrödinger equation when applied to the simplest atom in Nature, the hydrogen atom, where the potential is the attractive Coulomb potential $V(r)\propto -\frac{1}{r}$. For electrons bound to the positive nucleus, the energy is quantized according to the law $E_n=-\frac{13.6}{n^2}\,\mathrm{eV}$, and the wave functions $\Psi$ are the mathematical functions shown in Figure 1.

However, this external time (represented above by the letter $t$) is incompatible with General Relativity (GR), because there the role of time becomes dynamical. Already in the framework of Minkowski spacetime, time is no longer absolute (as Sir Isaac Newton once stated) but is relative to a frame of measurement. In addition, GR was originally formulated by Albert Einstein in the framework of Riemannian geometry, where it is assumed that the metric is a smooth and deterministic dynamical field (Fig. 2).

Fig. 2 – Example of a Riemann surface. Image courtesy: http://virtualmathmuseum.org (see http://virtualmathmuseum.org/Surface/riemann/riemann.html for details about this specific surface).

This raises an immediate problem, since QM requires that any dynamical field be quantized, that is, be made of discrete quanta that follow probabilistic laws… This would mean that we should treat quanta of space and quanta of time… All the known forces in the universe have been quantized, except gravity.

The first approach to the quantization of gravity consists of writing the gravitational field as the sum of two terms, a background field and a perturbation. The full metric $g_{\mu\nu}$ is then

$g_{\mu\nu}(x)=\eta_{\mu\nu}(x)+h_{\mu\nu}(x),$

where $\eta_{\mu\nu}$ represents the background spacetime (normally Minkowski) and $h_{\mu\nu}$ a perturbation of the field (representing the graviton). Minkowski space united space and time into a single entity, the spacetime manifold, in which two nearby points are separated by

$ds^2= c^2\,dt^2-d\mathbf{x}^2.$

The problem resides in the intrinsic difficulty that this approach faces when describing extreme astrophysical scenarios (near a black hole) or cosmological scenarios (the Big Bang singularity). The inconsistency between GR and QM becomes clearer when looking at the Einstein equation of GR:

$R_{\mu \nu} - \frac{1}{2}g_{\mu\nu}R=\kappa T_{\mu \nu}(g)$

Here $R_{\mu \nu}$ is the Ricci curvature tensor, $R$ is the curvature scalar, $T_{\mu \nu}$ is the energy-momentum tensor, and $\kappa \equiv \frac{8 \pi G}{c^4}$.
While the left-hand side is described by a classical theory of fields, the right-hand side is described by the quantum theory of fields…

LQG avoids any background metric structure (described by the metric $g$), choosing a background-independent approach, along the lines of Roger Penrose's suggestion of spin networks, where a system is supposed to be built of discrete "units" (anything about the system can be known on purely combinatorial principles) and everything is purely relational (avoiding the use of space and time…).

In GR spacetime is represented as a well-defined grid of lines, even if curved in the presence of a massive body. In LQG, spacetime is instead represented in a background-independent way: the geometry is not fixed, but is a spin network of points defined by field quantities and angular momenta, more like a mesh of polygons; spacetime is a derived concept rather than a pre-existing structure on which events take place. (Image credit: http://www.timeone.ca/glossary/spin-network/)

This new representation of fields has the advantage of capturing both their intrinsic attributes and their induction attributes. That is, the field quantities depend not only on the point where they are evaluated but also on the neighboring points connected by a line. That is why the mathematical idea that best expresses this representation is the holonomy of the gauge potential $A$ along the loop (line) $\alpha$, $U(A,\alpha)$, which is given by the integral

$U(A,\alpha)=\exp\left(\int_0^{2\pi} ds\, A_a(\alpha(s))\,\frac{d\alpha^a(s)}{ds}\right).$

(A small numerical illustration of this formula, in its simplest abelian version, follows below.)

NB: These books can be downloaded free from the site http://bookzz.org/

[1] Carlo Rovelli, Quantum Gravity (Cambridge University Press, Cambridge, 2004)
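To make the holonomy formula concrete, here is a minimal numerical sketch (an editor's illustration, not part of the original post). It evaluates the abelian, U(1) version of $U(A,\alpha)$ for an illustrative gauge potential along a circular loop; the chosen potential, the loop, and the factor of $i$ in the exponent are assumptions made for the example, and in the non-abelian case relevant to LQG the exponential would have to be path-ordered.

```python
import numpy as np

# Holonomy of a U(1) gauge potential around a circular loop.
# Loop: alpha(s) = (cos s, sin s), s in [0, 2*pi).
# Illustrative potential: A = (-y, x) * B/2, i.e. a constant "magnetic field" B
# through the plane, so the loop integral should equal the enclosed flux pi*B.
B = 1.0

def A(point):
    x, y = point
    return np.array([-0.5 * B * y, 0.5 * B * x])

s = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
alpha = np.stack([np.cos(s), np.sin(s)], axis=1)    # points alpha(s) on the loop
dalpha = np.stack([-np.sin(s), np.cos(s)], axis=1)  # tangent d(alpha)/ds

ds = s[1] - s[0]
line_integral = np.sum([A(p) @ t for p, t in zip(alpha, dalpha)]) * ds
holonomy = np.exp(1j * line_integral)  # U(A, alpha), with a conventional factor of i

print(line_integral)  # approximately pi * B, as Stokes' theorem predicts
print(holonomy)
```

For this choice the line integral equals the enclosed flux, so the computed holonomy is approximately exp(i·π·B); the same loop-based quantity is what carries the gauge-invariant information in the spin-network picture.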
The Bohr model of the atom is essentially that the nucleus is a ball and the electrons are balls orbiting the nucleus in a rigid orbit.

[image: Bohr model diagram]

This allowed chemists to find a model of chemical bonding where the electrons in the outer orbits could be exchanged. And it works pretty well, as seen in the Lewis structures:

[image: Lewis structures]

However, electron orbitals were found to be less rigid and instead to be fuzzy fields which, instead of being discrete/rigid orbits, look more like:

[images: electron orbital shapes]

However, in chemistry education like organic chemistry you still learn about chemical reactions using essentially diagrams that are modified Lewis structures that take into account information about electron orbitals:

[image: organic reaction mechanism diagram]

What I'm wondering is, if the Bohr model is used essentially throughout college education in the form of these diagrams, it seems like it must be a pretty accurate model, even though it turns out atoms are more fuzzy structures than discrete billiard balls. So I'm wondering what the inaccuracies are, and if there is a better way to understand them than the Bohr model. If you build a computer simulation of atoms with the Bohr model, I'm wondering if it would be "accurate" in the sense of modeling atomic phenomena, or if it is not a good model to perform simulations on. If not, I'm wondering what an alternative model is that is better for simulation. Essentially: how good is the Bohr model as a diagram, as a tool for learning, and as a tool for simulation?

• 1 Don't forget about the little problem of having zero angular momentum in an orbit! – JDługosz Mar 27 '18 at 1:09
• 3 What is the part with 1s, 2p, 3d, 4f and the other related "diagrams" illustrating? I understand the other graphics in your question, just not this. – d-b Mar 28 '18 at 9:29
• 1 That's the electron orbital/configuration en.wikipedia.org/wiki/Electron_configuration. – Lance Pollard Mar 28 '18 at 18:55
• 1 Biggest problems: In the ground state the electron is accelerating but doesn't generate electromagnetic radiation? Excited states don't decay to the ground state? – jim Mar 28 '18 at 19:37
• 3 You make a lot of statements that the Bohr model has various successes, but basically every statement you make is false. The Bohr model gets essentially nothing right other than the energy levels. – Ben Crowell Mar 28 '18 at 19:43

In hydrogen: 1. It incorrectly predicts the number of states with given energy. This number can be seen through Zeeman splitting. In particular, it doesn't have the right angular momentum quantum numbers for each energy level. Most obvious is the ground state, which has $\ell=0$ in Schrodinger's theory but $\ell=1$ in Bohr's theory. 2. It doesn't hold well under perturbation theory. In particular, because of angular momentum degeneracies, the spin-orbit interaction is incorrect. 3. It predicts a single "radius" for the electron rather than a probability density for the position of the electron. What it does do well: a. Correct energy spectrum for hydrogen (although completely wrong even for helium). In particular, one deduces the right value of the Rydberg constant (a short numerical illustration of this spectrum appears at the end of this Q&A). b. The Bohr radii for various energy levels turn out to be the most probable values predicted by the Schrodinger solutions. c.
Also does a lot of chemistry stuff quite well (as suggested in the original question) but I'm not a chemist so can't praise the model for that. • 6 $\begingroup$ The point about the number of states with a given energy is a good one. In particular, note that the "magic number" of eight electrons in a Lewis diagram relies on the $n = 2$ orbitals being able to accept eight electrons. This is not a prediction of the Bohr model, and it's hard to see how to modify it to account for this. $\endgroup$ – Michael Seifert Mar 26 '18 at 17:40 • 1 $\begingroup$ Good information here, but to say that it predicts any chemistry seems a colossal stretch. It has no mechanism for bonding at all. $\endgroup$ – DWin Mar 27 '18 at 1:27 • $\begingroup$ @DWin The chemists use it a lot so there must be some roughly right about it. But then again, I’m not a chemist... $\endgroup$ – ZeroTheHero Mar 27 '18 at 2:08 • 8 $\begingroup$ I'm not a chemist either but my experience with chemistry taught in the latter half of the 20th century was that we were taught about electron clouds and pairing of outer shell electrons. Electron pairing and hybrid orbitals do not seem to be an immediate consequence of the Bohr model. I rather argue that chemists use a version of Pauling and Wilson's "Introduction to Quantum Mechanics with Applications to Chemistry" (c) 1935, 1963. My copy is the Dover edition. $\endgroup$ – DWin Mar 27 '18 at 3:56 This is an example of the "correspondence principle" in the broadest sense, that new theories should explain why old ones got some things right. The linked article discusses the Bohr model, but leaves some of your sub-questions unanswered. Going beyond that, how does an "electrons are somewhere specific" approximation lead to useful models of sharing and transferring electrons in covalent, ionic and metallic bonding? Well, we'll focus on covalent for now. When physicists teach undergraduates enough quantum mechanics to do the hydrogen atom properly, electrons end up in specific atomic orbitals due to their quantum numbers, and each orbital can hold at most 2 electrons. The applications of Bohr-like reasoning you've brought up concern molecular orbitals, and these are a slightly more advanced topic; at this point I wish I knew what chemistry undergrads are taught about them, but I imagine Peter Atkins explains MOs with much the same rigour. Like atomic orbitals, $\pi$ MOs hold at most 2 electrons (let's not get into $\sigma$ bonding for the moment). The Bohr lie would be that the electrons living in these orbitals have a precise location, and that orbitals form so as to get the electron count in each atom's outermost shell right and make for a stable molecule - you know, the usual $8$-electron rule (or $2$ for hydrogen, since it's trying to be like helium, not neon). The short answer to your question is that when we transition from quantum numbers for electrons in monatomic allotropes of an element to the analogous treatment of a molecule, the way the pattern of legal orbitals transforms is the same as would be expected on a classical model. Why? Because all you really need is the legal-number-combinations rules, not the way it's derived from the Schrödinger equation. Let's consider the simplest possible example, $\mathrm{H}_2$. The simple model says, "we have one legal orbital, and it has room for $2$ electrons, which is just what we need, and they end up in an orbit like planets in a binary-star system". 
The more accurate model is, "again we fill the unique legal orbital with $2$ electrons, but the electrons' behaviour is quantum-mechanical". You can approximate the electrons in that orbital as two particles in a box (although that's not a perfect analogy), because they don't have enough energy to escape unless a photon excites them, nor can they fall into a lower orbital because those are full. With this constraint, the quantum effects are quantitative but don't make much of a qualitative difference. • $\begingroup$ not sure what you mean by " legal-number-combinations rules" $\endgroup$ – Lance Pollard Mar 26 '18 at 17:46 • $\begingroup$ @LancePollard In the monatomic case $n\ge 1,\,0\le\ell\le n-1,\,-\ell\le m\le \ell,\,s=\pm\frac{1}{2}$. $\endgroup$ – J.G. Mar 26 '18 at 18:01 • 1 $\begingroup$ "legal combinations" of electron energy states i.e. the rules that lead to Lewis structure diagrams. $\endgroup$ – JimmyJames Mar 26 '18 at 18:07 • $\begingroup$ I have some questions from a layman: Say you can calculate the probability of finding an electron somewhere, and it has a high probability in a certain cloud of space, Is there a > 0 probability of finding that electron anywhere in the Observable Universe? If yes, is this at all related to "virtual particles" and how so? Does the probability falloff exponentially with distance or what? $\endgroup$ – George Mar 27 '18 at 19:25 • $\begingroup$ @Nofacr Yes (unless it's in a "box" but that's unphysical), no, yes or something like that depending on the wave function. $\endgroup$ – J.G. Mar 27 '18 at 19:46 The parallel between the Bohr picture and the Lewis diagrams isn't that great if you consider that the electron is moving in the Bohr model, while the electrons are static in a Lewis diagram. If a Bohr electron was "at rest" outside a nucleus, as it is in a Lewis diagram or one of your organic-chemistry diagrams, it would immediately accelerate towards the nucleus. And I cannot see how you would modify a Lewis diagram so that the electrons were "shared" while still being in orbit around the nuclei. Lewis dots and molecular structure diagrams, as a practical notation used by chemists, have very little to do with the Bohr model of the atom. They capture a set of empirical, qualitative rules which are true for most atoms under most conditions, and which were discovered first: 1. Every atom has some number of "valence electrons" which control how it can combine with other atoms. The number usually ranges from zero to eight. The number of valence electrons possessed by an isolated atom with no electric charge is determined by which element it is. 2. Atoms can have a different number of valence electrons from the default for that element, but then they will also have an electric charge, and we call them "ions". 3. Chemical compounds are usually most stable when each atom of the compound can be said to have either zero or eight valence electrons (or, in one very important special case, namely hydrogen, two valence electrons). 4. Atoms can combine to fulfill rule 3 in two ways. They can fully transfer electrons from one to another, forming ions, and then be held together by electrostatic attraction. Or they can "share" pairs of electrons between two atoms, in which case both electrons contribute to the valence electron count of both atoms, and then be held together ... 
well, ultimately again by electrostatic attraction, but a more targeted variety, such that we can say that each atom is "covalently bonded" to specific other atoms; this is not the case for ions. 5. Combining rules 1, 3, and 4 lets you predict how many covalent bonds an atom of a particular element can form. For instance, carbon has four valence electrons in its uncharged, unbonded state, each of which can be shared with one other atom, so it can make up to four covalent bonds. Chlorine, on the other hand, has seven valence electrons, which means it can form only one covalent bond (any more and it would have too many valence electrons). 6. Two atoms can share more than one pair of electrons, which we call a "double bond" or "triple bond" (four is very rare, five would break the 'usually zero to eight' rule -- I'm not going to say impossible but I've never heard of it); these bonds are physically stronger but also more reactive than single bonds. 7. If you have a double bond next to a single bond and the overall molecule is sufficiently symmetric, what actually happens is the second bond gets "delocalized" over all three atoms. This can keep happening down a chain of alternating single and double bonds, and sometimes makes the molecule extra stable (e.g. benzene) or changes what chemical reactions the molecule will undergo (e.g. enols). The Bohr model doesn't try to predict most of this; it is only concerned with isolated atoms. Bohr probably did have rules 1, 2, and 3 in mind when he developed it, though. Schroedinger's equation isn't a model of atoms at all, it's the quantum equivalent of $\mathbf{F}=m\mathbf{a}$; to predict anything with it you have to define what the forces are. That's the domain of atomic and molecular orbital theory, and the actual quantitative Hamiltonians get really messy really fast; recovering the above rules for systems of more than about two atoms was still cutting-edge theory as of when I decided I didn't want to do that for a living (admittedly, that was 15 years ago now).

• 1 5 and even 6 bonds are possible. The "8 electron" rule comes from light elements with just 1 s and 3 p orbitals, but dimolybdenum and ditungsten are formed from very heavy elements. E.g. in molybdenum the electrons participating come from the 4d, 5s and 5p orbitals. – MSalters Mar 28 '18 at 10:40

I can remember the introductory course on quantum physics when we discussed the Bohr model. The reasoning was this:
• An electron circulating around the core on a circular trajectory has nonzero acceleration.
• Therefore it must emit radiation.
• Therefore it loses energy. Therefore it must either "get a punch from somewhere" or fall into deeper and deeper orbit levels.
• Finally it must fall into the core.
But we do not observe such orbital decay and radiation emission, so the model is flawed. On the other hand, Bohr's model is easy to imagine and there are nice parallels between "our big universe" and the "tiny universe down there". Once you accept that the electrons are following strictly defined trajectories with corresponding energies, it is easier to accept that the electrons are somewhere in a more loosely defined volume with energies still strictly defined. So we introduce the word "orbital" to replace "orbit", to make a distinction between the probability volume of an "orbital" and the exact radius implied by the word "orbit".

Lewis dot structures were not based upon the Bohr model.
On the contrary, upon introducing the dot structures in The Atom and the Molecule, Lewis specifically states:

"[...] Bohr in his electron moving in a fixed orbit, have invented systems containing electrons of which the motion produces no effect upon external charges. Now this is not only inconsistent with the accepted laws of electromagnetics but, I may add, is logically objectionable, for that state of motion which produces no physical effect whatsoever may better be called a state of rest."

"I believe that there is one part of Bohr's theory for which the assumption of the orbital electron is not necessary, since it may be translated directly into the terms of the present theory. He explains the spectral series of hydrogen by assuming that an electron can move freely in any one of a series of orbits in which the velocities differ by steps, these steps being simply expressed in terms of ultimate units (in his theory Planck's h is such a unit), and that radiation occurs when the electron passes from one orbital velocity to the next. It seems to me far simpler to assume that an electron may be held in the atom in stable equilibrium in a series of different positions, each of which having definite constraints, corresponds to a definite frequency of the electron, the intervals between the constraints in successive positions being simply expressible in terms of ultimate rational units"

"the most stable condition for the atomic shell is the one in which eight electrons are held at the corners of a cube"

So, for example, diatomic iodine was considered by Lewis as two cubes sharing an edge.

• nice contribution! – ZeroTheHero Mar 28 '18 at 19:46
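As a small supplement to the answers above (an editor's sketch, not part of the original thread), the one quantitative success conceded to the Bohr model, the hydrogen energy spectrum and the resulting Rydberg series, is easy to reproduce numerically. The constants below are standard values quoted to limited precision.

```python
# Bohr-model energy levels of hydrogen and one of the resulting spectral lines.
# E_n = -13.6 eV / n^2 reproduces the observed spectrum; the degeneracies,
# angular momenta and electron "positions" are what the model gets wrong.
RY_EV = 13.6057           # Rydberg energy in eV (approximate)
H_EVS = 4.135667696e-15   # Planck constant in eV*s
C_MS = 2.99792458e8       # speed of light in m/s

def energy(n):
    return -RY_EV / n**2

def transition_wavelength_nm(n_upper, n_lower):
    delta_e = energy(n_upper) - energy(n_lower)   # photon energy in eV
    return H_EVS * C_MS / delta_e * 1e9

for n in range(1, 5):
    print(f"E_{n} = {energy(n):7.3f} eV")

# Balmer-alpha (n=3 -> n=2) comes out near 656 nm, the familiar red hydrogen line.
print(f"H-alpha: {transition_wavelength_nm(3, 2):.1f} nm")
```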
Quantum machine learning

From Wikipedia, the free encyclopedia

Quantum machine learning is an emerging interdisciplinary research area at the intersection of quantum physics and machine learning.[1][2][3][4][5][6] The most common use of the term refers to machine learning algorithms for the analysis of classical data executed on a quantum computer.[7][8][9][10] While machine learning algorithms are used to compute immense quantities of data, quantum machine learning increases such capabilities intelligently, by creating opportunities to conduct analysis on quantum states and systems.[11] This includes hybrid methods that involve both classical and quantum processing, where computationally difficult subroutines are outsourced to a quantum device.[12][13][14] These routines can be more complex in nature and executed faster with the assistance of quantum devices.[2] Furthermore, quantum algorithms can be used to analyze quantum states instead of classical data.[15] Beyond quantum computing, the term "quantum machine learning" is often associated with machine learning methods applied to data generated from quantum experiments, such as learning quantum phase transitions[16][17] or creating new quantum experiments.[18][19][20] Quantum machine learning also extends to a branch of research that explores methodological and structural similarities between certain physical systems and learning systems, in particular neural networks. For example, some mathematical and numerical techniques from quantum physics are applicable to classical deep learning and vice versa.[21][22][23] Finally, researchers investigate more abstract notions of learning theory with respect to quantum information, sometimes referred to as "quantum learning theory".[24]

(Figure: Four different approaches to combine the disciplines of quantum computing and machine learning.[25][26] The first letter refers to whether the system under study is classical or quantum, while the second letter defines whether a classical or quantum information processing device is used.)

Machine learning with quantum computers

Quantum-enhanced machine learning refers to quantum algorithms that solve tasks in machine learning, thereby improving and often expediting classical machine learning techniques. Such algorithms typically require one to encode the given classical data set into a quantum computer to make it accessible for quantum information processing. Subsequently, quantum information processing routines are applied and the result of the quantum computation is read out by measuring the quantum system. For example, the outcome of the measurement of a qubit reveals the result of a binary classification task. While many proposals of quantum machine learning algorithms are still purely theoretical and require a full-scale universal quantum computer to be tested, others have been implemented on small-scale or special purpose quantum devices.

Linear algebra simulation with quantum amplitudes

A number of quantum algorithms for machine learning are based on the idea of amplitude encoding, that is, to associate the amplitudes of a quantum state with the inputs and outputs of computations.[27][28][29][30] Since a state of $n$ qubits is described by $2^n$ complex amplitudes, this information encoding can allow for an exponentially compact representation. Intuitively, this corresponds to associating a discrete probability distribution over binary random variables with a classical vector.
The goal of algorithms based on amplitude encoding is to formulate quantum algorithms whose resources grow polynomially in the number of qubits $n$, which amounts to a logarithmic growth in the number of amplitudes and thereby the dimension of the input (a minimal NumPy illustration of this encoding is given at the end of this section). Many quantum machine learning algorithms in this category are based on variations of the quantum algorithm for linear systems of equations[31] (colloquially called HHL, after the paper's authors) which, under specific conditions, performs a matrix inversion using an amount of physical resources growing only logarithmically in the dimensions of the matrix. One of these conditions is that a Hamiltonian which entrywise corresponds to the matrix can be simulated efficiently, which is known to be possible if the matrix is sparse[32] or low rank.[33] For reference, any known classical algorithm for matrix inversion requires a number of operations that grows at least quadratically in the dimension of the matrix. Quantum matrix inversion can be applied to machine learning methods in which the training reduces to solving a linear system of equations, for example in least-squares linear regression,[28][29] the least-squares version of support vector machines,[27] and Gaussian processes.[30] A crucial bottleneck of methods that simulate linear algebra computations with the amplitudes of quantum states is state preparation, which often requires one to initialise a quantum system in a state whose amplitudes reflect the features of the entire dataset. Although efficient methods for state preparation are known for specific cases,[34][35] this step easily hides the complexity of the task.[36][37]

Quantum machine learning algorithms based on Grover search

Another approach to improving classical machine learning with quantum information processing uses amplitude amplification methods based on Grover's search algorithm, which has been shown to solve unstructured search problems with a quadratic speedup compared to classical algorithms. These quantum routines can be employed for learning algorithms that translate into an unstructured search task, as can be done, for instance, in the case of the k-medians[38] and the k-nearest neighbors algorithms.[7] Another application is a quadratic speedup in the training of perceptrons.[39] Amplitude amplification is often combined with quantum walks to achieve the same quadratic speedup. Quantum walks have been proposed to enhance Google's PageRank algorithm[40] as well as the performance of reinforcement learning agents in the projective simulation framework.[41]

Quantum-enhanced reinforcement learning

Reinforcement learning is a branch of machine learning distinct from supervised and unsupervised learning, which also admits quantum enhancements.[42][41][43][44] In quantum-enhanced reinforcement learning, a quantum agent interacts with a classical environment and occasionally receives rewards for its actions, which allows the agent to adapt its behavior—in other words, to learn what to do in order to gain more rewards. In some situations, either because of the quantum processing capability of the agent,[41] or due to the possibility to probe the environment in superpositions,[26] a quantum speedup may be achieved. Implementations of these kinds of protocols in superconducting circuits[45] and in systems of trapped ions[46][47] have been proposed.
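As an illustration of the amplitude-encoding idea referenced above (an editor's sketch in plain NumPy, not taken from any quantum SDK), a classical vector of dimension $2^n$ is rescaled to unit norm and its entries are read as the amplitudes of an $n$-qubit state, so the number of qubits grows only logarithmically with the dimension of the data. The example data and the function name are illustrative.

```python
import numpy as np

def amplitude_encode(x):
    """Map a classical vector of length 2**n to the amplitudes of an n-qubit state."""
    x = np.asarray(x, dtype=complex)
    n = int(np.log2(x.size))
    if 2**n != x.size:
        raise ValueError("length must be a power of two (pad with zeros otherwise)")
    state = x / np.linalg.norm(x)   # amplitudes of a quantum state must have unit norm
    return n, state

data = [3.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 5.0]   # 8-dimensional classical input
n_qubits, psi = amplitude_encode(data)
print(n_qubits)            # 3 qubits suffice to hold 8 amplitudes
print(np.abs(psi)**2)      # measurement probabilities in the computational basis
```

Preparing such a state on actual hardware is exactly the state-preparation bottleneck discussed above: the classical-to-quantum loading step can hide the cost that the quantum algorithm saves elsewhere.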
Quantum annealing

Quantum annealing is an optimization technique used to determine the local minima and maxima of a function over a given set of candidate solutions. This is a method of discretizing a function with many local minima or maxima in order to determine the observables of the function. The process can be distinguished from simulated annealing by the quantum tunneling process, by which particles tunnel through kinetic or potential barriers from a high state to a low state. Quantum annealing starts from a superposition of all possible states of a system, weighted equally. Then the time-dependent Schrödinger equation guides the time evolution of the system, serving to affect the amplitude of each state as time increases. Eventually, the system can reach the ground state of the final (instantaneous) Hamiltonian, which encodes the solution to the optimization problem.

Quantum sampling techniques

A computationally hard problem, which is key for some relevant machine learning tasks, is the estimation of averages over probabilistic models defined in terms of a Boltzmann distribution. Sampling from generic probabilistic models is hard: algorithms relying heavily on sampling are expected to remain intractable no matter how large and powerful classical computing resources become. Even though quantum annealers, like those produced by D-Wave Systems, were designed for challenging combinatorial optimization problems, they have recently been recognized as potential candidates to speed up computations that rely on sampling by exploiting quantum effects.[48] Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks.[49][50][51][52][53] The standard approach to training Boltzmann machines relies on the computation of certain averages that can be estimated by standard sampling techniques, such as Markov chain Monte Carlo algorithms. Another possibility is to rely on a physical process, like quantum annealing, that naturally generates samples from a Boltzmann distribution. The objective is to find the optimal control parameters that best represent the empirical distribution of a given dataset. The D-Wave 2X system hosted at NASA Ames Research Center has been recently used for the learning of a special class of restricted Boltzmann machines that can serve as a building block for deep learning architectures.[51] Complementary work that appeared roughly simultaneously showed that quantum annealing can be used for supervised learning in classification tasks.[49] The same device was later used to train a fully connected Boltzmann machine to generate, reconstruct, and classify down-scaled, low-resolution handwritten digits, among other synthetic datasets.[50] In both cases, the models trained by quantum annealing had a similar or better performance in terms of quality. The ultimate question that drives this endeavour is whether there is a quantum speedup in sampling applications. Experience with the use of quantum annealers for combinatorial optimization suggests the answer is not straightforward. Inspired by the success of Boltzmann machines based on the classical Boltzmann distribution, a new machine learning approach based on the quantum Boltzmann distribution of a transverse-field Ising Hamiltonian was recently proposed.[54] Due to the non-commutative nature of quantum mechanics, the training process of the quantum Boltzmann machine can become nontrivial.
This problem was, to some extent, circumvented by introducing bounds on the quantum probabilities, allowing the authors to train the model efficiently by sampling. It is possible that a specific type of quantum Boltzmann machine has been trained in the D-Wave 2X by using a learning rule analogous to that of classical Boltzmann machines.[50][52][55] Quantum annealing is not the only technology for sampling. In a prepare-and-measure scenario, a universal quantum computer prepares a thermal state, which is then sampled by measurements. This can reduce the time required to train a deep restricted Boltzmann machine, and provide a richer and more comprehensive framework for deep learning than classical computing.[56] The same quantum methods also permit efficient training of full Boltzmann machines and multi-layer, fully connected models and do not have well-known classical counterparts. Relying on an efficient thermal state preparation protocol starting from an arbitrary state, quantum-enhanced Markov logic networks exploit the symmetries and the locality structure of the probabilistic graphical model generated by a first-order logic template.[57] This provides an exponential reduction in computational complexity in probabilistic inference, and, while the protocol relies on a universal quantum computer, under mild assumptions it can be embedded on contemporary quantum annealing hardware.

Quantum neural networks

Quantum analogues or generalizations of classical neural nets are often referred to as quantum neural networks. The term is claimed by a wide range of approaches, including the implementation and extension of neural networks using photons, layered variational circuits or quantum Ising-type models. Quantum neural networks are often defined as an expansion on Deutsch's model of a quantum computational network.[58] Within this model, nonlinear and irreversible gates, dissimilar to the Hamiltonian operator, are deployed to speculate the given data set.[58] Such gates make certain phases unable to be observed and generate specific oscillations.[58] Quantum neural networks apply the principles of quantum information and quantum computation to classical neurocomputing.[59] Current research shows that QNN can exponentially increase the amount of computing power and the degrees of freedom for a computer, which for a classical computer are limited by its size.[59] A quantum neural network has computational capabilities to decrease the number of steps, qubits used, and computation time.[58] The wave function is to quantum mechanics what the neuron is to neural networks. To test quantum applications in a neural network, quantum dot molecules are deposited on a substrate of GaAs or similar to record how they communicate with one another. Each quantum dot can be referred to as an island of electric activity, and when such dots are close enough (approximately 10–20 nm)[60] electrons can tunnel underneath the islands. An even distribution across the substrate in sets of two creates dipoles and ultimately two spin states, up or down. These states are commonly known as qubits with corresponding states of |0> and |1> in Dirac notation.[60]

Hidden Quantum Markov Models

Hidden Quantum Markov Models[61] (HQMMs) are a quantum-enhanced version of classical Hidden Markov Models (HMMs), which are typically used to model sequential data in various fields like robotics and natural language processing.
Unlike the approach taken by other quantum-enhanced machine learning algorithms, HQMMs can be viewed as models inspired by quantum mechanics that can be run on classical computers as well.[62] Where classical HMMs use probability vectors to represent hidden 'belief' states, HQMMs use the quantum analogue: density matrices. Recent work has shown that these models can be successfully learned by maximizing the log-likelihood of the given data via classical optimization, and there is some empirical evidence that these models can better model sequential data compared to classical HMMs in practice, although further work is needed to determine exactly when and how these benefits are derived.[62] Additionally, since classical HMMs are a particular kind of Bayes net, an exciting aspect of HQMMs is that the techniques used show how we can perform quantum-analogous Bayesian inference, which should allow for the general construction of the quantum versions of probabilistic graphical models.[62] Fully quantum machine learning[edit] In the most general case of quantum machine learning, both the learning device and the system under study, as well as their interaction, are fully quantum. This section gives a few examples of results on this topic. One class of problem that can benefit from the fully quantum approach is that of 'learning' unknown quantum states, processes or measurements, in the sense that one can subsequently reproduce them on another quantum system. For example, one may wish to learn a measurement that discriminates between two coherent states, given not a classical description of the states to be discriminated, but instead a set of example quantum systems prepared in these states. The naive approach would be to first extract a classical description of the states and then implement an ideal discriminating measurement based on this information. This would only require classical learning. However, one can show that a fully quantum approach is strictly superior in this case.[63] (This also relates to work on quantum pattern matching.[64]) The problem of learning unitary transformations can be approached in a similar way.[65] Going beyond the specific problem of learning states and transformations, the task of clustering also admits a fully quantum version, wherein both the oracle which returns the distance between data-points and the information processing device which runs the algorithm are quantum.[66] Finally, a general framework spanning supervised, unsupervised and reinforcement learning in the fully quantum setting was introduced in,[26] where it was also shown that the possibility of probing the environment in superpositions permits a quantum speedup in reinforcement learning. Classical learning applied to quantum problems[edit] The term quantum machine learning is also used for approaches that apply classical methods of machine learning to the study of quantum systems. A prime example is the use of classical learning techniques to process large amounts of experimental or calculated (for example by solving Schrodinger's equation data in order to characterize an unknown quantum system (for instance in the context of quantum information theory and for the development of quantum technologies or computational materials design), but there are also more exotic applications. Noisy data[edit] The ability to experimentally control and prepare increasingly complex quantum systems brings with it a growing need to turn large and noisy data sets into meaningful information. 
This is a problem that has already been studied extensively in the classical setting, and consequently, many existing machine learning techniques can be naturally adapted to more efficiently address experimentally relevant problems. For example, Bayesian methods and concepts of algorithmic learning can be fruitfully applied to tackle quantum state classification,[67] Hamiltonian learning,[68] and the characterization of an unknown unitary transformation.[69][70] Other problems that have been addressed with this approach are given in the following list:
• Identifying an accurate model for the dynamics of a quantum system, through the reconstruction of the Hamiltonian;[71][72][73]
• Extracting information on unknown states;[74][75][76][67][77]
• Learning unknown unitary transformations and measurements;[69][70]
• Engineering of quantum gates from qubit networks with pairwise interactions, using time-dependent[78] or time-independent[79] Hamiltonians.

Calculated and noise-free data

Quantum machine learning can also be applied to dramatically accelerate the prediction of quantum properties of molecules and materials.[80] This can be helpful for the computational design of new molecules or materials. Some examples include:
• Interpolating interatomic potentials;[81]
• Inferring molecular atomization energies throughout chemical compound space;[82]
• Accurate potential energy surfaces with restricted Boltzmann machines;[83]
• Automatic generation of new quantum experiments;[18][19]
• Solving the many-body, static and time-dependent Schrödinger equation;[84]
• Identifying phase transitions from entanglement spectra;[85]
• Generating adaptive feedback schemes for quantum metrology.[86]

Variational Circuits

Variational circuits are a family of algorithms which utilize training based on circuit parameters and an objective function.[87] Variational circuits are generally composed of a classical device communicating input parameters (random or pre-trained parameters) into a quantum device, along with a classical mathematical optimization function. These circuits are very heavily dependent on the architecture of the proposed quantum device because parameters are adjusted based solely on the classical components within the device.[88] Though this application is still in its infancy in the field of quantum machine learning, it has high promise for generating more efficient optimization functions. (A toy, classically simulated sketch of such a parameter-training loop is included in the Implementations and experiments section below.)

Quantum learning theory

Quantum learning theory pursues a mathematical analysis of the quantum generalizations of classical learning models and of the possible speed-ups or other improvements that they may provide. The framework is very similar to that of classical computational learning theory, but the learner in this case is a quantum information processing device, while the data may be either classical or quantum. Quantum learning theory should be contrasted with the quantum-enhanced machine learning discussed above, where the goal was to consider specific problems and to use quantum protocols to improve the time complexity of classical algorithms for these problems. Although quantum learning theory is still under development, partial results in this direction have been obtained.[89] The starting point in learning theory is typically a concept class, a set of possible concepts. Usually a concept is a function on some domain, such as $\{0,1\}^n$.
For example, the concept class could be the set of disjunctive normal form (DNF) formulas on n bits or the set of Boolean circuits of some constant depth. The goal for the learner is to learn (exactly or approximately) an unknown target concept from this concept class. The learner may be actively interacting with the target concept, or passively receiving samples from it. In active learning, a learner can make membership queries to the target concept c, asking for its value c(x) on inputs x chosen by the learner. The learner then has to reconstruct the exact target concept, with high probability. In the model of quantum exact learning, the learner can make membership queries in quantum superposition. If the complexity of the learner is measured by the number of membership queries it makes, then quantum exact learners can be polynomially more efficient than classical learners for some concept classes, but not more.[90] If complexity is measured by the amount of time the learner uses, then there are concept classes that can be learned efficiently by quantum learners but not by classical learners (under plausible complexity-theoretic assumptions).[90] A natural model of passive learning is Valiant's probably approximately correct (PAC) learning. Here the learner receives random examples (x,c(x)), where x is distributed according to some unknown distribution D. The learner's goal is to output a hypothesis function h such that h(x)=c(x) with high probability when x is drawn according to D. The learner has to be able to produce such an 'approximately correct' h for every D and every target concept c in its concept class. We can consider replacing the random examples by potentially more powerful quantum examples . In the PAC model (and the related agnostic model), this doesn't significantly reduce the number of examples needed: for every concept class, classical and quantum sample complexity are the same up to constant factors.[91] However, for learning under some fixed distribution D, quantum examples can be very helpful, for example for learning DNF under the uniform distribution.[92] When considering time complexity, there exist concept classes that can be PAC-learned efficiently by quantum learners, even from classical examples, but not by classical learners (again, under plausible complexity-theoretic assumptions).[90] This passive learning type is also the most common scheme in supervised learning: a learning algorithm typically takes the training examples fixed, without the ability to query the label of unlabelled examples. Outputting a hypothesis h is a step of induction. Classically, an inductive model splits into a training and an application phase: the model parameters are estimated in the training phase, and the learned model is applied an arbitrary many times in the application phase. In the asymptotic limit of the number of applications, this splitting of phases is also present with quantum resources.[93] Implementations and experiments[edit] The earliest experiments were conducted using the adiabatic D-Wave quantum computer, for instance, to detect cars in digital images using regularized boosting with a nonconvex objective function in a demonstration in 2009.[94] Many experiments followed on the same architecture, and leading tech companies have shown interest in the potential of quantum machine learning for future technological implementations. 
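The following is an editor's illustration of the variational-circuit idea described in the Variational Circuits section above; it is a toy, classically simulated training loop, not a reproduction of any experiment cited in this article. A single-qubit circuit with two rotation parameters is trained by plain gradient descent, with gradients obtained from the parameter-shift rule, to minimise the expectation value of Pauli-Z; the gates, objective, learning rate and starting parameters are all assumptions chosen for the example.

```python
import numpy as np

# Toy "variational circuit": one qubit, two rotation parameters.
# Objective: minimise <Z>, i.e. steer the qubit from |0> towards |1>.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta): return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * Y
def rx(theta): return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

def expect_z(params):
    psi = rx(params[1]) @ ry(params[0]) @ np.array([1, 0], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))

def gradient(params):
    # Parameter-shift rule: exact gradient from two shifted circuit evaluations.
    g = np.zeros_like(params)
    for k in range(len(params)):
        shift = np.zeros_like(params)
        shift[k] = np.pi / 2
        g[k] = 0.5 * (expect_z(params + shift) - expect_z(params - shift))
    return g

params = np.array([0.1, -0.2])          # arbitrary starting parameters
for step in range(200):
    params -= 0.2 * gradient(params)    # classical optimizer updates the circuit parameters

print(expect_z(params))  # close to -1: the circuit has learned to flip the qubit
```

On real hardware the role of `expect_z` would be played by repeated circuit executions and measurements on the quantum device, while the parameter updates remain classical, which is the hybrid structure the article describes.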
In 2013, Google Research, NASA, and the Universities Space Research Association launched the Quantum Artificial Intelligence Lab, which explores the use of the adiabatic D-Wave quantum computer.[95][96] A more recent example trained a probabilistic generative model with arbitrary pairwise connectivity, showing that the model is capable of generating handwritten digits as well as reconstructing noisy images of bars and stripes and of handwritten digits.[50]
Using a different annealing technology based on nuclear magnetic resonance (NMR), a quantum Hopfield network was implemented in 2009 that mapped the input data and memorized data to Hamiltonians, allowing the use of adiabatic quantum computation.[97] NMR technology also enables universal quantum computing[citation needed], and it was used for the first experimental implementation of a quantum support vector machine to distinguish the handwritten digits '6' and '9' on a liquid-state quantum computer in 2015.[98] The training data involved pre-processing each image to map it to a normalized 2-dimensional vector, so that the image is represented as the state of a qubit. The two entries of the vector are the vertical and horizontal ratios of the pixel intensity of the image. Once the vectors are defined on the feature space, the quantum support vector machine was implemented to classify the unknown input vector. The readout avoids costly quantum tomography by reading out the final state in terms of the direction (up/down) of the NMR signal.
Photonic implementations are attracting more attention,[99] not least because they do not require extensive cooling. Simultaneous spoken-digit and speaker recognition and chaotic time-series prediction were demonstrated at data rates beyond 1 gigabyte per second in 2013.[100] Using non-linear photonics to implement an all-optical linear classifier, a perceptron model was capable of learning the classification boundary iteratively from training data through a feedback rule.[101] A core building block in many learning algorithms is to calculate the distance between two vectors: this was first experimentally demonstrated for up to eight dimensions using entangled qubits in a photonic quantum computer in 2015.[102]
Recently, based on a neuromimetic approach, a novel ingredient has been added to the field of quantum machine learning, in the form of a so-called quantum memristor, a quantized model of the standard classical memristor.[103] This device can be constructed by means of a tunable resistor, weak measurements on the system, and a classical feed-forward mechanism. An implementation of a quantum memristor in superconducting circuits has been proposed,[104] and an experiment with quantum dots has been performed.[105] A quantum memristor would implement nonlinear interactions in the quantum dynamics, which would aid the search for a fully functional quantum neural network.
Since 2016, IBM has offered an online cloud-based platform for quantum software developers, called the IBM Q Experience. This platform consists of several fully operational quantum processors accessible via the IBM Web API. In doing so, the company is encouraging software developers to pursue new algorithms through a development environment with quantum capabilities. New architectures are being explored on an experimental basis, up to 32 qubits, utilizing both trapped-ion and superconducting quantum computing methods.
See also
References
1. ^ Schuld, Maria; Petruccione, Francesco (2018). Supervised Learning with Quantum Computers.
Quantum Science and Technology. doi:10.1007/978-3-319-96424-9. ISBN 978-3-319-96423-2. 2. ^ a b Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco (2014). "An introduction to quantum machine learning". Contemporary Physics. 56 (2): 172–185. arXiv:1409.3097. Bibcode:2015ConPh..56..172S. CiteSeerX doi:10.1080/00107514.2014.964942. 3. ^ Wittek, Peter (2014). Quantum Machine Learning: What Quantum Computing Means to Data Mining. Academic Press. ISBN 978-0-12-800953-6. 4. ^ Adcock, Jeremy; Allen, Euan; Day, Matthew; Frick, Stefan; Hinchliff, Janna; Johnson, Mack; Morley-Short, Sam; Pallister, Sam; Price, Alasdair; Stanisic, Stasja (2015). "Advances in quantum machine learning". arXiv:1512.02900 [quant-ph]. 5. ^ Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth (2017). "Quantum machine learning". Nature. 549 (7671): 195–202. arXiv:1611.09347. Bibcode:2017Natur.549..195B. doi:10.1038/nature23474. PMID 28905917. 6. ^ Perdomo-Ortiz, Alejandro; Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak (2018). "Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers". Quantum Science and Technology. 3: 030502. doi:10.1088/2058-9565/aab859. 7. ^ a b Wiebe, Nathan; Kapoor, Ashish; Svore, Krysta (2014). "Quantum Algorithms for Nearest-Neighbor Methods for Supervised and Unsupervised Learning". Quantum Information & Computation. 15 (3): 0318–0358. arXiv:1401.2142. Bibcode:2014arXiv1401.2142W. 8. ^ Lloyd, Seth; Mohseni, Masoud; Rebentrost, Patrick (2013). "Quantum algorithms for supervised and unsupervised machine learning". arXiv:1307.0411 [quant-ph]. 9. ^ Yoo, Seokwon; Bang, Jeongho; Lee, Changhyoup; Lee, Jinhyoung (2014). "A quantum speedup in machine learning: Finding a N-bit Boolean function for a classification". New Journal of Physics. 16 (10): 103014. arXiv:1303.6055. Bibcode:2014NJPh...16j3014Y. doi:10.1088/1367-2630/16/10/103014. 10. ^ Lee, Joong-Sung; Bang, Jeongho; Hong, Sunghyuk; Lee, Changhyoup; Seol, Kang Hee; Lee, Jinhyoung; Lee, Kwang-Geol (2019). "Experimental demonstration of quantum learning speedup with classical input data". Physical Review A. 99 (1): 012313. arXiv:1706.01561. doi:10.1103/PhysRevA.99.012313. 11. ^ Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco (2014-10-15). "An introduction to quantum machine learning". Contemporary Physics. 56 (2): 172–185. CiteSeerX doi:10.1080/00107514.2014.964942. ISSN 0010-7514. 12. ^ Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro (2017-11-30). "Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models". Physical Review X. 7 (4): 041052. arXiv:1609.02542. doi:10.1103/PhysRevX.7.041052. ISSN 2160-3308. 13. ^ Farhi, Edward; Neven, Hartmut (2018-02-16). "Classification with Quantum Neural Networks on Near Term Processors". arXiv:1802.06002 [quant-ph]. 14. ^ Schuld, Maria; Bocharov, Alex; Svore, Krysta; Wiebe, Nathan (2018-04-02). "Circuit-centric quantum classifiers". arXiv:1804.00633 [quant-ph]. 15. ^ Yu, Shang; Albarran-Arriagada, F.; Retamal, J. C.; Wang, Yi-Tao; Liu, Wei; Ke, Zhi-Jin; Meng, Yu; Li, Zhi-Peng; Tang, Jian-Shun (2018-08-28). "Reconstruction of a Photonic Qubit State with Quantum Reinforcement Learning". arXiv:1808.09241 [quant-ph]. 16. ^ Broecker, Peter; Assaad, Fakher F.; Trebst, Simon (2017-07-03). "Quantum phase recognition via unsupervised machine learning". arXiv:1707.00663 [cond-mat.str-el]. 17. ^ Huembeli, Patrick; Dauphin, Alexandre; Wittek, Peter (2018). 
"Identifying Quantum Phase Transitions with Adversarial Neural Networks". Physical Review B. 97 (13): 134109. arXiv:1710.08382. doi:10.1103/PhysRevB.97.134109. ISSN 2469-9950. 18. ^ a b Krenn, Mario (2016-01-01). "Automated Search for new Quantum Experiments". Physical Review Letters. 116 (9): 090405. arXiv:1509.02749. Bibcode:2016PhRvL.116i0405K. doi:10.1103/PhysRevLett.116.090405. PMID 26991161. 19. ^ a b Knott, Paul (2016-03-22). "A search algorithm for quantum state engineering and metrology". New Journal of Physics. 18 (7): 073033. arXiv:1511.05327. Bibcode:2016NJPh...18g3033K. doi:10.1088/1367-2630/18/7/073033. 20. ^ Melnikov, Alexey A.; Nautrup, Hendrik Poulsen; Krenn, Mario; Dunjko, Vedran; Tiersch, Markus; Zeilinger, Anton; Briegel, Hans J. (1221). "Active learning machine learns to create new quantum experiments". Proceedings of the National Academy of Sciences. 115 (6): 1221–1226. arXiv:1706.00868. doi:10.1073/pnas.1714936115. ISSN 0027-8424. PMC 5819408. PMID 29348200. 21. ^ Huggins, William; Patel, Piyush; Whaley, K. Birgitta; Stoudenmire, E. Miles (2018-03-30). "Towards Quantum Machine Learning with Tensor Networks". arXiv:1803.11537 [quant-ph]. 22. ^ Carleo, Giuseppe; Nomura, Yusuke; Imada, Masatoshi (2018-02-26). "Constructing exact representations of quantum many-body systems with deep neural networks". arXiv:1802.09558 [cond-mat.dis-nn]. 23. ^ Bény, Cédric (2013-01-14). "Deep learning and the renormalization group". arXiv:1301.3124 [quant-ph]. 24. ^ Arunachalam, Srinivasan; de Wolf, Ronald (2017-01-24). "A Survey of Quantum Learning Theory". arXiv:1701.06806 [quant-ph]. 25. ^ Aïmeur, Esma; Brassard, Gilles; Gambs, Sébastien (2006-06-07). Machine Learning in a Quantum World. Advances in Artificial Intelligence. Lecture Notes in Computer Science. 4013. pp. 431–442. doi:10.1007/11766247_37. ISBN 978-3-540-34628-9. 26. ^ a b c Dunjko, Vedran; Taylor, Jacob M.; Briegel, Hans J. (2016-09-20). "Quantum-Enhanced Machine Learning". Physical Review Letters. 117 (13): 130501. arXiv:1610.08251. Bibcode:2016PhRvL.117m0501D. doi:10.1103/PhysRevLett.117.130501. PMID 27715099. 27. ^ a b Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth (2014). "Quantum Support Vector Machine for Big Data Classification". Physical Review Letters. 113 (13): 130503. arXiv:1307.0471. Bibcode:2014PhRvL.113m0503R. doi:10.1103/PhysRevLett.113.130503. hdl:1721.1/90391. PMID 25302877. 28. ^ a b Wiebe, Nathan; Braun, Daniel; Lloyd, Seth (2012). "Quantum Algorithm for Data Fitting". Physical Review Letters. 109 (5): 050505. arXiv:1204.5242. Bibcode:2012PhRvL.109e0505W. doi:10.1103/PhysRevLett.109.050505. PMID 23006156. 29. ^ a b Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco (2016). "Prediction by linear regression on a quantum computer". Physical Review A. 94 (2): 022342. arXiv:1601.07823. Bibcode:2016PhRvA..94b2342S. doi:10.1103/PhysRevA.94.022342. 30. ^ a b Zhao, Zhikuan; Fitzsimons, Jack K.; Fitzsimons, Joseph F. (2015). "Quantum assisted Gaussian process regression". arXiv:1512.03929 [quant-ph]. 31. ^ Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2008). "Quantum algorithm for solving linear systems of equations". Physical Review Letters. 103 (15): 150502. arXiv:0811.3171. Bibcode:2009PhRvL.103o0502H. doi:10.1103/PhysRevLett.103.150502. PMID 19905613. 32. ^ Berry, Dominic W.; Childs, Andrew M.; Kothari, Robin (2015). Hamiltonian simulation with nearly optimal dependence on all parameters. 56th Annual Symposium on Foundations of Computer Science. IEEE. pp. 792–809. arXiv:1501.01715. 
doi:10.1109/FOCS.2015.54. 33. ^ Lloyd, Seth; Mohseni, Masoud; Rebentrost, Patrick (2014). "Quantum principal component analysis". Nature Physics. 10 (9): 631. arXiv:1307.0401. Bibcode:2014NatPh..10..631L. CiteSeerX doi:10.1038/nphys3029. 34. ^ Soklakov, Andrei N.; Schack, Rüdiger (2006). "Efficient state preparation for a register of quantum bits". Physical Review A. 73 (1): 012307. arXiv:quant-ph/0408045. Bibcode:2006PhRvA..73a2307S. doi:10.1103/PhysRevA.73.012307. 35. ^ Giovannetti, Vittorio; Lloyd, Seth; MacCone, Lorenzo (2008). "Quantum Random Access Memory". Physical Review Letters. 100 (16): 160501. arXiv:0708.1879. Bibcode:2008PhRvL.100p0501G. doi:10.1103/PhysRevLett.100.160501. PMID 18518173. 36. ^ Aaronson, Scott (2015). "Read the fine print". Nature Physics. 11 (4): 291–293. Bibcode:2015NatPh..11..291A. doi:10.1038/nphys3272. 37. ^ Bang, Jeongho; Dutta, Arijit; Lee, Seung-Woo; Kim, Jaewan (2019). "Optimal usage of quantum random access memory in quantum machine learning". Physical Review A. 99 (1): 012326. doi:10.1103/PhysRevA.99.012326. 38. ^ Aïmeur, Esma; Brassard, Gilles; Gambs, Sébastien (2013-02-01). "Quantum speed-up for unsupervised learning". Machine Learning. 90 (2): 261–287. doi:10.1007/s10994-012-5316-5. ISSN 0885-6125. 39. ^ Wiebe, Nathan; Kapoor, Ashish; Svore, Krysta M. (2016). Quantum Perceptron Models. Advances in Neural Information Processing Systems. 29. pp. 3999–4007. arXiv:1602.04799. Bibcode:2016arXiv160204799W. 40. ^ Paparo, Giuseppe Davide; Martin-Delgado, Miguel Angel (2012). "Google in a Quantum Network". Scientific Reports. 2 (444): 444. arXiv:1112.2079. Bibcode:2012NatSR...2E.444P. doi:10.1038/srep00444. PMC 3370332. PMID 22685626. 41. ^ a b c Paparo, Giuseppe Davide; Dunjko, Vedran; Makmal, Adi; Martin-Delgado, Miguel Angel; Briegel, Hans J. (2014). "Quantum Speedup for Active Learning Agents". Physical Review X. 4 (3): 031002. arXiv:1401.4997. Bibcode:2014PhRvX...4c1002P. doi:10.1103/PhysRevX.4.031002. 42. ^ Dong, Daoyi; Chen, Chunlin; Li, Hanxiong; Tarn, Tzyh-Jong (2008). "Quantum Reinforcement Learning". IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). 38 (5): 1207–1220. arXiv:0810.3828. CiteSeerX doi:10.1109/TSMCB.2008.925743. PMID 18784007. 43. ^ Crawford, Daniel; Levit, Anna; Ghadermarzy, Navid; Oberoi, Jaspreet S.; Ronagh, Pooya (2018). "Reinforcement Learning Using Quantum Boltzmann Machines". arXiv:1612.05695 [quant-ph]. 44. ^ Briegel, Hans J.; Cuevas, Gemma De las (2012-05-15). "Projective simulation for artificial intelligence". Scientific Reports. 2 (400): 400. arXiv:1104.3787. Bibcode:2012NatSR...2E.400B. doi:10.1038/srep00400. ISSN 2045-2322. PMC 3351754. PMID 22590690. 45. ^ Lamata, Lucas (2017). "Basic protocols in quantum reinforcement learning with superconducting circuits". Scientific Reports. 7 (1): 1609. arXiv:1701.05131. Bibcode:2017NatSR...7.1609L. doi:10.1038/s41598-017-01711-6. PMC 5431677. PMID 28487535. 46. ^ Dunjko, V.; Friis, N.; Briegel, H. J. (2015-01-01). "Quantum-enhanced deliberation of learning agents using trapped ions". New Journal of Physics. 17 (2): 023006. arXiv:1407.2830. Bibcode:2015NJPh...17b3006D. doi:10.1088/1367-2630/17/2/023006. ISSN 1367-2630. 47. ^ Sriarunothai, Th.; Wölk, S.; Giri, G. S.; Friis, N.; Dunjko, V.; Briegel, H. J.; Wunderlich, Ch. (2019). "Speeding-up the decision making of a learning agent using an ion trap quantum processor". Quantum Science and Technology. 4 (1): 015014. doi:10.1088/2058-9565/aaef5e. ISSN 2058-9565. 48. 
^ Biswas, Rupak; Jiang, Zhang; Kechezi, Kostya; Knysh, Sergey; Mandrà, Salvatore; O’Gorman, Bryan; Perdomo-Ortiz, Alejando; Pethukov, Andre; Realpe-Gómez, John; Rieffel, Eleanor; Venturelli, Davide; Vasko, Fedir; Wang, Zhihui (2016). "A NASA perspective on quantum computing: Opportunities and challenges". Parallel Computing. 64: 81–98. doi:10.1016/j.parco.2016.11.002. 49. ^ a b Adachi, Steven H.; Henderson, Maxwell P. (2015). "Application of quantum annealing to training of deep neural networks". arXiv:1510.06356 [quant-ph]. 50. ^ a b c d Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro (2017). "Quantum-assisted learning of graphical models with arbitrary pairwise connectivity". Physical Review X. 7 (4): 041052. arXiv:1609.02542. Bibcode:2017PhRvX...7d1052B. doi:10.1103/PhysRevX.7.041052. 51. ^ a b Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro (2016). "Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning". Physical Review A. 94 (2): 022308. arXiv:1510.07611. Bibcode:2016PhRvA..94b2308B. doi:10.1103/PhysRevA.94.022308. 52. ^ a b Korenkevych, Dmytro; Xue, Yanbo; Bian, Zhengbing; Chudak, Fabian; Macready, William G.; Rolfe, Jason; Andriyash, Evgeny (2016). "Benchmarking quantum hardware for training of fully visible Boltzmann machines". arXiv:1611.04528 [quant-ph]. 53. ^ Khoshaman, Amir; Vinci, Walter; Denis, Brandon; Andriyash, Evgeny; Amin, Mohammad H (2018-09-12). "Quantum variational autoencoder". Quantum Science and Technology. 4 (1): 014001. doi:10.1088/2058-9565/aada1f. ISSN 2058-9565. 54. ^ Amin, Mohammad H.; Andriyash, Evgeny; Rolfe, Jason; Kulchytskyy, Bohdan; Melko, Roger (2018). "Quantum Boltzmann machines". Phys. Rev. X. 8 (21050): 021050. arXiv:1601.02036. doi:10.1103/PhysRevX.8.021050. 55. ^ "Phys. Rev. E 72, 026701 (2005): Quantum annealing in a kinetically co…". archive.is. 2014-01-13. Retrieved 2018-12-07. 56. ^ Wiebe, Nathan; Kapoor, Ashish; Svore, Krysta M. (2014). "Quantum deep learning". arXiv:1412.3489 [quant-ph]. 57. ^ Wittek, Peter; Gogolin, Christian (2017). "Quantum Enhanced Inference in Markov Logic Networks". Scientific Reports. 7 (45672): 45672. arXiv:1611.08104. Bibcode:2017NatSR...745672W. doi:10.1038/srep45672. PMC 5395824. PMID 28422093. 58. ^ a b c d Gupta, Sanjay; Zia, R.K.P. (2001-11-01). "Quantum Neural Networks". Journal of Computer and System Sciences. 63 (3): 355–383. doi:10.1006/jcss.2001.1769. ISSN 0022-0000. 59. ^ a b Ezhov, Alexandr A.; Ventura, Dan (2000), "Quantum Neural Networks", Future Directions for Intelligent Systems and Information Sciences, Physica-Verlag HD, pp. 213–235, CiteSeerX, doi:10.1007/978-3-7908-1856-7_11, ISBN 9783790824704 60. ^ a b Behrman, E.C.; Nash, L.R.; Steck, J.E.; Chandrashekar, V.G.; Skinner, S.R. (2000-10-01). "Simulations of quantum neural networks". Information Sciences. 128 (3–4): 257–269. doi:10.1016/S0020-0255(00)00056-6. ISSN 0020-0255. 61. ^ Clark, Lewis A.; Huang W., Wei; Barlow, Thomas H.; Beige, Almut (2015). "Hidden Quantum Markov Models and Open Quantum Systems with Instantaneous Feedback". In Sanayei, Ali; Rössler, Otto E.; Zelinka, Ivan (eds.). ISCS 2014: Interdisciplinary Symposium on Complex Systems. Emergence, Complexity and Computation. Iscs , P. 143, Springer (2015). Emergence, Complexity and Computation. 14. pp. 131–151. arXiv:1406.5847. CiteSeerX doi:10.1007/978-3-319-10759-2_16. ISBN 978-3-319-10759-2. 62. 
^ a b c Srinivasan, Siddarth; Gordon, Geoff; Boots, Byron (2018). "Learning Hidden Quantum Markov Models" (PDF). Aistats. 63. ^ Sentís, Gael; Guţă, Mădălin; Adesso, Gerardo (9 July 2015). "Quantum learning of coherent states". EPJ Quantum Technology. 2 (1). doi:10.1140/epjqt/s40507-015-0030-4. 64. ^ Sasaki, Masahide; Carlini, Alberto (6 August 2002). "Quantum learning and universal quantum matching machine". Physical Review A. 66 (2): 022303. arXiv:quant-ph/0202173. Bibcode:2002PhRvA..66b2303S. doi:10.1103/PhysRevA.66.022303. 65. ^ Bisio, Alessandro; Chiribella, Giulio; D’Ariano, Giacomo Mauro; Facchini, Stefano; Perinotti, Paolo (25 March 2010). "Optimal quantum learning of a unitary transformation". Physical Review A. 81 (3): 032324. arXiv:0903.0543. Bibcode:2010PhRvA..81c2324B. doi:10.1103/PhysRevA.81.032324. 66. ^ Aïmeur, Esma; Brassard, Gilles; Gambs, Sébastien (1 January 2007). Quantum Clustering Algorithms. Proceedings of the 24th International Conference on Machine Learning. pp. 1–8. CiteSeerX doi:10.1145/1273496.1273497. ISBN 9781595937933. 67. ^ a b Sentís, Gael; Calsamiglia, John; Muñoz-Tapia, Raúl; Bagan, Emilio (2012). "Quantum learning without quantum memory". Scientific Reports. 2: 708. arXiv:1106.2742. Bibcode:2012NatSR...2E.708S. doi:10.1038/srep00708. PMC 3464493. PMID 23050092. 68. ^ Wiebe, Nathan; Granade, Christopher; Ferrie, Christopher; Cory, David (2014). "Quantum Hamiltonian learning using imperfect quantum resources". Physical Review A. 89 (4): 042314. arXiv:1311.5269. Bibcode:2014PhRvA..89d2314W. doi:10.1103/physreva.89.042314. hdl:10453/118943. 69. ^ a b Bisio, Alessandro; Chiribella, Giulio; D'Ariano, Giacomo Mauro; Facchini, Stefano; Perinotti, Paolo (2010). "Optimal quantum learning of a unitary transformation". Physical Review A. 81 (3): 032324. arXiv:0903.0543. Bibcode:2010PhRvA..81c2324B. doi:10.1103/PhysRevA.81.032324. 70. ^ a b Jeongho; Junghee Ryu, Bang; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung (2014). "A strategy for quantum algorithm design assisted by machine learning". New Journal of Physics. 16 (1): 073017. arXiv:1304.2169. Bibcode:2014NJPh...16a3017K. doi:10.1088/1367-2630/16/1/013017. 71. ^ Granade, Christopher E.; Ferrie, Christopher; Wiebe, Nathan; Cory, D. G. (2012-10-03). "Robust Online Hamiltonian Learning". New Journal of Physics. 14 (10): 103013. arXiv:1207.1655. Bibcode:2012NJPh...14j3013G. doi:10.1088/1367-2630/14/10/103013. ISSN 1367-2630. 72. ^ Wiebe, Nathan; Granade, Christopher; Ferrie, Christopher; Cory, D. G. (2014). "Hamiltonian Learning and Certification Using Quantum Resources". Physical Review Letters. 112 (19): 190501. arXiv:1309.0876. Bibcode:2014PhRvL.112s0501W. doi:10.1103/PhysRevLett.112.190501. ISSN 0031-9007. PMID 24877920. 73. ^ Wiebe, Nathan; Granade, Christopher; Ferrie, Christopher; Cory, David G. (2014-04-17). "Quantum Hamiltonian Learning Using Imperfect Quantum Resources". Physical Review A. 89 (4): 042314. arXiv:1311.5269. Bibcode:2014PhRvA..89d2314W. doi:10.1103/PhysRevA.89.042314. hdl:10453/118943. ISSN 1050-2947. 74. ^ Sasaki, Madahide; Carlini, Alberto; Jozsa, Richard (2001). "Quantum Template Matching". Physical Review A. 64 (2): 022317. arXiv:quant-ph/0102020. Bibcode:2001PhRvA..64b2317S. doi:10.1103/PhysRevA.64.022317. 76. ^ Sentís, Gael; Guţă, Mădălin; Adesso, Gerardo (2015-07-09). "Quantum learning of coherent states". EPJ Quantum Technology. 2 (1): 17. doi:10.1140/epjqt/s40507-015-0030-4. ISSN 2196-0763. 77. ^ Lee, Sang Min; Lee, Jinhyoung; Bang, Jeongho (2018-11-02). 
"Learning unknown pure quantum states". Physical Review A. 98 (5): 052302. doi:10.1103/PhysRevA.98.052302. 78. ^ Zahedinejad, Ehsan; Ghosh, Joydip; Sanders, Barry C. (2016-11-16). "Designing High-Fidelity Single-Shot Three-Qubit Gates: A Machine Learning Approach". Physical Review Applied. 6 (5): 054005. arXiv:1511.08862. Bibcode:2016PhRvP...6e4005Z. doi:10.1103/PhysRevApplied.6.054005. ISSN 2331-7019. 79. ^ Banchi, Leonardo; Pancotti, Nicola; Bose, Sougato (2016-07-19). "Quantum gate learning in qubit networks: Toffoli gate without time-dependent control". npj Quantum Information. 2: 16019. Bibcode:2016npjQI...216019B. doi:10.1038/npjqi.2016.19. 80. ^ von Lilienfeld, O. Anatole (2018-04-09). "Quantum Machine Learning in Chemical Compound Space". Angewandte Chemie International Edition. 57 (16): 4164. doi:10.1002/anie.201709686. 81. ^ Bartok, Albert P.; Payne, Mike C.; Risi, Kondor; Csanyi, Gabor (2010). "Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons". Physical Review Letters. 104 (13): 136403. doi:10.1103/PhysRevLett.104.136403. 82. ^ Rupp, Matthias; Tkatchenko, Alexandre; Muller, Klaus-Robert; von Lilienfeld, O. Anatole (2012-01-31). "Fast and Accurate Modeling of Molecular Atomization Energies With Machine Learning". Physical Review Letters. 355 (6325): 602. arXiv:1109.2618. Bibcode:2012PhRvL.108e8301R. doi:10.1103/PhysRevLett.108.058301. PMID 22400967. 83. ^ Xia, Rongxin; Kais, Sabre (2018-10-10). "Quantum machine learning for electronic structure calculations". Nature Communications. 9: 4195. doi:10.1038/s41467-018-06598-z. 84. ^ Carleo, Giuseppe; Troyer, Matthias (2017-02-09). "Solving the quantum many-body problem with artificial neural networks". Science. 355 (6325): 602–606. arXiv:1606.02318. Bibcode:2017Sci...355..602C. doi:10.1126/science.aag2302. PMID 28183973. 85. ^ van Nieuwenburg, Evert; Liu, Ye-Hua; Huber, Sebastian (2017). "Learning phase transitions by confusion". Nature Physics. 13 (5): 435. arXiv:1610.02048. Bibcode:2017NatPh..13..435V. doi:10.1038/nphys4037. 86. ^ Hentschel, Alexander (2010-01-01). "Machine Learning for Precise Quantum Measurement". Physical Review Letters. 104 (6): 063603. arXiv:0910.0762. Bibcode:2010PhRvL.104f3603H. doi:10.1103/PhysRevLett.104.063603. PMID 20366821. 87. ^ "Variational Circuits — Quantum Machine Learning Toolbox 0.7.1 documentation". qmlt.readthedocs.io. Retrieved 2018-12-06. 88. ^ Schuld, Maria (2018-06-12). "Quantum Machine Learning 1.0". XanaduAI. Retrieved 2018-12-07. 89. ^ Arunachalam, Srinivasan; de Wolf, Ronald (2017). "A Survey of Quantum Learning Theory". arXiv:1701.06806 [quant-ph]. 90. ^ a b c Servedio, Rocco A.; Gortler, Steven J. (2004). "Equivalences and Separations Between Quantum and Classical Learnability". SIAM Journal on Computing. 33 (5): 1067–1092. CiteSeerX doi:10.1137/S0097539704412910. 91. ^ Arunachalam, Srinivasan; de Wolf, Ronald (2016). "Optimal Quantum Sample Complexity of Learning Algorithms". arXiv:1607.00932 [quant-ph]. 92. ^ Nader, Bshouty H.; Jeffrey, Jackson C. (1999). "Learning DNF over the Uniform Distribution Using a Quantum Example Oracle". SIAM Journal on Computing. 28 (3): 1136–1153. CiteSeerX doi:10.1137/S0097539795293123. 93. ^ Monràs, Alex; Sentís, Gael; Wittek, Peter (2017). "Inductive supervised quantum learning". Physical Review Letters. 118 (19): 190503. arXiv:1605.07541. Bibcode:2017PhRvL.118s0503M. doi:10.1103/PhysRevLett.118.190503. PMID 28548536. 94. 
^ "NIPS 2009 Demonstration: Binary Classification using Hardware Implementation of Quantum Annealing" (PDF). Static.googleusercontent.com. Retrieved 26 November 2014. 95. ^ "Google Quantum A.I. Lab Team". Google Plus. 31 January 2017. Retrieved 31 January 2017. 96. ^ "NASA Quantum Artificial Intelligence Laboratory". NASA. NASA. 31 January 2017. Retrieved 31 January 2017. 97. ^ Neigovzen, Rodion; Neves, Jorge L.; Sollacher, Rudolf; Glaser, Steffen J. (2009). "Quantum pattern recognition with liquid-state nuclear magnetic resonance". Physical Review A. 79 (4): 042321. arXiv:0802.1592. Bibcode:2009PhRvA..79d2321N. doi:10.1103/PhysRevA.79.042321. 98. ^ Li, Zhaokai; Liu, Xiaomei; Xu, Nanyang; Du, Jiangfeng (2015). "Experimental Realization of a Quantum Support Vector Machine". Physical Review Letters. 114 (14): 140504. arXiv:1410.1054. Bibcode:2015PhRvL.114n0504L. doi:10.1103/PhysRevLett.114.140504. PMID 25910101. 99. ^ Wan, Kwok-Ho; Dahlsten, Oscar; Kristjansson, Hler; Gardner, Robert; Kim, Myungshik (2017). "Quantum generalisation of feedforward neural networks". Npj Quantum Information. 3 (36): 36. arXiv:1612.01045. Bibcode:2017npjQI...3...36W. doi:10.1038/s41534-017-0032-4. 100. ^ Brunner, Daniel; Soriano, Miguel C.; Mirasso, Claudio R.; Fischer, Ingo (2013). "Parallel photonic information processing at gigabyte per second data rates using transient states". Nature Communications. 4: 1364. Bibcode:2013NatCo...4E1364B. doi:10.1038/ncomms2368. PMC 3562454. PMID 23322052. 101. ^ Tezak, Nikolas; Mabuchi, Hideo (2015). "A coherent perceptron for all-optical learning". EPJ Quantum Technology. 2. arXiv:1501.01608. doi:10.1140/epjqt/s40507-015-0023-3. 102. ^ Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W. (2015). "Entanglement-Based Machine Learning on a Quantum Computer". Physical Review Letters. 114 (11): 110504. arXiv:1409.7770. Bibcode:2015PhRvL.114k0504C. doi:10.1103/PhysRevLett.114.110504. PMID 25839250. 103. ^ Pfeiffer, P.; Egusquiza, I. L.; Di Ventra, M.; Sanz, M.; Solano, E. (2016). "Quantum memristors". Scientific Reports. 6 (2016): 29507. arXiv:1511.02192. Bibcode:2016NatSR...629507P. doi:10.1038/srep29507. PMC 4933948. PMID 27381511. 104. ^ Salmilehto, J.; Deppe, F.; Di Ventra, M.; Sanz, M.; Solano, E. (2017). "Quantum Memristors with Superconducting Circuits". Scientific Reports. 7 (42044): 42044. arXiv:1603.04487. Bibcode:2017NatSR...742044S. doi:10.1038/srep42044. PMC 5307327. PMID 28195193. 105. ^ Li, Ying; Holloway, Gregory W.; Benjamin, Simon C.; Briggs, G. Andrew D.; Baugh, Jonathan; Mol, Jan A. (2017). "A simple and robust quantum memristor". Physical Review B. 96 (7): 075446. arXiv:1612.08409. Bibcode:2017PhRvB..96g5446L. doi:10.1103/PhysRevB.96.075446.
Mathematical formulation of quantum mechanics explained
The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. They are distinguished from mathematical formalisms for theories developed prior to the early 1900s by the use of abstract mathematical structures, such as infinite-dimensional Hilbert spaces and operators on these spaces. Many of these structures are drawn from functional analysis, a research area within pure mathematics that was influenced in part by the needs of quantum mechanics. In brief, values of physical observables such as energy and momentum were no longer considered as values of functions on phase space, but as eigenvalues; more precisely, as spectral values of linear operators in Hilbert space.[1]
These formulations of quantum mechanics continue to be used today. At the heart of the description are the ideas of quantum state and quantum observables, which are radically different from those used in previous models of physical reality. While the mathematics permits calculation of many quantities that can be measured experimentally, there is a definite theoretical limit to values that can be simultaneously measured. This limitation was first elucidated by Heisenberg through a thought experiment, and is represented mathematically in the new formalism by the non-commutativity of operators representing quantum observables.
Prior to the emergence of quantum mechanics as a separate theory, the mathematics used in physics consisted mainly of formal mathematical analysis, beginning with calculus, and increasing in complexity up to differential geometry and partial differential equations. Probability theory was used in statistical mechanics. Geometric intuition played a strong role in the first two and, accordingly, theories of relativity were formulated entirely in terms of geometric concepts. The phenomenology of quantum physics arose roughly between 1895 and 1915, and for the 10 to 15 years before the emergence of quantum theory (around 1925) physicists continued to think of quantum theory within the confines of what is now called classical physics, and in particular within the same mathematical structures. The most sophisticated example of this is the Sommerfeld–Wilson–Ishiwara quantization rule, which was formulated entirely on the classical phase space.
History of the formalism
The "old quantum theory" and the need for new mathematics
See main article: Old quantum theory. In the 1890s, Planck was able to derive the blackbody spectrum, which was later used to avoid the classical ultraviolet catastrophe, by making the unorthodox assumption that, in the interaction of electromagnetic radiation with matter, energy could only be exchanged in discrete units which he called quanta. Planck postulated a direct proportionality between the frequency of radiation and the quantum of energy at that frequency. The proportionality constant, $h$, is now called Planck's constant in his honor. In 1905, Einstein explained certain features of the photoelectric effect by assuming that Planck's energy quanta were actual particles, which were later dubbed photons. All of these developments were phenomenological and challenged the theoretical physics of the time. Bohr and Sommerfeld went on to modify classical mechanics in an attempt to deduce the Bohr model from first principles.
They proposed that, of all closed classical orbits traced by a mechanical system in its phase space, only the ones that enclosed an area which was a multiple of Planck's constant were actually allowed. The most sophisticated version of this formalism was the so-called Sommerfeld–Wilson–Ishiwara quantization. Although the Bohr model of the hydrogen atom could be explained in this way, the spectrum of the helium atom (classically an unsolvable 3-body problem) could not be predicted. The mathematical status of quantum theory remained uncertain for some time. In 1923 de Broglie proposed that wave–particle duality applied not only to photons but to electrons and every other physical system. The situation changed rapidly in the years 1925–1930, when working mathematical foundations were found through the groundbreaking work of Erwin Schrödinger, Werner Heisenberg, Max Born, Pascual Jordan, and the foundational work of John von Neumann, Hermann Weyl and Paul Dirac, and it became possible to unify several different approaches in terms of a fresh set of ideas. The physical interpretation of the theory was also clarified in these years after Werner Heisenberg discovered the uncertainty relations and Niels Bohr introduced the idea of complementarity. The "new quantum theory" Werner Heisenberg's matrix mechanics was the first successful attempt at replicating the observed quantization of atomic spectra. Later in the same year, Schrödinger created his wave mechanics. Schrödinger's formalism was considered easier to understand, visualize and calculate as it led to differential equations, which physicists were already familiar with solving. Within a year, it was shown that the two theories were equivalent. Schrödinger himself initially did not understand the fundamental probabilistic nature of quantum mechanics, as he thought that the absolute square of the wave function of an electron should be interpreted as the charge density of an object smeared out over an extended, possibly infinite, volume of space. It was Max Born who introduced the interpretation of the absolute square of the wave function as the probability distribution of the position of a pointlike object. Born's idea was soon taken over by Niels Bohr in Copenhagen who then became the "father" of the Copenhagen interpretation of quantum mechanics. Schrödinger's wave function can be seen to be closely related to the classical Hamilton–Jacobi equation. The correspondence to classical mechanics was even more explicit, although somewhat more formal, in Heisenberg's matrix mechanics. In his PhD thesis project, Paul Dirac[2] discovered that the equation for the operators in the Heisenberg representation, as it is now called, closely translates to classical equations for the dynamics of certain quantities in the Hamiltonian formalism of classical mechanics, when one expresses them through Poisson brackets, a procedure now known as canonical quantization. To be more precise, already before Schrödinger, the young postdoctoral fellow Werner Heisenberg invented his matrix mechanics, which was the first correct quantum mechanics–– the essential breakthrough. Heisenberg's matrix mechanics formulation was based on algebras of infinite matrices, a very radical formulation in light of the mathematics of classical physics, although he started from the index-terminology of the experimentalists of that time, not even aware that his "index-schemes" were matrices, as Born soon pointed out to him. 
In fact, in these early years, linear algebra was not generally popular with physicists in its present form. Although Schrödinger himself after a year proved the equivalence of his wave-mechanics and Heisenberg's matrix mechanics, the reconciliation of the two approaches and their modern abstraction as motions in Hilbert space is generally attributed to Paul Dirac, who wrote a lucid account in his 1930 classic The Principles of Quantum Mechanics. He is the third, and possibly most important, pillar of that field (he soon was the only one to have discovered a relativistic generalization of the theory). In his above-mentioned account, he introduced the bra–ket notation, together with an abstract formulation in terms of the Hilbert space used in functional analysis; he showed that Schrödinger's and Heisenberg's approaches were two different representations of the same theory, and found a third, most general one, which represented the dynamics of the system. His work was particularly fruitful in all kinds of generalizations of the field. The first complete mathematical formulation of this approach, known as the Dirac–von Neumann axioms, is generally credited to John von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics, although Hermann Weyl had already referred to Hilbert spaces (which he called unitary spaces) in his 1927 classic paper and book. It was developed in parallel with a new approach to the mathematical spectral theory based on linear operators rather than the quadratic forms that were David Hilbert's approach a generation earlier. Though theories of quantum mechanics continue to evolve to this day, there is a basic framework for the mathematical formulation of quantum mechanics which underlies most approaches and can be traced back to the mathematical work of John von Neumann. In other words, discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations. Later developments The application of the new quantum theory to electromagnetism resulted in quantum field theory, which was developed starting around 1930. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the ones presented here are simple special cases. On a different front, von Neumann originally dispatched quantum measurement with his infamous postulate on the collapse of the wavefunction, raising a host of philosophical problems. Over the intervening 70 years, the problem of measurement became an active research area and itself spawned some new formulations of quantum mechanics. A related topic is the relationship to classical mechanics. Any new physical theory is supposed to reduce to successful old theories in some approximation. For quantum mechanics, this translates into the need to study the so-called classical limit of quantum mechanics. Also, as Bohr emphasized, human cognitive abilities and language are inextricably linked to the classical realm, and so classical descriptions are intuitively more accessible than quantum ones. In particular, quantization, namely the construction of a quantum theory whose classical limit is a given and known classical theory, becomes an important area of quantum physics in itself. Finally, some of the originators of quantum theory (notably Einstein and Schrödinger) were unhappy with what they thought were the philosophical implications of quantum mechanics. 
In particular, Einstein took the position that quantum mechanics must be incomplete, which motivated research into so-called hidden-variable theories. The issue of hidden variables has become in part an experimental issue with the help of quantum optics.
Mathematical structure of quantum mechanics
A physical system is generally described by three basic ingredients: states; observables; and dynamics (or law of time evolution) or, more generally, a group of physical symmetries. A classical description can be given in a fairly direct way by a phase space model of mechanics: states are points in a symplectic phase space, observables are real-valued functions on it, time evolution is given by a one-parameter group of symplectic transformations of the phase space, and physical symmetries are realized by symplectic transformations. A quantum description normally consists of a Hilbert space of states; observables are self-adjoint operators on the space of states, time evolution is given by a one-parameter group of unitary transformations on the Hilbert space of states, and physical symmetries are realized by unitary transformations. (It is possible to map this Hilbert-space picture to a phase-space formulation, invertibly. See below.)
Postulates of quantum mechanics
The following summary of the mathematical framework of quantum mechanics can be partly traced back to the Dirac–von Neumann axioms. One can in this formalism state Heisenberg's uncertainty principle and prove it as a theorem, although the exact historical sequence of events, concerning who derived what and under which framework, is the subject of historical investigations outside the scope of this article. Furthermore, to the postulates of quantum mechanics one should also add basic statements on the properties of spin and Pauli's exclusion principle; see below.
Pictures of dynamics
See main article: Dynamical pictures. The time evolution of the state is given by a differentiable function from the real numbers, representing instants of time, to the Hilbert space of system states. This map is characterized by a differential equation as follows: if $|\psi(t)\rangle$ denotes the state of the system at any one time $t$, the following Schrödinger equation holds:
$$i\hbar\frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle,$$
where $H$ is a densely defined self-adjoint operator, called the system Hamiltonian, $i$ is the imaginary unit and $\hbar$ is the reduced Planck constant. As an observable, $H$ corresponds to the total energy of the system. Alternatively, by Stone's theorem one can state that there is a strongly continuous one-parameter unitary map $U(t)\colon \mathcal{H} \to \mathcal{H}$ such that $|\psi(t+s)\rangle = U(t)\,|\psi(s)\rangle$ for all times $s, t$. The existence of a self-adjoint Hamiltonian $H$ such that $U(t) = e^{-iHt/\hbar}$ is a consequence of Stone's theorem on one-parameter unitary groups. It is assumed that $H$ does not depend on time and that the perturbation starts at $t_0 = 0$; otherwise one must use the Dyson series, formally written as
$$U(t) = \mathcal{T}\exp\left(-\frac{i}{\hbar}\int_{t_0}^{t} dt'\, H(t')\right),$$
where $\mathcal{T}$ is Dyson's time-ordering symbol. (This symbol permutes a product of noncommuting operators of the form $B_1(t_1)B_2(t_2)\cdots B_n(t_n)$ into the uniquely determined re-ordered expression in which later times stand to the left of earlier times. The result is a causal chain: the primary cause in the past on the utmost r.h.s., and finally the present effect on the utmost l.h.s.) In the Heisenberg picture, by contrast, the state is kept fixed and the observables evolve instead, $A(t) = U(t)^{\dagger} A\, U(t)$. It is then easily checked that the expected values of all observables are the same in both pictures, $\langle\psi(t)|A|\psi(t)\rangle = \langle\psi(0)|A(t)|\psi(0)\rangle$, and that the time-dependent Heisenberg operators satisfy
$$\frac{d}{dt}A(t) = \frac{i}{\hbar}\,[H, A(t)] + \frac{\partial A(t)}{\partial t},$$
which is true for time-dependent $A = A(t)$. Notice that the commutator expression is purely formal when one of the operators is unbounded. One would specify a representation for the expression to make sense of it.
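As a concrete numerical illustration of the two pictures just described, the short Python sketch below evolves a single spin-1/2 state under a fixed Hamiltonian and checks that the Schrödinger-picture expectation value $\langle\psi(t)|A|\psi(t)\rangle$ agrees with the Heisenberg-picture value $\langle\psi(0)|A(t)|\psi(0)\rangle$. The particular Hamiltonian, observable and initial state are arbitrary choices made for the example (with $\hbar$ set to 1), not anything prescribed by the text.

    import numpy as np
    from scipy.linalg import expm

    # Pauli matrices; units with hbar = 1.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    H = 0.7 * sx + 0.3 * sz                       # an arbitrary time-independent Hamiltonian
    A = sz                                        # the observable whose expectation we track
    psi0 = np.array([1.0, 0.0], dtype=complex)    # initial state |0>

    for t in (0.5, 1.0, 2.0):
        U = expm(-1j * H * t)                     # U(t) = exp(-i H t / hbar), Stone's theorem
        psi_t = U @ psi0                          # Schrödinger picture: the state evolves
        A_t = U.conj().T @ A @ U                  # Heisenberg picture: the observable evolves
        schrodinger = np.real(psi_t.conj() @ A @ psi_t)
        heisenberg = np.real(psi0.conj() @ A_t @ psi0)
        print(t, schrodinger, heisenberg)         # the two numbers coincide for every t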
The interaction picture does not always exist, though. In interacting quantum field theories, Haag's theorem states that the interaction picture does not exist. This is because the Hamiltonian cannot be split into a free and an interacting part within a superselection sector. Moreover, even if in the Schrödinger picture the Hamiltonian does not depend on time, e.g. $H = H_0 + V$, in the interaction picture it does, at least if $V$ does not commute with $H_0$, since
$$H_{\mathrm{int}}(t) = e^{iH_0 t/\hbar}\, V\, e^{-iH_0 t/\hbar}.$$
So the above-mentioned Dyson series has to be used anyhow. The Heisenberg picture is the closest to classical Hamiltonian mechanics (for example, the commutators appearing in the above equations directly translate into the classical Poisson brackets); but this is already rather "high-browed", and the Schrödinger picture is considered easiest to visualize and understand by most people, to judge from pedagogical accounts of quantum mechanics. The Dirac picture is the one used in perturbation theory, and is specially associated with quantum field theory and many-body physics. Similar equations can be written for any one-parameter unitary group of symmetries of the physical system. Time would be replaced by a suitable coordinate parameterizing the unitary group (for instance, a rotation angle, or a translation distance) and the Hamiltonian would be replaced by the conserved quantity associated with the symmetry (for instance, angular or linear momentum).
The original form of the Schrödinger equation depends on choosing a particular representation of Heisenberg's canonical commutation relations. The Stone–von Neumann theorem dictates that all irreducible representations of the finite-dimensional Heisenberg commutation relations are unitarily equivalent. A systematic understanding of its consequences has led to the phase space formulation of quantum mechanics, which works in full phase space instead of Hilbert space, and thus with a more intuitive link to the classical limit thereof. This picture also simplifies considerations of quantization, the deformation extension from classical to quantum mechanics. The quantum harmonic oscillator is an exactly solvable system where the different representations are easily compared. There, apart from the Heisenberg, or Schrödinger (position or momentum), or phase-space representations, one also encounters the Fock (number) representation and the Segal–Bargmann (Fock-space or coherent state) representation (named after Irving Segal and Valentine Bargmann). All four are unitarily equivalent.
Time as an operator
The framework presented so far singles out time as the parameter that everything depends on. It is possible to formulate mechanics in such a way that time becomes itself an observable associated with a self-adjoint operator. At the classical level, it is possible to arbitrarily parameterize the trajectories of particles in terms of an unphysical parameter $s$, and in that case the time $t$ becomes an additional generalized coordinate of the physical system. At the quantum level, translations in $s$ would be generated by a "Hamiltonian" $H - E$, where $E$ is the energy operator and $H$ is the "ordinary" Hamiltonian. However, since $s$ is an unphysical parameter, physical states must be left invariant by "$s$-evolution", and so the physical state space is the kernel of $H - E$ (this requires the use of a rigged Hilbert space and a renormalization of the norm). This is related to the quantization of constrained systems and the quantization of gauge theories. It is also possible to formulate a quantum theory of "events" where time becomes an observable (see D.
Edwards).
Spin
In addition to their other properties, all particles possess a quantity called spin, an intrinsic angular momentum. Despite the name, particles do not literally spin around an axis, and quantum mechanical spin has no correspondence in classical physics. In the position representation, a spinless wavefunction has position $\mathbf{r}$ and time $t$ as continuous variables, $\psi = \psi(\mathbf{r}, t)$; for spin wavefunctions the spin is an additional discrete variable, $\psi = \psi(\mathbf{r}, t, \sigma)$, where $\sigma$ takes the values
$$\sigma = -S\hbar,\; -(S-1)\hbar,\; \dots,\; +S\hbar.$$
That is, the state of a single particle with spin $S$ is represented by a $(2S+1)$-component spinor of complex-valued wave functions. Two classes of particles with very different behaviour are bosons, which have integer spin ($S = 0, 1, 2, \dots$), and fermions, which possess half-integer spin ($S = \tfrac{1}{2}, \tfrac{3}{2}, \dots$).
Pauli's principle
The property of spin relates to another basic property concerning systems of identical particles: Pauli's exclusion principle, which is a consequence of the following permutation behaviour of an $N$-particle wave function; again in the position representation one must postulate that for the transposition of any two of the $N$ particles one always should have
$$\psi(\dots, \mathbf{r}_i, \sigma_i, \dots, \mathbf{r}_j, \sigma_j, \dots) = (-1)^{2S}\,\psi(\dots, \mathbf{r}_j, \sigma_j, \dots, \mathbf{r}_i, \sigma_i, \dots),$$
i.e., on transposition of the arguments of any two particles the wavefunction should reproduce itself, apart from a prefactor $(-1)^{2S}$, which is $+1$ for bosons but $-1$ for fermions. Electrons are fermions with $S = \tfrac{1}{2}$; quanta of light are bosons with $S = 1$. In nonrelativistic quantum mechanics all particles are either bosons or fermions; in relativistic quantum theories also "supersymmetric" theories exist, where a particle is a linear combination of a bosonic and a fermionic part. Only in dimension $d = 2$ can one construct entities where $(-1)^{2S}$ is replaced by an arbitrary complex number with magnitude 1, called anyons. Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. Especially, many important properties in natural science, e.g. the periodic system of chemistry, are consequences of the two properties.
The problem of measurement
The picture given in the preceding paragraphs is sufficient for the description of a completely isolated system. However, it fails to account for one of the main differences between quantum mechanics and classical mechanics, that is, the effects of measurement.[5] The von Neumann description of quantum measurement of an observable $A$, when the system is prepared in a pure state $\psi$, is the following (note, however, that von Neumann's description dates back to the 1930s and is based on experiments as performed during that time – more specifically the Compton–Simon experiment; it is not applicable to most present-day measurements within the quantum domain): let $\mathrm{E}_A$ be the resolution of the identity (also called projection-valued measure) associated with $A$. Then the probability of the measurement outcome lying in an interval $B$ of $\mathbb{R}$ is $|\mathrm{E}_A(B)\,\psi|^2$. In other words, the probability is obtained by integrating the characteristic function of $B$ against the countably additive measure $\langle\psi, \mathrm{E}_A\,\psi\rangle$.
For example, suppose the state space is the $n$-dimensional complex Hilbert space $\mathbb{C}^n$ and $A$ is a Hermitian matrix with eigenvalues $\lambda_i$, with corresponding eigenvectors $\psi_i$. The projection-valued measure associated with $A$, $\mathrm{E}_A$, is then
$$\mathrm{E}_A(B) = |\psi_i\rangle\langle\psi_i|,$$
where $B$ is a Borel set containing only the single eigenvalue $\lambda_i$. If the system is prepared in state $\psi$, then the probability of a measurement returning the value $\lambda_i$ can be calculated by integrating the spectral measure $\langle\psi, \mathrm{E}_A\,\psi\rangle$ over the set $\{\lambda_i\}$.
This gives trivially $|\langle\psi_i, \psi\rangle|^2$. The characteristic property of the von Neumann measurement scheme is that repeating the same measurement will give the same results. This is also called the projection postulate.
A more general formulation replaces the projection-valued measure with a positive-operator valued measure (POVM). To illustrate, take again the finite-dimensional case. Here we would replace the rank-1 projections by a finite set of positive operators $F_i$ whose sum is still the identity operator as before (the resolution of identity). Just as a set of possible outcomes is associated to a projection-valued measure, the same can be said for a POVM. Suppose the measurement outcome is $\lambda_i$. Instead of collapsing to the (unnormalized) state $\langle\psi_i, \psi\rangle\,\psi_i$ after the measurement, the system now will be in the state $\sqrt{F_i}\,\psi$ (up to normalization). Since the operators $F_i$ need not be mutually orthogonal projections, the projection postulate of von Neumann no longer holds. The same formulation applies to general mixed states.
In von Neumann's approach, the state transformation due to measurement is distinct from that due to time evolution in several ways. For example, time evolution is deterministic and unitary whereas measurement is non-deterministic and non-unitary. However, since both types of state transformation take one quantum state to another, this difference was viewed by many as unsatisfactory. The POVM formalism views measurement as one among many other quantum operations, which are described by completely positive maps which do not increase the trace. In any case it seems that the above-mentioned problems can only be resolved if the time evolution included not only the quantum system, but also, and essentially, the classical measurement apparatus (see above).
The relative state interpretation
An alternative interpretation of measurement is Everett's relative state interpretation, which was later dubbed the "many-worlds interpretation" of quantum physics.
List of mathematical tools
Part of the folklore of the subject concerns the mathematical physics textbook Methods of Mathematical Physics put together by Richard Courant from David Hilbert's Göttingen University courses. The story is told (by mathematicians) that physicists had dismissed the material as not interesting in the current research areas, until the advent of Schrödinger's equation. At that point it was realised that the mathematics of the new quantum mechanics was already laid out in it. It is also said that Heisenberg had consulted Hilbert about his matrix mechanics, and Hilbert observed that his own experience with infinite-dimensional matrices had derived from differential equations, advice which Heisenberg ignored, missing the opportunity to unify the theory as Weyl and Dirac did a few years later. Whatever the basis of the anecdotes, the mathematics of the theory was conventional at the time, whereas the physics was radically new. The main tools include: See also: list of mathematical topics in quantum theory.
1. Frederick W. Byron, Robert W. Fuller; Mathematics of Classical and Quantum Physics; Courier Dover Publications, 1992.
2. Dirac, P. A. M. (1925). "The Fundamental Equations of Quantum Mechanics". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 109 (752): 642–653. Bibcode:1925RSPSA.109..642D. doi:10.1098/rspa.1925.0150.
3. Sellier, Jean Michel (2015). "A signed particle formulation of non-relativistic quantum mechanics". Journal of Computational Physics. 297: 254–265. arXiv:1509.06708. Bibcode:2015JCoPh.297..254S. doi:10.1016/
4. Solem, J. C.; Biedenharn, L. C.
(1993). "Understanding geometrical phases in quantum mechanics: An elementary example". Foundations of Physics. 23 (2): 185–195. Bibcode:1993FoPh...23..185S. doi:10.1007/BF01883623.
5. G. Greenstein and A. Zajonc
I am to teach section 18 of "Elementary Number Theory" (Dudley) - Sums of Two Squares - to an undergraduate Number Theory class, and am having trouble cultivating anything other than a rote dissection of the lemmas/theorems presented in the text. The professor copies (exclusively) from the text onto the chalkboard during lectures, but I would like to present the students with something a little more interesting and that they cannot find in their text. What are the connections of the "Sums of Two Squares" to other fields of mathematics? Why would anyone care about solving $n = x^2 + y^2$ in the integers? I am aware of the norm of the Gaussian integers, and will probably mention something about how the identity $$(a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2$$ is deeper than just the verification process of multiplying it out (e.g. I might introduce $\mathbb{Z}[i]$ and mention that "the norm is multiplicative"). What else is there? The book mentions (but only in passing) sums of three and four squares, Waring's Problem, and Goldbach's Conjecture. Also, I have seen Akhil's answer and the Fermat Christmas question, but these don't admit answers to my question.
The solutions to $x^2+y^2=n$ describe all the points in $Z^2$ which belong to the same circle with the center in $(0,0)$. You can also use them to find the intersection between $Z^2$ and a circle with the centre at some lattice point.... –  N. S. Mar 27 '12 at 16:22
@N.S. - I'd vote on that as an answer. –  The Chaz 2.0 Mar 27 '12 at 16:24
A theorem says every nonnegative integer is the sum of four squares of nonnegative integers. It is also true that every nonnegative integer is the sum of three triangular numbers, of five pentagonal numbers, of six hexagonal numbers, etc. Maybe that has no relevance to other areas of mathematics, but if you're wondering why you would care about sums of squares, maybe the fact that it's part of this larger pattern matters. –  Michael Hardy Mar 27 '12 at 16:24
It sounds from your question like the main problem here is the professor copying directly from the text onto the chalkboard. What a waste of students' time. Does the instructor explain to colleagues why his/her latest research is interesting by reading directly from the paper? Zzzzzz.... –  KCd Mar 28 '12 at 1:37
Asking which integers are sums of two squares is a quintessential theme from number theory. Do the students already find the course interesting at all?? Look at the number of solutions x,y for each n and see how erratically that count behaves as n increases step by step. Some regularity appears if we think about it at primes first. This illustrates the difference between the linear ordering way of thinking about integers in many other areas of math vs. the divisibility relation among integers that is central to number theory. –  KCd Mar 28 '12 at 1:44
3 Answers
Consider the Laplacian $\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$ acting on nice functions $f : \mathbb{R}^2 \to \mathbb{C}$ which are doubly periodic in the sense that $f(x, y) = f(x+1, y) = f(x, y+1)$. There is a nice set of eigenvectors one can write down, given by $$f_{a,b}(x, y) = e^{2 \pi i (ax + by)}, \quad a, b \in \mathbb{Z}$$ with eigenvalues $-4 \pi^2 (a^2 + b^2)$, and these turn out to be all eigenvectors, so it is possible to expand a suitable class of such functions in terms of linear combinations of the above.
Eigenvectors of the Laplacian are important because they can be used to construct solutions to the wave equation, the heat equation, and the Schrödinger equation. I'll restrict myself to talking about the wave equation: in that context, eigenvectors of the Laplacian give standing waves, and the corresponding eigenvalue tells you what the frequency of the standing wave is. So eigenvalues of the Laplacian on a space tell you about the "acoustics" of a space (here the torus $\mathbb{R}^2/\mathbb{Z}^2$). For more details, see the Wikipedia article on hearing the shape of a drum. A more general keyword here is spectral geometry.
By considering different periodicity conditions you can also motivate studying solutions to $n = ax^2 + bxy + cy^2$ for more general $a, b, c$, and working in higher dimensions you can motivate studying more general positive-definite quadratic forms. There is an interesting general question you can ask here about whether you can "hear the shape of a torus" (the answer turns out to be yes in two dimensions if you interpret "shape" suitably and no in general). –  Qiaochu Yuan Mar 27 '12 at 16:35
Of course you don't need me to tell you this, but this is a perfect example of what I am looking for. –  The Chaz 2.0 Mar 27 '12 at 16:36
@The Chaz: if you liked this then you might want to pick up a copy of Schroeder's Number theory in science and communication and look in particular at section 7.10. –  Qiaochu Yuan Mar 27 '12 at 16:39
Thanks again for this answer. I left it "un-accepted" for a while to encourage more answers. –  The Chaz 2.0 May 1 '12 at 2:49
In another direction, counting the solutions to $n=ax^2+bxy+cy^2$, for quadratic forms with negative discriminant, is often the starting place for a course on Algebraic Number Theory. I believe Gauss was one of the first people to think about this area. This leads to the definition of the Class Number, and we can prove things like Dirichlet's Class Number Formula. Solutions to $x^2+y^2=n$ is one of the simplest examples to start with.
At a much more elementary level, one might want to draw connections to what they already know. For example, there is a very nice connection between the identity $(a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2$ and the addition laws for cosine and sine. As another example, suppose that $a$ and $b$ are positive, and we want to maximize $ax+by$ subject to $x^2+y^2=r^2$. Using $(a^2+b^2)(x^2+y^2)=(ax+by)^2+(ay-bx)^2$, we can see that the maximum of $ax+by$ is reached when $ay-bx=0$. Then there is the generalization (Brahmagupta identity). Connection with Fibonacci numbers. Everything is connected to everything else!
Could you point me in the direction of the trig identities you had in mind? Also, is there a sign error? I might just be projecting my tendency to make such errors (cf revisions of this question!) –  The Chaz 2.0 Mar 27 '12 at 18:28
I was just thinking of $a=\cos x$, $b=\sin x$, $c=\cos y$, $d=\sin y$. That gives the right signs. For the max problem, I probably switched signs, doesn't matter, one can switch signs without changing correctness of identity. –  André Nicolas Mar 27 '12 at 19:08
Got it. Maybe the changed signs only matter in the context of the norm in $\mathbb{Z}[i]$... –  The Chaz 2.0 Mar 28 '12 at 1:56
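As a computational aside (not part of the original thread), the following short Python sketch makes two of the points raised above concrete: it counts the representations of each $n$ as a sum of two squares, whose erratic behaviour one of the comments suggests showing to students, and it checks the multiplicative identity from the question numerically. The helper-function name and the test values are invented for the illustration.

    from itertools import product

    def two_square_representations(n):
        """Return all integer pairs (x, y) with x^2 + y^2 == n (signs and order included)."""
        bound = int(n ** 0.5)
        return [(x, y) for x, y in product(range(-bound, bound + 1), repeat=2)
                if x * x + y * y == n]

    # The representation count r2(n) jumps around erratically as n grows.
    for n in range(1, 21):
        print(n, len(two_square_representations(n)))

    # The Brahmagupta–Fibonacci identity from the question, checked numerically:
    a, b, c, d = 3, 7, 2, 5            # arbitrary test values
    lhs = (a * a + b * b) * (c * c + d * d)
    rhs = (a * c - b * d) ** 2 + (a * d + b * c) ** 2
    assert lhs == rhs                  # (a^2+b^2)(c^2+d^2) = (ac-bd)^2 + (ad+bc)^2
    print(lhs, rhs)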
Take the 2-minute tour × When trying to solve the Schrödinger equation for hydrogen, one usually splits up the wave function into two parts: $\psi(r,\phi,\theta)= R(r)Y_{l,m}(\phi,\theta)$ I understand that the radial part usually has a singularity for the 1s state at $r=0$ and this is why you remove it by writing: $R(r) = \frac{U(r)}{r}$ But what is the physical meaning of $R(r=0) = \infty$. Wouldn't this mean that the electron cloud is only at the centre of the atomic nucleolus? Thanks in advance! share|improve this question 5 Answers 5 The infinitesimal probability for the electron to be in the volume $dV$ around a point $(r,\theta,\phi)\leftrightarrow (x,y,z)$ is given by $$ dP = dV\cdot |\psi(x,y,z)|^2 = dV\cdot |R(r)|^2\cdot |Y_{lm}(\theta,\phi)|^2 =\dots$$ as you can see if you substitute your Ansatz for the wave function. However, the infinitesimal volume $dV=dx\cdot dy\cdot dz$ may be rewritten in terms of differentials of the spherical coordinates as $$ dV = dr\cdot r^2 \cdot d\Omega = dr\cdot r^2 \cdot \sin\theta\cdot d\theta\cdot d\phi $$ where the small solid angle $d\Omega$ was rewritten in terms of the spherical coordinates. You see that for dimensional reasons (or because the surface of a sphere scales like $r^2$), there is an extra factor of $r^2$ in $dV$ and therefore also in $dP$ which suppresses the probability. There is simply not enough volume for small values of $r$. So $|R(r)|^2$ may still go like $1/r^2$ for small $r$ and in that case, $dV$ will be proportional to $dr$ times a function that is finite for $r\to 0$. Such $dP$ may be integrated and there's no divergence at all near $r=0$. That's why one should allow the wave function to go like $1/r$ near $r=0$ which is the true counterpart of one-dimensional wave function's being finite near a point. However, Nature doesn't use this particular loophole because the wave function $\psi$ for small $r$ actually scales like $r^l$ where $l$ is the orbital quantum number and the wave function actually never diverges even though it could. share|improve this answer The physical observable is not the wavefunction, but its integral over a finite area. In spherical coordinates, this is: $P({\vec x})=\int dr\, d\theta\, d\phi r^{2}\sin\theta \psi^{*}\psi$ This integrand is manifestly finite at $r=0$, even if $R(r)$ has a $\frac{1}{r}$ divergance. share|improve this answer Dear @Jerry, you were a minute faster but shorter ;-). I think that $\sin^2\theta$ should be just $\sin\theta$. –  Luboš Motl Apr 26 '12 at 16:32 Indeed! I'm so used to writing the metric that I forgot the square root. –  Jerry Schirmer Apr 26 '12 at 17:16 Good way to phrase the priorities. ;-) –  Luboš Motl Apr 26 '12 at 18:09 For a hydrogen-like atom in 3 spatial dimensions, the rewriting of the radial part is not performed to keep the $u(r)$ part regular, as OP suggests, but usually because the 3D radial equation in terms of the $u$ function has the same form as a 1D Schrödinger equation. Imagine that the radial wave function goes as a power $$R(r) ~\sim ~ r^{p} \qquad {\rm for} \qquad r~\to~ 0, \qquad p~\in~\mathbb{R}.$$ On general grounds, one can impose the following list of consistency conditions, listed with the weakest condition first and the strongest condition last. 1. Normalizability of the wave function $$\infty~>~\langle\psi|\psi\rangle~=~\int d^3r~|\psi(\vec{r})|^2 ~\propto~ \int_0^{\infty} r^{2}dr~|R(r)|^2 .$$ Integrability at $r=0$ yields that the power $p>-\frac{3}{2}$. 
In other words, this normalizability condition does not by itself imply that $R(r)$ or $u(r)$ should be regular at $r=0$, which is also the conclusion of many of the other answers. 2. The expectation value of the potential energy $V$ should be bounded from below, $$-\infty~<~\langle\psi| V|\psi\rangle~=~\int d^3r~V(r)|\psi(\vec{r})|^2~\propto~-\int_0^{\infty} rdr~|R(r)|^2. $$ Integrability at $r=0$ yields that the power $p>-1$. In other words, $u(r)$ should be regular for $r\to 0$. 3. The kinetic energy operator (or equivalently, the Laplacian $\Delta$) should behave self-adjointly for two wave functions $\psi_1(\vec{r})$ and $\psi_2(\vec{r})$, $$\langle\psi_1| \Delta\psi_2\rangle~=~-\langle\vec{\nabla}\psi_1| \cdot\vec{\nabla}\psi_2\rangle,$$ without picking up pathological contributions at $r=0$. A detailed analysis shows that the powers of the radial parts of $\psi_1(\vec{r})$ and $\psi_2(\vec{r})$ should satisfy $p>-\frac{1}{2}$. In comparison, the actual bound state solutions have non-negative $p=\ell\in \mathbb{N}_0$, and therefore satisfy these three conditions. share|improve this answer In addition to the simply geometric constraints that Jerry and Lubos talk about, the derivation used to illustrate the problem almost always assumes that the proton is a point particle which is a pretty good approximation but not strictly true. Working the problem again with a realistic proton charge density function (roughly constant inside a radius of about 1 fm) would be another way to remove the singularity. Mind you, you this argument does not hold true for the positronium so you still need the geometric constraint. share|improve this answer Re:positronium: wouldn't sub-Compton-wavelength renormalization of the Coulomb law soften the singularity? –  Slaviks Apr 26 '12 at 19:28 @Slaviks: I'm a little on thin ice here, but I think that renormalization does solve the problem, but that's in the context of QFT, while this question seem to be phrased in the language of introductory QM. –  dmckee Apr 26 '12 at 19:53 Sure, I was just entertaining the concept :) There is no singularity in the w.f., worrying in about the radial part is just staring at a singularity if the coordinate system, imho. –  Slaviks Apr 26 '12 at 19:56 For Hydrogen, $R(r)$ does not diverge, as $U(r)$ vanishes as fast as (or faster than) $r$ as $r\rightarrow 0$. In fact, it's only for the $s$ orbitals that the wavefunction is non zero at $r=0$. But as pointed out before, a non-zero radial wavefunction does not mean a non-zero probability of finding the electron at the center. share|improve this answer Your Answer
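A short numerical aside, not drawn from the answers above: the r² factor in the radial measure is easy to see in a computation. The sketch below assumes atomic units (a0 = 1) and that numpy and scipy are available; the function names are illustrative only.

```python
# The radial probability density P(r) = r^2 |R(r)|^2 vanishes at r = 0,
# and R(r) ~ r^p near the origin stays normalisable for p > -3/2.
import numpy as np
from scipy.integrate import quad

def R_1s(r):
    return 2.0 * np.exp(-r)            # hydrogen 1s radial function in atomic units

def radial_density(r):
    return r**2 * R_1s(r)**2

print("normalisation:", quad(radial_density, 0.0, np.inf)[0])   # ~ 1.0
print("density at r=0:", radial_density(0.0))                   # exactly 0

# Toy radial functions r^p e^{-r}: the integral of r^2 |R|^2 stays finite for p > -3/2.
for p in (-1.4, -1.0, -0.5, 0.0):
    val = quad(lambda r, p=p: r**2 * (r**p * np.exp(-r))**2, 0.0, 50.0)[0]
    print(f"p = {p:+.1f}: integral = {val:.3f}")
```

Nothing here replaces the operator-domain analysis given above; it only illustrates why the volume factor tames a mild divergence of R at the origin.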
Quantum state From Wikipedia, the free encyclopedia Jump to: navigation, search In quantum physics, quantum state refers to the state of a quantum system. A quantum state is given as a vector in a Hilbert space, called the state vector. For example, when dealing with the energy spectrum of the electron in a hydrogen atom, the relevant state vector is identified by the principal quantum number \{ n \} . For a more complicated case, consider Bohm formulation of EPR experiment, where the state vector \left|\psi\right\rang = \frac{1}{\sqrt{2}}\bigg(\left|\uparrow\downarrow\right\rang - \left|\downarrow\uparrow\right\rang \bigg) involves superposition of joint spin states for 2 different particles.[1]:47–48 In a more general usage, a quantum state can be either "pure" or "mixed." The above example is pure. Mathematically, a pure quantum state is represented by a state vector in a Hilbert space over complex numbers, which is a generalization of our more usual three-dimensional space.[2]:93–96 If this Hilbert space is represented as a function space, then its elements are called wave functions. A mixed quantum state corresponds to a probabilistic mixture of pure states; however, different distributions of pure states can generate equivalent (i.e., physically indistinguishable) mixed states. Quantum states, mixed as well as pure, are described by so-called density matrices, although these give probabilities, not densities. For example, if the spin of an electron is measured in any direction, e.g., with a Stern–Gerlach experiment, there are two possible results, up or down. The Hilbert space for the electron's spin is therefore two-dimensional. A pure state is a two-dimensional complex vector (\alpha, \beta), with a length of one. That is, |\alpha|^2 + |\beta|^2 = 1\,. A mixed state is a 2 \times 2 matrix that is Hermitian, positive-definite, and has trace 1. Before a particular measurement is performed on a quantum system, the theory usually gives only a probability distribution for the outcome (see details in #Interpretation section), and the form that this distribution takes is completely determined by the quantum state and the observable describing the measurement. These probability distributions arise for both mixed states and pure states: it is impossible in quantum mechanics (unlike classical mechanics) to prepare a state in which all properties of the system are fixed and certain. This is exemplified by the uncertainty principle, and reflects a core difference between classical and quantum physics. Even in quantum theory, however, for every observable[dubious ] there are states that determine its value exactly.[3] Conceptual description[edit] Pure states[edit] Probability densities for the electron of a hydrogen atom in different quantum states. In the mathematical formulation of quantum mechanics, pure quantum states correspond to vectors in a Hilbert space, while each observable quantity (such as the energy or momentum of a particle) is associated with a mathematical operator. The operator serves as a linear function which acts on the states of the system. The eigenvalues of the operator correspond to the possible values of the observable, i.e. it is possible to observe a particle with a momentum of 1 kg⋅m/s if and only if one of the eigenvalues of the momentum operator is 1 kg⋅m/s. The corresponding eigenvector (which physicists call an "eigenstate") with eigenvalue 1 kg⋅m/s would be a quantum state with a definite, well-defined value of momentum of 1 kg⋅m/s, with no quantum uncertainty. 
If its momentum were measured, the result is guaranteed to be 1 kg⋅m/s. On the other hand, a system in a linear combination of multiple different eigenstates does in general have quantum uncertainty. We can represent this linear combination of eigenstates as: |\Psi(t)\rangle = \sum_n C_n(t) |\Phi_n\rang. The coefficient which corresponds to a particular state in the linear combination is complex thus allowing interference effects between states. The coefficients are time dependent. How a quantum system changes in time is governed by the time evolution operator. The symbols "|" and ""[4] surrounding the \Psi are part of bra–ket notation. Statistical mixtures of states are separate from a linear combination. A statistical mixture of states occurs with a statistical ensemble of independent systems. Statistical mixtures represent the degree of knowledge whilst the uncertainty within quantum mechanics is fundamental. Mathematically a statistical mixture is not a combination of complex coefficients but by a combination of probabilities of different states \Phi_n. P_n represents the probability of a randomly selected system being in the state \Phi_n. Unlike the linear combination case each system is in a definite eigenstate.[5][6] In general we must understand the expectation value \langle A \rangle _\sigma of an observable A as a statistical mean. It is this mean and the distribution of probabilities that is predicted by physical theories. There is no state which is simultaneously an eigenstate for all observables. For example, we cannot prepare a state such that both the position measurement Q(t) and the momentum measurement P(t) (at the same time t) are known exactly; at least one of them will have a range of possible values.[a] This is the content of the Heisenberg uncertainty relation. Moreover, in contrast to classical mechanics, it is unavoidable that performing a measurement on the system generally changes its state. More precisely: After measuring an observable A, the system will be in an eigenstate of A; thus the state has changed, unless the system was already in that eigenstate. This expresses a kind of logical consistency: If we measure A twice in the same run of the experiment, the measurements being directly consecutive in time, then they will produce the same results. This has some strange consequences however: Consider two observables, A and B, where A corresponds to a measurement earlier in time than B.[7] Suppose that the system is in an eigenstate of B. If we measure only B, we will not notice statistical behaviour. If we measure first A and then B in the same run of the experiment, the system will transfer to an eigenstate of A after the first measurement, and we will generally notice that the results of B are statistical. Thus: Quantum mechanical measurements influence one another, and it is important in which order they are performed. Another feature of quantum states becomes relevant if we consider a physical system that consists of multiple subsystems; for example, an experiment with two particles rather than one. Quantum physics allows for certain states, called entangled states, that show certain statistical correlations between measurements on the two particles which cannot be explained by classical theory. For details, see entanglement. These entangled states lead to experimentally testable properties (Bell's theorem) that allow us to distinguish between quantum theory and alternative classical (non-quantum) models. Schrödinger picture vs. 
Heisenberg picture[edit] In the discussion above, we have taken the observables P(t), Q(t) to be dependent on time, while the state σ was fixed once at the beginning of the experiment. This approach is called the Heisenberg picture. One can, equivalently, treat the observables as fixed, while the state of the system depends on time; that is known as the Schrödinger picture. Conceptually (and mathematically), both approaches are equivalent; choosing one of them is a matter of convention. Both viewpoints are used in quantum theory. While non-relativistic quantum mechanics is usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, for quantum field theory. Compare with Dirac picture.[8] Formalism in quantum physics[edit] Pure states as rays in a Hilbert space[edit] Quantum physics is most commonly formulated in terms of linear algebra, as follows. Any given system is identified with some finite- or infinite-dimensional Hilbert space. The pure states correspond to vectors of norm 1. Thus the set of all pure states corresponds to the unit sphere in the Hilbert space. If two unit vectors differ only by a scalar of magnitude 1, known as a "global phase factor", then they are indistinguishable. Therefore, distinct pure states can be put in correspondence with "rays" in the Hilbert space, or equivalently points in the projective Hilbert space. Bra–ket notation[edit] Calculations in quantum mechanics make frequent use of linear operators, inner products, dual spaces and Hermitian conjugation. In order to make such calculations more straightforward, and to obviate the need (in some contexts) to fully understand the underlying linear algebra, Paul Dirac invented a notation to describe quantum states, known as bra-ket notation. Although the details of this are beyond the scope of this article (see the article bra–ket notation), some consequences of this are: • The variable name used to denote a vector (which corresponds to a pure quantum state) is chosen to be of the form |\psi\rangle (where the "\psi" can be replaced by any other symbols, letters, numbers, or even words). This can be contrasted with the usual mathematical notation, where vectors are usually bold, lower-case letters, or letters with arrows on top. • Instead of vector, the term ket is used synonymously. • Each ket |\psi\rangle is uniquely associated with a so-called bra, denoted \langle\psi|, which is also said to correspond to the same physical quantum state. Technically, the bra is the adjoint of the ket. It is an element of the dual space, and related to the ket by the Riesz representation theorem. In a finite-dimensional space with a chosen basis, writing |\psi\rangle as a column vector, \langle\psi| is a row vector; just take the transpose and entry-wise complex conjugate of |\psi\rangle. • Inner products (also called brackets) are written so as to look like a bra and ket next to each other: \lang \psi_1|\psi_2\rang. (The phrase "bra-ket" is supposed to resemble "bracket".) The angular momentum has the same dimension as the Planck constant and, at quantum scale, behaves as a discrete degree of freedom. Some particles possess a kind of intrinsic angular momentum that does not appear at all in classical mechanics and arises from Dirac's relativistic generalization of the theory. Mathematically it is described with spinors. In non-relativistic quantum mechanics the fundamental representations of SU(2) are used to describe this additional freedom. 
For a given particle, it is characterized quantitatively by a non-negative number S that, in units of Planck's reduced constant ħ, is either an integer (0, 1, 2 ...) or a half-integer (1/2, 3/2, 5/2 ...). For a massive particle of the spin S, its spin quantum number m always assumes 2S + 1 possible values from the set \{ -S, -S+1, \ldots +S-1, +S \} As a consequence, the quantum state of a particle is described by a vector-valued wave function with values in C2S+1 or, equivalently, by a complex-valued function of four variables: one discrete quantum number variable is added to three continuous (spatial) variables. Many-body states and particle statistics[edit] For more information, see Particle statistics. The quantum state of a system of N particles is described by a complex-valued function with four variables per particle, e.g. |\psi (\mathbf r_1,m_1;\dots ;\mathbf r_N,m_N)\rangle. Here, the spin variables mν assume values from the set \{ -S_\nu, -S_\nu +1, \ldots +S_\nu -1,+S_\nu \} where S_\nu is the spin of νth particle. Moreover, the case of identical particles makes the difference between bosons (particles with integer spin) and fermions (particles with half-integer spin). The above N-particle function must either be symmetrized (in the bosonic case) or anti-symmetrized (in the fermionic case) with respect to the particle numbers. If not all N particles are identical, but some of them are, then the function must be (anti)symmetrized over respective groups of variables, for each flavour of particles separately according to its statistics. Electrons are fermions with S = 1/2, photons (quanta of light) are bosons with S = 1 (although in the vacuum they are massless and can't be described with Schrödingerian mechanics). Apart from the symmetrization or anti-symmetrization, N-particle spaces of states can thus simply be obtained by tensor products of one-particle spaces, to which we return herewith. Basis states of one-particle systems[edit] As with any Hilbert space, if a basis is chosen for the Hilbert space of a system, then any ket can be expanded as a linear combination of those basis elements. Symbolically, given basis kets |{k_i}\rang, any ket |\psi\rang can be written | \psi \rang = \sum_i c_i |{k_i}\rangle where ci are complex numbers. In physical terms, this is described by saying that |\psi\rang has been expressed as a quantum superposition of the states |{k_i}\rang. If the basis kets are chosen to be orthonormal (as is often the case), then c_i=\lang {k_i} | \psi \rang. One property worth noting is that the normalized states |\psi\rang are characterized by \sum_i \left | c_i \right | ^2 = 1. Expansions of this sort play an important role in measurement in quantum mechanics. In particular, if the |{k_i}\rang are eigenstates (with eigenvalues ki) of an observable, and that observable is measured on the normalized state |\psi\rang, then the probability that the result of the measurement is ki is |ci|2. (The normalization condition above mandates that the total sum of probabilities is equal to one.) A particularly important example is the position basis, which is the basis consisting of eigenstates of the observable which corresponds to measuring position. If these eigenstates are nondegenerate (for example, if the system is a single, spinless particle), then any ket |\psi\rang is associated with a complex-valued function of three-dimensional space: \psi(\mathbf{r}) \equiv \lang \mathbf{r} | \psi \rang. This function is called the wavefunction corresponding to |\psi\rang. 
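To make the bookkeeping of the preceding paragraphs concrete, here is a small illustrative sketch (ours, not from the article) in a three-dimensional Hilbert space: expand a normalised ket in an orthonormal basis, recover the coefficients as inner products, and check that the |c_i|^2 sum to one.

```python
# Basis expansion |psi> = sum_i c_i |k_i> with c_i = <k_i|psi> and sum_i |c_i|^2 = 1.
import numpy as np

dim = 3
basis = [np.eye(dim)[:, i] for i in range(dim)]        # orthonormal kets |k_i>

psi = np.array([1.0 + 1.0j, 0.5, -2.0j])               # an arbitrary state ...
psi = psi / np.linalg.norm(psi)                        # ... normalised

c = np.array([np.vdot(k, psi) for k in basis])         # np.vdot conjugates the bra

print("coefficients c_i  :", np.round(c, 3))
print("sum of |c_i|^2    :", float(np.sum(np.abs(c)**2)))   # 1.0 up to rounding
print("Born probabilities:", np.round(np.abs(c)**2, 3))
```

If the |k_i⟩ are eigenstates of an observable, the last line is exactly the list of measurement probabilities described above.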
Superposition of pure states[edit] One aspect of quantum states, mentioned above, is that superpositions of them can be formed. If |\alpha\rangle and |\beta\rangle are two kets corresponding to quantum states, the ket is a different quantum state (possibly not normalized). Note that which quantum state it is depends on both the amplitudes and phases (arguments) of c_\alpha and c_\beta. In other words, for example, even though |\psi\rang and e^{i\theta}|\psi\rang (for real θ) correspond to the same physical quantum state, they are not interchangeable, since for example |\phi\rang+|\psi\rang and |\phi\rang+e^{i\theta}|\psi\rang do not (in general) correspond to the same physical state. However, |\phi\rang+|\psi\rang and e^{i\theta}(|\phi\rang+|\psi\rang) do correspond to the same physical state. This is sometimes described by saying that "global" phase factors are unphysical, but "relative" phase factors are physical and important. One example of a quantum interference phenomenon that arises from superposition is the double-slit experiment. The photon state is a superposition of two different states, one of which corresponds to the photon having passed through the left slit, and the other corresponding to passage through the right slit. The relative phase of those two states has a value which depends on the distance from each of the two slits. Depending on what that phase is, the interference is constructive at some locations and destructive in others, creating the interference pattern. By the analogy with coherence in other wave phenomena, a superposed state can be referred to as a coherent superposition. Another example of the importance of relative phase in quantum superposition is Rabi oscillations, where the relative phase of two states varies in time due to the Schrödinger equation. The resulting superposition ends up oscillating back and forth between two different states. Mixed states[edit] A pure quantum state is a state which can be described by a single ket vector, as described above. A mixed quantum state is a statistical ensemble of pure states (see quantum statistical mechanics). Mixed states inevitably arise from pure states when, for a composite quantum system H_1 \otimes H_2 with an entangled state on it, the part H_2 is inaccessible to the observer. The state of the part H_1 is expressed then as the partial trace over H_2. A mixed state cannot be described as a ket vector. Instead, it is described by its associated density matrix (or density operator), usually denoted ρ. Note that density matrices can describe both mixed and pure states, treating them on the same footing. Moreover, a mixed quantum state on a given quantum system described by a Hilbert space H can be always represented as the partial trace of a pure quantum state (called a purification) on a larger bipartite system H \otimes K for a sufficiently large Hilbert space K. The density matrix is defined as \rho = \sum_s p_s | \psi_s \rangle \langle \psi_s | where p_s is the fraction of the ensemble in each pure state |\psi_s\rangle. Here, one typically uses a one-particle formalism to describe the average behaviour of an N-particle system. A simple criterion for checking whether a density matrix is describing a pure or mixed state is that the trace of ρ2 is equal to 1 if the state is pure, and less than 1 if the state is mixed.[9] Another, equivalent, criterion is that the von Neumann entropy is 0 for a pure state, and strictly positive for a mixed state. 
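The two purity criteria just stated are easy to check numerically. The following sketch (ours, not part of the article) builds a pure and a maximally mixed qubit density matrix and evaluates tr(ρ²) and the von Neumann entropy for each.

```python
# Purity checks for density matrices: tr(rho^2) = 1 and S(rho) = 0 for a pure state,
# tr(rho^2) < 1 and S(rho) > 0 for a mixed state.
import numpy as np

def density_matrix(states, probs):
    """rho = sum_s p_s |psi_s><psi_s| for normalised column vectors."""
    return sum(p * np.outer(psi, psi.conj()) for psi, p in zip(states, probs))

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log(evals)))

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (up + down) / np.sqrt(2)

rho_pure  = density_matrix([plus], [1.0])              # coherent superposition
rho_mixed = density_matrix([up, down], [0.5, 0.5])     # 50/50 statistical mixture

for name, rho in (("pure ", rho_pure), ("mixed", rho_mixed)):
    purity = np.trace(rho @ rho).real
    print(name, "tr(rho^2) =", round(purity, 3),
          " S(rho) =", round(von_neumann_entropy(rho), 3))
```

The pure superposition gives tr(ρ²) = 1 and S = 0, while the 50/50 mixture gives tr(ρ²) = 0.5 and S = ln 2 ≈ 0.693.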
The rules for measurement in quantum mechanics are particularly simple to state in terms of density matrices. For example, the ensemble average (expectation value) of a measurement corresponding to an observable A is given by \langle A \rangle = \sum_s p_s \langle \psi_s | A | \psi_s \rangle = \sum_s \sum_i p_s a_i | \langle \alpha_i | \psi_s \rangle |^2 = \operatorname{tr}(\rho A) where |\alpha_i\rangle, \; a_i are eigenkets and eigenvalues, respectively, for the operator A, and "tr" denotes trace. It is important to note that two types of averaging are occurring, one being a weighted quantum superposition over the basis kets |\psi_s\rangle of the pure states, and the other being a statistical (said incoherent) average with the probabilities ps of those states. Although theoretically, for a given quantum system, a state vector provides the full information about its evolution, it is not easy to understand what information about the "real world" does it carry. Due to the uncertainty principle, a state, even if has the value of one observable exactly defined (i.e. the observable has this state as an eigenstate), cannot exactly define values of all observables. For state vectors (pure states), probability amplitudes offer a probabilistic interpretation. It can be generalized for all states (including mixed), for instance, as expectation values mentioned above. Mathematical generalizations[edit] States can be formulated in terms of observables, not of a vector space. These are positive normalized linear functionals on a C*-algebra, or sometimes other classes of algebras of observables. See Gelfand–Naimark–Segal construction for more details. See also[edit] 1. ^ To avoid misunderstandings: Here we mean that Q(t) and P(t) are measured in the same state, but not in the same run of the experiment. 1. ^ Ballentine, Leslie (1998). Quantum Mechanics: A Modern Development (2nd, illustrated, reprint ed.). World Scientific. ISBN 9789810241056.  2. ^ Griffiths, David J. (2004), Introduction to Quantum Mechanics (2nd ed.), Prentice Hall, ISBN 0-13-111892-7  3. ^ Ballentine, L. E. (1970), "The Statistical Interpretation of Quantum Mechanics", Reviews of Modern Physics 42: 358–381, Bibcode:1970RvMP...42..358B, doi:10.1103/RevModPhys.42.358  4. ^ Sometimes written ">"; see angle brackets. 5. ^ Statistical Mixture of States 6. ^ http://electron6.phys.utk.edu/qm1/modules/m6/statistical.htm 7. ^ For concreteness' sake, suppose that A = Q(t1) and B = P(t2) in the above example, with t2 > t1 > 0. 8. ^ Gottfried, Kurt; Yan, Tung-Mow (2003). Quantum Mechanics: Fundamentals (2nd, illustrated ed.). Springer. p. 65. ISBN 9780387955766.  9. ^ Blum, Density matrix theory and applications, page 39. Note that this criterion works when the density matrix is normalized so that the trace of ρ is 1, as it is for the standard definition given in this section. Occasionally a density matrix will be normalized differently, in which case the criterion is \operatorname{Tr}(\rho^2)=(\operatorname{Tr} \rho)^2 Further reading[edit] The concept of quantum states, in particular the content of the section Formalism in quantum physics above, is covered in most standard textbooks on quantum mechanics. For a discussion of conceptual aspects and a comparison with classical states, see: For a more detailed coverage of mathematical aspects, see: • Bratteli, Ola; Robinson, Derek W (1987). Operator Algebras and Quantum Statistical Mechanics 1. Springer. ISBN 978-3-540-17093-8. 2nd edition.  In particular, see Sec. 2.3. 
For a discussion of purifications of mixed quantum states, see Chapter 2 of John Preskill's lecture notes for Physics 219 at Caltech.
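A minimal sketch of the purification construction mentioned in the Mixed states section (ours; it assumes numpy, and the example ρ is chosen arbitrarily): build |Ψ⟩ = Σ_i √p_i |u_i⟩⊗|i⟩ from the eigendecomposition of ρ and check that the partial trace over the auxiliary factor returns ρ.

```python
# Purification of a mixed state: rho = Tr_K |Psi><Psi| for a pure |Psi> on H (x) K.
import numpy as np

rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])                     # an example mixed qubit state

p, U = np.linalg.eigh(rho)                       # rho = U diag(p) U^dagger
dim = rho.shape[0]

# |Psi> = sum_i sqrt(p_i) |u_i> (x) |i>, a pure state on the doubled space
Psi = sum(np.sqrt(p[i]) * np.kron(U[:, i], np.eye(dim)[:, i]) for i in range(dim))

# Partial trace over the second (auxiliary) factor
M = Psi.reshape(dim, dim)                        # M[i, j]: H index i, K index j
rho_recovered = M @ M.conj().T

print(np.allclose(rho_recovered, rho))           # True
```

The same construction works for any density matrix, with the auxiliary space K at least as large as the rank of ρ.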
Hydrogen
Written by Administrator, Sunday, 26 August 2007
From Wikipedia, the free encyclopedia.

Name, Symbol, Number: hydrogen, H, 1
Chemical series: nonmetals
Group, Period, Block: 1, 1, s
Appearance: colorless
Atomic mass: 1.00794(7) g/mol
Electron configuration: 1s1
Electrons per shell: 1

Physical properties
Phase: gas
Density (0 °C, 101.325 kPa): 0.08988 g/L
Melting point: 14.01 K (-259.14 °C, -434.45 °F)
Boiling point: 20.28 K (-252.87 °C, -423.17 °F)
Triple point: 13.8033 K, 7.042 kPa
Heat of fusion (H2): 0.117 kJ/mol
Heat of vaporization (H2): 0.904 kJ/mol
Heat capacity (25 °C) (H2): 28.836 J/(mol·K)
Vapor pressure: 10 kPa at 15 K; 100 kPa at 20 K
Critical temperature: 32.19 K
Critical pressure: 1.315 MPa
Critical density: 30.12 g/L

Atomic properties
Crystal structure: hexagonal
Oxidation states: 1, -1 (amphoteric oxide)
Electronegativity: 2.20 (Pauling scale)
Ionization energies: 1st: 1312.0 kJ/mol
Atomic radius: 25 pm
Atomic radius (calc.): 53 pm (Bohr radius)
Covalent radius: 37 pm
Van der Waals radius: 120 pm
Magnetic ordering: ???
Thermal conductivity (300 K): 180.5 mW/(m·K)
Speed of sound (gas, 27 °C): 1310 m/s
CAS registry number: 1333-74-0

Notable isotopes (Main article: Isotopes of hydrogen)
• 1H: natural abundance 99.985%; stable with 0 neutrons
• 2H: natural abundance 0.015%; stable with 1 neutron
• 3H: trace; half-life 12.32 y, β- decay (0.019 MeV) to 3He

Hydrogen (Latin: hydrogenium, from Greek: hydro: water, genes: forming) is a chemical element in the periodic table that has the symbol H and atomic number 1. At standard temperature and pressure it is a colorless, odorless, nonmetallic, univalent, tasteless, highly flammable diatomic gas. Hydrogen is the lightest and most abundant element in the universe. It is present in water, all organic compounds (rare exceptions exist, such as buckminsterfullerene) and in all living organisms. Hydrogen is able to react chemically with most other elements. Stars in their main sequence are overwhelmingly composed of hydrogen in its plasma state. The element is used in ammonia production, as a lifting gas, as an alternative fuel, and more recently as a power source for fuel cells. Despite its ubiquity in the universe, hydrogen is surprisingly difficult to produce in large quantities on the Earth. In the laboratory, the element is prepared by the reaction of acids on metals such as zinc. The electrolysis of water is a simple method of producing hydrogen, but is economically inefficient for mass production. Large-scale production is usually achieved by steam reforming natural gas. Scientists are now researching new methods for hydrogen production; if they succeed in developing a cost-efficient method of large-scale production, hydrogen may become a viable alternative to greenhouse-gas-producing fossil fuels. One of the methods under investigation involves the use of green algae; another promising method involves the conversion of biomass derivatives such as glucose or sorbitol at low temperatures using a catalyst. Yet another method is the "steaming" of carbon, whereby hydrocarbons are broken down with heat to release hydrogen.

Basic features
Hydrogen is the lightest chemical element; its most common isotope comprises just one negatively charged electron, distributed around a positively charged proton (the nucleus of the atom). The electron is bound to the proton by the Coulomb force, the electrical force that one stationary, electrically charged particle exerts on another.
The hydrogen atom has special significance in quantum mechanics as a simple physical system for which there is an exact solution to the Schrödinger equation; from that equation, the experimentally observed frequencies and intensities of hydrogen's spectral lines can be calculated. Spectral lines are dark or bright lines in an otherwise uniform and continuous spectrum, resulting from an excess or deficiency of photons in a narrow frequency range, compared with the nearby frequencies. At standard temperature and pressure, hydrogen forms a diatomic gas, H2, with a boiling point of only 20.27 K and a melting point of 14.02 K.[1] Under extreme pressures, such as those at the centre of gas giants, the molecules lose their identity and the hydrogen becomes a metal (metallic hydrogen). Under the extremely low pressure in space—virtually a vacuum—the element tends to exist as individual atoms, simply because there is no way for them to combine. However, clouds of H2 and possibly singular hydrogen atoms are said to form in H I and H II regions and are associated with star formation. Hydrogen plays a vital role in powering stars through the proton–proton and carbon–nitrogen cycle. These are nuclear fusion processes, which release huge amounts of energy in stars and other hot celestial bodies as hydrogen atoms combine into helium atoms. At high temperatures, hydrogen gas can exist as a mixture of atoms, protons, and negatively charged hydride ions. This mixture has a high emissivity and absorptivity in the visible light range, and plays an important part in the emission of light from the sun and other stars. H2 is highly soluble in water, alcohol, and ether. It has a high capacity for adsorption, in which it is attached to and held to the surface of some substances. It is an odorless, tasteless, colorless, and highly flammable gas that burns at concentrations as low as 4% H2 in air. It reacts violently with chlorine and fluorine, forming hydrohalic acids that can damage the lungs and other tissues. When mixed with oxygen, hydrogen explodes upon ignition. A unique property of hydrogen is that its flame is completely invisible in air. This makes it difficult to tell if a leak is burning, and carries the added risk that it is easy to walk into a hydrogen fire inadvertently. See also: hydrogen atom. Large quantities of hydrogen are needed in the chemical and petroleum industries, notably in the Haber process for the production of ammonia, which by mass ranks as the world's fifth most produced industrial compound. Hydrogen is used in the hydrogenation of fats and oils (found in items such as margarine), and in the production of methanol. Hydrogen is used in hydrodealkylation, hydrodesulfurization, and hydrocracking[2]. The element has several other important uses. There are no "hydrogen wells" or "hydrogen mines" on Earth, so hydrogen cannot be considered a primary energy source such as fossil fuels or uranium. Hydrogen can however be burned in internal combustion engines, an approach advocated by BMW's experimental hydrogen car. However, it is currently difficult and dangerous to store and handle in sufficient quantity for motor fuel use. Hydrogen fuel cells are being investigated as mobile power sources with lower emissions than hydrogen-burning internal combustion engines. The low emissions of hydrogen in internal combustion engines and fuel cells are currently offset by the pollution created by hydrogen production. 
This may change if the substantial amounts of electricity required for water electrolysis can be generated primarily from low pollution sources such as nuclear energy or wind. Research is being conducted on hydrogen as a replacement for fossil fuels. It could become the link between a range of energy sources, carriers and storage. Hydrogen can be converted to and from electricity (solving the electricity storage and transport issues), from biofuels, and from and into natural gas and diesel fuel. All of this can theoretically be achieved with zero emissions of CO2 and toxic pollutants. Hydrogen was first produced by Theophratus Bombastus von Hohenheim (14931541)—also known as Paracelsus—by mixing metals with acids. He was unaware that the explosive gas produced by this chemical reaction was hydrogen. In 1671, Robert Boyle described the reaction between two iron fillings and dilute acids, which results in the production of gaseous hydrogen.[3] In 1766, Henry Cavendish was the first to recognize hydrogen as a discrete substance, by identifying the gas from this reaction as "inflammable" and finding that the gas produces water when burned in air. Cavendish stumbled on hydrogen when experimenting with acids and mercury. Although he wrongly assumed that hydrogen was a compound of mercury—and not of the acid—he was still able to accurately describe several key properties of hydrogen. Antoine Lavoisier gave the element its name and proved that water is composed of hydrogen and oxygen. One of the first uses of the element was for balloons. The hydrogen was obtained by mixing sulfuric acid and iron. In 1931, Harold C. Urey discovered deuterium, an isotope of hydrogen, by repeated distilling the same sample of water. For this discovery, Urey received the Nobel Prize in Chemistry in 1934. In the same year, the third isotope, tritium, was discovered. Because of its relatively simple structure, hydrogen has often been used in models of how an atom works. Electron energy levels The ground state energy level of the electron in a hydrogen atom is 13.6 eV, which is equivalent to an ultraviolet photon of roughly 92 nm. With the Bohr Model, the energy levels of hydrogen can be calculated fairly accurately. This is done by modeling the electron as revolving around the proton, much like the earth revolving around the sun. Except the sun holds earth in orbit with the force of gravity, but the proton holds the electron in orbit with the force of electromagnetism. Another difference between the Earth-Sun system and the electron-proton system is that, in this model, due to quantum mechanics the electron is allowed to only be at very specific distances from the proton. Modeling the hydrogen atom in this fashion yields the correct energy levels and spectrum. NGC 604, a giant H II region in the Triangulum Galaxy. Hydrogen is the most abundant element in the universe, making up 75% of normal matter by mass and over 90% by number of atoms. [4] This element is found in great abundance in stars and gas giant planets. It is very rare in the Earth's atmosphere (1 ppm by volume), because being the lightest gas causes it to escape Earth's gravity, though when compounds are considered, it is the tenth most abundant element on Earth. The most common source for this element on Earth is water, which is composed two parts hydrogen to one part oxygen (H2O). Other sources include most forms of organic matter including coal, natural gas, and other fossil fuels. Methane (CH4) is an increasingly important source of hydrogen. 
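Referring back to the Electron energy levels subsection above: the Bohr-model levels can be written down in a few lines of Python. This is only an illustrative aside, not part of the original article; the constant names are ours and the numbers are approximate.

```python
# Bohr-model hydrogen levels E_n = -13.6 eV / n^2 and a few Balmer emission lines.
E1_EV = -13.6                  # ground-state energy in eV
HC_EV_NM = 1239.84             # h*c in eV*nm (approximate)

def energy(n):
    return E1_EV / n**2

def wavelength_nm(n_upper, n_lower):
    return HC_EV_NM / (energy(n_upper) - energy(n_lower))

for n in range(1, 5):
    print(f"E_{n} = {energy(n):6.2f} eV")

for n in (3, 4, 5):            # transitions down to n = 2 (Balmer series, visible light)
    print(f"{n} -> 2 : {wavelength_nm(n, 2):5.1f} nm")
```

The 3 → 2 transition comes out near 656 nm, the familiar red Hα line of the hydrogen spectrum.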
Throughout the universe, hydrogen is mostly found in the plasma state whose properties are quite different to molecular hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity, even when the gas is only partially ionized. The charged particles are highly influenced by magnetic and electric fields, for example, in the solar wind they interact with the Earth's magnetosphere giving rise to Birkeland currents and the aurora. Hydrogen can be prepared in several different ways: steam on heated carbon, hydrocarbon decomposition with heat, reaction of a strong base in an aqueous solution with aluminium, water electrolysis, or displacement from acids with certain metals. Commercial bulk hydrogen is usually produced by the steam reforming of natural gas. At high temperatures (700–1100 °C), steam reacts with methane to yield carbon monoxide and hydrogen. CH4 + H2O CO + 3 H2 Additional hydrogen can be recovered from the carbon monoxide through the water-gas shift reaction: CO + H2O CO2 + H2 The lightest of all gases, hydrogen combines with most other elements to form compounds. Hydrogen has an electronegativity of 2.2, so it forms compounds where it is the more nonmetallic and where it is the more metallic element. The former are called hydrides, where hydrogen either exists as H- ions or just as a solute within the other element (as in palladium hydride). The latter tend to be covalent, since the H+ ion would be a bare nucleus and so has a strong tendency to pull electrons to itself. These both form acids. Thus even in an acidic solution one sees ions like hydronium (H3O+) as the protons latch on to something. Although exotic on earth, one of the most common ions in the universe is the H3+ ion. Hydrogen combines with oxygen to form water, H2O, and releases significant amounts of energy in doing so, burning explosively in air. Deuterium oxide, or D2O, is commonly referred to as heavy water. Hydrogen also forms a vast array of compounds with carbon. Because of their association with living things, these compounds are called organic compounds, and the study of the properties of these compounds is called organic chemistry. First tracks observed in Liquid hydrogen bubble chamber. First tracks observed in Liquid hydrogen bubble chamber. Under normal conditions, hydrogen gas is a mix of two different kinds of molecules which differ from one another by the relative spin of the nuclei.[5] These two forms are known as ortho- and para-hydrogen (this is different from isotopes, see below). In ortho-hydrogen the nuclear spins are parallel and form a triplet, while in para they are antiparallel and form a singlet. At standard conditions hydrogen is composed of about 25% of the para form and 75% of the ortho form (the so-called "normal" form). The equilibrium ratio of these two forms depends on temperature, but since the ortho form has higher energy (is an excited state), it cannot be stable in its pure form. At low temperatures (around boiling point), the equilibrium state is comprised almost entirely of the para form. The conversion process between the forms is slow, and if hydrogen is cooled down and condensed rapidly, it contains large quantities of the ortho form. It is important in preparation and storage of liquid hydrogen, since the ortho-para conversion produces more heat than the heat of its evaporation, and a lot of hydrogen can be lost by evaporation in this way during several days after liquefying. 
Therefore, some catalysts of the ortho-para conversion process are used during hydrogen cooling. The two forms have also slightly different physical properties. For example, the melting and boiling points of parahydrogen are about 0.1 K lower than of the "normal" form. Main Article: Isotopes of hydrogen Hydrogen is the only element that has different names for its isotopes. (During the early study of radioactivity, various heavy radioactive isotopes were given names, but such names are no longer used, although one element, radon, has a name that originally applied to only one of its isotopes.) The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, although this is not officially sanctioned. (The symbol P is already in use for phosphorus and is not available for protium.) The most common isotope of hydrogen, this stable isotope has a nucleus consisting of a single proton; hence the descriptive, although rarely used, name protium. The spin of a protium atom is 1/2+. [6] The other stable isotope is deuterium, with an extra neutron in the nucleus. Deuterium comprises 0.0184%–0.0082% of all hydrogen (IUPAC); ratios of deuterium to protium are reported relative to the VSMOW standard reference water. The spin of a deuterium atom is 1+. The third naturally occurring hydrogen isotope is the radioactive tritium. The tritium nucleus contains two neutrons in addition to the proton. It decays through beta decay and has a half-life of 12.32 years. Tritium occurs naturally due to cosmic rays interacting with atmospheric gases. Like ordinary hydrogen, tritium reacts with the oxygen in the atmosphere to form T2O. This radioactive "water" molecule constantly enters the Earth's seas and lakes in the form of slightly radioactive rain, but its half-life is short enough to prevent a buildup of hazardous radioactivity. The spin of a tritium atom is 1/2+. Hydrogen-4 was synthesized by bombarding tritium with fast-moving deuterium nuclei. It decays through neutron emission and has a half-life of 9.93696x10-23 seconds. The spin of a hydrogen-4 atom is 2-. In 2001 scientists detected hydrogen-5 by bombarding a hydrogen target with heavy ions. It decays through neutron emission and has a half-life of 8.01930x10-23 seconds. Hydrogen-6 decays through triple neutron emission and has a half-life of 3.26500x10-22 seconds. In 2003 hydrogen-7 was created (article) at the RIKEN laboratory in Japan by colliding a high-energy beam of helium-8 atoms with a cryogenic hydrogen target and detecting tritons—the nuclei of tritium atoms—and neutrons from the breakup of hydrogen-7, the same method used to produce and detect hydrogen-5. Scientists from the University of Colorado at Boulder discovered in 2005 that microbes living in the hot waters of Yellowstone National Park gain their sustenance from molecular hydrogen. See also 1. ^  A PDF file from commonsensescience.org on hydrogen. URL accessed on September 15, 2005. 2. ^  Los Alamos National Laboratory – Hydrogen. URL accessed on September 15, 2005. 3. ^  Webelements – Hydrogen historical information. URL accessed on September 15, 2005. 4. ^  Universal Industrial Gases, Inc. – Hydrogen (H2) Applications and Uses. URL accessed on September 15, 2005. 5. ^  Jefferson Lab – Hydrogen. URL accessed on September 15, 2005. 6. ^  Lawrence Berkeley National Laboratory – Hydrogen isotopes. URL accessed on September 15, 2005. Book references • Stwertka, Albert (2002). A Guide to the Elements, Oxford University Press, New York, NY. ISBN 0195150279. 
• Krebs, Robert E. (1998). The History and Use of Our Earth's Chemical Elements: A Reference Guide, Greenwood Press, Westport, Conn. ISBN 0313301239. • Newton, David E. (1994). The Chemical Elements, Franklin Watts, New York, NY. ISBN 0531125017. • Rigden, John S. (2002). Hydrogen: The Essential Element, Harvard University Press, Cambridge, MA. ISBN 0531125017.
unbounded operator This page is about unbounded linear operators on Hilbert spaces. For operators on Hilbert spaces, “bounded” and “continuous” are synonymous, so the first question to be answered is: Why consider unbounded, i.e., discontinuous operators in a category that is a subcategory of Top? The reason is simple: It is forced upon us both by applications, such as quantum mechanics, and by the fact that simple and useful operators like differentiation are not bounded. Happily, in most applications the operators considered retain some sort of “limit property”, namely the property of being “closed”. Although that seems to be negligible compared to continuity, it allows the development of a rich and useful theory, and as a consequence there is a tremendous amount of literature devoted to this subject. One way of dealing with unbounded operators is via affiliated operators, see there. Example: differentiation is unbounded Let \mathcal{H} be the Hilbert space L 2()L^2(\mathbb{R}), and let TT be the differentiation operator defined on the dense subspace of Schwartz functions ff by Tf(x)f(x)Tf(x) \coloneqq f'(x). One might hope TT has a continuous extension to all of L 2L^2, but consider the sequence f k(x):=exp(k|x|)f_k(x) := \exp(-k|x|) for kk \in \mathbb{N}. Then we have Tf kf k=k\frac{\|Tf_k\|}{\|f_k\|} = k, so TT is unbounded. Note that the domain of definition of an unbounded operator will generally be given only on a dense subspace, as in this example. Indeed, the existence of unbounded operators defined everywhere (on a Hilbert space) is non-constructive, relying on the Hahn–Banach theorem and refutable in dream mathematics. After the definition we will look at some concepts that can be transferred from the bounded context here. We will talk about the von Neumann algebra that contains all spectral projections? here. Finally we give some counterexamples, i.e., phenomena that contradict the intuition built from bounded operators here. An unbounded operator TT on a Hilbert space \mathcal{H} is a linear operator defined on a subspace DD of \mathcal{H}. DD is necessarily a linear submanifold. Usually one assumes that DD is dense in \mathcal{H}, which we will do, too, unless we indicate otherwise. In particular every bounded operator A:A: \mathcal{H} \to \mathcal{H} is an unbounded operator (red herring principle). Domains: a first look Unbounded operators are not defined on the whole Hilbert space, so it is essential that, when talking about a specific unbounded operator, we are actually talking about the pair (T,D T)(T, D_T) of an operator TT together with its domain D TD_T. In particular two unbounded operators T,ST, S are equal iff their domains are equal, D T=D SD_T = D_S, and for all xD S=D Tx \in D_S=D_T we have Tx=SxTx = Sx. If the domain is not specified, the default definition of the domain of a given operator TT is simply D T:={x|Tx}D_T := \{x \in \mathcal{H} \vert Tx \in \mathcal{H} \}. Warning: if one composes two unbounded operators TT and SS, it may happen that D TD S={0}D_T \cap D_S = \{0\}. If we insist that all our unbounded operators are densely defined, we need as an additional assumption that D TD SD_T \cap D_S is dense to make sense of the composite TST S. The Hellinger-Toeplitz theorem • Theorem: Let AA be an everywhere defined linear operator on a Hilbert space \mathcal{H} that is symmetric. Then A is bounded. For the definition of symmetric see below. The Hellinger-Toeplitz theorem is a no-go theorem for quantum mechanics. 
Since it is known that operators essential for quantum mechanics are both symmetric and unbounded, we are led to conclude that they cannot be everywhere defined. This means that the problems that accompany only densely defined operators cannot be avoided. This is a corollary to the closed graph theorem III.2 in the book • Reed, M.; Simon, B.: Methods of modern mathematical physics. Volume 1, Functional Analysis Closedness, selfadjointness, resolvent Recall that the graph of an operator TT (or any function, in general) is the subset 𝒢 T:={(x,y)×|Tx=y}\mathcal{G}_T := \{(x, y) \in \mathcal{H} \times \mathcal{H} \vert Tx = y \}. The graph of a given operator need not be closed (in the product topology of ×\mathcal{H} \times \mathcal{H}). The notion that will be a surrogate for continuity is “closable”, defined as follows: • Definition: Given an operator T with domain D TD_T, any operator TT' with larger domain that is equal to TT on D TD_T is called an extension of T, we write TTT \subset T'. • Definition: An operator is closed if its graph is closed. • Definition: An operator is closable if it has a closed extension. The smallest such extension is called the closure of TT and is denoted by T¯\overline{T}. • Proposition (closure of graph is graph of closure): If an operator TT is closable, then the closure of its graph 𝒢¯\overline{\mathcal{G}} is the graph of an operator, and this operator is its closure. The last part deserves some elaboration: Given an operator TT, we can always form the closure of its graph 𝒢\mathcal{G}. How can the closure not be the graph of an operator? Given a sequence (x n)(x_n) in 𝒢\mathcal{G}, such that both limits xlim nx nx \coloneqq \lim_{n \to \infinity} x_n and ylim nTx ny \coloneqq \lim_{n \to \infinity} T x_n exist, we have that (x,y)(x, y) is in the closure of 𝒢\mathcal{G}. Now it may happen that there is another point (x,y)(x, y') in the closure with yyy \neq y', which implies that the closure cannot be the graph of a single valued function. We may assume without loss of generality that lim nx n=0\lim_{n \to \infinity} x_n = 0, so that we get as a characterisation of closability: if ylim nTx ny \coloneqq \lim_{n \to \infinity} T x_n exists, then y=0y=0. It TT were continuous, we would not have to assume that (Tx n)(Tx_n) is convergent, so this additional assumption tells us in what respect closability is weaker than continuity. • Definition: For a closed operator TT a subset DD of D TD_T is called a core of TT if T| D¯=T\overline{T \vert_{D}} = T. In other words: if we restrict TT to DD and take the closure, we obtain again TT. Example of an operator that is not closable We let T *T^* be the adjoint of an operator TT. Note that for an only densely defined TT, the domain of the adjoint may be strictly larger. • Definition (selfadjoint et al.): An operator is symmetric (or Hermitian) if T=T *| D TT = T^* \vert_{D_T} (the adjoint is restricted to the domain of TT). It is selfadjoint if it is symmetric and D T=D T *D_T = D_{T^*}. A symmetric operator is essentially selfadjoint if its closure is selfadjoint. 
The difference of being symmetric and being selfadjoint is crucial, although there is a famous anecdote that seems to indicate otherwise: • Anecdote of selfadjointness: Once upon a time John von Neumann thanked Werner Heisenberg for the invention of quantum mechanics, because this had led to the development of so much beautiful mathematics, adding that mathematics paid back a part of the debt by clarifying for example the difference of a selfadjoint operator and one that is only symmetric. Heisenberg replied: “What is the difference?” Nevertheless theorems that assume an operator to be selfadjoint will be not applicable to an operator that is only symmetric. One example is the spectral theorem. Example of a symmetric, but not selfadjoint, operator The definition of the resolvent does not pose any problems compared to the bounded case: • Definition: let TT be a closed operator on a Hilbert space \mathcal{H}. A complex number λ\lambda is in the resolvent set ρ(T)\rho(T) if λ𝟙T\lambda \mathbb{1} - T is a bijection of D TD_T and \mathcal{H} with bounded inverse. The inverse operator is called the resolvent R λ(T)R_{\lambda}(T) of TT at λ\lambda. • Theorem: The resolvent set is an open subset of \mathbb{C} on which the resolvent is an analytic operator valued function. Resolvents at different points commute and we have R λ(T)R μ(T)=(μλ)R μ(T)R λ(T) R_{\lambda}(T) - R_{\mu}(T) = (\mu - \lambda) R_{\mu}(T) R_{\lambda}(T) The proof can be done as in the bounded case. Commuting operators The concept of commuting operators, which is of no problem in the bounded case, presents a conceptual difficulty in the unbounded one: for given operators AA, BB we would like to be able to say whether they commute, although their composite may not have dense domain. For selfadjoint operators there is a solution to this problem: We know that in the bounded case, two selfadjoint operators commute iff their spectral projections commute. This suggests the • Definition: two (possibly unbounded) selfadjoint operators commute iff all their spectral projections commute. The spectral theorem shows that all bounded Borel functions of two commuting operators will also commute. The following theorem states the reverse for two of the most important functions and shows that the definition of commutation above is reasonable: • Theorem: let AA and BB be selfadjoint operators on a Hilbert space \mathcal{H}. Then the following three statements are equivalent: (a) AA and BB commute (b) if Imλ\operatorname{Im} \lambda and Imμ\operatorname{Im} \mu are 0\neq 0, then R λ(A)R_{\lambda}(A) and R μ(B)R_{\mu}(B) commute, as it is defined for bounded operators: R λ(A)R μ(B)=R μ(B)R λ(A)R_{\lambda}(A) R_{\mu}(B) = R_{\mu}(B) R_{\lambda}(A) (c) for all t,st, s \in \mathbb{R} we have exp(itA)exp(isB)=exp(isB)exp(itA)\exp(itA) \exp(isB) = \exp(isB) \exp(itA) Strongly continuous one-parameter semigroups A strongly continuous one-parameter semigroup is an unitary representation of \mathbb{R} on \mathcal{H} where \mathbb{R} is seen as a topological group with respect to multiplication, see topological group. An explicit definition recalling these concepts is this: • Definition: a function U:()U : \mathbb{R} \to \mathcal{B}(\mathcal{H}) is a one-parameter semigroup if the semigroup condition U(t+s)=U(t)U(s)U(t+s) = U(t)U(s) holds. If for every xx \in \mathcal{H} and tt 0t \to t_0 we have U(t)xU(t 0)xU(t)x \to U(t_0)x then it is strongly continuous. If every UU is an unitary operator, it is an unitary semigroup. 
In the following a semigroup will be understood to be a one-parameter unitary strongly continuous semigroup. In physics, one-parameter semigroups of this kind often represent the time evolution of a physical system described by an evolution equation. • Theorem: a selfadjoint operator AA generates a semigroup via U(t):=exp(itA)U(t) := \exp(itA) • Theorem (Stone’s theorem): let UU be a semigroup, then there is a selfadjoint operator AA such that U(t)=exp(itA)U(t) = \exp(i t A). This operator is often called the infinitesimal generator of UU. These two theorems are essential for the Schrödinger picture of quantum mechanics, which describes a system by the Schrödinger equation, we have now a one-to-one correspondence of selfadjoint operators which can be seen as Hamilton operators (only special operators will be seen as describing actual physical systems, of course), and semigroups which describe the time evolution generated by the Hamilton operator. As a trivial observation we add that Stone’s theorem is a (huge) generalization of the Taylor series, let f:f: \mathbb{R} \to \mathbb{R} that is (real) analytic in a neighborhood of 00, then we get for xx small enough: f(h)= k=0 f n(0)n!h n=exp(ih(iddx)) f(h) = \sum_{k=0}^{\infty} \frac{f^{n}(0)}{n!} h^n = \exp(i h(-i \frac{d}{dx} )) This shows that the operator iddx-i \frac{d}{dx} generates the semigroup of translations on the real line. Now we could, for example, use Stone’s theorem to prove that iddx-i \frac{d}{dx} is selfadjoint by proving that the translation group is strongly continuous. Subtleties resulting from domain issues (This rather generic title will have to be revised.) Nelson’s example of noncommuting exponentials Nelson’s example shows that the rather involved definition of commutativity of two unbounded operators is well motivated, because a more naive one will have unwanted consequences. It is a counterexample to the following conjecture: • Conjecture (false!): Let AA and BB be selfadjoint operators on a Hilbert space \mathcal{H} and DD a dense subset such that both AA and BB restricted to DD are essentially selfadjoint. Suppose that for all xDx \in D we have ABxBAx=0 A B x - B A x = 0. Then AA and BB commute. We have already seen that on 2\mathbb{R}^2 we can define two essentially selfadjoint operators i x-i \partial_x and i y-i \partial_y that generate translations along the x-axis and the y-axis respectively. Both the generated translations and the operators commute (the latter if applied to differentiable functions, of course). The central idea of Nelson’s counterexample is to replace 2\mathbb{R}^2 by a Riemann surface with two sheets, such that walking east, then walking north takes you to one sheet, while walking north then walking east takes you to the other sheet. We use the Riemann surface MM of f(z)=(z)f(z) = \sqrt(z). We give a brief exposition of its construction: take two copies of \mathbb{C}, two “sheets”, and call them I and II. Cut both sheets along (0,)(0, \infty), label the edge of the first quadrant along the cut as ++ and the edge of the fourth quadrant as -. Then attach the ++ edge of I with the - edge of II and the - of I with the ++ edge of II. Let =L 2(M)\mathcal{H} = L^2(M) with respect to Lebesgue measure. As indicated above we define A = i x-i \partial_x and B = i y-i \partial_y, this is with respect to the canonical chart on each sheet that projects it onto \mathbb{C} and then identifies \mathbb{C} with 2\mathbb{R}^2. Let DD be the set of all smooth functions with compact support not containing 00. 
Then:

(a) A and B are essentially selfadjoint on D

(b) A and B map D into D

(c) for all x ∈ D we have ABx = BAx

(d) exp(itA) and exp(isB) do not commute.

Only part (a) needs further explanation…

• Konrad Schmüdgen, Unbounded Self-adjoint Operators on Hilbert Space, Springer GTM 265, 2012

Chapter VIII of the following classic volume is devoted to unbounded operators:

• Michael Reed, Barry Simon, Methods of Modern Mathematical Physics I: Functional Analysis, Academic Press

Nelson's example is taken from the above reference; the original reference is this:

• Edward Nelson, Analytic Vectors, Annals of Mathematics 70 (1959), pp. 572-615

category: analysis
The Wave Mechanics in the Light of the Reciprocal System

One of the large areas to which the Reciprocal System is yet to be applied in detail is spectroscopy. The need is all the more urgent as a vast wealth of empirical data is available here in great detail, and a general theory must explain all of its aspects. To be sure, this was one of the earlier areas which Larson1 explored. But he soon found out, he writes, that the complications were so many and so involved that he decided to postpone the investigation until more basic ground was developed by studying other areas. Coupled with this is also the fact that the calculation of the properties of elements like the lanthanides is still beyond the scope of the Reciprocal System as developed to date.2 The question of the appropriateness of the Periodic Table as given by Larson is still open.2,3,4,5 Under these circumstances it is certain that there is a lot more to be done toward enlarging the application of the Reciprocal System to the intrinsic structure of the atom. Perhaps it is time to break new ground in the exploration of the mechanics of the Time Region, the region inside unit space. Breaking new ground involves some fresh thinking and leaving no stone unturned. In this context, it may be desirable to examine, once again, such a successful theory as the Wave Mechanics in the light of our existing knowledge of the Reciprocal System.

The Fallacies of the Wave Mechanics

The fundamental starting point of the Wave Mechanics is the correlation, which Louis de Broglie advanced originally, of a wave with a moving particle. Just as every wave has a corpuscular aspect, as shown by Planck's analysis of the blackbody radiation, the photoelectric effect and the Compton effect (the scattering of photons by particles), it is hypothesized that every particle has a wave aspect. Since the characteristics of waves and particles are mutually exclusive in many ways, this concept of associating a wave with a particle had been beset from its inception with a contradiction that had been euphemized by stating that the two are "complementary" aspects. This led to many an epistemological difficulty. The quantum theorists concluded that the phenomena (particles) inside an atom are not localized in physical space, that the electron in the atom does not exist in an objectively real sense, that it is but a mathematical symbol, and that the world is not intrinsically reasonable or understandable in the realm of the very little. One may refer to The Case against the Nuclear Atom6 by Larson for a critical appraisal.

While this is so, it must be noted that the Wave Mechanics was successful in explaining the vast wealth of the spectroscopic data. The several quantum numbers, n, l, m, etc., come out in a natural way in the theory. Even the "selection rules" that govern the transitions from one energy state to another could be derived. The fine and the hyperfine structures of the spectra, the breadth and intensity of the lines, and the effects of electric and magnetic forces on the spectra could all be derived with great accuracy. In addition, it predicts many non-classical phenomena, such as the tunneling through potential barriers or the phenomena connected with the phase, which found experimental verification. Thus we can see that the mathematical success of the Wave Mechanics is accompanied by a gross misunderstanding of the physical concepts involved.
It is the latter which Larson points out and condemns in his criticism of the conventional atomic theory.6 It might be worthwhile to examine if the Wave Mechanics could be purged of its conceptual errors, drawing from our knowledge of the Reciprocal System, and see if the transformed version could be integrated into the Reciprocal System scheme with advantage. After all, we have seen this happen in the case of the Special Theory of Relativity. Some of its mathematical aspects, like the Lorentz transformations or the mass-energy equivalence, could be adopted by the Reciprocal System after purging the Theory of the wrong interpretations.

Reinterpretation of the Physical Concepts of the Wave Mechanics

Let us take a look at the original points linking the concept of the wave with that of the moving particle. The frequency ν and the wavelength λ of the wave are respectively given by

ν = E/h = Mc²/h   (1)

λ = h/p = h/(Mv)   (2)

where E is the energy, p the particle momentum, M the mass, v the particle speed, c the speed of light and h Planck's constant. Now the product of ν and λ gives the wave velocity

u = ν·λ = c²/v   (3)

That is, measured in the natural units, the propagation speed of the wave associated with the particle is the inverse of the particle speed:

u_nat = u/c = 1/(v/c) = 1/v_nat   (4)

As the speed of the particle increases from zero upwards, the corresponding speed of the associated wave decreases from infinity downwards.

It is at this juncture that our knowledge of the Reciprocal System helps clarify the physical situation. In particular, we recall that while speed is reckoned from the standpoint of a three-dimensional spatial reference system, inverse speed is reckoned from the standpoint of a three-dimensional temporal reference system. While the speed of the origin of the three-dimensional spatial reference system is zero in that system, the inverse speed of the origin of the three-dimensional temporal reference system is zero in the latter system. Or, what comes to the same thing, the speed of the temporal zero would be infinite in the spatial reference system. It can easily be seen that a particular speed v_nat reckoned from the spatial reference system is identical to the inverse speed 1/v_nat reckoned from the temporal reference system. Therefore it follows that the switching from the particle speed v_nat to the associated wave speed u_nat = 1/v_nat is tantamount to shifting the reckoning from the three-dimensional spatial reference system to the three-dimensional temporal reference system.

This is exactly what needs to be done at the juncture where the phenomena (motion) under consideration enter the Time Region (see Appendix I). In the Time Region there could be only motion in time, and the relevant reference frame to represent the motion would have to be the three-dimensional temporal reference frame. Since changing from the corpuscular view to the wave view has the significance of shifting from the three-dimensional spatial reference frame to the three-dimensional temporal reference frame, the theorists have been unknowingly adopting the right procedure in connection with the calculations relevant to atomic dimensions. But it is no longer necessary to maintain, as the theorists do, that an entity is a particle as well as a wave at the same time, since these two views are irreconcilable. The truth is that the particle viewed from the three-dimensional spatial reference frame is the wave viewed from the three-dimensional temporal reference frame.
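As a numerical illustration of Equations (1) to (4) above, the short sketch below evaluates the de Broglie frequency, wavelength and wave speed for an electron moving at an arbitrarily chosen speed, using the relativistic mass M = γm0. The constants are rounded standard values; the script only confirms the algebra u·v = c².

```python
# Illustration of Eqs. (1)-(4): the de Broglie wave speed u = c^2 / v,
# so that in natural units u_nat = 1 / v_nat.
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
m0 = 9.109e-31       # electron rest mass, kg

v = 0.6 * c                                  # arbitrary particle speed
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
M = gamma * m0                               # relativistic mass

nu = M * c**2 / h                            # Eq. (1): frequency of the wave
lam = h / (M * v)                            # Eq. (2): de Broglie wavelength
u = nu * lam                                 # Eq. (3): speed of the associated wave

print(u / c, 1.0 / (v / c))                  # equal: u_nat = 1 / v_nat, Eq. (4)
print(abs(u * v - c**2) / c**2)              # ~0: u * v = c^2
```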
While the particle has a definite location in the former reference frame, the associated wave, being monochromatic, has infinite extent. In the temporal reference frame it appears as infinite repetition. We often come across situations where a change of the coordinate frame, say, from the rectangular to the polar, facilitates the mathematical treatment. In such cases, the same geometrical form—or more generally, the space-time configuration, namely, motion—takes on different mathematical forms in the different coordinate frames. In the present context we have the converse situation, wherein different coordinate frames engender different space-time configurations from the same underlying reality (see Appendix II). In other words, a change of coordinate frames transforms one physical object (space-time configuration) into an apparently different physical object. Time and again we find the theorists being compelled to resort to similar transformations (without, of course, the benefit of the insight given by the Reciprocal System). Consider, for example, the phenomenon of diffraction of particles/waves by crystal lattices. Here they customarily work out the interaction in terms of the wave vector k and the reciprocal lattice, instead of the wavelength λ and the direct lattice respectively. The quantity k = 2π / λ is called the wave number. The vector with modulus k and an imputed direction is the wave vector k. From Equation 2 it can be seen that the wave vector represents momentum. If a1, a2 and a3 are the sides of the unit cell of a crystal lattice, then the array of points drawn with unit cell sides b1 = 2π /a1, b2 = 2π /a2 and b3 = 2π /a3 is called the reciprocal lattice. Without genuine insight, it is regarded as the invariant geometrical object whose properties are fundamental in the theory of solids. However, from the Reciprocal System we know that in solids the motion equilibrium is in the Time Region, where space is replaced by equivalent space (reciprocal space). Therefore we can readily see the rationale in adopting the wave vector (reciprocal length) and the reciprocal lattice in place of the wavelength and the direct lattice respectively. The Uncertainty Principle The quantum theorists, being uninformed about the existence of the Time Region, naturally thought that these waves, associated with the particles, exist in the space of the conventional reference system, while the truth is that they exist in the equivalent space of the Time Region. Now a particle is localized whereas its associated wave is spread out infinitely. Since the theorists have been mistaking that both the particle and the associated wave exist in the space of the conventional reference frame, they thought if Δx is the region in which the particle is located then it is reasonable for the wave too to be limited to the same extent Δx. So they took recourse to the concept of wave packet. The latter is a superposition of plane waves, with their wave numbers in the range Δk centered around the de Broglie wave number k (= 2π /λ) and producing a resultant wave whose amplitude is non-zero only for a space of Δx, equal to the “size” of the particle. They then identify the wave packet, rather than the original monochromatic wave, with the particle. The so-called uncertainty principle stems from this procedure, because the range of size Δx, and the range of wave number Δk, of the waves composing the wave packet, are inversely related as could be seen from Fourier analysis. Δx ≅ 1 / Δk Using Eq.(2) we have Δx. 
Δp ≅ h / 2 π which is the conventional statement of the uncertainty principle. But now, one realizes that while the particle is localized in space, it does not entail that the associated wave is also to be somehow localized in space, since the latter is to be reckoned from the point of view of the three-dimensional temporal reference frame and not the spatial reference frame. It may be a practical difficulty to measure both the location and the momentum of a system of atomic dimensions with unlimited accuracy simultaneously. But the conclusion drawn by the theorists from the uncertainty principle that a system of atomic dimensions does not possess these properties of precise location and precise momentum simultaneously can be seen to be invalid. As Larson rightly points out, conclusions such as these are applicable only to the theorists’ model, not to the actual system. The uncertainty principle is merely the statement of the fact that the characteristic length belonging to space, namely Δx cm, and the characteristic length belonging to equivalent space, namely Δk cm-1, are reciprocally related (Equation 5). The Probability Interpretation The next thing to be recognized is that the wave information is not to be visualized as mapped out in the space of the conventional spatial reference system. The reference frame for the wave is a temporal manifold. As creatures of the material sector we have no direct access to the three-dimensional temporal reference frame: rather we are anchored to the three-dimensional spatial reference frame. But fortunately, we can accomplish the equivalent of the transformation from the spatial to the temporal frame by the contrivance of adopting the wave picture in place of that of the particle. It must continually be borne in mind that the three-dimensional spatial manifold being used in this context is so used as a temporal analogue. This is why the wave function (specifically, the square of the amplitude) takes on the probability interpretation. The action itself is unambiguous and precise, but since it takes place in the temporal reference frame, the outcome in the three-dimensional spatial reference frame is governed by chance and therefore statistical. The randomness of the radioactive disintegration is another example to the point. When the total mass (rotational + vibrational) of the atom builds up to the upper zero point for rotation, the time-zero as we might call, the (excess) motion reverts to the linear status and is jettisoned as radiation or other particles. Since it is the result of reaching the time-zero point the action is in time instead of space. The radioactive disintegration proceeds continuously and contiguously in three-dimensional time. But since locations in the three-dimensional temporal frame are only randomly connected to the locations in the three-dimensional spatial frame, the apparent disintegration of the atoms (as observed from the conventional spatial standpoint) seems utterly random. Again the interference of light is another example. The crests and troughs of the resultant wave in the two-slit experiment coincide respectively with the regions where the maximum and the minimum number of photons reach. But if the beam intensity is very low, say only a few photons are passing the slits, then all that we can say is that a photon has a greater likelihood of arriving at the location indicated by the wave crest rather than at any other place. In other words, the wave (square of the amplitude) takes on a probability interpretation. 
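The reciprocal relation Δx ≅ 1/Δk invoked in the uncertainty discussion above is the standard Fourier width relation. A minimal numerical sketch, assuming NumPy is available: build a Gaussian wave packet around a carrier wave number and measure the r.m.s. widths in x and in k; for a Gaussian the product comes out as exactly 1/2, i.e. of order unity, which is all the argument above requires.

```python
# Fourier-width illustration of Delta_x * Delta_k ~ 1 for a Gaussian wave packet.
import numpy as np

N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

k0, sigma = 5.0, 2.0                       # carrier wave number and spatial width
psi = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)   # Gaussian packet

px = np.abs(psi) ** 2                      # probability density in x (unnormalized)
px /= px.sum()
mean_x = np.sum(x * px)
dx_rms = np.sqrt(np.sum((x - mean_x) ** 2 * px))

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)    # wave numbers of the discrete Fourier modes
pk = np.abs(np.fft.fft(psi)) ** 2          # probability density in k (unnormalized)
pk /= pk.sum()
mean_k = np.sum(k * pk)
dk_rms = np.sqrt(np.sum((k - mean_k) ** 2 * pk))

print(dx_rms, dk_rms, dx_rms * dk_rms)     # product ~ 0.5 for a Gaussian packet
```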
This is also precisely the reason why the theorists find some of these forces to be non-local in nature—a totally non-classical phenomenon—namely, that they originate in the Time Region and the connection between the locations in three-dimensional time and the locations in three-dimensional space is random. We have discussed this point in connection with the phenomena of ferromagnetism7 and superconductivity.8 Wave Mechanics without the Nucleus In The Case against the Nuclear Atom6 Larson advances arguments to establish that the concept of the nucleus of the atom is untenable. He points out that, in fact, the “size” of the nucleus obtained by the scattering experiments is rather the size of the atom itself. Our calculations in the next section corroborate this. While Larson’s confutation of the nuclear concept proceeds from his original arguments, his criticism of the Quantum Theory, given in the same work, was based entirely on citations from other experts in the field, including those of the pioneers of the Theory. Larson himself does not directly analyze or comment upon any part of the Quantum Theory or the Wave Mechanics. And all those criticisms he quotes deal with the epistemological difficulties only—such as the “lack of rationality,” etc. which we mentioned at the outset—none deal with the mathematical aspects. Now since we realize that the entire confusion in the area arises from the fact that the theorists do not distinguish between the space of the conventional reference system and the equivalent space of the Time Region (of which they do not know), if we set this right by explicitly recognizing that the associated wave is reckoned from the three-dimensional temporal reference frame, we would have achieved much progress. Since according to the Reciprocal System there is no nucleus, we need to give new interpretation to the energy term occurring in the Schrödinger equation for the wave. It cannot be regarded as the energy level of an orbiting electron. But as we shall see below, this can be treated as the energy level of the atom itself. The Size of the Atom Larson6 has pointed out that as the three-dimensional motion that constitutes the atom extends in the Time Region, its measured size in the time-space region (namely, the conventional three-dimensional spatial frame) would be much smaller than one natural unit of space, snat. It is reduced by the inter-regional ratio, 156.444, which was calculated earlier9 as the number of degrees of freedom in the Time Region, and 8, which is the number of degrees of freedom in the time-space region. Since the atomic rotation is three-dimensional, the cube of 156.444 is the applicable value. So the measured atomic radius would be the following snat/(8 * 156.4443) = 1.4883×10-13 cm (adopting snat = 4.558816×10-6 cm from Larson[10]). Since actually it is the volume with which the equation is concerned, rather than the length (radius), there is an additional geometrical factor, f, relating the volume of a cube (of side f*x) with that of a sphere (of diameter x) given by (f * x)3 = π * x3/6 which gives f = 0.806. Adopting this, the measured radius, based on the natural unit of volume concerned, would be f * 1.4883×10-13 cm = 1.1995×10-13 cm But this is specifically the measured radius of an atom of unit atomic weight. 
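The arithmetic of the estimate above can be checked directly. The short script below uses only the values quoted in the text (s_nat from Larson, the inter-regional ratio 156.444, the factor 8, and the cube-to-sphere factor f) and reproduces the 1.4883×10⁻¹³ cm and 1.1995×10⁻¹³ cm figures.

```python
# Re-evaluating the size-of-the-atom estimate quoted in the text.
import math

s_nat = 4.558816e-6          # natural unit of space, cm (value quoted from Larson)
irr = 156.444                # inter-regional ratio

r1 = s_nat / (8 * irr**3)    # radius from the cube of the inter-regional ratio
print(r1)                    # ~1.4883e-13 cm

f = (math.pi / 6) ** (1 / 3) # cube-to-sphere volume factor: (f*x)**3 = pi*x**3/6
print(f)                     # ~0.806

print(f * r1)                # ~1.1995e-13 cm, the radius for unit atomic weight
```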
If the atomic weight of the atom is A units, then the measured radius of the atom turns out to be rA = 1.2 * A1/3 fm As can be seen, this agrees well with the results obtained from the scattering experiments for the so-called nuclear radius. This therefore confirms Larson’s view that the experimenters are confusing the atom with the nucleus. The Region of One-dimensional Motion We recall that the atom is constituted of three rotations a-b-c. “a” and “b” are two-dimensional rotations (three-dimensional motion) in two of the scalar dimensions, and “c” is the one-dimensional reverse rotation in the third scalar dimension. Since this one-dimensional rotation is not the basic rotation of the atom, the inter-regional ratio applicable to this is the purely rotational factor 128. As the degrees of freedom in the time-space region is 8 as already pointed out, the range of sizes associated with the one-dimensional rotation in the Time Region is snat/(8 * 128) = 4.45×10-9 cm Hence we can expect the discrete speeds which exist within this spatial range, as far as the one-dimensional type of rotation is concerned, to be part of the atomic structure and the origin of the energy levels that explain the line spectra. Our preliminary study suggests that further prospects for the understanding of the spectroscopic data lie in this zone of one-dimensional rotation of the Time Region. It is shown that while the Wave Mechanics has been very successful and accurate mathematically, it is fraught with some fundamental errors. A review of the latter in the light of the Reciprocal System of theory shows that the principal stumbling block was the ignorance of the existence of the Time Region and its peculiar characteristics. Knowledge of the Reciprocal System enables us to recognize two crucial points: (i) that the wave associated with a moving particle, in systems of atomic dimensions, exists in the equivalent space of the Time Region; and (ii) that the switching from the particle view to the wave view is equal in significance to shifting from the standpoint of the three-dimensional spatial reference frame to that of the three-dimensional temporal reference frame. This recognition not only throws new light on the intriguing wave-particle duality, but also corrects the conceptual error that eventually led the theorists to the wrong conclusion that the world of the very small does not conform to the rational laws that are applicable to the macroscopic world. It is shown that the uncertainty principle does not stem from the intrinsic nature of the atomic phenomena, as the theorists would have us believe, but is rather the result of gratuitously assuming that the wave associated with a moving particle is spatially co-extensive with the particle. The probability connotation of the wave function is shown to arise from the two facts that the wave is existent in the three-dimensional temporal manifold, and that locations in the three-dimensional temporal manifold and the three-dimensional spatial manifold respectively are randomly connected. The non-local nature of the forces in the Time Region also follows from this. Calculations based on the inter-regional ratios applicable confirm Larson’s assertion that the measured size of the atom is in the femtometer range and hence the actual atom is being confused with the non-existent nucleus. 
It is suggested that the investigation of the one-dimensional motion zone of the Time Region, in conjunction with the adoption of the Wave Mechanics corrected of its conceptual errors, will lead to greater understanding of the atomic structure and thereby pave the way for the complete explanation by the Reciprocal System, of the spectroscopic data, as well as the other recalcitrant problems connected with the properties of rare-earths etc. 1. Larson, D.B., The Structure of the Physical Universe, North Pacific Pub., Portland, Oregon, USA, 1959, pp. 122-125 2. Gilroy, D.M., “A Graphical Comparison of the Old and New Periodic Tables,” Reciprocity, Vol. XIII, No. 3, Winter 1985, pp. 1-27 3. Sammer, J., “The Old and New Periodic Tables - Again,” Reciprocity, Vol. XX, No. 4, Winter 1991-92, pp. 7-13 4. Tucek, R.V., “New Periodic Table,” Reciprocity, Vol. XXI, No. 1, Spring 1992, p. 20 5. Kirk, T., “Periodic Table, Revisited,” Reciprocity, Vol. XXI, No. 2, Autumn 1992, pp. 10-13 6. Larson, D.B., The Case Against the Nuclear Atom, North Pacific Pub., Portland, Oregon, USA, 1963 7. Nehru, K.V.K., “Is Ferromagnetism a Co-magnetic Phenomenon?” Reciprocity, Vol. XIX, No. 1, Spring 1990, pp. 6-8 8. Nehru, K.V.K., “Superconductivity: A Time Region Phenomenon,” Reciprocity, Vol. XIX, No. 3, Autumn 1990, pp. 1-6 9. Nehru, K.V.K., “The Inter-regional Ratio,” Reciprocity, Vol. XIV, No. 2-3, Winter 1985-86, pp. 5-9 10. Larson, D.B., Nothing but Motion, North Pacific Pub., Portland, Oregon, USA, 1979, p. 160 Appendix I According to the Reciprocal System space and time occur in discrete units only. If two atoms approach each other in space, they cannot come any nearer than one natural unit of space, snat. Within one natural unit of space no decrease in space is possible since one natural unit is the minimum that can exist. However, since the basic constituents of the physical universe are units of motion, or speed, in which space and time are reciprocally related, an increase in time (t) with space constant is equivalent to a decrease of space (1/t). This is referred to as the equivalent space in the Reciprocal System. Therefore, though the atoms cannot approach each other nearer than one natural unit of space, they can do so in the equivalent space by moving outward in time. As all changes in this region inside unit space are in time only, it is referred to as the Time Region. Appendix II Consider, for instance, a wave motion in the three-dimensional temporal reference frame, of amplitude given by ρ = A + B cos θ with A and B constants, and θ the time coordinate. In order to return to the spatial reference frame, we (i) transform the time coordinate θ into φ , a rotational space coordinate—rotational because all our time measurements are based on cyclical processes; and (ii) transform ρ into 1/r, since equivalent space and actual space are reciprocally related. We then find that the above equation (of the wave configuration) becomes the equation of an ellipse (or hyperbola) that represents the locus of a planetary mass point revolving around a central force 1/r = A + B cos φ where A / (A2 - B2) is the semi-major axis and B/A the eccentricity. (It must be cautioned that though the above example illustrates the point in question, it is not a complete analogy.)
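As a numerical check of the Appendix II statement that 1/r = A + B cos φ is a conic with semi-major axis A/(A² − B²) and eccentricity B/A, the sketch below (assuming NumPy) samples the curve for one arbitrary choice of A and B and compares the measured values with those expressions.

```python
# Numerical check of the conic stated in Appendix II: 1/r = A + B*cos(phi).
import numpy as np

A, B = 1.0, 0.4                      # arbitrary constants with A > B > 0 (ellipse)
phi = np.linspace(0.0, 2 * np.pi, 100001)
r = 1.0 / (A + B * np.cos(phi))

r_min, r_max = r.min(), r.max()      # perihelion and aphelion distances
a_measured = 0.5 * (r_min + r_max)   # semi-major axis
e_measured = (r_max - r_min) / (r_max + r_min)   # eccentricity

print(a_measured, A / (A**2 - B**2)) # both ~1.1905
print(e_measured, B / A)             # both ~0.4
```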
Open access peer-reviewed chapter

Properties of Macroscopic Quantum Effects and Dynamic Natures of Electrons in Superconductors

By Pang Xiao-feng

Submitted: October 27th 2010. Reviewed: February 28th 2011. Published: July 18th 2011. DOI: 10.5772/18380

1. Introduction

So-called macroscopic quantum effects (MQE) refer to quantum phenomena that occur on a macroscopic scale. Such effects are obviously different from the microscopic quantum effects at the microscopic scale as described by quantum mechanics. It has been experimentally demonstrated [1-17] that macroscopic quantum effects occur in superconductors. Superconductivity is a physical phenomenon in which the resistance of a material suddenly vanishes when its temperature is lower than a certain value, Tc, which is referred to as the critical temperature of superconducting materials. Modern theories [18-21] tell us that superconductivity arises from the resistance-free motion of superconductive electrons. In such a case we want to know: How is the macroscopic quantum effect formed? What is its essence? What are the properties and rules of motion of superconductive electrons in a superconductor? And, as well, the answers to other key questions. Up to now these problems have not been studied systematically. We will study these problems in this chapter.

2. Experimental observation of the properties of macroscopic quantum effects in superconductors

(1) Superconductivity of materials. As is known, superconductors can be pure elements, compounds or alloys. To date, more than 30 single elements, and up to a hundred alloys and compounds, have been found to possess the characteristics [1-17] of superconductors. When T ≤ Tc, any electric current in a superconductor will flow forever without being damped. Such a phenomenon is referred to as perfect conductivity. Moreover, it was observed through experiments that, when a material is in the superconducting state, any magnetic flux in the material is completely expelled, resulting in zero magnetic field inside the superconducting material; similarly, a magnetic flux applied by an external magnetic field cannot penetrate into superconducting materials. Such a phenomenon is called perfect anti-magnetism, or the Meissner effect. Meanwhile, there are also other features associated with superconductivity, which are not presented here.

How can this phenomenon be explained? After more than 40 years of effort, Bardeen, Cooper and Schrieffer proposed the new idea of Cooper pairs of electrons and established the microscopic theory of superconductivity at low temperatures, the BCS theory [18-21], in 1957, on the basis of the mechanism of the electron-phonon interaction proposed by Fröhlich [22-23]. According to this theory, electrons with opposite momenta and antiparallel spins form pairs when their attraction, due to the electron-phonon interaction in these materials, overcomes the Coulomb repulsion between them. The so-called Cooper pairs are condensed to a minimum energy state, resulting in quantum states which are highly ordered and coherent over the long range, and in which there is essentially no energy exchange between the electron pairs and the lattice. Thus, the electron pairs are no longer scattered by the lattice but flow freely, resulting in superconductivity. The electron pairs in a superconductive state are somewhat similar to a diatomic molecule, but are not as tightly bound as a molecule.
The size of an electron pair, which gives the coherent length, is approximately 10−4 cm. A simple calculation shows that there can be up to 106 electron pairs in a sphere of 10−4 cm in diameter. There must be mutual overlap and correlation when so many electron pairs are brought together. Therefore, perturbation to any of the electron pairs would certainly affect all others. Thus, various macroscopic quantum effects can be expected in a material with such coherent and long range ordered states. Magnetic flux quantization, vortex structure in the type-II superconductors, and Josephson effect [24-26] in superconductive junctions are only some examples of the phenomena of macroscopic quantum mechanics. (2) The magnetic flux structures in superconductor. Consider a superconductive ring. Assume that a magnetic field is applied at T >Tc, then the magnetic flux lines ϕ0produced by the external field pass through and penetrate into the body of the ring. We now lower the temperature to a value below Tc, and then remove the external magnetic field. The magnetic induction inside the body of circular ring equals zero (B= 0) because the ring is in the superconductive state and the magnetic field produced by the superconductive current cancels the magnetic field, which was within the ring. However, part of the magnetic fluxes in the hole of the ring remain because the induced current is in the ring vanishes. This residual magnetic flux is referred to as “the frozen magnetic flux”. It has been observed experimentally, that the frozen magnetic flux is discrete, or quantized. Using the macroscopic quantum wave function in the theory of superconductivity, it can be shown that the magnetic flux is established by Φ'=nϕ0(n=0,1,2,…), where ϕ0=hc/2e=2.07×10-15 Wb is the flux quantum, representing the flux of one magnetic flux line. This means that the magnetic fluxes passing through the hole of the ring can only be multiples of ϕ0[1-12]. In other words, the magnetic field lines are discrete. We ask, “What does this imply?” If the magnetic flux of the applied magnetic field is exactly n, then the magnetic flux through the hole is nϕ0, which is not difficult to understand. However, what is the magnetic flux through the hole if the applied magnetic field is (n+1/4)ϕ0? According to the above, the magnetic flux cannot be (n+1/4)ϕ0. In fact, it should be nϕ0. Similarly, if the applied magnetic field is (n+3/4)ϕ0and the magnetic flux passing through the hole is not (n+3/4)ϕ0, but rather (n+1)ϕ0, therefore the magnetic fluxes passing through the hole of the circular ring are always quantized. An experiment conducted in 1961 surely proves this to be so, indicating that the magnetic flux does exhibit discrete or quantized characteristics on a macroscopic scale. The above experiment was the first demonstration of the macroscopic quantum effect. Based on quantization of the magnetic flux, we can build a “quantum magnetometer” which can be used to measure weak magnetic fields with a sensitivity of 3×10-7 Oersted. A slight modification of this device would allow us to measure electric current as low as 2.5×10-9 A. (3) Quantization of magnetic-flux lines in type-II superconductors. The superconductors discussed above are referred to as type-I superconductors. This type of superconductor exhibits a perfect Maissner effect when the external applied field is higher than a critical magnetic valueHc. 
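The flux quantum quoted above can be recomputed from the fundamental constants. The sketch below works in SI units, where the flux quantum for Cooper pairs is h/(2e); this reproduces the 2.07×10⁻¹⁵ Wb quoted in the text. It also illustrates, purely as arithmetic, the trapping of an applied flux as an integer number of quanta described for the ring.

```python
# Flux quantum and flux quantization in a superconducting ring (SI units).
h = 6.62607015e-34     # Planck constant, J*s
e = 1.602176634e-19    # elementary charge, C

phi0 = h / (2 * e)     # flux quantum for carriers of charge 2e (Cooper pairs)
print(phi0)            # ~2.0678e-15 Wb, the 2.07e-15 Wb quoted in the text

# As described in the text, an applied flux of (n + 1/4) phi0 is trapped as
# n*phi0, and (n + 3/4) phi0 as (n + 1)*phi0, i.e. the nearest whole quantum.
for applied in (7.25 * phi0, 7.75 * phi0):
    print(round(applied / phi0))   # 7, then 8 trapped flux quanta
```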
There exists other types of materials such as the NbTi alloy and Nb3Sn compounds in which the magnetic field partially penetrates inside the material when the external field His greater than the lower critical magnetic fieldHc1, but less than the upper critical field Hc2[1-7]. This kind of superconductor is classified as type-II superconductors and is characterized by a Ginzburg-Landau parameter greater than 1/2. Studies using the Bitter method showed that the penetration of a magnetic field results in some small regions changing from superconductive to normal state. These small regions in normal state are of cylindrical shape and regularly arranged in the superconductor, as shown in Fig.1. Each cylindrical region is called a vortex (or magnetic field line)[1-12]. The vortex lines are similar to the vortex structure formed in a turbulent flow of fluid. Both theoretical analysis and experimental measurements have shown that the magnetic flux associated with one vortex is exactly equal to one magnetic flux quantumϕ0, when the applied fieldHHc1, the magnetic field penetrates into the superconductor in the form of vortex lines, increased one by one. For an ideal type-II superconductor, stable vortices are distributed in triagonal pattern, and the superconducting current and magnetic field distributions are also shown in Fig. 1. For other, non-ideal type-II superconductors, the triagonal pattern of distribution can be also observed in small local regions, even though its overall distribution is disordered. It is evident that the vortex-line structure is quantized and this has been verified by many experiments and can be considered a result of the quantization of magnetic flux. Furthermore, it is possible to determine the energy of each vortex line and the interaction energy between the vortex lines. Parallel magnetic field lines are found to repel each other while anti-parallel magnetic lines attract each other. (4) The Josephson phenomena in superconductivity junctions [24-26]. As it is known in quantum mechanics, microscopic particles, such as electrons, have a wave property and that can penetrate through a potential barrier. For example, if two pieces of metal are separated by an insulator of width of tens of angstroms, an electron can tunnel through the insulator and travel from one metal to the other. If voltage is applied across the insulator, a tunnel current can be produced. This phenomenon is referred to as a tunneling effect. If two superconductors replace the two pieces of metal in the above experiment, a tunneling current can also occur when the thickness of the dielectric is reduced to about 30A0. However, this effect is fundamentally different from the tunneling effect discussed above in quantum mechanics and is referred to as the Josephson effect. Evidently, this is due to the long-range coherent effect of the superconductive electron pairs. Experimentally, it was demonstrated that such an effect could be produced via many types of junctions involving a superconductor, such as superconductor-metal-superconductor junctions, superconductor-insulator- superconductor junctions, and superconductor bridges. These junctions can be considered as superconductors with a weak link. On the one hand, they have properties of bulk superconductors, for example, they are capable of carrying certain superconducting currents. On the other hand, these junctions possess unique properties, which a bulk superconductor does not. Some of these properties are summarized in the following. 
(A) When a direct current (dc) passing through a superconductive junction is smaller than a critical value Ic, the voltage across the junction does not change with the current. The critical current Ic can range from a few tens of μA to a few tens of mA. (B) If a constant voltage is applied across the junction and the current passing through the junction is greater than Ic, a high frequency sinusoidal superconducting current occurs in the junction. The frequency is given by υ=2eV/h, in the microwave and far-infrared regions of (5-1000)×109Hz. The junction radiates a coherent electromagnetic wave with the same frequency. This phenomenon can be explained as follows: The constant voltage applied across the junction produces an alternating Josephson current that, in turn, generates an electromagnetic wave of frequency, υ. The wave propagates along the planes of the junction. When the wave reaches the surface of the junction (the interface between the junction and its surrounding), part of the electromagnetic wave is reflected from the interface and the rest is radiated, resulting in the radiation of the coherent electromagnetic wave. The power of radiation depends on the compatibility between the junction and its surrounding. (C) When an external magnetic field is applied over the junction, the maximum dc current, Ice, is reduced due to the effect of the magnetic field. Furthermore, Ic changes periodically as the magnetic field increases. The IcHcurve resembles the distribution of light intensity in the Fraunhofer diffraction experiment, and the latter is shown in Fig. 2. This phenomenon is called quantum diffraction of the superconductivity junction. Figure 1. Current and magnetic field distributionseffect in in a type-II superconductor. Figure 2. Quantum diffractionsuperconductor junction (D) When a junction is exposed to a microwave of frequency, υ, and if the voltage applied across the junction is varied, it can be seen that the dc current passing through the junction increases suddenly at certain discrete values of electric potential. Thus, a series of steps appear on the dc I − V curve, and the voltage at a given step is related to the frequency of the microwave radiation by nυ=2eVn/h(n=0,1,2,3…). More than 500 steps have been observed in experiments. Josephson first derived these phenomena theoretically and each was experimentally verified subsequently. All these phenomena are, therefore, called Josephson effects [24-26]. In particular, (1) and (3) are referred to as dc Josephson effects while (2) and (4) are referred to as ac Josephson effects. Evidently, Josephson effects are macroscopic quantum effects, which can be explained well by the macroscopic quantum wave function. If we consider a superconducting junction as a weakly linked superconductor, the wave functions of the superconducting electron pairs in the superconductors on both sides of the junction are correlated due to a definite difference in their phase angles. This results in a preferred direction for the drifting of the superconducting electron pairs, and a dc Josephson current is developed in this direction. If a magnetic field is applied in the plane of the junction, the magnetic field produces a gradient of phase difference, which makes the maximum current oscillate along with the magnetic field, and the radiation of the electromagnetic wave occur. If a voltage is applied across the junction, the phase difference will vary with time and results in the Josephson effect. 
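The ac Josephson relation υ = 2eV/h in (B) and the step condition nυ = 2eVn/h in (D) can be evaluated directly. In the sketch below the 10 μV bias and the 10 GHz microwave frequency are arbitrary example values, not taken from the text.

```python
# ac Josephson relation: nu = 2*e*V/h, and the step voltages V_n = n*h*nu/(2*e).
h = 6.62607015e-34     # Planck constant, J*s
e = 1.602176634e-19    # elementary charge, C

V = 10e-6                       # arbitrary dc bias of 10 microvolts
nu = 2 * e * V / h
print(nu / 1e9)                 # ~4.836 GHz radiated at a 10 uV bias

nu_mw = 10e9                    # arbitrary applied microwave frequency, 10 GHz
for n in range(1, 4):
    print(n, n * h * nu_mw / (2 * e) * 1e6)   # step voltages in microvolts, ~20.7 uV apart
```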
In view of this, the change in the phase difference of the wave functions of superconducting electrons plays an important role in Josephson effect, which will be discussed in more detail in the next section. The discovery of the Josephson effect opened the door for a wide range of applications of superconductor theory. Properties of superconductors have been explored to produce superconducting quantum interferometer–magnetometer, sensitive ammeter, voltmeter, electromagnetic wave generator, detector, frequency-mixer, and so on. 3. The properties of boson condensation and spontaneous coherence of macroscopic quantum effects 3.1. A nonlinear theoretical model of theoretical description of macroscopic quantum effects From the above studies we know that the macroscopic quantum effect is obviously different from the microscopic quantum effect, the former having been observed for physical quantities, such as, resistance, magnetic flux, vortex line, and voltage, etc. In the latter, the physical quantities, depicting microscopic particles, such as energy, momentum, and angular momentum, are quantized. Thus it is reasonable to believe that the fundamental nature and the rules governing these effects are different. We know that the microscopic quantum effect is described by quantum mechanics. However, the question remains relative to the definition of what are the mechanisms of macroscopic quantum effects? How can these effects be properly described? What are the states of microscopic particles in the systems occurring related to macroscopic quantum effects? In other words, what are the earth essences and the nature of macroscopic quantum states? These questions apparently need to be addressed. We know that materials are composed of a great number of microscopic particles, such as atoms, electrons, nuclei, and so on, which exhibit quantum features. We can then infer, or assume, that the macroscopic quantum effect results from the collective motion and excitation of these particles under certain conditions such as, extremely low temperatures, high pressure or high density among others. Under such conditions, a huge number of microscopic particles pair with each other condense in low-energy state, resulting in a highly ordered and long-range coherent. In such a highly ordered state, the collective motion of a large number of particles is the same as the motion of “single particles”, and since the latter is quantized, the collective motion of the many particle system gives rise to a macroscopic quantum effect. Thus, the condensation of the particles and their coherent state play an essential role in the macroscopic quantum effect. What is the concept of condensation? On a macroscopic scale, the process of transforming gas into liquid, as well as that of changing vapor into water, is called condensation. This, however, represents a change in the state of molecular positions, and is referred to as a condensation of positions. The phase transition from a gaseous state to a liquid state is a first order transition in which the volume of the system changes and the latent heat is produced, but the thermodynamic quantities of the systems are continuous and have no singularities. The word condensation, in the context of macroscopic quantum effects has its’ special meaning. 
The condensation concept being discussed here is similar to the phase transition from gas to liquid, in the sense that the pressure depends only on temperature, but not on the volume noted during the process, thus, it is essentially different from the above, first-order phase transition. Therefore, it is fundamentally different from the first-order phase transition such as that from vapor to water. It is not the condensation of particles into a high-density material in normal space. On the contrary, it is the condensation of particles to a single energy state or to a low energy state with a constant or zero momentum. It is thus also called a condensation of momentum. This differs from a first-order phase transition and theoretically it should be classified as a third order phase transition, even though it is really a second order phase transition, because it is related to the discontinuity of the third derivative of a thermodynamic function. Discontinuities can be clearly observed in measured specific heat, magnetic susceptibility of certain systems when condensation occurs. The phenomenon results from a spontaneous breaking of symmetries of the system due to nonlinear interaction within the system under some special conditions such as, extremely low temperatures and high pressures. Different systems have different critical temperatures of condensation. For example, the condensation temperature of a superconductor is its critical temperatureTc, and from previous discussions[27-32]. From the above discussions on the properties of superconductors, and others we know that, even though the microscopic particles involved can be either Bosons or Fermions, those being actually condensed, are either Bosons or quasi-Bosons, since Fermions are bound as pairs. For this reason, the condensation is referred to as Bose-Einstein condensation[33-36] since Bosons obey the Bose-Einstein statistics. Properties of Bosons are different from those of Fermions as they do not follow the Pauli exclusion principle, and there is no limit to the number of particles occupying the same energy levels. At finite temperatures, Bosons can distribute in many energy states and each state can be occupied by one or more particles, and some states may not be occupied at all. Due to the statistical attractions between Bosons in the phase space (consisting of generalized coordinates and momentum), groups of Bosons tend to occupy one quantum energy state under certain conditions. Then when the temperature of the system falls below a critical value, the majority or all Bosons condense to the same energy level (e.g. the ground state), resulting in a Bose condensation and a series of interesting macroscopic quantum effects. Different macroscopic quantum phenomena are observed because of differences in the fundamental properties of the constituting particles and their interactions in different systems. In the highly ordered state of the phenomena, the behavior of each condensed particle is closely related to the properties of the systems. In this case, the wave functionϕ=feiθor ϕ=ρeiθof the macroscopic state[33-35], is also the wave function of an individual condensed particle. The macroscopic wave function is also called the order parameter of the condensed state. This term was used to describe the superconductive states in the study of these macroscopic quantum effects. 
The essential features and fundamental properties of macroscopic quantum effect are given by the macroscopic wave function ϕand it can be further shown that the macroscopic quantum states, such as the superconductive states are coherent and are Bose condensed states formed through second-order phase transitions after the symmetry of the system is broken due to nonlinear interaction in the system. In the absence of any externally applied field, the Hamiltonian of a given macroscopic quantum system can be represented by the macroscopic wave function ϕand written as Here H’=H presents the Hamiltonian density function of the system, the unit system in which m=h=c=1 is used here for convenience. If an externally applied electromagnetic field does exist, the Hamiltonian given above should be replaced by or, equivalently whereFji=jAitAjis the covariant field intensity, H˜=×Ais the magnetic field intensity, e is the charge of an electron, and e*=2e, Ais the vector potential of the electromagnetic field, αand λcan be said to be some of the interaction constants. The above Hamiltonians in Eqs.(1) and (2) have been used in studying superconductivity by many scientists, including Jacobs de Gennes [37], Saint-Jams [38], Kivshar [39-40], Bullough [41-42], Huepe [43], Sonin [44], Davydov [45], et al., and they can be also derived from the free energy expression of a superconductive system given by Landau et al [46-47]. As a matter of fact, the Lagrangian function of a superconducting system can be obtained from the well-known Ginzberg-Landau (GL) equation [47-54] using the Lagrangian method, and the Hamiltonian function of a system can then be derived using the Lagrangian approach. The results, of course, are the same as Eqs. (1) and (2). Evidently, the Hamiltonian operator corresponding to Eqs. (1) and (2) represents a nonlinear function of the wave function of a particle, and the nonlinear interaction is caused by the electron-phonon interaction and due to the vibration of the lattice in BCS theory in the superconductors. Therefore, it truly exists. Evidently, the Hamiltonians of the systems are exactly different from those in quantum mechanics, and a nonlinear interaction related to the state of the particles is involved in Eqs. (1) –(2). Hence, we can expect that the states of particles depicted by the Hamiltonian also differ from those in quantum mechanics, and the Hamiltonian can describe the features of macroscopic quantum states including superconducting states. These problems are to be treated in the following pages. Evidently, the Hamiltonians in Eqs. (1) and (2) possess the U (1) symmetry. That is, they remain unchanged while undergoing the following transformation: where Qjis the charge of the particle,θ is a phase and, in the case of one dimension, each term in the Hamiltonian in Eq. (1) or Eq. (2) contains the product of theϕj(x,t)s, then we can obtain: Since charge is invariant under the transformation and neutrality is required for the Hamiltonian, there must be (Q1 + Q2 + + Qn) = 0 in such a case. Furthermore, since θis independent of x, it is necessary thatϕjeiθQjϕj. Thus each term in the Hamiltonian in Eqs. (1) is invariant under the above transformation, or it possesses the U(1) symmetry[16-17]. If we rewrite Eq. (1) as the following We can see that the effective potential energy,Ueff(ϕ), in Eq. (3) has two sets of extrema, ϕ0=±α/2λand ϕ0=0, but the minimum is located at rather than atϕ0=0. This means that the energy at ϕ0=±α/2λis lower than that at ϕ0=0. 
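A symbolic check of the two sets of extrema just stated, assuming SymPy is available and assuming for the effective potential the quartic form U_eff(φ) = λφ⁴ − αφ². That specific form is inferred here from the quoted extrema ±√(α/2λ) and the gap α²/(4λ), since Eq. (3) itself is not reproduced in this text; it is a sketch of the double-well structure, not of the full Hamiltonian density.

```python
# Double-well check: U_eff = lam*phi**4 - alpha*phi**2 has extrema at phi = 0 and
# phi = +-sqrt(alpha/(2*lam)), and the minima lie alpha**2/(4*lam) below U_eff(0).
import sympy as sp

phi = sp.Symbol('phi', real=True)
alpha, lam = sp.symbols('alpha lam', positive=True)
U = lam * phi**4 - alpha * phi**2

extrema = sp.solve(sp.diff(U, phi), phi)
print(extrema)                               # 0 and +-sqrt(alpha/(2*lam))

phi0 = sp.sqrt(alpha / (2 * lam))
gap = sp.simplify(U.subs(phi, 0) - U.subs(phi, phi0))
print(gap)                                   # alpha**2/(4*lam): the energy gap quoted in the text
```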
Therefore, ϕ0=0 corresponds to the normal ground state, while ϕ0=±α/2λis the ground state of the macroscopic quantum systems. In this case the macroscopic quantum state is the stable state of the system. This shows that the Hamiltonian of a normal state differs from that of the macroscopic quantum state, in which the two ground states satisfy 0|ϕ|00|ϕ|0under the transformation,ϕϕ. That is, they no longer have the U(1) symmetry. In other words,the symmetry of the ground states has been destroyed. The reason for this is evidently due to the nonlinear term λϕ4in the Hamiltonian of the system. Therefore, this phenomenon is referred to as a spontaneous breakdown of symmetry. According to Landau’s theory of phase transition, the system undergoes a second-order phase transition in such a case, and the normal ground stateϕ0==0 is changed to the macroscopic quantum ground stateϕ0=±α/2λ. Proof will be presented in the following example. In order to make the expectation value in a new ground state zero in the macroscopic quantum state, the following transformation [16-17] is done: so that After this transformation, the Hamiltonian density of the system becomes Inserting Eq. (4) into Eq. (7), we haveϕ0|4λϕ022α|ϕ0=0. Consider now the expectation value of the variation δH'/δϕin the ground state, i.e.0|δH'δϕ|0=0, then from Eq. (1), we get After the transformation Eq. (6), it becomes where the terms 0|ϕ3|0and 0|ϕ|0are both zero, but the fluctuation 12λϕ00|ϕ2|0of the ground state is not zero. However, for a homogeneous system, at T=0K, the term 0|ϕ2|0is very small and can be neglected. Then Eq. (9) can be written as Obviously, two sets of solutions, ϕ0=0, andϕ0=±α/2λ, can be obtained from the above equation, but we can demonstrate that the former is unstable, and that the latter is stable. If the displacement is very small, i.e.ϕ0ϕ0+δϕ0=ϕ0', then the equation satisfied by the fluctuation δϕ0is relative to the normal ground state ϕ0=0and is Its’ solution attenuates exponentially indicating that the ground state, ϕ0=0is unstable. On the other hand, the equation satisfied by the fluctuationδϕ0, relative to the ground state ϕ0=±α/2λis2δϕ0+2αδϕ0=0. Its’ solution, δϕ0, is an oscillatory function and thus the macroscopic quantum state ground state ϕ0=±α/2λis stable. Further calculations show that the energy of the macroscopic quantum state ground state is lower than that of the normal state byε0=α2/4λ0.Therefore, the ground state of the normal phase and that of the macroscopic quantum phase are separated by an energy gap of α2/(4λ)so then, at T=0K, all particles can condense to the ground state of the macroscopic quantum phase rather than filling the ground state of the normal phase. Based on this energy gap, we can conclude that the specific heat of the macroscopic quantum systems has an exponential dependence on the temperature, and the critical temperature is given by: Tc=1.14ωpexp[1/(3λ/α)N(0)][16-17]. This is a feature of the second-order phase transition. The results are in agreement with those of the BCS theory of superconductivity. Therefore, the transition from the state ϕ0=0to the state ϕ0=±α/2λand the corresponding condensation of particles are second-order phase transitions. This is obviously the results of a spontaneous breakdown of symmetry due to the nonlinear interaction,λϕ4. In the presence of an electromagnetic field with a vector potentialA, the Hamiltonian of the systems is given by Eq. (2). It still possesses the U (1) symmetry.Since the existence of the nonlinear terms in Eq. 
(2) has been demonstrated, a spontaneous breakdown of symmetry can be expected. Now consider the following transformation: Since 0|ϕi|0=0under this transformation, then the equation (2) becomes We can see that the effective interaction energy of ϕ0is still given by: and is in agreement with that given in Eq. (4). Therefore, using the same argument, we can conclude that the spontaneous symmetry breakdown and the second-order phase transition also occur in the system. The system is changed from the ground state of the normal phase,ϕ0=0to the ground state ϕ0=±α/2λof the condensed phase in such a case. The above result can also be used to explain the Meissner effect and to determine its critical temperature in the superconductor. Thus, we can conclude that, regardless of the existence of any external field macroscopic quantum states, such as the superconducting state, are formed through a second-order phase transition following a spontaneous symmetry breakdown due to nonlinear interaction in the systems. 3.2.The features of the coherent state of macroscopic quantum effects Proof that the macroscopic quantum state described by Eqs. (1) - (2) is a coherent state, using either the second quantization theory or the solid state quantum field theory is presented in the following paragraphs and pages. As discussed above, when δH'/δϕ=0 from Eq. (1), we have It is a time-independent nonlinear Schrödinger equation (NLSE), which is similar to the GL equation. Expandingϕin terms of the creation and annihilation operators, bp+and bp where V¯is the volume of the system. After a spontaneous breakdown of symmetry, ϕ0, the ground-state ofϕ, for the system is no longer zero, butϕ0=±α/2λ. The operation of the annihilation operator on |ϕ0no longer gives zero, i.e. A new field ϕ'can then be defined according to the transformation Eq. (5), where ϕ0is a scalar field and satisfies Eq. (10) in such a case. Evidently, ϕ0can also be expanded into The transformation between the fields ϕand ϕ'is obviously a unitary transformation, that is ϕand ϕ'satisfy the following commutation relation From Eq. (6) we now have0|ϕ'|0=ϕ0'=0. The ground state |ϕ0'of the field ϕ'thus satisfies From Eq. (6), we can obtain the following relationship between the annihilation operator ap of the new fieldϕ'and the annihilation operator bp of the ϕfield Therefore, the new ground state|ϕ0'and the old ground state |ϕ0are related through|ϕ0'=eS|ϕ0. Thus we have According to the definition of the coherent state, equation (25) we see that the new ground state |ϕ0'is a coherent state. Because such a coherent state is formed after the spontaneous breakdown of symmetry of the systems, thus, it is referred to as a spontaneous coherent state. But whenϕ0=0, the new ground state is the same as the old state, which is not a coherent state.The same conclusion can be directly derived from the BCS theory [18-21]. In the BCS theory, the wave function of the ground state of a superconductor is written as whereb^k-k+=A^k+A^-k+. This equation shows that the superconducting ground state is a coherent state. Hence, we can conclude that the spontaneous coherent state in superconductors is formed after the spontaneous breakdown of symmetry. By reconstructing a quasiparticle-operator-free new formulation of the Bogoliubov-Valatin transformation parameter dependence [55], W. S. 
Lin et al [56] demonstrated that the BCS state is not only a coherent state of single-Cooper-pairs, but also the squeezed state of the double-Cooper- pairs, and reconfirmed thus the coherent feature of BCS superconductive state. 3.3. The Boson condensed features of macroscopic quantum effects We will now employ the method used by Bogoliubov in the study of superfluid liquid helium 4He to prove that the above state is indeed a Bose condensed state. To do that, we rewrite Eq. (16) in the following form [12-17] Since the field ϕdescribes a Boson, such as the Cooper electron pair in a superconductor and the Bose condensation can occur in the system, we will apply the following traditional method in quantum field theory, and consider the following transformation: where N0is the number of Bosons in the system andδ(p)={0,ifp01,ifp=0. Substituting Eqs. (27) and (28) into Eq. (1), we can arrive at the Hamiltonian operator of the system as follows Because the condensed density N0/V¯must be finite, it is possible that the higher order terms 0(N0/V¯)and 0(N0/V¯2)may be neglected. Next we perform the following canonical transformation wherevpand upare real and satisfy(up2υp2)=1. This introduces another transformation the following relations can be obtained We will now study two cases to illustrate the concepts. (A) LetMp=0, then it can be seen from Eq. (32) that ηp+is the creation operator of elementary excitation and its energy is given by Using this concept, we can obtain the following form from Eqs. (32) and (34) From Eq. (32), we know that ξp+is not a creation operator of the elementary excitation. Thus, another transformation must be made We can then prove that Now, inserting Eqs. (30), (37)-(38) and Mp=0into Eq. (29), and after some reorganization, we have Both Uand E0are now independent of the creation and annihilation operators of the Bosons. U+E0gives the energy of the ground state. N0can be determined from the condition, δ(U+E0)δN0=0, which gives This is the condensed density of the ground stateϕ0. From Eqs. (36), (37) and (40), thus we can arrive at: These correspond to the energy spectra of ηp+andBp+,respectively, and they are similar to the energy spectra of the Cooper pair and phonon in the BCS theory. Substituting Eq. (42) into Eq. (36), thus we now have: (B) In the case of Mp=0, a similar approach can be used to arrive at the energy spectrum corresponding to ξp+asEp=εp2+α, while that corresponding to Ap+=χpηp++μpη-pisgp=εp2+α, where Based on experiments in quantum statistical physics, we know that the occupation number of the level with an energy ofεp, for a system in thermal equilibrium at temperature T(0)is shown as: wheredenotes Gibbs average, defined as=SP[eΗ/KBT]SP[eΗ/KBT], here SP denotes the trace in a Gibbs statistical description. At low temperatures, orT0K, the majority of the Bosons or Cooper pairs in a superconductor condense to the ground state withp=0. Thereforeb0+b0N0, where N0is the total number of Bosons or Cooper pairs in the system andN01, i.e.b+b=1b0+b0. As can be seen from Eqs. (27) and (28), the number of particles is extremely large when they lie in condensed state, that is to say: Because γ0|ϕ0=0andβ0|ϕ0=0, b0and b0+can be taken to beN0. The average value of ϕϕin the ground state then becomes Substituting Eq. (41) into Eq. (47), we can see that: which is the ground state of the condensed phase, or the superconducting phase, that we have known. 
Thus, the density of states, N0/V¯, of the condensed phase or the superconducting phase formed after the Bose condensation coincides with the average value of the Boson’s (or Copper pair’s) field in the ground state. We can then conclude from the above investigation shown in Eqs. (1) - (2) that the macroscopic quantum state or the superconducting ground state formed after the spontaneous symmetry breakdown is indeed a Bose-Einstein condensed state. This clearly shows the essences of the nonlinear properties of the result of macroscopic quantum effects. In the last few decades, the Bose-Einstein condensation has been observed in a series of remarkable experiments using weakly interacting atomic gases, such as vapors of rubidium, sodium lithium, or hydrogen. Its’ formation and properties have been extensively studied. These studies show that the Bose-Einstein condensation is a nonlinear phenomenon, analogous to nonlinear optics, and that the state is coherent, and can be described by the following NLSE or the Gross-Pitaerskii equation [57-59]: wheret=t/,x=x2m/. This equation was used to discuss the realization of the Bose-Einstein condensation in the d+1dimensions (d=1,2,3)by H. K. Bullough [41-42]. Too, Elyutin et al [60-61]. gave the corresponding Hamiltonian density of a condensate system as follows: where H’=H, the nonlinear parameters of λare defined asλ=2Naa1/a02, with N being the number of particles trapped in the condensed state, a is the ground state scattering length, a0 and a1 are the transverse (y, z) and the longitudinal (x) condensate sizes (without self-interaction) respectively, (Integrations over y and z have been carried out in obtaining the above equation). λis positive for condensation with self-attraction (negative scattering length).The coherent regime was observed in Bose-Einstein condensation in lithium. The specific form of the trapping potential V (x’) depends on the details of the experimental setup. Work on Bose-Einstein condensation based on the above model Hamiltonian were carried out and are reported by C. F. Barenghi et al [31]. It is not surprising to see that Eq. (48) is exactly the same as Eq. (15), corresponding to the Hamiltonian density in Eq. (49) and, where used in this study is naturally the same as Eq. (1). This prediction confirms the correctness of the above theory for Bose-Einstein condensation. As a matter of fact, immediately after the first experimental observation of this condensation phenomenon, it was realized that the coherent dynamics of the condensed macroscopic wave function could lead to the formation of nonlinear solitary waves. For example, self-localized bright, dark and vortex solitons, formed by increased (bright) or decreased (dark or vortex) probability density respectively, were experimentally observed, particularly for the vortex solution which has the same form as the vortex lines found in type II-superconductors and superfluids. These experimental results were in concordance with the results of the above theory. In the following sections of this text we will study the soliton motions of quasiparticles in macroscopic quantum systems, superconductors. We will see that the dynamic equations in macroscopic quantum systems do have such soliton solutions. 3.4 Differences of macroscopic quantum effects from the microscopic quantum effects From the above discussion we may clearly understand the nature and characteristics of macroscopic quantum systems. 
It would be interesting to compare the macroscopic quantum effects and microscopic quantum effects. Here we give a summary of the main differences between them. 1. Concerning the origins of these quantum effects; the microscopic quantum effect is produced when microscopic particles, which have only a wave feature are confined in a finite space, or are constituted as matter, while the macroscopic quantum effect is due to the collective motion of the microscopic particles in systems with nonlinear interaction. It occurs through second-order phase transition following the spontaneous breakdown of symmetry of the systems. 2. From the point-of-view of their characteristics, the microscopic quantum effect is characterized by quantization of physical quantities, such as energy, momentum, angular momentum, etc. wherein the microscopic particles remain constant. On the other hand, the macroscopic quantum effect is represented by discontinuities in macroscopic quantities, such as, the resistance, magnetic flux, vortex lines, voltage, etc. The macroscopic quantum effects can be directly observed in experiments on the macroscopic scale, while the microscopic quantum effects can only be inferred from other effects related to them. 3. The macroscopic quantum state is a condensed and coherent state, but the microscopic quantum effect occurs in determinant quantization conditions, which are different for the Bosons and Fermions. But, so far, only the Bosons or combinations of Fermions are found in macroscopic quantum effects. 4. The microscopic quantum effect is a linear effect, in which the microscopic particles and are in an expanded state, their motions being described by linear differential equations such as the Schrödinger equation, the Dirac equation, and the Klein- Gordon equations. On the other hand, the macroscopic quantum effect is caused by the nonlinear interactions, and the motions of the particles are described by nonlinear partial differential equations such as the nonlinear Schrödinger equation (17). Thus, we can conclude that the macroscopic quantum effects are, in essence, a nonlinear quantum phenomenon. Because its’ fundamental nature and characteristics are different from those of the microscopic quantum effects, it may be said that the effects should be depicted by a new nonlinear quantum theory, instead of quantum mechanics. 4. The nonlinear dynamic natures of electrons in superconductors 4.1. The dynamic equations of electrons in superconductors It is quite clear from the above section that the superconductivity of material is a kind of nonlinear quantum effect formed after the breakdown of the symmetry of the system due to the electron-phonon interaction, which is a nonlinear interaction. In this section we discuss the properties of motion of superconductive electrons in superconductors and the relation of the solutions of dynamic equations in relation to the above macroscopic quantum effects on it. The study presented shows that the superconductive electrons move in the form of a soliton, which can result in a series of macroscopic quantum effects in the superconductors. Therefore, the properties and motions of the quasiparticles are important for understanding the essences and rule of superconductivity and macroscopic quantum effects. As it is known, in the superconductor the states of the electrons are often represented by a macroscopic wave function, ϕ(r,t)=f(r,t)ϕ0eiθ(r,t), or as mentioned above, whereϕ02=α/2λ. 
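The free-energy density attributed to Landau et al. in the next paragraph appears to be the standard Ginzburg-Landau form. Since the displayed equations were lost, a commonly used version is reproduced here for orientation; the sign convention (−α, +λ) is an assumption chosen to be consistent with ϕ0² = α/2λ above, and the gradient term follows the usual GL convention rather than being copied from the source:
\[
f_s - f_n = -\alpha|\phi|^{2} + \lambda|\phi|^{4} + \frac{\hbar^{2}}{2m}\,|\nabla\phi|^{2}
\qquad (\mathbf{A}=0),
\]
\[
f_s - f_n = -\alpha|\phi|^{2} + \lambda|\phi|^{4}
 + \frac{1}{2m}\Bigl|\Bigl(-i\hbar\nabla - \frac{e^{*}}{c}\mathbf{A}\Bigr)\phi\Bigr|^{2}
 + \frac{\tilde H^{2}}{8\pi},
\qquad \tilde H = \nabla\times\mathbf{A},\;\; e^{*}=2e .
\]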
Landau et al [45,46] used the wave function to give the free energy density function, f, of a superconducting system, which is represented by in the absence of any external field. If the system is subjected to an electromagnetic field specified by a vector potentialA, the free energy density of the system is of the form: where e*=2e, H˜=×A, αand λare some interactional constants related to the features of superconductor, m is the mass of electron, e* is the charge of superconductive electron, c is the velocity of light, h is Planck constant, =h/2π, fn is the free energy of normal state. The free energy of the system isFs=fsd3x. In terms of the conventional field,Fjl=jAllAj, (j, l=1, 2, 3), the term H˜2/8πcan be written asFjlFjl/4. Equations (50) - (51) show the nonlinear features of the free energy of the systems because it is the nonlinear function of the wave function of the particles,ϕ(r,t). Thus we can predict that the superconductive electrons have many new properties relative to the normal electrons. From δFs/δϕ=0we get in the absence and presence of an external fields respectively, and Equations (52) - (54) are just well-known the Ginzburg-Landau (GL) equation [48-54] in a steady state, and only a time-independent Schrödinger equation. Here, Eq. (52) is the GL equation in the absence of external fields. It is the same as Eq. (15), which was obtained from Eq. (1). Equation (54) can also be obtained from Eq. (2). Therefore, Eqs. (1)-(2) are the Hamiltonians corresponding to the free energy in Eqs. (50)- (51). From equations (52) - (53) we clearly see that superconductors are nonlinear systems. Ginzburg-Landau equations are the fundamental equations of the superconductors describing the motion of the superconductive electrons, in which there is the nonlinear term of2λϕ3. However, the equations contain two unknown functions ϕand Awhich make them extremely difficult to resolve. 4.2. The dynamic properties of electrons in steady superconductors We first study the properties of motion of superconductive electrons in the case of no external field. Then, we consider only a one-dimensional pure superconductor [62-63], where and whereξ'(T)is the coherent length of the superconductor, which depends on temperature. For a uniform superconductor, ξ'(T)=0.94ξ0[Tc/(TcT)]2, where Tcis the critical temperature and ξ0is the coherent length of superconductive electrons at T=0. In boundary conditions of φ(x′=0)=1, and φ(x′±) =0, from Eqs. (52) and (54) we find easily its solution as: This is a well-known wave packet-type soliton solution. It can be used to represent the bright soliton occurred in the Bose-Einstein condensate found by Perez-Garcia et. al. [64]. If the signs of αand λin Eq. (52) are reversed, we then get a kink-soliton solution under the boundary conditions of φ(x′=0)=0, φ(x′±)=±1, The energy of the soliton, (56), is given by We assume here that the lattice constant, r0=1. The above soliton energy can be compared with the ground state energy of the superconducting state, Eground=α2/4λ. Their difference isEso1Eground=α3/2(α+1632m)/2λ0. This indicates clearly that the soliton is not in the ground state, but in an excited state of the system, therefore, the soliton is a quasiparticle. From the above discussion, we can see that, in the absence of external fields, the superconductive electrons move in the form of solitons in a uniform system. These solitons are formed by a nonlinear interaction among the superconductive electrons which suppresses the dispersive behavior of electrons. 
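To make the soliton statements above concrete, suppose the one-dimensional, field-free GL equation is written in the dimensionless form ξ′² d²φ/dx′² = φ − φ³ (the sign convention is an assumption, since the displayed equations are missing). Then the two solutions referred to as Eqs. (56) and (57) are of the familiar forms
\[
\varphi(x') = \sqrt{2}\,\operatorname{sech}\!\left(\frac{x'}{\xi'}\right)
\quad\text{(bell-shaped, wave-packet-type soliton)},
\]
\[
\varphi(x') = \tanh\!\left(\frac{x'}{\sqrt{2}\,\xi'}\right)
\quad\text{(kink soliton, obtained when the signs of }\alpha\text{ and }\lambda\text{ are reversed)}.
\]
The amplitude normalization depends on how φ is scaled and may therefore differ from the chapter's own Eq. (56); the kink, however, satisfies the quoted boundary conditions φ(0) = 0, φ(±∞) = ±1 exactly.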
A soliton can carry a certain amount of energy while moving in superconductors. It can be demonstrated that these soliton states are very stable. 4.3. The features of motion of superconductive electrons in an electromagnetic field and its relation to macroscopic quantum effects We now consider the motion of superconductive electrons in the presence of an electromagnetic fieldA; its equation of motion is denoted by Eqs. (53)-(54).Assuming now that the field Asatisfies the London gauge A=0[65], and that the substitution of ϕ(r,t)=φ(r,t)ϕ0eiθ(r)into Eqs. (53) and (54) yields [66-67]: For bulk superconductors, J is a constant (permanent current) for a certain value ofA, and it thus can be taken as a parameter. LetB2=m2J2/2(e*)2ϕ04, b=2mα/2=ξ'2, from Eqs. (59) and (60), we can obtain [66-67]: where Ueff is the effective potential of the superconductive electron in this case and it is schematically shown in Fig. 2. Comparing this case with that in the absence of external fields, we found that the equations have the same form and the electromagnetic field changes only the effective potential of the superconductive electron. WhenA=0, the effective potential well is characterized by double wells. In the presence of an electromagnetic field, there are still two minima in the effective potential, corresponding to the two ground states of the superconductor in this condition. This shows that the spontaneous breakdown of symmetry still occurs in the superconductor, thus the superconductive electrons also move in the form of solitons. To obtain the soliton solution, we integrate Eq. (62) and can get: Where E is a constant of integration which is equivalent to the energy, the lower limit of the integral, φ1, is determined by the value of φat x=0, i.e.E=Ueff(φ0)=Ueff(φ1). Introduce the following dimensionless quantitiesφ2=u,E=b2ε,2d˜=4J2mλ(e*)2α2, and equation (63) can be written as the following upon performing the transformation u→−u, It can be seen from Fig. 3 that the denominator in the integrand in Eq. (64) approaches zero linearly when u=u1=φ12, but approaches zero gradually when u=u2=φ02. Thus we give [66-67] where g= u0−u1 and satisfies It can be seen from Eq. (65) that for a large part of sample, u1 is very small and may be neglected; the solution u is very close to u0. We then get from Eq. (65) that Substituting the above into Eq. (61), the electromagnetic field Ain the superconductors can be obtained For a large portion of the superconductor, the phase change is very small. Using H=×Athe magnetic field can be determined and is given by [66-67] Equations (67) and (68) are analytical solutions of the GL equation.(63) and (64) in the one-dimensional case, which are shown in Fig. 3. Equation (67) or (65) shows that the superconductive electron in the presence of an electromagnetic field is still a soliton. However, its amplitude, phase and shape are changed, when compared with those in a uniform superconductor and in the absence of external fields, Eq. (66). The soliton here is obviously influenced by the electromagnetic field, as reflected by the change in the form of solitary wave. This is why a permanent superconducting current can be established by the motion of superconductive electrons along certain direction in such a superconductor, because solitons have the ability to maintain their shape and velocity while in motion. It is clear from Fig.4 that H˜(x)is larger where ϕ(x)is small, and vice versa. Whenx0, H˜(x)reaches a maximum, while ϕapproaches to zero. 
On the other hand, when x → ∞, ϕ becomes very large while H̃(x) approaches zero. This shows that the system is still in the superconductive state. These are exactly the well-known behaviors of vortex lines (magnetic flux lines) in type-II superconductors [66-67]. Thus we have explained the macroscopic quantum effect in type-II superconductors using the GL equation of motion of the superconductive electron under the action of an electromagnetic field.
Figure 3. The effective potential energy in Eq. (67).
Figure 4. Changes of ϕ(x) and |H̃(x)| with x in Eqs. (67)-(68).
Recently, Caradoc-Davies et al. [68], Matthews et al. [69] and Madison et al. [70] observed vortex solitons in Bose-Einstein condensates, and Tonomura [71] experimentally observed magnetic vortices in superconductors. These vortex lines in type-II superconductors are quantized. The macroscopic quantum effects are well described by the nonlinear theory discussed above, demonstrating the correctness of the theory. We now proceed to determine the energy of the soliton given by (67). From the earlier discussion, the energy of the soliton can be written down explicitly; it depends on the interaction between the superconductive electrons and the electromagnetic field. From the above discussion, we understand that for a bulk superconductor the superconductive electrons behave as solitons, regardless of the presence of external fields. Thus, the superconductive electrons are a special type of soliton. Obviously, the solitons are formed because the nonlinear interaction λ|ϕ|²ϕ suppresses the dispersive effect of the kinetic energy in Eqs. (52) and (53). They move in the form of solitary waves in the superconducting state. In the presence of external electromagnetic fields, we have demonstrated theoretically that a permanent superconductive current is established and that vortex lines, or magnetic flux lines, also occur in type-II superconductors.
5. The dynamic properties of electrons in superconductive junctions and their relation to macroscopic quantum effects
5.1. The features of motion of electrons in the S-N junction and the proximity effect
A superconductive junction consists of a superconductor (S) in contact with a normal conductor (N), in which the latter can become superconductive. This phenomenon is referred to as the proximity effect. It is obviously a result of the long-range coherence of the superconductive electrons, and it can be regarded as the penetration of electron pairs from the superconductor into the normal conductor, or as a result of diffraction and transmission of the superconductive electron wave. In this phenomenon superconductive electrons can occur in the normal conductor, but their amplitudes are much smaller than in the superconductive region, so the nonlinear term λ|ϕ|²ϕ in the GL equations (53)-(54) can be neglected there. Because of this, the GL equations take different forms in the normal and superconductive regions: on the S side of the S-N junction the GL equation is Eq. (69) [72], while the linearized equation (70) holds on the N side; the expression (71) for J remains the same on both sides. In the S region we have obtained the solution of (69) in the previous section; it is given by (65) or (67) and (68). In the N region, from Eqs. (70)-(71) we can easily obtain the solution (72), where b′ = 2mα′/ℏ² = 1/ξ̄′², 2d̃² = 4J²mλ/(e*)²α′², and ε′ is an integration constant. A graph of ϕ vs. x in both the S and the N regions, as shown in Fig. 5, coincides with that obtained by Blackburn [73]. The solution given in Eq. (72) is the analytical solution in this case.
On the other hand, Blackburn’s result was obtained by expressing the solution in terms of elliptic integrals and then integrating numerically. From this, we see that the proximity effect is caused by diffraction or transmission of the superconductive electrons.
5.2. The Josephson effect in S-I-S and S-N-S as well as S-I-N-S junctions
A superconductor–normal conductor–superconductor junction (S-N-S) or a superconductor–insulator–superconductor junction (S-I-S) consists of a normal conductor or an insulator sandwiched between two superconductors, as schematically shown in Fig. 6a. The thickness of the normal-conductor or insulator layer is assumed to be L, and we choose the coordinates such that this layer is located at −L/2 ≤ x ≤ L/2. The features of S-I-S junctions were studied by Jacobson et al. [74]. We will treat this problem using the above idea and method [75-76].
Figure 5. Proximity effect in the S-N junction.
Figure 6. Superconductive junctions of S-N(I)-S and S-N-I-S type.
The electrons in the superconducting regions (|x| ≥ L/2) are described by the GL equation (69), whose solution was given earlier in Eq. (67). After eliminating u1 from Eq. (66), we have [73-74] J = (e*α/2) u0 √(α/mλ) √(1 − u0). Setting dJ/du0 = 0, we get the maximum current Jc = (e*α/3) √(α/3mλ). This is the critical current of a perfect superconductor, corresponding to the three-fold degenerate solution of Eq. (66), i.e., u1 = u0. From Eq. (71), we have A = mJc/[(e*)²ϕ0²φ²] + (ℏc/e*)∇θ. Using the London gauge, ∇·A = 0, we can get [75-76] d²θ/dx² = (mJ/ℏe*ϕ0²) d(1/φ²)/dx. Integrating this equation twice gives the change of the phase, Eq. (73), where φ² = u and u0 is the bulk value of u introduced earlier. Here we have used the de Gennes boundary conditions (74) in obtaining Eq. (73). If we substitute Eqs. (64)-(67) into Eq. (73), the phase shift of the wave function from an arbitrary point x to infinity can be obtained directly from the above integral in closed form. For the S-N-S or S-I-S junction, the superconducting regions are located at |x| ≥ L/2, and the phase shift in the S region follows accordingly. According to the results in (70)-(71) and a similar method, the change of the phase in the I or N region of the S-N-S or S-I-S junction may be expressed as [75-76] an analogous integral, where h′ = 8mλ/α² · J̇/2e* · tan(ΔθN/2)tan(b′L/2), and mJ̇L/2e*h′μ0 is an additional term needed to satisfy the boundary conditions (74); it may be neglected in the case being studied. Near the critical temperature (T < Tc), the current passing through a weakly linked superconductive junction is very small (J̇ ≪ 1); we then have μ1′ = 4J̇²mλ/(e*)²α² = 2Ā², and g′ ≈ 1. Since ηφ² and dφ²/dx are continuous at the boundary x = L/2, we obtain matching conditions in which ηS and ηN are constants related to the features of the superconductive and normal phases in the junction, respectively. These give [75-76] two relations with ε1 = ηN/ηS, and from the two equations we can get Eq. (78). Equation (78) is the well-known expression for the Josephson current. From Section 1 we know that the Josephson effect is a macroscopic quantum effect. We have now seen that this effect can be explained based on the nonlinear quantum theory. This again shows that the macroscopic quantum effect is just a nonlinear quantum phenomenon. From Eq. (79) we can see that the Josephson critical current is inversely proportional to sin(b′L), which means that the current increases suddenly whenever b′L approaches nπ, suggesting that some resonant phenomenon occurs in the system. This has not been observed before.
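For orientation, the structure of the result quoted as Eq. (78) is that of the standard dc Josephson relation. Schematically (the full prefactor is not recoverable from the garbled text, so it is simply written as Jmax):
\[
J = J_{\max}\,\sin(\Delta\theta),
\qquad
J_{\max} \;\propto\; \frac{1}{\sin(b'L)} ,
\]
which is why the critical current grows sharply whenever b′L approaches nπ, the resonance-like behaviour noted above.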
Moreover, J̇max is proportional to e*αS/2√(2mλ)b′ = e*αS/4√(mλαN), which is related to (Tc − T)². Finally, it is worthwhile to mention that no explicit assumption was made above on whether the junction is a potential well (α < 0) or a potential barrier (α > 0). The results are thus valid, and the Josephson effect of Eq. (78) occurs, for both potential wells and potential barriers. We now study the Josephson effect in the superconductor–normal conductor–insulator–superconductor junction (SNIS), which is shown schematically in Fig. 6b. It can be regarded as a multilayer junction consisting of an S-N-S and an S-I-S junction. If appropriate thicknesses are used for the N and I layers (approximately 20–30 Å), a Josephson effect similar to that discussed above can occur in the SNIS junction. Since the derivations are similar to those in the previous sections, we will skip most of the details and give the results in the following. The Josephson current in the SNIS junction is still given by Eq. (78), but now with Δθ = Δθs1 + ΔθN + ΔθI + Δθs2. It can be shown that the temperature dependence of Jmax is Jmax ∝ (Tc − T)², which is quite similar to the results obtained by Blackburn et al. [73] for the SNIS junction and by Romagnan et al. [7] using a Pb-PbO-Sn-Pb junction. Here, we obtained the same results using a completely different approach. This indicates again that the theory can yield results which agree with the experimental data.
6. The nonlinear dynamic features of the time-dependence of electrons in superconductors
6.1. The soliton solution of motion of the superconductive electron
In section 4.2 we studied the properties of motion of superconductive electrons only in steady states, which are described by the time-independent GL equation. In such a case, the superconductive electrons move as solitons. We now ask: what are the features of time-dependent motion in non-equilibrium states of a superconductor? Naturally, this motion should be described by the time-dependent Ginzburg-Landau (TDGL) equation [48-54,77]. Unfortunately, there are many different forms of the TDGL equation under different conditions. The one given in the following, Eq. (80), is commonly used when an electromagnetic field A is involved. Here ∇×∇×A = (1/c)(∂/∂t)(−(1/c)∂A/∂t − ∇μ) + 4πJ/c, σ is the conductivity in the normal state, Γ is an arbitrary constant, and μ is the chemical potential of the system. In practice, Eq. (80) is simply a time-dependent Schrödinger equation with a damping effect. In certain situations, the forms (81)-(82) of the TDGL equation are also used; here ξ′ = ℏ²/2m, and equation (82) is a nonlinear Schrödinger equation in an electromagnetic field, having soliton solutions. However, these solutions are very difficult to find, and no analytic solutions have been obtained. An approximate solution was obtained by Kusayanage et al. [78] by neglecting the ϕ³ term in Eq. (80) or Eq. (82), in the case of A = (0, H̃x, 0), μ = KẼx, H = (0, 0, H̃) and E = (Ẽ, 0, 0), where H is the magnetic field and E is the electric field. We will solve the TDGL equation in the case of weak fields in the following. The TDGL equation (83) can be written in the form (84) when A is very small [80-81], where α and Γ are material-dependent parameters, λ is the nonlinear coefficient, and m is the mass of the superconductive electron. Equation (84) is actually a nonlinear Schrödinger equation in a potential field α/Γ − 2eμ. Cai, Bhattacharjee et al. [79], and Davydov [45] used it in their studies of superconductivity.
However, this equation is also difficult to solve.In the following, Pang solves the equation only in the one-dimensional case. For convenience, lett=t/, x=x2mΓ/, then Eq. (84) becomes If we letαΓ2eμ=0, then Eq. (85) is the usual nonlinear Schrödinger equation whose solution is of the form [80-81] hereθ0(x,t)=12υe(xυct). In the case ofαΓ-2eμ0, letμ=KE˜x, where K is a constant, and assume that the solution is of the form [80-81] Substituting Eq. (88) into Eq. (86), we get: Now let φ'(x,t)=φ(ξ),ξ=xu(t),u(t)=2E˜Ke(t)2+υt+d, where u(t')describes the accelerated motion ofφ'(x,t). The boundary condition at ξrequires φ(ξ)to approach zero rapidly. When 2θ/ξu˙0, equation (90) can be written as:φ2=g(t)(θ/ξu˙/2), or whereu˙=du/dt'. Integration of (91) yields: and whereh(t')is an undetermined constant of integration. From Eq. (92) we can get: Substituting Eqs. (92) and (93) into Eq. (89), we have: Since2φ(x)2=d2φdξ2, which is a function of ξonly, the right-hand side of Eq. (94) is also a function of ξonly, so it is necessary thatg(t)=g0=constant, and(2KE˜ex'+αΓ)+u¨2x'+H˙(t')+u˙24+gu˙f2|x'=0=V¯(ξ). Next, we assume thatV0(ξ)=V¯(ξ)β, where βis real and arbitrary, then Clearly in the case discussed, V0(ξ)=0, and the function in the brackets in Eq. (95) is a function of t′. Substituting Eq. (95) into Eq. (94), we can get [80-81]: This shows that φ˜is the solution of Eq. (96) when βand g are constant. For large|ξ|, we may assume that|φ˜|β/|ξ|1+Δ, when Δis a small constant. To ensure that φ˜and d2φ˜/dξ2approach zero when |ξ|, only the solution corresponding to g0=0 in Eq. (96) is kept, and it can be shown that this soliton solution is stable in such a case. Therefore, we choose g0=0 and obtain the following from Eq. (91): Thus, we obtain from Eq. (95) that Substituting Eq. (98) into Eqs. (92) - (93), we obtain: Finally, substituting the Eq. (99) into Eq. (96), we can get Whenβ0, the solution of. Eq. (100) is of the form Thus [80-81] This is also a soliton solution, but its shape,amplitude and velocity have been changed relatively as compared to that of Eq. (87). It can be shown that Eq. (102) does indeed satisfy Eq. (85). Thus, equation (85) has a soliton solution. It can also be shown that this solition solution is stable. 6.2. The properties of soliton motion of the superconductive electrons For the solution of Eq. (102), we may define a generalized time-dependent wave number, k=θx=υ22KE˜etand a frequency The usual Hamilton equations for the superconductive electron (soliton) in the macroscopic quantum systems are still valid here and can be written as [80-81]dkdt=ωx|k=2KE˜e, then the group velocity of the superconductive electron is This means that the frequency ω still represents the meaning of Hamiltonian in the case of nonlinear quantum systems. Hence, dωdt=dωdk|xdkdt+ωx|kdxdt=0, as seen in the usual stationary linear medium. These relations in Eqs. (103)-(104) show that the superconductive electrons move as if they were classical particles moving with a constant acceleration in the invariant electric-field, and that the acceleration is given by4KE˜e. If υ>0, the soliton initially travels toward the overdense region, it then suffers a deceleration and its velocity changes sign. The soliton is then reflected and accelerated toward the underdense region.The penetration distance into the overdense region depends on the initial velocityυ. From the above studies we see that the time-dependent motion of superconductive electrons still behaves like a soliton in non-equilibrium state of superconductor. 
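As an illustration of the sech-type solution referred to as Eq. (87), the standard one-soliton solution of the cubic nonlinear Schrödinger equation can be quoted; the normalization of the nonlinear term and of the amplitude below follows a textbook convention and is an assumption, so it may differ from the rescaling actually used in Eqs. (85)-(86):
\[
i\,\frac{\partial\varphi}{\partial t'} + \frac{\partial^{2}\varphi}{\partial x'^{2}} + 2|\varphi|^{2}\varphi = 0
\;\;\Longrightarrow\;\;
\varphi(x',t') = A\,\operatorname{sech}\!\bigl[A\,(x'-\upsilon t')\bigr]\,
\exp\!\Bigl[\,i\tfrac{\upsilon}{2}x' + i\bigl(A^{2}-\tfrac{\upsilon^{2}}{4}\bigr)t'\Bigr].
\]
The accelerated soliton of Eq. (102) has the same sech envelope, but with the center coordinate following the uniformly accelerated trajectory u(t′) quoted above (quadratic in t′) and with a correspondingly modified phase, which is consistent with the change of shape, amplitude and velocity described in the text.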
Therefore, we can conclude that the electrons in the superconductors are essentially a soliton in both time-independent steady state and time-dependent dynamic state systems. This means that the soliton motion of the superconductive electrons causes the superconductivity of material. Then the superconductors have a complete conductivity and nonresistance property because the solitons can move over a macroscopic distances retaining its amplitude, velocity, energy and other quasi- particle features. In such a case the motions of the electrons in the superconductors are described by a nonlinear Schrödinger equations (52), or (53) or (80) or (82) or (84). According to the soliton theory, the electrons in the superconductors are localized and have a wave-corpuscle duality due to the nonlinear interaction, which is completely different from those in the quantum mechanics. Therefore, the electrons in superconductors should be described in nonlinear quantum mechanics[16-17]. 7. The transmission features of magnetic-flux lines in the Josephson junctions 7.1. The transmission equation of magnetic-flux lines We have learned that in a homogeneous bulk superconductor, the phaseθ(r,t)of the electron wave function ϕ(r,t)=f(r,t)eiθ(r,t)is constant, independent of position and time. However, in an inhomogeneous superconductor such as a superconductive junction discussed above, θbecomes dependent of rand t. In the previous section, we discussed the Josephson effects in the S-N-S or S-I-S, and SNIS junctions starting from the Hamiltonican and the Ginzburg-Landau equations satisfied byϕ(r,t), and showed that the Josephson current, whether dc or ac, is a function of the phase change,φ=Δθ=θ1θ2. The dependence of the Josephson current on φis clearly seen in Eq. (78). This clearly indicates that the Josephson current is caused by the phase change of the superconductive electrons. Josephson himself derived the equations satisfied by the phase differenceφ, known as the Josephson relations, through his studies on both the dc and ac Josephson effects. The Josephson relations for the Josephson effects in superconductor junctions can be summarized as the following, Js=Jmsinφ,φt=2eV, φx=2ed'H¯y/c, where d’ is the thickness of the junction. Because the voltage V and magnetic field Hare not determined, equation (105) is not a set of complete equations. Generally, these equations are solved simultaneously with the Maxwell equation×H=(4π/c)J. Assuming that the magnetic field is applied in the xy plane, i.e.H=(H¯x,H¯y,0), the above Maxwell equation becomes In this case, the total current in the junction is given by J=Js(x,y,t)+Jn(x,y,t)+Jd(x,y,t)+J0 In the above equation,Js(x,y,t)is the superconductive current density, Jn(x,y,t)is the normal current density in the junction (Jn =V/R(V ) if the resistance in the junction is R(V ) and a voltage V is applied at two ends of the junction),Jd(x,y,t)is called a displacement current and it is given byJd=CdV(t)/dt, where C is the capacity of the junction, and J0is a constant current density. Solving the equations in Eqs,(102) and (106) simultaneously, we can get where v0=c2/4πCd',γ0=1/RC, Equation (107) is the equation satisfied by the phase difference. It is a Sine-Gordon equation (SGE) with a dissipative term. From Eq.(105), we see that the phase difference φdepends on the external magnetic fieldH, thus the magnetic flux in the junction Φ'=H¯ds=A·dl=cc*φdlcan be specified in terms ofφ, where Ais vector potential of electromagnetic field, dlis line element of vortex lines. 
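Written out legibly, the Josephson relations quoted above and the flux-transmission equation they lead to have the standard forms below. This is a hedged rendering: the precise coefficients, e.g. in λJ, are not recoverable from the text, and γ̄ denotes the rescaled damping constant (proportional to γ0), a notation introduced here; the remaining symbols (d′, v0, λJ, I0, v) are those already used in the text.
\[
J_s = J_m\sin\varphi,
\qquad
\frac{\partial\varphi}{\partial t} = \frac{2eV}{\hbar},
\qquad
\frac{\partial\varphi}{\partial x} = \frac{2ed'}{\hbar c}\,\bar H_y .
\]
Combining these with the Maxwell equation and rescaling X = x/λJ, T = v0 t/λJ gives the damped, driven sine-Gordon form
\[
\frac{\partial^{2}\varphi}{\partial X^{2}} - \frac{\partial^{2}\varphi}{\partial T^{2}} - \bar\gamma\,\frac{\partial\varphi}{\partial T} = \sin\varphi - I_{0},
\]
and in the dissipationless limit (γ̄ → 0, I0 → 0), discussed in the next subsection, the kink (fluxon) solution is
\[
\varphi(X,T) = 4\arctan\!\left[\exp\!\left(\pm\frac{X-vT}{\sqrt{1-v^{2}}}\right)\right],
\]
each kink carrying one quantum of magnetic flux, Φ0 = hc/2e = πℏc/e ≈ 2.07 × 10⁻⁷ gauss·cm².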
Equation (107) represents transmission of superconductive vortex lines. It is a nonlinear equation. Therefore, we know clearly that the Josephson effect and the related transmission of the vortex line, or magnetic flux, along the junctions are also nonlinear problems. The Sine-Gordon equation given above has been extensively studied by many scientists including Kivshar and Malomed[39-40]. We will solve it here using different approaches. 7.2. The transmission features of magnetic-flux lines Assuming that the resistance R in the junction is very high, so thatJn0, or equivivalentlyγ00, setting also I0 = 0, equation (107) reduces to DefineX=x/λJ,T=v0t/λJ, then in one-dimension, the above equation becomes which is the 1D Sine-Gordon equation. If we further assume thatφ=φ(X,T)=φ(θ')with it becomes(1v2)φθ'2(θ')=2(A'cosφ),where A’ is a constant of integration. Thus whereν=1/1v2,δ=±1. Choosing A’=1, we have A kink soliton solution can be obtained as follows ±νθ'=ln[tan(φ/2)],'orφ(θ')=4tan1[exp(±νθ')]. Thus yields From the Josephson relations, the electric potential difference across the junction can be written as where φ0=πc=2×107Gauss/cm2'is a quantum fluxon, c is the speed of light. A similar expression can be derived for the magnetic field We can then determine the magnetic flux through a junction with a length of L and a cross section of 1 cm2. The result is Therefore, the kink (δ=+1) carries a single quantum of magnetic flux in the extended Josephson junction. Such an excitation is often called a fluxon, and the Sine-Gordon equation or Eq.(107) is often referred to as transmission equation of quantum flux or fluxon. The excitation corresponding to δ= −1 is called an antifluxon. Fluxon is an extremely stable formation. However, it can be easily controlled with the help of external effects. It may be used as a basic unit of information. This result shows clearly that magnetic flux in superconductors is quantized and this is a macroscopic quantum effect as mentioned in Section 1. The transmission of the quantum magnetic flux through the superconductive junctions is described by the above nonlinear dynamic equation (107) or (108).The energy of the soliton can be determined and it is given by However, the boundary conditions must be considered for real superconductors. Various boundary conditions have been considered and studied. For example, we can assume the following boundary conditions for a 1D superconductor,φx(0,t)=φx(L,t)=0. Lamb[47] obtained the following soliton solution for the SG equation (108) where h and g are the general Jacobian elliptical functions and satisfy the following equations with a’, b’, and c’ being arbitrary constants. Coustabile et al. also gave the plasma oscillation, breathing oscillation and vortex line oscillation solutions for the SG equation under certain boundary conditions. All of these can be regarded as the soliton solution under the given conditions. Solutions of Eq.(108) in two and three-dimensional cases can also be found[80-81]. In two- dimensional case, the solution is given by In addition, Pi, qi and Ωisatisfydet|p1q1Ω1p2q2Ω2p3q3Ω3|=0. In the three-dimensional case, the solution is given by where X, Y, and T are similarly defined as in the 2D case given above, andZ=z/λJ. The functions f and g are defined as here y3is a linear combination ofy1andy2, i.e.,y3=αy1+βy2. We now discuss the SG equation with a dissipative termγ0φ/t. 
First we make the following substitutions to simplify the equation. In terms of these new parameters, the 1D SG equation (107) can be rewritten as Eq. (113). The analytical solution of Eq. (113) is not easily found. Introducing a further substitution, equation (113) becomes Eq. (115). This equation is the same as that of a pendulum driven by a constant external moment and a frictional force proportional to the angular velocity. The solution of the latter is well known; generally there exists a stable soliton solution [80-81]. Letting Y = dφ′/dη, equation (115) can be written as Eq. (116). For 0 ≤ B′ ≤ 1, we can let B′ = sin φ0 (0 ≤ φ0 ≤ π/2) and φ′ = π − φ0 + φ1; then equation (116) becomes Eq. (117). Expanding Y as a power series of φ1, i.e., Y = Σn cn φ1ⁿ, inserting it into Eq. (117), and comparing coefficients of terms of the same power of φ1 on both sides, we get the coefficients cn, and so on. Substituting these cn's into Y = dφ′/dη = Σn cn φ1ⁿ, the solution for φ1 may be found by integrating η = ∫ dφ1/Σn cn φ1ⁿ. In general, this equation has a soliton solution or an elliptic-wave solution. For example, when dφ′/dη = c1φ1 + c2φ1² + c3φ1³, it can be found that η is expressed through the first Legendre elliptic integral F(k, φ1), with A, B and C constants. The inverse function φ1 of F(k, φ1) is the Jacobian amplitude, φ1 = am F. Thus the solution can be expressed through snF, the Jacobian sine function; introducing the symbol cscF = 1/snF, the solution can be written in closed form. This is an elliptic function. It can be shown that the corresponding solution at |η| → ∞ is a solitary wave. It can be seen from the above discussion that the quantum magnetic flux lines (vortex lines) move along a superconductive junction in the form of solitons. The transmission velocity v0 can be obtained from h = αv0/√(1 − v0²) and cn in Eq. (118), and is given by v0 = 1/√(1 + [α/h(φ0)]²). That is, the transmission velocity of the vortex lines depends on the injected current I0 and on the characteristic decay constant α of the Josephson junction. When α is finite, the greater the injection current I0, the faster the transmission velocity; and when I0 is finite, the greater α is, the smaller v0 will be, which is physically reasonable.
8. Conclusions
We first reviewed the properties of superconductivity and the macroscopic quantum effects, which differ from the microscopic quantum effects, as obtained from experiments. The macroscopic quantum effects occurring on the macroscopic scale are caused by the collective motions of microscopic particles, such as electrons in superconductors, after the symmetry of the system is broken due to nonlinear interactions. Such interactions result in Bose condensation and self-coherence of the particles in these systems. Meanwhile, we also studied the properties of motion of the superconductive electrons and arrived at the soliton solutions of the time-independent and time-dependent Ginzburg-Landau equations in superconductors, which are, in essence, a kind of nonlinear Schrödinger equation. These solitons, with wave-corpuscle duality, are due to the nonlinear interactions arising from the electron-phonon interaction in superconductors, in which the nonlinear interaction suppresses the dispersive effect of the kinetic energy in these dynamic equations; thus soliton states of the superconductive electrons, which can move over macroscopic distances retaining their energy, momentum and other quasiparticle properties, are formed. Meanwhile, we used these dynamic equations and their soliton solutions to obtain and explain these macroscopic quantum effects and the superconductivity of the systems,
such as the quantization of magnetic flux in superconductors and the Josephson effect in superconducting junctions. We thus concluded that superconductivity and the macroscopic quantum effects are a kind of nonlinear quantum effect which arises from the soliton motion of the superconductive electrons. This shows clearly that studying the essence of macroscopic quantum effects and the properties of motion of microscopic particles in superconductors is of real significance for physics.
Pang Xiao-feng (July 18th 2011). Properties of Macroscopic Quantum Effects and Dynamic Natures of Electrons in Superconductors, in: Superconductivity - Theory and Applications (Adir Moyses Luiz, ed.), IntechOpen. DOI: 10.5772/18380.
Sunday, July 28, 2019 The Forgotten Solution: Superdeterminism Welcome to the renaissance of quantum mechanics. It took more than a hundred years, but physicists finally woke up, looked quantum mechanics into the face – and realized with bewilderment they barely know the theory they’ve been married to for so long. Gone are the days of “shut up and calculate”; the foundations of quantum mechanics are en vogue again. It is not a spontaneous acknowledgement of philosophy that sparked physicists’ rediscovered desire; their sudden search for meaning is driven by technological advances. With quantum cryptography a reality and quantum computing on the horizon, questions once believed ephemeral are now butter and bread of the research worker. When I was a student, my prof thought it questionable that violations of Bell’s inequality would ever be demonstrated convincingly. Today you can take that as given. We have also seen delayed-choice experiments, marveled over quantum teleportation, witnessed decoherence in action, tracked individual quantum jumps, and cheered when Zeilinger entangled photons over hundreds of kilometers of distance. Well, some of us, anyway. But while physicists know how to use the mathematics of quantum mechanics to make stunningly accurate predictions, just what this math is about has remained unclear. This is why physicists currently have several “interpretations” of quantum mechanics. I find the term “interpretations” somewhat unfortunate. That’s because some ideas that go as “interpretation” are really theories which differ from quantum mechanics, and these differences may one day become observable. Collapse models, for example, explicitly add a process for wave-function collapse to quantum measurement. Pilot wave theories, likewise, can result in deviations from quantum mechanics in certain circumstances, though those have not been observed. At least not yet. A phenomenologist myself, I am agnostic about different interpretations of what is indeed the same math, such as QBism vs Copenhagen or the Many Worlds. But I agree with the philosopher Tim Maudlin that the measurement problem in quantum mechanics is a real problem – a problem of inconsistency – and requires a solution. And how to solve it? Collapse models solve the measurement problem, but they are hard to combine with quantum field theory which for me is a deal-breaker. Pilot wave theories also solve it, but they are non-local, which makes my hair stand up for much the same reason. This is why I think all these approaches are on the wrong track and instead side with superdeterminism. But before I tell you what’s super about superdeterminism, I have to briefly explain the all-important theorem from John Stewart Bell. It says, in a nutshell, that correlations between certain observables are bounded in every theory which fulfills certain assumptions. These assumptions are what you would expect of a deterministic, non-quantum theory – statistical locality and statistical independence (together often referred to as “Bell locality”) – and should, most importantly, be fulfilled by any classical theory that attempts to explain quantum behavior by adding “hidden variables” to particles. Experiments show that the bound of Bell’s theorem can be violated. This means the correct theory must violate at least one of the theorem’s assumptions. Quantum mechanics is indeterministic and violates statistical locality. 
(Which, I should warn you has little to do with what particle physicists usually mean by “locality.”) A deterministic theory that doesn’t fulfill the other assumption, that of statistical independence, is called superdeterministic. Note that this leaves open whether or not a superdeterministic theory is statistically local. Unfortunately, superdeterminism has a bad reputation, so bad that most students never get to hear of it. If mentioned at all, it is commonly dismissed as a “conspiracy theory.” Several philosophers have declared superdeterminism means abandoning scientific methodology entirely. To see where this objection comes from – and why it’s wrong – we have to unwrap this idea of statistical independence. Statistical independence enters Bell’s theorem in two ways. One is that the detectors’ settings are independent of each other, the other one that the settings are independent of the state you want to measure. If you don’t have statistical independence, you are sacrificing the experimentalist’s freedom to choose what to measure. And if you do that, you can come up with deterministic hidden variable explanations that result in the same measurement outcomes as quantum mechanics. I find superdeterminism interesting because the most obvious class of hidden variables are the degrees of freedom of the detector. And the detector isn’t statistically independent of itself, so any such theory necessarily violates statistical independence. It is also, in a trivial sense, non-linear just because if the detector depends on a superposition of prepared states that’s not the same as superposing two measurements. Since any solution of the measurement problem requires a non-linear time evolution, that seems a good opportunity to make progress. Now, a lot of people discard superdeterminism simply because they prefer to believe in free will, which is where I think the biggest resistance to superdeterminism comes from. Bad enough that belief isn’t a scientific reason, but worse that this is misunderstanding just what is going on. It’s not like superdeterminism somehow prevents an experimentalist from turning a knob. Rather, it’s that the detectors’ states aren’t independent of the system one tries to measure. There just isn’t any state the experimentalist could twiddle their knob to which would prevent a correlation. Where do these correlations ultimately come from? Well, they come from where everything ultimately comes from, that is from the initial state of the universe. And that’s where most people walk off: They think that you need to precisely choose the initial conditions of the universe to arrange quanta in Anton Zeilinger’s brain just so that he’ll end up turning a knob left rather than right. Besides sounding entirely nuts, it’s also a useless idea, because how the hell would you ever calculate anything with it? And if it’s unfalsifiable but useless, then indeed it isn’t science. So, frowning at superdeterminism is not entirely unjustified. But that would be jumping to conclusions. How much detail you need to know about the initial state to make predictions depends on your model. And without writing down a model, there is really no way to tell whether it does or doesn’t live up to scientific methodology. It’s here where the trouble begins. While philosophers on occasion discuss superdeterminism on a conceptual basis, there is little to no work on actual models. Besides me and my postdoc, I count Gerard ‘t Hooft and Tim Palmer. 
The former gentleman, however, seems to dislike quantum mechanics and would rather have a classical hidden variables theory, and the latter wants to discretize state space. I don’t see the point in either. I’ll be happy if the result solves the measurement problem and is still local the same way that quantum field theories are local, ie as non-local as quantum mechanics always is.* The stakes are high, for if quantum mechanics is not a fundamental theory, but can be derived from an underlying deterministic theory, this opens the door to new applications. That’s why I remain perplexed that what I think is the obvious route to progress is one most physicists have never even heard of. Maybe it’s just a reality they don’t want to wake up to. Recommended reading: • The significance of measurement independence for Bell inequalities and locality Michael J. W. Hall • Bell's Theorem: Two Neglected Solutions Louis Vervoort FoP, 3,769–791 (2013), arXiv:1203.6587 * Rewrote this paragraph to better summarize Palmer’s approach. 1. It's misleading to say that Maudlin in particular has declared that superdeterminism means abandoning scientific methodology. That point was made already in 1976 by Clauser, Shimony and Holt. "In any scientific experiment in which two or more variables are supposed to be randomly selected, one can always conjecture that some factor in the overlap of the backwards light cones has controlled the presumably random choices. But, we maintain, skepticism of this sort will essentially dismiss all results of scientific experimentation. Unless we proceed under the assumption that hidden conspiracies of this sort do not occur, we have abandoned in advance the whole enterprise of discovering the laws of nature by experimentation." I recommend that anyone interested in the status of the statistical independence assumption start by reading the 1976 exchange between Bell and CSH, discussed in section 3.2.2 of this: 1. With "in particular" I didn't mean to imply he's the first or only one. In any case, I'll fix that sentence, sorry for the misunderstanding. 2. WayneMyrvold, If statistical independence is such a generaly accepted principle it follows that no violations should be found, right? Let me give some examples of distant physical systems that are known not to be independent: 1. Stars in a galaxy (they all orbit around the galaxy'a center) 2. Planets and their star. 3. Electrons in an atom. 4. Synchronized clocks. So, it seems that it is possible after all to do science and accept that some systems are not independent, right? Now, if you reject superdeterminism you need to choose a different type of theory. Can you tell me what type of theory you prefer and just give a few examples of known experiments that provide evidence for that type of theory? For example, if you think non-locality is the way to go can you provide at least one experiment where the speed of light was certainly exceeded? 2. This would be easier for me to understand if I could figure out what the definition is of a successful superdeterminism model. Suppose you and your postdoc succeed! What would follow in the abstract of your article following "My postdoc and I have succeeded in creating a successful superdeterministic model. It clearly works because we show here that it..." (does exactly what?). Thanks. 1. Leibniz, On a theoretical level I would say it's successful if it solves the measurement problem. But the more interesting question is of course what a success would mean experimentally. 
It would mean that you can predict the outcome of a quantum measurement better than what quantum mechanics allows you to (because the underlying theory is deterministic after all). Does that answer your question? 3. The reference cited (by WayneMyrvold) above Bell’s Theorem (substantive revision Wed Mar 13, 2019) Is worth reading its entirety. Bell's Theorem: Two Neglected Solutions Louis Vervoort 'supercorrelation' appears more interesting than 'superdeterminism' (and perhaps more [Huw] Pricean). 1. Violation of the Bell-Inequality in Supercorrelated Systems Louis Vervoort (latest version 20 Jan 2017) 4. You have a HUGE problem with thermodynamics! Take Alice and Bob intending to participate in an EPR-Bell experiment. None of them knows yet what spin direction she/he is going to choose to measure. This is a decision they’ll make only at the very last second. Superdeterminism must invoke two huge past light-cones stretching from each of them backwards to the past, to the spacetime region where the two cones overlap. If Alice and Bob are very far away, then this overlap goes back millions of years ago, extending over millions of light-years over space. Then, all – but absolutely ALL – particles within this huge area must carry – together! – the information required for a deterministic computation that can predict the two human's decision, which is not yet known even to themselves (they are not born yet ��). Miss one of these particles and the computation will fail. Simple, right? Wrong. The second law of thermodynamics exacts a fundamental price for such a super-computation. It requires energy proportionate to the calculation needed: Zillions of particles, over a huge spacetime region, for predicting a minute event to take place far in the far future. Whence the energy? How many megawatts for how many megabits? And where is the computation mechanism? I once asked Prof. t'Hooft this question. He, being as sincere and kind as everybody who have met him knows, exclaimed “Avshalom I know what you are saying and it worries me tremendously!” You want superdeterminism? Very simple. Make quantum causality time-symmetric, namely allow each of the two particle to communicate to their common origin BACKWARDS IN TIME and there you go. No zillions of particles and light-years, only the two particles involved. This is the beauty of TSVF. Here is one example: there are many more, with surprising new predictions. Yours, Avshalom 1. Avshalom, That's right, you can solve the problem of finetuning initial conditions by putting a constraint on the future, which is basically what we are doing. Devil is in the details. I don't see what the paper you refers to achieves that normal quantum mechanics doesn't. 2. Our paper makes retrocausality the most parsimonious explanation. No conspiracy, no need to return to the Big Bang, just a simple spacetime zygzag. 3. Parsimonious explanation... for what? Do you or do you not solve the measurement problem? 4. Parsimonious for the nonlocality paradox. Our advance concerning the measurement problem is here: We show that wavefunction collapse occurs through a multiplicity of momentary "mirage particles" of which n-1 have negative mass, such that the particle's final position occurs after the mutual cancellation of all the other positive and negative particles. The mathematical derivation is very rigorous, right from quantum theory. A laboratory confirmation by Okamoto and Takeuchi is due within weeks. 5. What do you mean by wavefunction collapse? 
Is or isn't your time-evolution given by the Schrödinger equation? 6. It is, done time-symmetrically. Two wave functions along the two time directions between source and absorber (pre- and post-selections). This gives you information about the particle during the relevant time-interval much more than the uncertainty principle seems to allow. Under special choices of such boundary conditions the formalism yields unusual physical values: too large/small or even negative. Again the derivation is rigorous. The paper is very lucid. 7. Well, if your time evolution is given by the Schrödinger equation, then you are doing standard quantum mechanics. Again, what problem do you think you are solving? 8. Again, we solve the nonlocality problem by retrocausality, and elucidate the measurement problem by summing together the two wave-functions. No point going into the simple math here. It's all in the paper. 9. Avshalom Elitzur, Correlations are normal in field theories. All stars in a galaxy orbit the galactic center. They do not perform any calculations, they just respond to the gravitational field at their location. In a Bell test you have an electromagnetic system (Alice, Bob and the source of entangled particles are just large groups of electrons and nuclei). Just like in the case of gravity, in electromagnetism the motion of charged particles is correlated with the position/momenta of other charged particles. Nature does the required computations. No need for retrocausality. 10. Wrong, very wrong. All stars affect one another by gravity, a classical force of enormous scales, obeying Newton's inverse square law and propagating locally. No computation needed. With Alice's and Bob's last-minute decision, the particles must predict their decision by computing the tiniest subatomic forces of zillions of particles over a huge spacetime region. Your choice of analogy only stresses my argument. 11. Avshalom Elitzur, There is no "last-minute decision", this is the point of determinism. What we consider "last-minute decisions" are just "snapshots" from the continuous deterministic evolution of the collections of electrons and quarks Alice and Bob are made of. The motion of these charged particles inside our bodies (including our brains that are responsible for our "decisions") are not part of the information available to our consciousness, this is why our decisions appear sudden to us. The exact way these charged particle move is not independent on the way other, distant charged particles move (as per classical electromagnetism) so I would not see why our decisions would necessarily be independent of the hidden variable. 12. Precisely! But this is the difference between stars and galaxies on the one hand, and particles and neurons on the other. In the former case no computation is needed because the gravitational forces are huge. In the latter, you expect an EPR particle to compute in advance the outcome of neuronal dynamics within a small brain very far away! Can you see the difference? 13. The magnitude of the force is irrelevant. It is just a constant in the equations. Changing that constant does not make the equations fail (as required by independence). The systems are not independent because the state of any one also depends on the state of the distant systems. This is a mathematical fact. Moving a planet far from its star does not result in a non-elliptical, random orbit, but in a larger ellipse. 
You do not get independence by either playing with the strength of the force or by increasing the distance so that the force gets weaker. 14. So each EPR particle can infer from the present state of Alice's and Bob's past light-cone's particles what decision they will take, even arbitrarily much later? Sabine, is this what the model says? 15. Avshalom Elitzur, It's not clear if the question was directed only to Sabine, but I will answer it anyway. In classical electromagnetism any infinitesimal region around a particle contains complete information about the state (position and momentum) of all particles in the universe in the form of electric and magnetic fields at each point. In a continuous space the number of points in any such region is infinite so you will always have enough of them. Each particle responds to those fields according to Lorentz force. That's it. The particle does not "infer" anything. The states of particles in the universe will be correlated in a way specified by the solution of the N-body problem where N is the number of particles in the universe. My hypothesis is that if one would solve that N-body problem, he will find that for any initial state only solutions that describe particle trajectories compatible with QM's prediction would be found. I think that such a hypothesis is testable for a computer simulation using a small enough N. N should be large enough to accommodate a simplified Bell test though (obviously without humans or other macroscopic objects). 16. Sorry I am still not getting a clear answer to my simple question about the the physics: Two distant experimenters are going to make a last-minute decision about a measurement, a decision which they themselves do not know yet. Then the two particles arrive, each at another experimenter, and the measurements are made. How can the measurement outcome of the one particle be correlated with the measurement taken on the other distant particle? I (and probably other readers) will appreciate a straightforward answer from you as well as from Sabine. 17. Avshalom, You are asking "Why is the initial state what it is?" but there is never an answer to this question. You make assumptions about the initial state to the end of getting a prediction. Whatever works, works. Let me ask you in return how can the measurement outcome *not* be correlated? There isn't an answer to this either, other than postulating it. 18. Sorry this is certainly not my question. Please let me reiterate the issue. Two distant particles, each being affected by the choice of measurement carried out AT THAT MOMENT on the distant one. Either i) Something goes between them at infinite velocity; ii) The universe's present state deterministically dictates the later choices in a way that affects also the particles to yield the nonlocal correlations iii) Quantum causality is time-symmetric and the distant effects go through spacetime zigzag. Each option has its pluses and minuses. It seems that (i) has no advocates here; I am pointing out serious difficulties emerging from (ii); and advocating (iii) as the most elegant, fully according with quantum theory yet most fruitful in terms of novel predictions. I'd very much appreciate comments along these lines. Sorry if I am nagging - I have no problem continuing the discussion elsewhere. 19. Avshalom, Yours is a false dichotomy (trichotomy?). If you think otherwise, please prove that those are the only three options. There is also, of course, no way to follow your "zig zag" from option iii. 
I already said several times that retrocausality is an option I quite like, but I do not think that it makes sense if the Schroedinger equation remains unmodified because that doesn't solve any problem. 20. I was sure that you are advocating a certain variant of (ii). Not so? What then is this (iv)? Again, the Schrodinger equation is unmodified, only used TWICE. And the result is stunningly powerful. See this example 21. Avshalom, If you have a time-reversible operator, postulating an early-time state, present state, or late state is conceptually identical, so I don't see the distinction you draw between ii and iii. For this reason any superdeterministic theory could be said to be retrocausal, though one may quibble about just what's "causal" here. (I think the word is generally best avoided if you have a deterministic evolution.) The other problem with your 3-option solution is that option 3 is more specific than what is warranted. There isn't any "zig zag" that follows just from having a boundary condition in the future. You can use as many Schroedinger equations as you want, the result will still be a linear time evolution. 22. Not when the 2nd Law of Thermodynamics is taken into account. Whereas (ii) invokes odd conspiracies and impossible computations with no mechanism, (iii) is free of them. And the main fact remains: TSVF has derived a disappearance of a particle from one box, reappearance into an arbitrarily distant one and vice versa - all with certainty 1! See "The case of the disappearing (and re-appearing) particle" Nature Sci. Rep. 531 (2017). The actual experiment is underway. This derivation id obliged by quantum theory, but the fact that it has been derived only by (iii) indicates that it is not only methodologically more efficient but also more ontologically sound. 23. The 2nd law of thermodynamics is a statement about the occupation in state space. If you don't know what the space is and its states, you cannot even make the statement. Again, you are making a lot of assumptions here without even noticing. 24. I could locate another TSVF paper: Quote: "The photons do not always follow continuous trajectories" ... but Bohmian Mechanics require continuous trajectories. Can you repair it? 25. Avshalom Elitzur, As I have pointed out before all this talk about "last-minute decisions" that "they themselves do not know yet" is a red herring. There are a lot of facts about our brains that are true, yet unknown to us. The number of neurons is such an example. If you are not even aware about the number of neurons in your head (and it is possible in principle to count them using a high-resolution MRI) why would you expect to be aware about the motion of each electron and quark? Yes, we do not know what decisions we will make and it is expected to be that way regardless of your preferred view of QM. In order to understand the reason for the observed results you need to understand: 1. What initial states are possible? 2. How these initial states evolve and produce the observed experimental outcomes? As long as those questions are left unanswered you cannot expect to have a proper explanation. Let me give you an example of a different type of non-trivial correlations that can only be explained by understanding the initial state. You probably know that most planets in a planetary system orbit in the same direction around their star and also in the same plane (more or less). How do you explain it? There is no obvious reason those planets could not orbit in any direction and in any plane. 
Once you examine the past you find out that any planetary system originates in a cloud of gas and dust, and such a cloud, regardless of its original state will tend to form a disk where all the material orbits the center of the disk in the same direction. As the planets are formed from this material it becomes clear that you expect them to move in the same direction and in the same plane. In the case of EPR understanding the initial state is more difficult because the hidden variable and the detector settings are not statistical parameters (like the direction of rotation of a cloud of gas) but depend on the position and momenta of all electrons and quarks involved in the experiment. In other words I do not expect such a calculation to be possible anytime soon. This unpleasant situation forces us to accept that Bell tests are not of any use in deciding for or against local hidden variable theories. That decision must be based on simpler systems, like a hydrogen atom, or a molecule where detailed calculations can be performed. 5. An excellent article. "The stakes are high, for if quantum mechanics is not a fundamental theory, but can be derived from an underlying deterministic theory, this open the door to new applications" Great to see that Einstein's dream lives on:-) 6. How do you arrange any experimental test at all if you assume superdeterminism? If you have a regular probabilistic theory, you can test it. If you perform an experiment that is supposed to have probability ½ of result A and ½ of result B, and you get result A 900 times out of 1000, you can conclude that there is something wrong with your theory (or maybe with your calculations). But in superdeterminism, if you perform such an experiment, can you conclude that your theory is wrong? Maybe the initial conditions of the universe were designed to yield exactly this result. It certainly seems to me that, if you assume superdeterminism, you cannot rely on probabilistic outcomes being correct; the whole point of superdeterminism is to get around the fact that Bell's theorem shows that quantum mechanics gives probability distributions that can't be explained by a standard classical local probability theory. So if you can't rely on probability theory in the case of the CHSH inequality, how do you justify using probability for anything at all? 1. Peter, You arrange your experiment exactly the same way you always arrange your experiment. I don't know why you think there is something different in dealing with a superdeterministic theory than dealing with a deterministic theory. You can rely on probabilistic outcomes if you have a reason to think that you have them properly sampled. This is the case, arguably, for all experiments we have done so far. So, nothing new to see there. Really, think about this for a moment. A superdeterministic theory reproduces quantum mechanics. It therefore makes the same predictions as quantum mechanics. (Or, well, if it doesn't, it's wrong, so forget about it.) Difference is that it makes *more* predictions besides that. (Because it's not probabilistic.) No, it's not because a superdeterministic theory doesn't have to be "classical" in any sense. 2. Peter, A superdeterministic theory only needs to posit that the emission of entangeled particles and their detection are not independent events (the way the particles are emitted depends on the way the particles are detected). Such a theory does not need to claim that all events are correlated. 
So my answer to your question: is this: If your experiment was based on emission/detection of entangled particles you may be skeptical about the fact that your theory is wrong. If not, you may use statistics in the usual way. 3. Sabine, I think you have underestimated the consequences of superdeterminism. If it is true, there is no way to really test a theory against measurements, thus we do not know if our theory really describes the world. In my opinion, to defend locality by accepting superdeterminism is not worth it. Superdeterminism is an assumption at the epistemic level, underlying the possibility to check our theory about the world, while (non-)locality is at the theory content level. 4. tytung, "Superdeterminism" is not a theory. It's a property of a class of models. You can only test the models. What I am saying is that if you do not actually go and develop such a model you will, of course, never be able to test it. 5. Sabine, Let's say there is a God, and He/She behaves kind at certain times and has an ugly side at other sometimes. You want to check if this is true. But if the universe's fate is such that you always observe this God on those times He/She is kind, then you will build a theory of a kind God. Yes as you said, you can perform any test as usual, but due to the fate, the truth is very different, and forever beyond the reach of your knowledge. So the statement that God is sometimes unkind becomes unfalsifiable if fate is accepted. (I am an atheist.) 6. tytung, I don't know what any of that has to do with me, but you seem to have discovered that science cannot tell you what is true and what isn't; it can only tell you what is a good explanation for your observations. 7. Sabine, No, of course I do think Science can tell us what is true and what isn't, but this is precisely because Science implicitly assumes the freedom of choosing our measurements. In a super-deterministic model, all measurement choices are predetermined. You are not able to falsify a theory through those measurements that it does not allow you to make. You can only build a theory through allowed measurements, and this theory may be very different from the underlying model. I think this is what Shor and many others are concern about superdeterminism. 8. tytung, I know perfectly fine what they are saying and I have explained multiple times why it is wrong. Science does not assume any such thing as the freedom of choosing measurements. Science is about finding useful descriptions for our observations. Also, as I have said repeatedly, the objection you raise would hold equally well for any deterministic theory, be that Newtonian mechanics or many worlds. It is just wrong. You are confused about what science can and cannot do. 7. I was playing backgammon with a friend who is a nuclear engineer. The subject of stochasticity came up and how if one were able to computer the motion of dice deterministically from the initial conditions of a throw that stochastic or probability behavior could be vanquished. I mentioned that if we had quantum dice this would not be the case. Superdeterminism would counter that such predictions are possible, at least in principle. It is tough to completely eliminate any possible correlation that might sneak the obedience to Bell inequalities into quantum outcomes. If we are to talk about the brain states of the experimenter then in some ways for now that is a barrier. 
On the past light cone of the device and the experimenter at the moment of measurement are many moles of quantum states, or putative superdeterministic states. This gets really huge if the past light cone is considered to the emergence of the observable universe. Trying to eliminate causal influences of such is a game of ever more gilding the lily. Physical measurements are almost by necessity local, and with QM the nonlocality of the wave function plays havoc with the measurement locality. I would then tend to say that at some point one has to say there are no effective causal influences that determine a quantum outcome on a FAPP basis. My sense then is that superdeterminism is most likely not an effective theory. Of course I could be wrong, but honestly I see it as a big non-starter. 8. Sabine, "...they are non-local, which makes my hair stand up..." Well, reality may not be concerned with your coiffure. Isn't this the kind of emotional reaction to an idea your book argues against? More specifically, why is superdeterminism more palatable to you than nonlocality? Superficially they seem similar: the usual notion of nonlocality refers to space, while superdeterminism is nonlocality in time. 1. Andrew, Please excuse the flowery expression; I was trying to avoid repeating myself. It's not consistent with the standard model, that's what I am saying. Not an emotional reaction but what Richard Dawid calls the meta-inductive argument. (Which says, in essence, stick close to what works.) 2. Andrew, I can give you many examples of physical systems that are not independent: stars in a galaxy, planets and their star in a planetary system, electrons and the nucleus in an atom, synchronized clockes, etc. How many examples do you have where the speed of light has been exceeded? 9. Sabine, not to be dragged onto the slippery slope of a never ending “free will” discussion, I would suggest to exclude right from the beginning a human experimenter and use two random number generators (RNGs) instead. (either pseudo RNGs or quantum RNGs or CMB photon detection from opposite directions, ...) [These RNGs select (statistical independently) e.g. the orientations of the polarization filters in the Bell type experiment.] Otherwise it will distract from what I understand your intention is, namely to show in a yet unknown model “... that the detectors’ states aren’t independent of the system ...” and that this yet unknown model will predict more than QM can. 1. Reimond, Yes, that's right, alluding to free will here is unnecessary and usually not helpful. On the other hand, I think it is relevant to note that this seems to be one of the reasons people reject the idea, that they are unwilling or unable to give up on free will. 2. Conway and Kochen showed how QM by itself had conflicts with free will. This is without any superdeterminism idea. The switch thrown by an experimenter could be replaced with a radioactive nucleus or some other quantum system that transitions to give one choice or if in some interval of time there is no decay the other choice is made. I think in a way we have to use the idea of a verdict "beyond a reasonable doubt" used in jurisprudence. We could in principle spend eternity trying to broom out some superdeterministic event. We may not be able to ever rule out how a brontosaurus, I think now called Apatosaurus, farted 100 million years ago and the sound generated phonons that have reverberated around and the hidden variable there interacted with an experiment. 
Maybe a supernova in the Andromeda galaxy 2.5 million years ago launched a neutrino that interacts with the experiment, and on it can go. So going around looking for these superdeterministic effects strikes me as George W Bush doing his "comedy routine" of looking for WMD. 3. Lawrence Crowell, The problem is that the assumption of statistical independence can be shown to be wrong for all modern theories (field theories). A charged particle does not move independently of other charged particles, even if those particles are far away. Likewise, in general relativity a massive body does not move independently of other massive bodies. What exactly makes you think that in a Bell test the group of charged particles (electrons and nuclei) that make up the source of the entangled particles evolves independently of the other two groups of charged particles (Alice and Bob)? 4. The proposal here is that everything in a sense is slightly entangled with everything else. That I have no particular problem with, though this causes a lot of people to get into quantum-woo-woo. An entanglement sum_ip_i|ψ_i>|φ_i> for some small probability p_i for a mixture is an aspect of decoherence. So in the states of matter around us are highly scrambled or mixed entanglements with states that bear quantum information from the distant past. I probably share tiny entanglements or quantum overlaps with states bearing quantum information held by states composing Abraham Lincoln or Charlemagne or Genghis Khan, but can never tractably find those. The superdeterminist thesis would be there are subquantum causal effects in these massive mixed entanglements which can trip up our conclusions about violations of Bell inequalities. This would mean such violations really occur because we can't make a real accounting of things. This is where I think an empirical standard similar to the legal argument of a verdict “beyond all reasonable doubt” must come into play. People doing these Bell experiments, such as forms of the Aspect experiment, try to eliminate classical influences sneaking in. I think Zeilinger made a statement a few years ago that mostly what is left is with the mind of the observer. If we are now to look at all weak entanglements of matter in an experimental apparatus to ferret out possible classical-like superdeterministic causes the work is then almost infinite. 5. Lawrence Crowell, The way I see EPR correlations explained in a superdeterministic context is as follows: Under some circumstances, a large number of gravitating bodies correlate their motion in such a way as to create a spiral galaxy. Under some circumstances, a large number of electromagnetically interacting objects (say water molecules) correlate their motion in such a way as to create a vortex in the fluid. Under some other circumstances, a large number of electromagnetically interacting objects correlate their motion in such a way as to produce EPR-type correlations. As long as the requirements for the above physical phenomenons are met, spiral galaxies, vortices or EPR correlations will be present. I think this hypothesis has the following implications in regards to the arguments you presented: 1. There is no specific cause, no singular event in the past that explains the correlations (like the fart of the apatosaurus). The correlations are "spontaneously" generated as a result of how many particles interact. 2. 
The efforts of Aspect or Zeilinger are not expected to make any difference because there is no way to change how those many particles interact. Electrons behave like electrons, quarks behave like quarks regardless of how you set the detector, whether a human presses a button, or a monkey or a dog, or a computer running some random number algorithm. The correlations arise from a more fundamental level that has nothing to do with the macroscopic appearance of the experimental setup.

6. Physics such as vortex flow or turbulence is primarily classical. I am not an expert on the structure of galaxies, but from what I know the spiral shape occurs from regions of gas and dust that slow the passage of stars through them. Galactic arms then do not rotate around as does a vortex, but the spiral arms are somewhat fixed in their shape. In fact this is what the whole issue with dark matter is over.

7. I also think the quantum phenomena are in essence classical and could be explained by a classical theory, more exactly a classical field theory. The best candidate is, I think, stochastic electrodynamics.

8. This is unlikely. Bell's inequalities tell us how a classical system is likely to manifest a probability distribution. Suppose I were sending nails oriented in a certain direction perpendicular to their path. These then must pass through a gap. The orientation of the nails with respect to the gap would indicate the probability. We think of the orientation with respect to the slats as similar to a spinning game, such as the erstwhile popular TV game show Wheel of Fortune. If the nails were oriented 60 degrees relative to the slats we would expect the nails to have a ⅔ chance of passing through. Yet the quantum amplitude for the probability is cos^2(π/3) = 0.25. The classical estimate is larger than the actual quantum probability. This is a quick way of seeing the Bell inequality, and the next time you put on those RayBan glasses with polarizing lenses you are seeing a violation of Bell inequalities. Physics has quantum mechanics that is an L^2 system, with norm determined by the square of amplitudes that determine probabilities. Non-quantum-mechanical stochastic systems are L^1 systems. Here I am thinking of macroscopic systems that have pure stochastic measures. For convex systems or hulls with an L^p measure there is a dual L^q system such that 1/p + 1/q = 1. For quantum physics there is a dual system; it is general relativity with its pseudo-Euclidean distance. For my purely stochastic system the dual system is an L^∞ system, which is a deterministic physics such as the classical mechanics of Newton, Lagrange and Hamilton. There is a fair amount of mathematics behind this, which I will avoid now, but think of the L^∞ system as one where there are no distributions fundamental to the theory. Classical physics and any deterministic system, say a Turing machine, is not about some distribution over a system state. The classical stochastic system is just a sum of probabilities, so there is no trouble with seeing that as L^1. The duality between quantum physics and spacetime physics is very suggestive of some deep physics. In this perspective a quantum measurement is then where there is a shifting of the system from p = ½ to p = 1, thinking of a classical-like probability system after decoherence, or with einselection this flips to an L^∞ system as a state selection closest to the classical or greatest expectation value. I read last May about experiments performed to detect how a system was starting to quantum tunnel.
This might be some signature of how this flipping starts. It is still not clear how this flipping can be made into a dynamical principle. Avshalom pointed out an issue with thermodynamics, which I think is germane to this. With superdeterminism there would be more information in quantum systems, and this would lead to problems with the second law of thermodynamics and probably violations of entropy bounds. I looked at Motl's blog last evening, and he declares with confidence there is no measurement problem. Of course he is steeped in his own certitude on most things, everything from climate change to string theory. In the spirit of what he calls an "anti-quantum zealot" I think it is not likely he has this all sewed up and hundreds of physicists who think otherwise are all wrong. Motl likes Bohr, but as Heisenberg pointed out there is an uncertain cut-off between what is quantum and what is classical. So Bohr and CI are an interpretation with holes. There is lots of confusion over QM, and even some of the best of us get off the rails on this. I have to conclude that 't Hooft went gang aft agley on this as well.

9. Lawrence Crowell, "Bell's inequalities tell us how a classical system is likely to manifest a probability distribution." This is not true. Bell's inequalities tell us how a system that allows a decomposition into independent subsystems should behave. As I have argued many times on this thread, classical field theories (like classical electrodynamics) are not of this type. No matter how far you place two or more groups of charged particles they will never become independent; this is a mathematical fact. So the statistical independence assumption does not apply here (or at least it cannot be applied in the general case). In other words, classical field theories are superdeterministic, using Bell's terminology. This means that, in principle, one could violate Bell's inequalities with classical systems even in theories like classical electromagnetism. Stochastic electrodynamics is a classical theory (please ignore the Wikipedia info, it has nothing to do with Bohm's theory); in fact it is just classical electrodynamics plus the assumption of the zero-point field (a classical EM field originating at the Big Bang). In such a theory one can actually derive Planck's constant, and the quantum phenomena (including the electron's spin) are explained by the interaction between particles and the zero-point field. Please find an introductory text here: Boyer, T.H., Stochastic Electrodynamics: The Closest Classical Approximation to Quantum Theory, Atoms 2019, 7, 29. The theory is far from completely reproducing QM but it is a good example of a theory I think is on the right track. Please take a look at this paper: Classical interpretation of the Debye law for the specific heat of solids, R. Blanco, H. M. França, and E. Santos, Phys. Rev. A 43, 693 (1991). It seems that entropy could be correctly described in a classical theory as well. "I looked at Motl's blog last evening, and he declares with confidence there is no measurement problem." I have discussed with him a few ideas regarding the subjective view of QM he is proposing. I had to conclude that the guy has no clue about what he is speaking about.
He is completely confused about the role of the observer (he claims that the fact that different observers observe different things proves the absence of objective reality), he contradicts himself when trying to explain EPR (one time he says the measured quantity does not exist prior to measurement, then he claims the contrary). He claims there are observer independent events, but when asked he denies, etc. I cannot evaluate his knowledge about strings, I suppose he is good but his understanding of physics in general is rudimentary. All this being said, I actually agree with him that there is no measurement problem, at least for classical superdeterministic theories. The quantum state reflects just our incomplete knowledge about the system but the system is always in a well-defined state. 10. Lawrence Crowell: "but from what I know the spiral shape occurs [...]" That's not how spiral arms form, nor how they evolve. You'll find textbooks (and papers) which confidently tell you the answers to spiral arm formation and evolution; when you dig into the actual observations, you quickly realize that there's almost certainly more than one set of answers, and plenty of galaxies which simply refuse to be neatly pigeon-holed. 11. @ JeanTate: I am not that versed on galactic astrophysics. However, this wikipedia entry bears what I said for the most part. These are density waves, and are regions of gas and dust that slow the orbital motion of stars and compress gas entering them. So I think I will stick to what I said above. I just forgot to call these regions of gas density waves. As for the evolution of galactic structure. I think that is a work in progress and not something we can draw a lot of inferences from. 12. The Bell inequalities refer to classical probabilities. I read a paper on stochastic electrodynamics last decade. As I recall the idea is that the electric field has a classical part plus a stochastic variation E = E_c + δE, where (δE(t)) = 0 and (δE(t')δE(t)) = E^2δ(t' - t) if the stochastic process is Markovian. BTW, I am using parentheses for bra and ket notation because a lot of these blogs do not like carrot signs. I think Milgrom did some analysis of this sort. If the fluctuations are quantum then this really does not change the quantum nature of things. Quantum mechanics really tells us nothing about the existential nature of ψ. Bohr said ψ had no ontology, but rather was a prescription for determining measurements. Bohm and Everett said ψ does exist. The problem is with trying to make something ontological that is complex valued. Ontology seems to reflect mathematics of real valued quantities, such as expectations of Hermitean operators. Yet epistemic interpretations leave a gap between the quantum and classical worlds. I think this is completely undetermined; there is not way I think we can say with any confidence that quantum waves are ψ-epistemic or ψ-ontic. As a Zen Buddhist would say MU! Motl is a curious character, and to be honest I think that since he places a lot of ideology and political baggage ahead of actual science that his scientific integrity is dubious. I agree his stance on QM is highly confused, and based on his definition I am proud to be what he calls an anti-quantum zealot. He also engages in a lot of highly emotional negativity towards people he disagrees with. He will excoriate a physicist for something, but then for some reason found great interest in a 15 year old girl who does political videos that are obscene and in your face. 
His blog is useful for looking at some of the papers he references. I would say you can almost judge string theory from his blog; string may be a factor in physical foundations, but the vast number of stringy ideas illustrate there is no "constraint" or something that defines what might be called a contact manifold on the theory. 13. @ Lawrence Crowell: the Wikipedia article is, I think, a reasonable summary of some aspects of the topic of spiral arms in galaxies and their formation and evolution ... but it's rather out of date, and quite wrong in places. Note that density waves are just one hypothesis; the WP article mentions a second one (the SSPSF model), and there are more in the literature. Curiously, a notable morphological feature of a great many spiral galaxies is omitted entirely (rings). This is all rather OT for this blogpost, so just one more comment on this topic from me: galaxies are not like charm quarks, protons, or atoms. 10. Typo: "While philosophers on occasional". 11. Two folks whom I admire greatly, John Bell and Gerard 't Hooft, have shown an interest in superdeterminism for resolving entanglement correlations — Bell obliquely, perhaps, and 't Hooft both recently and very definitely. I understand and even appreciate the reasoning behind superdeterminism, but my poor brain just cannot accept it for a very computer-think kind of reason: efficiency. Like Einstein's block universe, superdeterminism requires pre-construction of a 4-dimensional causality "crystal" or "block" to ensure that all physics rules are followed locally. Superdeterminism simply adds a breathtakingly high new level of constraints onto pre-construction of this block: human-style dynamic state model extrapolation ("thinking") and qualia (someday physics will get a clue what those are) must be added to the mix of constraints. But why should accepting superdeterminism be any worse than believing in a block universe, which the most physicists already do — relativists via Einstein's way of reconciling diversely angled foliations, and quantum physicists via schools of thought such as Wheeler-Feynman advanced and retarded waves? It is not. That is, superdeterminism is not one whit less plausible than the block universe that most physicists already accept as a given. It just adds more constraints, such as tweaking sentient behavior. Pre-construction of the block universe is by itself so daunting that adding a few more orders of magnitude of orders of magnitude of complexity to create a "superblock" universe cannot by itself eliminate superdeterminism. In for a penny, in for a pound! (And isn't it odd that we Americans still use that expression when we haven't used pounds for centuries, except in the even older meaning of the word for scolding ourselves about eating too many chips?) Here is why efficiency is important in this discussion: If you are a scientist, you must explain how your block universe came into existence. Otherwise it's just faith in a mysterious Block Creator, the magnitude of whose efforts makes a quick build of just seven days pretty puny by comparison. The mechanism needed is easy to identify: It's the Radon transform, the algorithm behind tomography. To create a block universe, you apply the Radon transform iteratively over the entire universe for the entirety of time, shuffling and defuzzing and slowing clarifying the world lines until a sharp, crystallized whole that meets all of the constraints is obtained. 
This is why I can respect advocates of superdeterminism, and can agree fully that it is unfair for superdeterminism not to be taught. If you accept the block universe, you have already accepted the foundations of superdeterminism. All you need to do is just the Radon sauce a bit more liberally! My problem? I don't accept the necessity of the block universe. I do however accept the concept (my own as best I can tell) of causal symmetry, by which I mean that special relativity is so superbly symmetric that if you take the space-like details of any given foliation, you have everything to you need to determine the causal future of the universe for any and all other foliations. However, causal symmetry is a two-edged sword. If _any_ foliation can determine the future, then only _one_ foliation is logically required from a computational perspective. It doesn't make other foliations any less real, but it does dramatically simplify how to calculate the future: You look at space around _now_, apply the laws of physics, and let universe itself do the calculation for you. But only once, and only if you accept entanglement as proof that classical space and classical time are not the deepest level of how mass-energy interacts with itself. Space and time just become emergent features of a universe where that most interesting and Boltzmannian of all concepts, information and history, arose with a Big Bang, and everything has been moving on at a sprightly pace ever since. 12. A more promising way of attempting a deterministic extension of quantum mechanics seems to me a theory with some mild form of non-locality, possibly wormholes? 1. Leonard Susskind has advanced the idea of ER = EPR, or that the Einstein Rosen bridge of the Schwarzschild solution is equivalent to the nonlocality of EPR. This is a sort of wormhole, but not traversable. So there are event horizons that make what ever causal matter or fields there are unobservable and not localizable to general observers. 2. Very hard to make compatible with Lorentz-invariance. (Yes, I've tried.) 3. Sabine, are you saying that a wormhole would violate Lorentz-invariance? Why would that be? 4. Well, one wormhole would obviously not be Lorentz-invariant, but this isn't what I mean. What I mean is if you introduce any kind of non-locality via wormholes and make this Lorentz-invariant, you basically have non-locality all over the place which is more than what you may have asked for. Lorentz-invariance is a funny symmetry. Very peculiar. 5. This problem is related to traversable wormholes. A traversable worm hole with two openings that are within a local region are such that for the observer passing through there is a timelike path connecting their initial and final positions. To an observer who remains outside these two points are connected by a spacelike interval. An elementary result of special relativity is that a timelike interval can not be transformed into a spacelike interval. But if we have a multiply connected topology of this sort there is an ambiguity as to how any two points are connected by spacelike and timelike intervals. Ok, one might object, special relativity is a global flat theory, wormholes involve curvature with locally Lorentzian regions. However, the ambiguity is not so much with special vs general relativity but with the multiply connected topology. This matter becomes really odd if one of the wormhole openings is accelerated outwards and then accelerated back. For a clock near the opening there is the twin paradox issue. 
It is then possible to pass through the wormhole, travel back by ordinary flight and arrive at a time before you left. Now there are closed timelike loops. The mixing of spacelike and timelike intervals becomes a horrendous mix, as now timelike and spacelike regions overlap. Traversable wormholes also run afoul with quantum mechanics as I see it. A wormhole converted into the time machine as above would permit an observer to duplicate a quantum state. The duplicated quantum state would emerge from an opening, and then later that observer had better throw one of these quantum states into the wormhole to travel back in time to herself. The cloning of a quantum state is not a unitary process, and yet in this toy model we assume the quantum state evolves in a perfectly unitary manner. Even with a nontraversable wormhole one might think there is a problem, for Alice and Bob could hold entangled pairs and Alice enters this black hole. If Bob times things right he could teleport a state to Alice, enter the black hole, meet Alice so they have duplicated states without performing a LOCC operation. However, now nature is more consistent, for Bob would need a clock so precise that its would have a mass comparable to the black hole. That would then perturb things and prevent Bob from making this rendez vous. 6. Thank you, Sabine, for explaining why wormholes violate Lorentz-invariance, and thank you, Lawrence for the detailed clarification of what Sabine meant. I want to give your responses some more thought, but currently the heat and humidity is not conducive to deep thinking. I'll await the passage of a cold front in a few days before exercising the brain muscle. 13. I think you should re-read Tim Maudlin's book more carefully. He spent a lot of paragraphs explaining that superdeterminism has nothing to do with "free will". 1. I didn't say it does. I said that many people seem to think it does but that this is (a) not a good argument even if it was correct and (b) it is not correct. 14. Superdeterminism is a difficult concept for a lay person to unpack and to help understand it I ask whether it operates on the classical, "visible" world of things or, as the quantum was sometimes evaluated, exists only at the atomic and subatomic level? In short, do "initial conditions" govern ALL subsequent events or only events at the particle level? If the first is the case then does superdeterminism complicate our understanding of other scientific endeavors? Here is an example of what I mean. The evolution of modern whales from a "wolf-like" terrestrial creature is sufficiently documented that I kept a visual of it in my classroom. Biological evolution is understood to operate through random genetic changes (e.g. mutations or genetic drift) that are unforeseeable and that are selected for by a creature's environment. If true randomness is removed from the theory then evolution becomes telelogical -- and even smacks of "intelligent design." I mean that in this instance (the whale) its ultimate (for us) phenotype was an inevitable goal of existence from when the foundation of creation was laid. The same may be said of any living thing and, if so, evolutionary biology becomes a needless complication. Leopards were destined to have spots from the first millisecond of the Big Bang and camouflage had nothing to do with it. 
(In a similar vein I once wrote that railroad stations are always adjacent to railroad tracks because this, too, was destined by the initial conditions of 13.5 billion years ago and not because human engineers found such an arrangement maximal for the swift and efficient movement of passengers.) So . . . are interested lay people better advised to understand superdeterminacy as a "quirk" of very small things or as an operating principle for the universe, without regard to scale? Thank you.

1. A. Andros, The theories that we currently have all work the same way, and in them the initial conditions govern all subsequent events, up to the indeterminism that comes from wave-function collapse. Yes, if you remove the indeterminism of quantum mechanics then you can predict the state at any time given any one initial state. An initial state doesn't necessarily have to be an early time, it could be a late time. (I know this is confusing terminology.) No, this is not intelligent design as long as the theory has explanatory power. I previously wrote about this here. Please let me know in case this doesn't answer your question.

2. Intelligent design requires a purpose, the assumption that the state at the Big-Bang was "chosen" so that you get whales. Determinism just states that the state at the Big-Bang is the ultimate cause for the existence of the whales. If that state were different, different animals would have evolved. What is the problem? If the mutations were truly random, a different mutation would play exactly the same role as the different initial state. The railroad stations are always adjacent to railroad tracks because they both originate from the same cause (they were planned together by some engineer). That plan is also in principle traceable to the Big-Bang. Again, I do not see what the problem is supposed to be.

3. It must be... Is that right?

4. For a railroad station to be located adjacent to railroad tracks (a silly example, I know, but illustrative) the number of "coincidences" that must flow from initial conditions (under superdeterminism) is so staggering as to strain belief. We must believe that 13.8 billion years ago the universe "determined" that all the following would coincide in time and place on just one of trillions (?) of planets: the invention of the steam locomotive; the manufacture of steel rails; the surveying of a rail route; appropriate grading for the right-of-way; the delivery of ballast to that right-of-way; the laying of that ballast; the cutting and processing of cross-ties; the shipment of cross-ties to the right place; the laying of cross-ties; the laying of rails; the installation of signals; the decision to build a railroad station; the creation of architectural plans for such a station; the simultaneous arrival of carpenters and masons at the site of the proposed station . . . and so on. All of these factors, and numerous others, were on the "mind" of the infant universe at the Big Bang? As an alternative, one can believe that the various contractors and engineers have AGENCY and thus could create the station/tracks. And, if the Rolling Stones were superdetermined to hold a performance at such-and-such a place, did the infant universe also determine that 50,000 individuals would show up at the right place and time, each clutching an over-priced ticket? I wonder whether superdeterminism is just an attempt to get around Bell's Theorem and Quantum Mechanics and so restore a form of classical physics.
And, since the concept cannot be tested -- is it not similar to string theory or the multi-verse? Dr. Richard Dawkins showed how random events can give the impression of purpose. Superdeterminism, though, hints at purpose disguised as random events. As for the leopard and his spots . . . since everything that exists or acts results from initial conditions, biological evolution violates the teachings of the late Mr. Occam: it is simply an unnecessary embellishment of what was inevitably "meant to be." Now, I don't doubt evolution for a moment -- but it is not consistent (because of its random nature) with a superdetermined universe. Also . . . I thank Dr. H. for her helpful and kind reply to my note.

5. A. Andros, The Big-Bang created the engineer, the engineer created both railroad stations and tracks. Railroad stations and tracks were not independently created at the Big-Bang. The correlation between them is perfectly explainable by their common cause. Evolution does not require randomness, only multiple trials. You could do those trials in perfect order (mutate the first base, then the second and so on) and still have evolution. Superdeterminism can be tested using computer simulations.

6. To belabor my point, the Big Bang did not create the engineer who "created both railroad stations and tracks." These are separate engineering feats requiring different skills. Literally hundreds of separate skills/decisions must perfectly coincide in time and place for such projects to exist. This is simply asking too much of coincidence. There is no need for "multiple trials" in evolution if the phenotype of an organism was dictated at the Big Bang. What is there to submit to trial? The outcome 13.8 billion years after creation was inevitable from the start --- what is there to evolve? As for testing superdeterminism on computers, the outcome of such simulations must also be dictated by initial conditions at the Big Bang and so is unreliable -- which, incidentally, seems to be Dr. H.'s opinion of numerous experiments that validate Prof. Bell's conclusions -- they cannot be trusted because the "fix" is in. If one believes in superdeterminism then one is stuck with something akin to intelligent design. Or, did the 3.5 million parts of the Saturn V, designed by thousands of engineers and all working perfectly together, originate in a "whim" of the Big Bang? It seems that superdeterminism provides a possible way out of Bell's theorem and quantum indeterminacy. But it does so by endowing the universe with purpose and something like "foresight." IBM was down a fraction last week -- is that due to "initial conditions" almost 14 billion years ago?

7. A. Andros "…does superdeterminism complicate our understanding of other scientific endeavors?" Yes, and thanks for broadening the discussion to include complications that superdeterminism would present for understanding the how and why of complex biological systems. Your examples are part of a nearly endless list of biological phenomena for which there is presently a strong explanatory rationale within evolutionary biology. These interrelated rationales would be completely undercut by a theory of superdeterminism and left hanging for some ad hoc explanation. As you note, evolutionary biology then becomes a needless complication. Going further, the existence of these unnecessary phenomena (spots on leopards) would in fact violate the least action principle of physics that governs state transitions. It doesn't hang together.
Granted, we cannot free ourselves from the constraints created by the laws of physics, but neither can we free ourselves from the addendum of further constraints created by biological systems. "In general, complex systems obviously can bring new causal powers into the world, powers that cannot be identified with causal powers of more basic, simpler systems. Among them are the causal powers of micro-structural, or micro-based, properties of a complex system" — Making Sense of Emergence, Jaegwon Kim (f.n. #37). Evolutionary biology is a stepwise saga of contesting futures. Similarly, science is a stepwise saga of contesting theories. The fitness criteria are eventually satisfied.

8. A. Andros, The purpose of a superdeterministic theory is to reproduce QM. You have provided no evidence that, mathematically, this is not possible. If you have such evidence please let me know. So, as long as all your examples with railroads and the evolution of animals are compatible with QM, they will also be compatible with a superdeterministic interpretation of QM.

9. If one accepts superdeterminism then one must accept Intelligent Design. It may be that your mathematics do not align with 150 years of study of biological evolution, but if that is the case then all the worse for superdeterminism. And, as I have said, the "coincidences" necessary for a superdeterministic universe to produce complex human interactions on a whim of initial conditions in a universe some 13.8 billion years ago simply strain credulity. There is ample evidence that biological evolution is, in large part, powered by random genetic events that provide the raw material on which natural selection works. Since superdeterminism disallows anything that is random, it is, perforce, a form of Creationism. All the math in the world, however cleverly employed, seems unlikely to verify a Genesis-like account of organic diversity over time. One might even become lost in math.

10. "If one accepts superdeterminism then one must accept Intelligent Design." This is complete rubbish.

11. Forty years ago Fred Hoyle delighted fundamentalists by stating that the complexity of DNA is such that it was as likely to have evolved spontaneously in nature as the chances of a tornado assembling a 747 by sweeping through a junk yard. Superdeterminism does the tornado one better -- it assembles entire technological civilizations from the initial conditions of the universe. Since you seem to allege that this is due neither to random events (which SD prohibits) nor to agency (also excluded by SD), what device is left to explain complex social and technological systems other than intelligent design? SD (superdeterminism) is simply a restatement of Augustinian theology -- as amplified by John Calvin. The term SD could just as easily be relabeled "predetermination" and it would be fully acceptable in 17th-century Geneva. And, as part of the intelligent design, biological evolution simply goes out the window. I was taught that organisms evolve due to random genetic and environmental changes -- but SD insists that "randomness" does not exist. Thus, organisms cannot evolve and each living thing is created de novo -- as declared in Genesis. Since the universe decreed the emergence of, say, giraffes from the moment of its (the universe's) inception, evolution becomes so much hand-waving. Your comment that my remarks are "complete rubbish" contains no arguments. I am merely trying to learn here. I am mystified how Dr. H.
can believe that fantastically complex events in history (our 747, for example) could be implicit in initial conditions without intelligent design or random events or agency. I also hope to learn from you how the gradual evolution of one organism into another, somewhat different organism over geological time is possible if the phenotype is implicit in the infant universe. This is simply teleology. For all the math in the world, SD seems to simply unpack Augustinian/Calvinistic thought and "science" and marshal it for battle against the peculiar findings of Mr. John Bell. I enjoy your column (and book) greatly. But the implications of SD are not easily dismissed -- or believed (unless one is an Augustinian, and then it all makes perfect sense).

12. A. Andros: If you have a time-reversible evolution, they are always present in the initial conditions, as in the present state, and in any future state. If you want to call that intelligent design, then Newtonian mechanics and many worlds are also intelligent design.

13. Sabine, I believe the point is that the design of today's complexity of biological structure and human artifact had to evolve as part of a natural process or be created by fiat of some agency. Unless the past is some kind of reverse engineering of the present. Ilya Prigogine believed that the very foundations of dissipative structures [biological systems] impose an irreversible and constructive role for time. In his 1977 Nobel Lecture he noted in his concluding remarks, "The inclusion of thermodynamic irreversibility through a non-unitary transformation theory leads to a deep alteration of the structure of dynamics. We are led from groups to semigroups, from trajectories to processes. This evolution is in line with some of the main changes in our description of the physical world during this century." A good article with experiments and references here: And consider that one photon in a thousand striking the earth's surface will land upon a chloroplast, which is about one fifth of the diameter of a human hair and comprised of some three thousand proteins. Significantly, as the excitation moves through the photosynthetic process, one of these proteins is triggered to block it from returning to its previous state. Not to say that that process itself could not, in theory, be rewound, but ingenious biology.

15. I seem to recall Ian Stewart remarking in one of his books that one way Bell's proof can be challenged is via its implicit assumption of non-chaotic dynamics in the evolution of hidden variables. Any mileage to that?

1. Maybe he talked to Tim Palmer :p

2. No assumption is made of non-chaotic evolution of the hidden variables. Just go through the proof, and you'll see that.

16. Last paragraph, "this open the door ..." should be "this opens the door ..."

17. Sabine, do you think it possible for a superdeterministic model to get some place useful without incorporating gravity into the equation? Offhand, it seems unlikely to me, since any kind of progress on the measurement issue would have to answer the gravitational Schrödinger's cat question, discussed on your blog at some point. Namely, where does the gravitational attraction point while the cat is in the box?

1. Sergei, I don't see how gravity plays a role here, just by scale. Of course it could turn out that the reason we haven't figured out how to quantize gravity is that we use the wrong quantum theory, but to me that's a separate question. So the answer to your question is "yes".

18. Not John "Steward" Bell, but John Stewart Bell.
19. If the theory that space-time is a product of a multitude of interconnected entangled space-time networks is correct, then it might be reasonable to suspect that any test of the state of a component of one of those networked entangled components of reality would be unpredictable as probed by an experimenter. The entangled nature of the network is far too complex and beyond the ability of the experimenter to determine. This complexity leads to the perception that the experiment is inherently unpredictable. To get a reliable test result, the experimenter may need to join the entangled network whose component the experimenter is interested in testing. All the equipment that the experimenter intends to use in the experiment would be required to be entangled with the network component under test. The test instrumentation would now be compatible and in sync with the entangled component and be in a state of superposition with the entangled network that the component belongs to. During experiment initialization, the process of entangling the test infrastructure would naturally modify the nature of the network to which the component to be tested belongs as that infrastructure joins the common state of superposition. As in a quantum computer, the end of the experiment is defined by the decoherence of the coherent state of the experiment. Then, upon decoherence, the result of the experiment can be read out.

20. "...and the latter wants to discretize state space. I don't see the point in either." There is no mathematically rigorous definition of the continuum:

21. Has anyone considered the possibility of spacetime itself being wavelike instead of trying to attach wavelike properties to particles? That is to say, the energy in space (time) varies sinusoidally over duration? This wave/particle duality seems to be a major problem with all "interpretations". This notion might define the "pilot wave".

1. Yes, the wave structure of matter started by Milo Wolff proposes that what we observe as particles is the combination of an in-coming wave and an out-going wave. The combination of these two waves (which are more fundamental and are the constituents of both photons and matter waves) removes wave-particle duality issues, singularities of near-field behavior (including the need for renormalization), and many other issues. The two best features that I see are the ability to calculate the mass of standard particles and the wave nature of gravity (we did measure gravitational waves in 2015) joining nicely with the wave nature of particles.

22. What I don't understand about this superdeterminism is why, with all the fine tuning necessary to simulate non-locality, the mischief-maker responsible didn't go that little bit further and ensure my eggs were done properly this morning. It seems an extravagant degree of planning to achieve what?

1. Dan, What makes you think fine-tuning is necessary? Please be concrete and quantify it.

2. Dan, correlations between distant systems are a consequence of all field theories (general relativity, classical electromagnetism, fluid mechanics, etc.). The correlations need not be explained by any fine-tuning of the initial conditions. For example, the motions of a planet and its star are correlated (one way of detecting distant planets is by looking at the motion of the star) but this is not a result of fine-tuning. All planetary orbits are ellipses. The reason they are not triangles or spheres has nothing to do with the initial state.
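To make concrete what the statistical-independence assumption does in the arguments above, here is a minimal toy simulation (an illustrative sketch in Python/NumPy only, not a model that anyone in this thread has put forward). With the hidden variable drawn independently of the settings, the local model below respects the CHSH bound of 2; if the source's hidden variable is allowed to depend on the settings, the same kind of local outcome assignment reaches the quantum value 2√2 ≈ 2.83 without any signal passing between the two wings.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Standard CHSH settings (radians); for a singlet state QM gives E(x, y) = -cos(x - y)
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

def E_independent(x, y):
    """Local model, hidden variable lambda drawn independently of the settings.
    Each wing computes its outcome from its own setting and lambda only."""
    lam = rng.uniform(0.0, 2.0 * np.pi, N)
    A = np.sign(np.cos(lam - x))
    B = -np.sign(np.cos(lam - y))
    return np.mean(A * B)

def E_setting_dependent(x, y):
    """Same locality, but statistical independence is dropped: the source samples
    the predetermined outcome pair from a distribution that depends on the settings,
    chosen here so that E(x, y) = -cos(x - y)."""
    p_same = (1.0 - np.cos(x - y)) / 2.0   # probability that A*B = +1
    same = rng.random(N) < p_same
    A = rng.choice([-1, 1], size=N)
    B = np.where(same, A, -A)
    return np.mean(A * B)

def chsh(E):
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print("CHSH, settings-independent hidden variable:", round(chsh(E_independent), 3))       # ~2.0
print("CHSH, settings-dependent hidden variable:  ", round(chsh(E_setting_dependent), 3)) # ~2.83
```

The second function is of course not an explanation of anything; it is only a bookkeeping demonstration that the dependence of the hidden variable on the settings is the single assumption doing the work in the debate above.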
Sabine, my assumption is that the distinction between a super deterministic universe and a merely deterministic universe must be that in the super deterministic universe, the initial conditions (at the Big Bang) must be precisely those that will simulate nonlocality in all Bell tests forever more. 1. Dan, I suggest you don't assume what you want to prove. 24. I should have been clearer. I do not seek to prove any aspect of superdeterminism. I believe it to be as extravagant and inaccessible to empirical demonstration as many worlds. All I was trying to say was that super determinism appears to presume that a highly contrived and thus extravagant set of initial conditions existed at the onset of the big bang. But congratulations on an absorbing discussion of a very interesting topic. 1. The universe came into existence as a unitary entity. As the universe expanded and eventually factionalized, its various substructures remained entangled as constrained by its unitary origin. The initial condition of the universe is global entanglement, and as such superdeterminism must be a fallout of primordial global entanglement. 2. Dan, That you "believe" a theory "appears to presume" is not an argument. 3. Axil, With superdeterminism there is no entanglement (at least in the version that mostly seems to be discussed here). It's being relied on to explain Bell Inequality violations locally. 4. Just a generic set of initial states for the early universe would do. If you take the physical state in which you do an experiment, then it's obviously not true that you could have made another decision w.r.t. the experimental set-up while leaving everything else unchanged. Such a different future state in which the experiment is performed would under inverse time evolution not evolve back to an acceptable early universe state. The physical states we can find ourselves in are, as far as inverse time evolution is concerned, specially prepared states whose entropies will have to decrease as we go back in time. But because of time reversal invariance, any generic state will always show an increase in entropy whether or not we consider forward or backward time evolution. 25. Cited above (by Avshalom Elitzur). Very interesting article. Thanks. "The Weak Reality That Makes Quantum Phenomena More Natural: Novel Insights and Experiments" Yakir Aharonov, Eliahu Cohen, Mordecai Waegell, and Avshalom C. Elitzur November 7, 2018 26. Hi Sabine, I am not sure where Cramer stands in those pictures. Please could you tell me? 1. I don't know. I haven't been able to make sense of his papers. 27. If this is still about physics and not personal religion, there's no experimental basis for either super-determinism, or "classical determinism" for that matter. Instead, determinism was always a baseless extension in Newtonian mechanics and easily disproved by chaos experiments. If there was any lingering doubt, Quantum Mechanics should have killed determinism for good, but, zombies never die, hence the super-determinism hypothesis. Now, how about some experimental evidence? 1. What do you mean by "no experimental basis"? If no one is doing an experiment, of course there isn't an experimental basis. You cannot test superdeterminism by Bell-type tests, as I have tried to get people to understand for a decade. Yet the only thing they do is Bell-type tests. 2. Chaos theory is classical determinism.
It is called deterministic chaos, where the path of a particle is completely determined, but due to exponential separation in phase space one is not able to perfectly track the future of the particle. 3. Chaos doesn't disprove determinism; it can be the result of an entirely deterministic evolution. Chaos addresses immeasurability. We have difficulty predicting the weather for precisely this reason, and the "butterfly effect" illustrates the issue: We cannot measure the atmosphere to the precision necessary to pick up every tiny event that may matter. In fact it was discovered because of a deterministic computer program: When the experimenter restarted an experiment using stored numbers for an initial condition, the results came out completely differently, because he had not stored enough precision. He rounded them off to six digits or whatever, and rounding off to one part in a million was enough to change the outcome drastically, due to exponential feedback effects in the calculations. That doesn't make the equations or evolution non-deterministic, it just means it is impossible to measure the starting conditions with enough precision to actually predict the evolution very far into the future, before feedback amplification of errors overwhelms the predictions. I believe there are even theoretical physics reasons for why this is true, without having to invoke any quantum randomness at all. There's just no way to take measurements on every cubic centimeter of air on the earth, and it's physically impossible to predict the future events (and butterfly wing flaps) that will also have influence. All of it might be deterministic, but the only way WE can model it is statistically. And I'll disagree with Dr. Hossenfelder on one point, namely that "shut up and compute" is not dead; often that is the best we humans can do, in a practical sense. Speculating about "why" is well and good, it might lead to testable consequences; but if theories are devised to produce exactly the same consequences with zero differences, that is a waste of time and brains; it is inventing an alternative religion. IMO, the valid scientific response to "why" is "I don't know why, not yet." And if we already have a predictive model that always works -- we might better serve ourselves and our species working on something else more pressing. 28. Sabine, do you have any specific suggestions for how to build a superdeterminism test? I think this is an intriguing idea. One reason why I love the way John Bell's mind worked was that he pushed the pilot wave model (which I should note I do not accept) to the extreme, and found that it produced clear, specific models that led him to his famous inequality. Bell felt he might never have landed upon his inequality if he had instead relied on the non-intuitive, mooshy-smooshy (technical term) fuzzy-think of Copenhagen. Thus similarly, I am intrigued by your assertion that pushing superdeterminism to the limit may also lead to specificity in how to test for it. (Also, my apologies in advance if you have already done so and I simply missed it.) 1. So you discard Bohmian Mechanics? What are your objections? 29. Is there a way to distinguish whether quantum mechanics is truly stochastic, or merely chaotic (in the mathematical chaos theory sense that the outcome is so sensitive to initial conditions that it is impossible to predict outcomes more than statistically because we can't measure the initial conditions with sufficient sensitivity)? 1. Can you help me understand that comment, please?
I mean Schroedinger's equation yields a wave function - nothing chaotic or stochastic there. However when the wave function is used to determine (say) the position of an electron, using the Born rule, the result is a probability density. How would that be reformulated in terms of chaos? 30. Sabine, Bell's inequality has been tested to a level of precision at which most people in the field accept the results. To my knowledge this has been done with entangled particles; are you aware of anyone trying to do an inverted experiment using maximally decohered particles (Quantum discord?). Is this such an obvious move that all the Bell experimenters are already doing it as a control group? If Susskind is right and ER = EPR, then can we use that knowledge to build thrusters that defy conservation of momentum locally (transmit momentum to the global inertial frame a la the Mach effect)? Just an engineer looking for better engines. 31. Is superdeterminism compatible with the assumption of measurement settings independence? 32. I think that the implications of this (possibly the most overlooked in 20th C QM IMO) paper "Logic, States, and Quantum Probabilities" by Rachel Wallace Garden expose a flawed assumption in the argument that violation of Bell inequalities necessitates some form of non-locality. A key point being that the outcomes of quantum interactions do not produce the same kind of information as a classical measurement, and this has clear-cut mathematical implications as to the applicability of the Bell inequality to quantum interactions. Rachel points out that the outcome of a quantum measurement on Hilbert space (such as a polarization test) is a denial (e.g. "Q: What is the weather? A: It is not snowing"). In effect, all you can say about a photon emerging from channel A of a polarizing beamsplitter is that it was not exactly linearly aligned with B - all other initial linear and circular states are possible. As Rachel shows, the proof of the Bell inequality depends on outcome A implying not B. That is, it relies on an assumption that there is a direct equivalence between the information one gets from a classical measurement of a property (e.g. orientation of a spinning object) and the interaction of a photon with a polarization analyzer. It is clear that, while Bell inequalities do show that a measurement that determines a physical orientation in a classical system is subject to a limit, one cannot reasonably conclude that quantum systems are somehow spookily non-local when the experiments produce a different kind of information (a denial) to the classical determination. As Rachel's mathematical analysis shows, when a measurement produces a denial, then violation of the Bell inequality is an expected outcome. International Journal of Theoretical Physics, Vol. 35, No. 5, 1996 33. What are your thoughts on validation of different models using experiments in fluid dynamics? 34. A layman like me sees it this way: The detectors are observers, and there is reality. This reality is in turn an interpretation or description of the actuality. In quantum mechanics the detectors are observing a reality that came about by the subject of the experiment, i.e., the experiment itself is a program, an interpretation, a description of, say, actuality, because given the conditions of the double slit experiment the results are invariably the same up until this day.
It is the output of this program that the detectors observe, and they in turn interpret this output or reality according to their settings--like a mind set. This interpretation is the influence of the detectors on the output, which we see as a collapse of a number of possibilities to a single possibility, i.e., the absence of interference. The detector is itself a program, a hardware program at that. We can go on wrapping in this way, interpretation after interpretation, until we arrive at something well defined and deterministic. That is, it is movement from infinite possibilities or indeterminism tending towards a finite possibility or determinism or clear definition with each interpretation along the way. And all along the way there is an increasing degree of complexity, and with complexity come form and definition. Let's say there is the uninterpreted primordial to begin with. Then there are programs which interpret. Among the programs there is a common factor: the least interpreted primordial, let's say light or photons. Starting with the least interpreted primordial, if we unravel or reverse engineer the program, we get to the program. Once we have got the program, then can't we predict the output, which is the interpretation, the description or "the observed"? 35. If the big bang produced a singular primordial seed, then the subsequent universe that evolved from that seed can only be partitioned in a finite number of ways. The universe must then possess a finite dimensional state space. There is a finite number of ways that the universe can be partitioned. This constraint imposed on the universe by this finite dimensional state space means that all these various states can be knowable. This ability to know all the states that the universe can assume is where Superdeterminism is derived from. There was a period, at the very earliest stages of the universe’s differentiation, when it was possible to know the states of all its component parts. At that time the universe was in a state of Superdeterminism. This attribute of the universe is immutable and therefore cannot change with the passage of time. It follows that the state of Superdeterminism must still be in place today. 36. It is true that determinism kills free will. But as David Hume pointed out, the inverse case is also relevant: free will requires some kind of determinism to work. You need a causal connection between decision and action; ideally we expect the same decision to cause the same action... Not an easy problem indeed. 37. The easily overlooked energy cost of ab initio superdeterminism: Set up your pool table with a set of end goals. Set your launcher angles for each ball, e.g. to 5 digits of precision, and see how many generations of bounce you can control. Let's naively say it's roughly linear, 5 generations of bounce control for 5 digits of launch precision. Thus controlling a million generations of bounce requires (naively) a million digits of launch precision, _for each particle_. The actual ratio will be more complex, but inevitably will have this same more-means-more relationship. What is easily overlooked is that since information is a form of entropy, it always has a mass-energy cost.
Efficient coding can make that cost almost vanishingly small for classical systems such as computers, but for subatomic particles with quantum-small state spaces, figuring out even how to represent such a very large launch precision number per particle becomes deeply problematic, particularly in terms of the mass-energy cost of the launch number in comparison to the total mass-energy of the particle itself. This in a nutshell is the hidden energy cost of any conceivable form of ab initio superdeterminism: The energy cost of a launch with sufficient precision to predetermine the entire future history of the universe quickly trends towards infinity. In another variant, this same argument is why I don't believe in points of any kind, except as succinct summaries of the limit behavior of certain classes of functions. 38. @Sabine Perhaps one reason why people do not like superdeterminism (SD) is that, as to giving us a comprehensible picture of reality, it scores even worse than standard QM. It amounts to saying that the strange correlations we observe (violations of Bell's inequalities) are due to some mysterious past correlations that arose somewhere at the beginning of the universe. By some magical tour de force, such correlations turn out to be exactly those predicted by QM. Except that, without the prior development of QM, we would be totally incapable of making any predictions based on SD alone. Yes, SD is a logical possibility, but so far-fetched and so little fruitful that it mainly reveals how contemptors of Copenhagen are running out of arguments. 1. opamanfred, Yes, that's right, that is the reason they don't like it. As I have explained, however, that reason is just wrong. Look how you pull up words like "mysterious" and "magical" without justifying them. You are assuming what you want to argue, namely that there is no simple way to encode those correlations. This is obviously wrong, of course, because you can encode them on a future boundary condition in a simple way, that simple way being what's the qm outcome. Of course you may say now, all right, then you get back QM, but what's the point of doing that? And indeed, there is no point in doing that - if that's the only thing you do. But of course what you want is to make *more* predictions than what QM allows you to. 2. @sabine Of course, you can. But you must admit it is a strange way to proceed: You observe the present, and then carefully tailor the past so that the present is a consequence of that past. If that's not fine tuning... Besides, I see another problem. QM clearly shows a lot of regularity in Nature. How would you accommodate such a regularity using past correlations, if not by even more fine tuning? Like, imagine a roulette that yields with clockwise regularity red-black-red-black-r-b-r-b-r-b-r-b and so on. Would you be comfortable with ascribing that to some past correlations? 3. opamanfred, (a) If you think something is fine-tuned, please quantify it. (b) I already said that superdeterminism trivially reproduces QM, so you get the same regularities for the same reason. Sounds like straightforward deduction to me; if Sherlock finds a dead man skewered by an antique harpoon, he deduces a past that could lead to the present, e.g. somebody likely harpooned him. There's no fine-tuning, if anything is "fine-tuned" it is the present highly unusual event of a man being harpooned on Ironmonger Lane. 5. (a) Perhaps I misunderstand SD. But how do you fix the past correlations in such a way that QM is correctly reproduced now? 
I had the impression this is done completely ad hoc. Slightly different correlations would yield predictions vastly different from QM, hence "fine tuning". (b) For the same reason?? In one case the regularities are the result of a consistent theory (QM) that is rather economical in its number of basic postulates. In the other, you choose an incredibly complex initial condition for the sole purpose of reproducing the results of the aforementioned theory. 6. opamanfred "Slightly different correlations would yield predictions vastly different from QM, hence "fine tuning"." You cannot make such a statement without having a model that has a state space and a dynamical law. It is an entirely baseless criticism. If you do not understand why I say that (and evidently you do not), then please sit down and try to quantify what is fine-tuned and by how much. 7. Sabine, “... superdeterminism trivially reproduces QM, so you get the same regularities for the same reason.” In QM the reason for the non-local correlation of measurement results of EPR pairs is their (e.g. singlet) state. Is this also the reason in SD? So far, I thought the reason for the correlation in SD is sacrificing statistical independence ... I am confused ... Maybe the prediction might be the same (or more), but the explanation or reason is a completely different one. (Remark: I find how TSVF trades non-locality for retrocausality quite interesting. And since the unitary evolution is deterministic, retrocausality works. But as you said this does not solve the measurement problem.) Maybe I am misunderstanding something, but naively it seems easy to come up with a model that does this. Let's suppose Alice and Bob prepare N spin-1/2 particles in a singlet state. They each have a measuring device that can measure the projection of the spin of a particle along the axis of the device. For simplicity, the relative alignments are fixed to be either perfectly anti-aligned, or else misaligned by a small angle theta. Quantum mechanics predicts the expected cross correlation of the spins will depend on theta like -1+theta^2/2. So for large enough N, by measuring the cross correlation for a few different values of theta, Alice and Bob can measure the expected quantum mechanical correlation function. A super-determinist could also explain the correlation function by imposing a future boundary condition on each particle. Given the mis-alignment of the detectors which the pair will experience -- the relative alignment of the detectors is assumed known since we are imposing this condition on the particles in the future after the measurement is made -- we pick the final spin values from a distribution which reproduces the quantum mechanical behavior. I will also assume a simple evolution rule, that the final spin we impose as a final boundary condition does not change when we evolve backwards in time -- for example, imagine a electron moving through a region with no electric or magnetic fields. Here's why I think a super-determinist would have a fine-tuning problem. From the super-determinist's point of view, there are 2N future boundary conditions to choose (2N numbers in the range [-1,1]) -- the value of the spin projected on the measuring device for each particle in each of the N entangled pairs. The number of possible future boundary conditions is exponential in N. 
Given the statistical error associated with N measurements, there will be a number of possible configurations, exponential in sqrt(N), which will be consistent with the quantum mechanical correlation function. The super-determinist knows nothing about quantum mechanics (otherwise what is the point), so I will assume it's natural to use a distribution that does not privilege any particular choice of initial conditions, like a uniform distribution from -1 to 1. In that case, it's exponentially unlikely (in sqrt(N)) that the drawn distribution of spin components will be consistent with the quantum mechanical prediction. If the point is that the super-determinist should choose future boundary conditions according to a distribution which is highly peaked around values that will reproduce the quantum mechanical correlations, I agree this is mathematically self consistent, but I don't see what is gained over using ordinary quantum mechanics. It seems that one has just moved the quantum mechanical correlations into a very special distribution that (a) is naturally expressed as a future boundary condition (which raises some philosophical issues about causality, but I'm willing to put those aside) and (b) one has to use a probability distribution which (as far as I can tell) can't be derived from any principles except by reverse engineering the quantum mechanical result. Maybe I am just missing the point... 9. Reimond, Statistical dependence is not the reason for the correlation in the measurements; statistical dependence is just a property of the theory that prevents you from using Bell's theorem to falsify it. Just where the correlations come from depends on the model. Personally I think the best way is to stick with quantum mechanics to the extent possible and just keep the same state space. So the correlations come from where they always come from. I don't think that's what 't Hooft and Palmer are doing though. 10. Andrew, Thanks for making an effort. Superdeterminism reproduces QM and is therefore equally predictive for the case you mention. It is equally fine-tuned or not fine-tuned for the probabilistic predictions as is quantum mechanics. The probabilistic distribution of outcomes in QM, however, is not the point of looking at superdeterministic models. You want to make *more* predictions than that. Whether those are fine-tuned or not (and hence have explanatory power or not) depends on the model, i.e. on the required initial state, state space, and dynamical law. 39. ... ... let the paper speak for itself. 40. Sabine Hossenfelder (9:05 AM, July 30, 2019) wrote: This is a very interesting comment that I think is right—that retrocausality would require a modified [current quantum formulation]* to make sense. Perhaps there will be a future post on this? * Schroedinger equation, or path integral 1. Philip, What I meant to say, but didn't formulate it very well, is that to solve the measurement problem you need a non-linear time evolution. Just saying "retrocausal" isn't sufficient. Yes... will probably have a post on this in the future, but it will take more time for me to sort out my thoughts on this. (As you can tell, it took me a long time to write about superdeterminism for the same reason. It's not all that easy to make sense of it.) 2. I have never thought this time reversal idea was worth much. QM is time reversal invariant, so as I see things it will not make any difference if you have time reversed actions.
The idea of nonlinear time is interesting, and this is one reason of the quantum interpretations I have a certain interest in the Montevideo interpretation. This is similar to Roger Penrose's idea that gravitation reduces waves. I have though potentially the GRW might fit into this. Quantum fluctuations of the metric which result in a g_{tt} fluctuation and nonlinear time evolution might induce the spontaneous collapse. If one is worried about conservation of quantum information, the resulting change in the metric is a very weak gravitational wave with BMS memory. 3. Lawrence Crowell: I read this summary of Montevideo interpretation ( That seems like a promising route. But what I got was that the nonlinear evolution of (env + detector + system) exponentially diminishes all but one eigenstate, to the point that it is impossible to measure anything else but that one state. So 'collapse' isn't an instantaneous event but more of reaching an asymptotic state, so it would require a detector larger than the universe to distinguish any but the dominant eigenstate. It also makes the "detection" entirely environmental, no observer required; which fits with my notion that a brain and consciousness are nothing special, just a working matter machine and part of the environment, so 'observation' is just one type of environmentally caused wavefunction collapse. I like it! 4. The Montevideo Interpretation proposes limitations on measurement due to gravitation. Roger Penrose has proposed that metric fluctuations induce wave function collapse. Gambini and Pullen are thinking in part with a quantum clock. It has been a long time since I read their paper, but their proposal is that gravitation and quantum gravitation put limits on what can be observed, which in a standard decoherence setting is a wave function reduction. The Bondi metric, written cryptically as ds^2 = (1 - 2m/r)du^2 - dudr - γ_{zz'}dzdz' + r^2 C_{zz}dz^2 + CC + D_zC_{zz} + CC gives a Schwarzschild metric term, plus the metric of a 2-sphere, plus Weyl curvature and finally boundary terms on Weyl curvature. We might then think of concentrating on the first term. Think of the mass as a fluctuation, so we have m → m + δm, where we are thinking of a measurement apparatus as having some quantum fluctuation in the mass of one of its particles or molecules. With 2m = r this is equivalent to a metric fluctuation, and we have some variance δL/L for L the scale of the system. For the system scale ~ 1meter and δL = ℓ_p = √(Għ/c^3) = 1.6×10^{-35}m δL/L ≈ p or the probability for this fluctuation and p ≈ 10^{-35}. Yet this is for one particle. If we had 10^{12} moles then we might expect some where a molecule has a mass fluctuation approaching the Planck scale. If so we might then expect there to be this sort of metric shift with these Weyl curvatures. Does this make sense? It is difficult to say, for clearly no lab instrument is on the order of billions of tons. On the other hand the laboratory is made of states that are entangled with the building, which are entangled with the city that exists in, which is entangled with the Earth and so forth. Also there is no reason for these fluctuations to be up to the Planck scale. The Weyl curvatures would then correspond to very weak gravitational waves produced. They can be very IR and still carry qubits of information. 
If we do not take these into account the wave function would indeed appear to collapse, and since these gravitational waves are so weak and escape to I^+ there is no reasonable prospect for a recurrence. In this way Penrose's R-process appears FAPP fundamental. This is unless an experiment is done to carefully amplify the metric superposition, such as what Sabine refers to with quantization of large systems that might exhibit metric superpositions. We have the standard idea of the Planck constant intertwining a spread in momentum with a spread in position ħ ≤ ΔxΔp, and the same for time and energy. Gravitation though intertwines radial position directly with mass r = 2GM/c^2, and it is not hard to see with the Gambini-Pullen idea of motion we can do the same with r = r0 + pt/√(p^2 + m^2) that we can include time. The variation in time, such as in their equation 2 due to a clock uncertainty spread can just as well be due to the role of metric fluctuations with a mass. 41. Sabine: I know what you mean because you have reiterated sufficiently often in previous posts :) You are technically correct, but your position precludes any meaningful way to do science. You can "explain" everything by positing initial conditions. Fine. But it leads nowhere. Come on, don't be disingenuous. Sherlock's deduction would be entirely reasonable of course. Much less so if Sherlock had maintained: "We have found this dead mean skewered by an antique harpoon because of some correlations that arose 13.8 billion years ago at the birth of the universe". Sherlock was notorious for substance abuse, but not to that point... 1. Opamanfred, You are missing the point. (a) You always need to pose initial conditions to make any prediction. There is nothing new about this here. Initial conditions are always part of the game. You do not, of course, explain everything with them. You explain something with them if they, together with the dynamical law, allow you to describe observations (simpler than just collecting data). Same thing for superdeterminism as for any other theory. (b) Again, the point of superdeterminism is *not* to reproduce quantum mechanics. The point is to make more predictions beyond that. What do you mean by "it leads nowhere". If I have a theory that tells you what the outcome of a measurement is - and I mean the actual outcome, not its probability - how does this "lead nowhere"? 42. I think the discussion above overlooks the possibility that hidden variables based on IMPERFECT FORECASTS by nature are behind "quantum" phenomena. I provide convincing arguments for this hypothesis in two papers. I copy the titles and abstracts below. You can search for them online. Both papers cite experimental publications. The second paper departs from an experimental observation made by Adenier and Khrennikov, and also makes note of an experiment performed on an IBM quantum computer which I believe provides further evidence pointing towards forecasts. Title: A Prediction Loophole in Bell's Theorem Abstract: We consider the Bell's Theorem setup of Gill et. al. (2002). We present a "proof of concept" that if the source emitting the particles can predict the settings of the detectors with sufficiently large probability, then there is a scenario consistent with local realism that violates the Bell inequality for the setup. Title: Illusory Signaling Under Local Realism with Forecasts Abstract: G. Adenier and A.Y. 
Khrennikov (2016) show that a recent ``loophole free'' CHSH Bell experiment violates no-signaling equalities, contrary to the expected impossibility of signaling in that experiment. We show that a local realism setup, in which nature sets hidden variables based on forecasts, and which can violate a Bell Inequality, can also give the illusion of signaling where there is none. This suggests that the violation of the CHSH Bell inequality, and the puzzling no-signaling violation in the CHSH Bell experiment may be explained by hidden variables based on forecasts as well. 43. "How much detail you need to know about the initial state to make predictions depends on your model." Better, on your particular experiment. If the knob on the left device is controlled by starlight from the left side, and that of the right side controlled by the right side, you need quite a lot of the universe to conspire. No, this decision is quite trivial. If there is superdeterminism, no statistical experiment is able to falsify anything, thus, one has to give up statistical experiments as useless. No. The original state of the device, say, a0, may be as dependent from the measured state as you like. If you use the polarization of incoming starlight s together with your free will decision f simply as additional independent inputs, (both evenly distributed between 0 and 360 degrees), so that the resulting angle is simply a = a0 + s + f, then the statistical independence of f or s is sufficient to lead also to the statistical independence of a. That a0 is correlated does not disturb this at all. So, at least one source of independent random numbers would be sufficient - all you have to do is Bell's experiment with devices controlled (just influenced in a sufficiently heavy way is sufficient) by these independent random numbers. Thus, the whole world has to conspire completely. Non-Einstein-local. A return to the Lorentz ether, that's all. As it is, quantum theory is indeed nonlocal, but it could be easily approximated by a local (even if not by an Einstein-local) theory. Nothing more seriously non-local than the situation with Newtonian gravity. The configuration space trajectory q(t) is, like in Newtonian theory, continuous in all the realist interpretations of QT, all the real change is a local, continuous one. Only how it is changed possibly depends on the configuration far away too. So, there is nothing worse with the non-locality than in NT. What's the problem with such a harmless revival of a problem scientists have lived with already in classical physics is beyond me. 1. Ilja, Which is why it's hard to see evidence for it. You better think carefully about what experiment to make. Patently wrong. You cannot even make such a statement without having an actual model. "superdeterminism" is not a model. It's a principle. (And, as such, unfalsifiable, just like determinism.) 2. Ilja, This is completely false. The only claim a superdeterministic theory has to make is that a particular type of electromagnetic phenomena (emission of an entangled particle pair and the later behavior of those particles in a magnetic field, such as the one in a Stern-Gerlach device) are not independent. There are plenty of examples of physical systems that are not independent, an obvious one being the motion of any of two massive objects due to gravity. Such objects will orbit around their common center of mass, regardless of how far they are. Some other examples are stars in a galaxy, electrons in an atom, synchronized clocks, etc.). 
Superdeterminism maintains that the particles we call "entangled" are examples of such systems. The only difference is that we can directly intervene and "mess-up" with the correlations due to gravity, we can de-synchronize clocks but we cannot control the behaviour of quantum particles because we ourselves are built out of them. We are the result of their behavior so whatever we do is just a manifestation of how they "normally" behave. So, superdeterminism brings nothing new to physics, it is in complete agreement with all accepted physical principles, including the statistical ones. A slightly different way to put this is to observe that a superdeterministic interpretation of QM will have the same predictions as QM. So, as long as you think that statistical experiments are not useless if QM is correct, those experiments will also not be useless if a superdeterministic interpretation of QM is correct. On the other hand if you do believe that statistical experiments are useless if QM is true they will also be useless if a non-local interpretation (such as de-Broglie-Bohm theory) of QM is true. As you can see, the most reasonable decision between non-locality and superdeterminism is superdeterminism because this option does not conflict with any currently accepted physical principle and it does not require us to go back to the time of Newton. 44. The quantum world is fully deterministic based on vacuum quantum Jumps: God plays dice indeed with the universe ! However living creatures seem to be able to influence initiatives suggested by RP I to Veto actions by RP II at all levels of consciousness according to Benjamin Libet. ( Trevena and Miller) 45. Sabine, Do you know sbout the Calogero conjecture ( and do you think it is related to SD? 1. Opamanfred, Thanks for bringing this to my attention. No, I haven't heard of it. I suppose you could try to understand the "background field" as a type of hidden variable, in which case that would presumably be superdeterministic. Though, if the only thing you know about it is that it's "stochastic", it seems to me that this would effectively give you a collapse model rather than a superdeterministic one. 46. Sabine, "It’s not like superdeterminism somehow prevents an experimentalist from turning a knob." why not? How can an experimentalist possibly turn the knob other then how she/he has been determined to turn it? If hidden variables exist their evolution would have to be highly chaotic. What's the likelihood that any predictions could be made, even if we could somehow find those variables and ascertain their (approximate) initial values? 1. i aM wh, She cannot, of course, turn the knob other than what she's been determined to do, but that's the case in any deterministic theory and has nothing to do with superdeterminism in particular. That's a most excellent question. You want to make an experiment in a range where you have a reasonable chance to see additional deterministic behavior. This basically means you have to freeze in the additional variables as good as possible. The experiments that are currently being done just don't probe this situation, hence the only thing they'll do is confirm QM. (I am not sure about chaotic. May be or may not be. You clearly need some attractor dynamics, but I don't see why this necessarily needs to be a chaotic one.) 2. Is this here (chapter 4) still the actual experiment you want to be performed? 3. Reimond, No, it occurred to me since writing the paper that I have made it too complicated. 
You don't need to repeat the measurement on the same state, you only need identically prepared states. Other than that, yes, that's what you need. A small, cold, system where you look for time-correlations. 4. Sabine, “...don't need to repeat the measurement on the same state...” This means you remove the mirrors? But then there is no correlation time left, only the probability that e.g. a photon passes the two polarizers. SD, if true, would then give a tiny deviation from what QM tells us. Is it like this? 5. Reimond, You can take any experiment that allows you to measure single particle coherence. Think, eg, double slit. The challenge is that you need to resolve individual particles and you need to make the measurements in rapid succession, in a system that has as few degrees of freedom as possible. (The example in the paper is not a good one for another reason, which is that you need at least 3 pointer states. That's simply because a detector whose states don't change can't detect anything.) 47. Why not just accept Lagrangian mechanics, successfully used from QFT to GR, which is not only deterministic (Euler-Lagrange), but additionally time/CPT-symmetric, well seen in equivalent action optimization formulation. In contrast, "local realism" is for time-asymmetric "evolving 3D" natural intuition. Replacing it with time-symmetric spacetime "4D local realism" like in GR: where particle is its trajectory, considering ensemble of such objects, Feynman path ensemble is equivalent with QM. Considering statistical physics of such basic objects: Boltzmann distribution among paths in euclidean QM or simpler MERW ( ) we can see where Born rules come from: in rho ~ phi^2 one psi comes from past ensemble (propagator from -infinity), second psi from future ensemble (propagator from +infinity), analogously to Two State Vector Formalism of QM. Hence we directly get Born rules from time-symmetry, they allow not to satisfy inequalities derived in standard probabilistics: without this square. I have example construction of violation of Bell-like inequality for MERW (uniform path ensemble) on page 9 of 48. Pure determinism doesn’t work as an explanatory system because it doesn’t have the basis for explaining how new (algorithmic) relationships could come into existence in living things. Equations can’t morph into algorithms, except in the minds of those people who are religious believers in miraculous “emergence”. 1. @Lorraine Ford, these days when I encounter the word "emergence" in an article, I lose interest. Because I know the hand waving has begun. 2. jim_h: "emergence" is not hand waving. Cooper pairs and the low-energy effective field theory of superconductivity are emergent. Classical chaotic behavior (KAM) is emergent from quantum mechanics, and easily demonstrated by computation. 3. "Equations can’t morph into algorithms" Perhaps then equations (from a Platonic realm of Forms) should be left behind by physicists, and replaced by algorithms. 4. dtvmcdonald: "emergence" is a vapid truism. All phenomenological entities are emergent. 49. Well if the sacred cow random is sacrificed and reconize that nature does not play fair and is not playing with a double sided coin but a double head coin where the ohsevation point is always unpredictible and dump the cosmos as a machine and a more organic perspective yeah it works. It's consistent with weather, forest fires, earthquakes etc. In regards Free will we have an infinite choice of wrongs and one right. That's neither deterministic nor freewill. 
That's also in spite of ourselves when we get it right. 50. I believe that there may be a way to "circumvent the initial conditions of the universe" quandary, but more to the point, the more complicated case dealing with the “initial conditions that affect the process that the observer wants to examine” can be dealt with. In a takeoff of the classic Einstein thought experiment, the rider/observer can observe what is happening on the relativistic train that he is traveling on. The observer is synchronized in terms of initial conditions that apply to both him and what he wants to observe. He is also completely synchronized in space-time with the condition that he wants to observe. Now if the observer becomes entangled with the condition that he wants to observe, he now becomes totally a part of that condition since he has achieved complete simpatico with it. In an illustrative example of how this could be done, for the person who wants to observe what a photon does when it passes through a double slit, both the double slit mechanism, the photon production mechanism, and the sensors sampling the environment in and around each slit would need to be entangled as a system. This entanglement would also include the data store that is set up to record the data. After the data has been recorded, the observer decoheres the entanglement condition of the system. After this step, the observer can examine the data store and is free from quantum mechanical asynchrony to finalize his observations and make sense of it. 51. Even if one could construct a superdeterministic quantum theory in Minkowski space, what would be the point? We do not know what the geometry of the spacetime is that we live in and if it is deterministic in the first place, and I don't think that we will ever know. 52. Reading about the umpteen "interpretations" of QM is interesting, e.g. Review of stochastic mechanics, Edward Nelson, Department of Mathematics, Princeton University. Abstract. "Stochastic mechanics is an interpretation of nonrelativistic quantum mechanics in which the trajectories of the configuration, described as a Markov stochastic process, are regarded as physically real. ..." but altogether it reminds me of the witches' song in Macbeth: Double, double toil and trouble; Fire burn and caldron bubble. Fillet of a fenny snake, In the caldron boil and bake; Eye of newt and toe of frog, Wool of bat and tongue of dog, 53. If one accepts the block-universe picture (BU) entailed by relativity theory (and one should on consistency grounds) then Bell's theorem becomes ill formulated. Ensembles in the BU are those of 4D `spacetime structures', not of initial conditions. QM then becomes just a natural statistical description of a classical BU, with wave functions encoding various ensembles; no measurement problem either. I have recently published a paper on the topic which made little to no impact. The reason for this is (I think, and would be happy to be corrected) that physicists are jailed in IVP reasoning (Initial Value Problem). They say: we accept the BU, but ours is a very specific BU; complete knowledge of data (hidden variables included) on any given space-like surface uniquely determines the rest of the BU (a ridiculously degenerate BU). In this case, QM being a statistical description of the BU, though still consistent, becomes truly strange. IVP reasoning started with Newton's attempt to express local momentum conservation in precise mathematical terms. However, there are other ways to do so without IVP.
In fact, moving to classical electrodynamics of charged particles, one apparently MUST abandon IVP to avoid the self-force problem, and similarly with GR. Without IVP, QM becomes just as `intuitive' as Relativity. Moreover, QM in this case is just the tip of the iceberg, as certain macroscopic systems as well should exhibit `spooky correlations' (with the relevant statistical theory being very different from QM if at all expressible in precise mathematical terms). arXiv:1804.00509v1 [quant-ph] 54. Sabine, I've heard an argument that if superdeterminism is true quantum computers won't work the way we expect them to work. I guess the idea is that if many calculations are done, correlations would show up in places we wouldn't expect to find them. Is there anything to that? Would it be evidence in favor of SD if quantum computers don't turn out to be as "random" as we expect them to be? 1. i aM wh, That's a curious statement. I haven't heard this before. Do you have a reference? By my own estimates, quantum computing in safely in the range where you expect quantum mechanics to work as usual. Superdeterminism doesn't make a difference for that. 2. 't Hooft made that suggestion here: see page 12 and 13. I'm not sure if 't Hooft has changed his opinion on this matter. 55. Dear Sabine, I have a high opinion of your book, and consequently have a low opinion of myself, since I don't understand superdeterminism at all. I hope you or some other kind person can give me a small start here. I am not a physicist but do have a background of research in mathematical probability. I skimmed through the two reference articles you cited but didn't see the kind of concrete elementary example I needed. Here is the sort of situation I am thinking about. A particle called Z decays randomly into two X-particles with opposite spins. Occasionally a robot called Robot A makes a spin measurement for one X-particle , after its program selects a direction of spin mesaurement using a giant super-duper random number generator. Perhaps Robot A chooses one of a million containers each containing a million fair coins, or takes some other pseudo-random steps, and finally tosses a coin to select one of two directions. On these occasions Robot B does the same for the other X-particle, also using a super-super random number generator to find its direction of measurement. A clever physicist assumes that three things are statistically independent: namely the result of the complex random-number generator for A, the result of complex random-number generator for B, and the decay of particle Z. With some other assumptions, the physicist cleverly shows that the results should satisfy a certain inequality, and finds that Nature does not agree. If I understand the idea of superdeterminism, somewhere back at the beginning of time a relationship was established that links two giant super-duper random number generators together with particle Z in a special way, resulting in a violation of the inequality, while preserving the all the statistical independence that we constantly observe in so many places, and at so many scales. Is this the idea? Probably I have missed something. Of course there is no free will in what I described. I will be very happy to understand a different approach to this! 1. jbaxter, Several misunderstandings here. First, the phrase "a relationship was established" makes it sound as if there was agency behind it. 
The fact is that such a relationship, if it exists at one time, has always existed at any time, and will continue to exist at any time, in any deterministic theory. This has absolutely nothing to do with superdeterminism. All this talk about the initial conditions of the universe is therefore a red herring. No one in practice, of course, ever writes down the initial conditions of the universe to make a prediction for a lab experiment. Second, regarding statistical independence. What we do observe is that statistical independence works well to explain certain classical observations. One could argue that violations of Bell's inequality in fact demonstrate that it is not fulfilled in general. This is a really important point. You are taking empirical knowledge from one type of situation and applying it to another situation. This isn't justified. Am I correct to understand this as saying that every two points in space-time are "correlated"? IIRC, Susskind's ER = EPR combined with the holographic universe (the bulk can be described by a function over the surface) implies that there should be a lot of entanglement between any two points in the universe. Such entanglements would mean that there is, indeed, no statistical independence between points in space(-time). 3. Thanks Sabine for taking the time to respond. Somewhat related to your second point, I would say that no one really understands how the usual independence which we observe ultimately arises. We do understand how independence _spreads_ though, e.g. small shaking in my arm muscles leads to a head rather than a tail when I try to toss a coin. All this for our classical lives, of course. The infectious nature of noise is part of the reason that avoiding statistical independence is a tough problem, I suppose. Obviously it would be great to get progress in this direction. Classical mechanics is as "emergent" from quantum mechanics as Cooper pairs and a low energy limit field theory of superconductivity are emergent from quantum mechanics. And thus chaos, in the KAM sense, is emergent as a low energy / long distance scale phenomenon from QM. This chaos can be fed back into QM through, e.g. deciding whether to have your polarizer at 0 or 45 or 90 degrees from Brownian motion. 57. Superdeterminism can only be an explanation for entanglement if the universe was created by a mind who followed a goal. And that is religion in my understanding. 1. antooneo, Any model with an evolution law requires an initial condition. That's the same for superdeterminism as for any theory that we have. By your logic all theories in current science are therefore also "religion". 2. Sabine, what do you mean by "REQUIRES an initial condition"? In a Lagrangian formulation, for example, initial and final conditions are required, and only on position variables (this, BTW, is the case for retro-causality arguments). The universe, according to Relativity (block-universe), couldn't have been created/initiated at some privileged space-like three-surface, and then allowed to evolve deterministically. This is the IVP jail I was talking about. 3. Sabine, think about Lagrangian mechanics - it has 3 mathematically equivalent formulations: the Euler-Lagrange equation to evolve forward in time, or backward by just switching the sign of time, or action optimization between two moments of time. The theories we use are fundamentally time/CPT-symmetric, so we should be extremely careful when enforcing some time asymmetry in our interpretation - which leads to paradoxes like the Bell violation.
Spacetime view (block universe) as in general relativity is a safe way, repairing such paradoxes. 4. Sabine: Any model has initial conditions. Particularly of course a superdeterministic system. But that is not sufficient to have the results which are supposed to be explained by it, e.g. the entanglement of particles in experiments where also the setup has to be chosen by experimenters influenced in the necessary way. So, even if Superdeterminism is true, the initial conditions have to be set with this very special goal in mind. The assumption of such a mind is religion; what else? 5. antooneo, That is correct, the initial conditions are not sufficient. You also need a state space and a dynamical law. Your statement that "the initial conditions have to be set with this very special goal in mind" is pure conjecture. Please try to quantify how special they are and you should understand what I mean: You cannot make any such statement without actually writing down a model 6. antooneo, Sabine, If you want the reason for initial conditions being prepared for future measurnments, instead of enforcing time-asymmetric way of thinking to time/CPT symmetric theories, try mathematically equivalent time-symmetric formulations, e.g. action optimizing for Lagrangian mechanics: history of the Universe as the result of Big Bang in the past and e.g. Big Crunch in the future, or using path-ensembles: like Feynman's equivalent with QM - leading to Born rule from time symmetry ( one amplitude from the past (path enesemble/propagator), second from the future. 58. A schema (e.g. one consisting of nothing but a set of deterministic equations and associated numbers), that doesn’t have the potential to describe the logical information behind the evolution of life, is not good enough these days. The presence of algorithmic/ logical information in the world needs to be acknowledged, but you can’t get algorithmic/ logical information from equations and their associated numbers. 59. When we say that there is a Renaissance going on in QM, do we mean the community actually going to take a serious look at new ideas? Is there any experimental evidence that points to one new flavor over another one? The last time we revisited the subject a few years ago I remember hearing that very few professionals were willing to consider alternatives. I know there is a fledgling industry of quantum computing and communications, but getting industry to pursue and finance research into new flavors of QM could be harder than getting academia to do it, even if one flavor does offer tantalizing features. 60. This comment has been removed by the author. 61. Do you have any reference on how "Pilot wave theories can result in deviations from quantum mechanics"? Moreover, what's your opinion on a stochastic interpretation of quantum mechanics? 62. Taking up Lorraine Ford's thread, is life deterministic? It is deterministic as long as the pattern continues, the moment you break out of the pattern as a matter of insight that insight may be inventive and therefore we come upon something totally new. Evolution talks about the animal kingdom and the plant kingdom let us say the evolution of the animate world. Then I think there is also the evolution of the inanimate world. Let us start with a big bang and arrive at iron, thereafter, matter evolved into gold and so on up to uranium. Now the radioactive elements are unstable because they are recent species, they have not stabilized or rather fully fallen into a pattern. 
What I am trying to say is life is only a branch of inanimate evolution, i.e., the evolution of the universe branched off into inanimate evolution and animate evolution. And animate evolution is not deterministic; it was a quirk. 63. @Sabine There is a weird version of Creationism that appears to be worryingly similar to SD. To put it shortly, some creationists accept that all the evidence from the archeological record is real and that the conclusion that life forms have evolved is a logical deduction. However, these creationists believe that all this evidence was put there on purpose by God to make us believe that the world is several billion years old, while in fact it was created only a few thousand years ago. It's just that God put every stone and every bone in the right place to make us jump to the Darwinian conclusion. So I wonder. How's that different from SD when you replace God by Big Bang, and Darwinian evolution with QM? Boshaft ist der Herrgott, after all... 1. Opamanfred, I explained this very point in an earlier blogpost. The question is whether your model (initial condition, state space, dynamical law) provides a simplification over just collecting data. If it does, it's a perfectly fine theory. Superdeterminism, of course, does that by construction because it reproduces quantum mechanics. The question is whether it provides any *further* simplification. To find out whether that's the case you need to write down a model first. 64. Think, just think please... is evolution possible without memory? Is life possible without memory? Certainly not. Is memory possible without "recording"? Certainly not. Could life on earth have been possible if "recording" was not already going on in the inanimate world, if "memory" was not there in the inanimate world? Taking this questioning further, I suspect that "recording" takes place even at the quantum level, and "memory" is part of the quantum world. And it seems that this capacity to "record" and hold in "memory" gradually led to the appearance of gold and life on earth.
Computation of products of phase space factors and nuclear matrix elements for double beta decay
doi: 10.1088/1674-1137/43/6/064108
Received: 2018-08-22. Revised: 2018-10-26.
• 1. International Centre for Advanced Training and Research in Physics, Subsidiary of INCDFM, Magurele 077125, Romania • 2. Horia Hulubei National Institute of Physics and Nuclear Engineering, Magurele 077125, Romania
Abstract: Nuclear matrix elements (NME) and phase space factors (PSF) entering the half-life formulas of the double-beta decay (DBD) process are two key quantities whose accurate computation still represents a challenge. In this study, we propose a new approach to calculating them, namely the direct computation of their product as a single formula. This procedure allows a more coherent treatment of the nuclear approximations and input parameters appearing in both quantities and avoids possible confusion in the interpretation of DBD data due to the different individual expressions adopted for the PSF and NME (and consequently their reporting in different units) by different authors. Our calculations are performed for both the two-neutrino ($ 2\nu\beta\beta $) and neutrinoless ($ 0\nu\beta\beta $) decay modes, for five nuclei of the most experimental interest. Further, using the most recent experimental limits for $ 0\nu\beta\beta $ decay half-lives, we provide new constraints on the light neutrino mass parameter. Finally, by separating the factor representing the axial-vector constant to the fourth power in the half-life formulas, we advance suggestions on how to reduce the errors introduced in the calculation by the uncertain value of this constant, exploiting the DBD data obtained from different isotopes and/or decay modes.
1.   Introduction
• The double-beta decay (DBD) is a rare nuclear process intensively studied due to its potential to test nuclear structure methods and investigate beyond standard model (SM) physics [1-3]. According to the number and type of released leptons, there are several possible DBD modes that can be classified in two categories. One category is where two anti-neutrinos or two neutrinos are emitted in the final states along with the two electrons ($ 2\nu\beta^-\beta^- $) or two positrons ($ 2\nu\beta^+\beta^+ $). The double-positron decays can also be accompanied by one or two electron capture processes ($ 2\nu\beta^+ EC $, $ 2\nu ECEC $). These decay modes occur with lepton number conservation (LNC) and are allowed within the SM. In the other category, the decay processes are similar to those described above, however no anti-neutrinos or neutrinos are emitted in the final states. They are generically called neutrinoless DBD processes ($ 0\nu\beta\beta $), such that we may have $ 0\nu\beta^-\beta^- $, $ 0\nu\beta^+\beta^+ $, $ 0\nu\beta^+ EC $, and $ 0\nu ECEC $ decays in this category. All these processes violate LNC, hence they are not allowed within the original framework of the SM, however they can appear in theories that are more general than the SM.
The discovery of any $ 0\nu\beta\beta $ decay mode would first demonstrate lepton number violation by two units, but it would also provide us with valuable information on other beyond-SM processes. From the $ 2\nu\beta\beta $ decay study, information about nuclear structure can be obtained, different nuclear methods can be tested, and the violation of Lorentz symmetry can be investigated in the neutrino sector. Meanwhile, from the $ 0\nu\beta\beta $ decay study, the neutrino character can be decided (is it a Dirac or a Majorana particle?), one can constrain beyond-SM parameters associated with different mechanisms that may contribute to this decay mode, and one can obtain information about the neutrino mass hierarchy, the existence of heavy neutrinos, right-handed components in the weak interaction currents, etc. Therefore, the DBD study is a highly important and timely topic. The first step in the theoretical study of the DBD process is to derive half-life expressions and calculate the quantities therein, for each possible decay mode and for the different transitions and mechanisms that may contribute to the $ 0\nu\beta\beta $ decay mode. With a good approximation, the DBD half-life formulas can be written in factorized form, as follows [4, 5]: $ \left( T^{2\nu}_{1/2} \right)^{-1} = G^{2\nu}(E_0, Z) \times g_A^4 \times \mid m_e c^2 M^{2\nu} \mid^2 , $ $ \left( T^{0\nu}_{1/2} \right)^{-1} = G^{0\nu}(E_0, Z) \times g_A^4 \times \mid M_l^{0\nu}\mid^2 \left( \langle\eta_{l}\rangle \right)^2 , $ where $ G^{(2,0)\nu} $ denotes the phase space factor (PSF), $ M^{(2,0)\nu} $ is the NME for the $ (2,0)\nu $ decay modes, and $ \langle\eta_{l}\rangle $ is a parameter related to the specific mechanism l that can contribute to the $ 0\nu\beta\beta $ decay. We note that the half-life expressions above are written such that the product of the nuclear part (NME) and the atomic part (PSF) is expressed in $ [\rm yr^{-1}] $. Moreover, we note that the axial-vector constant to the fourth power is separated from the other components. This form of the half-life expressions facilitates the use of the theoretical results for interpreting DBD data and makes it possible to connect data from different decay modes and experiments in an attempt to find ways to reduce the errors in computation related to the value of the axial-vector constant, which is not precisely known. As previously shown, to estimate/predict DBD lifetimes and derive beyond-SM parameters, a precise, reliable computation of both the PSF and NME is mandatory. The largest uncertainties in the DBD calculations originate from the NME. They are calculated with different nuclear methods, the most frequently employed being pnQRPA [3, 6-10], the Shell Model [11-14], IBA-2 [15-17], PHFB [18], and GCM with EDF [19]. They differ mainly by the choice of model spaces and the type of correlations taken into account in the calculation. Each of these methods has its own advantages and drawbacks, and the errors in the NME computation associated with each method have been extensively debated in the literature over time [3, 6-19].
The differences in the NME values computed by different methods may stem from several sources, such as i) the choice of the model space of single-particle orbitals and the type of nucleon-nucleon correlations included in the calculation, which are specific to the different nuclear methods, ii) the nuclear structure approximations associated with short-range correlations (SRC), finite nucleon size (FNS), higher-order terms in the nucleon currents (HOC), inclusion of deformation, etc., or iii) the use of input parameters whose values are not precisely known, like the nuclear radius, the average energy of the virtual states in the intermediate odd-odd nuclei, or the value of the axial-vector constant, $ g_A $, etc. Particularly important is the value of $ g_{A} $ (which can be 1.0 = quark value; 1.273 = free nucleon value; or another, quenched, value (0.4-0.9)), because of the strong dependence of the half-life on this constant. We note that errors originating from different choices of the values of these parameters can significantly increase the uncertainty in the half-life computation, hence appropriate attention should also be paid to this source of errors. In contrast, the PSF have long been considered to be computed with sufficient accuracy [3, 20-25]. However, newer calculations [4, 5, 26] using more rigorous methods, i.e., employing exact electron Dirac wave functions (w.f.) and improving the way in which the finite nuclear size, electron screening effects, and a more realistic form of the Coulomb potential are taken into account, revealed notable differences in the PSF values as compared with older results, especially for heavier nuclei, for positron emission and $ {\rm EC} $ decay modes, and for transitions to excited states. Errors in the PSF computation originate from i) the method of calculation of the electron w.f., namely the non-relativistic approach [20], the relativistic approach with approximate electron w.f. [3], or the relativistic approach with exact electron w.f. [4, 5, 26]; and ii) the numerical accuracy both in the resolution of the Dirac equations for obtaining the electron radial functions and in the integration of the PSF expressions for the different decay modes. We also note that some input parameters appear both in the NME and PSF expressions, such as the axial-vector constant $ g_{A} $, the nuclear radius $ R_A $ $ (R_A = r_0 A^{1/3}) $, the value of the average energy of the virtual states in the intermediate odd-odd nucleus used in the closure approximation, $ \langle E_N \rangle $, etc. Moreover, when these quantities are calculated separately, different groups have sometimes used different values for these parameters. Furthermore, the NME and PSF have been reported in different units depending on which factors were included in their expressions, and this has sometimes led to confusion/difficulty in theoretical predictions and in the interpretation of experimental data. In this study, we propose a new approach to calculating the NME and PSF entering the DBD half-lives: to compute their product directly, in a single formula, instead of calculating them separately. This is indeed natural, since to predict half-lives and obtain information on beyond-SM physics from the DBD study, we need to precisely know the product $ {\rm NME} \times {\rm PSF} $. The computation of the product as a whole has some advantages.
By calculating its values in units of $ [\rm yr^{-1}] $, the prediction and interpretation of experimental DBD data are facilitated, and any confusion related to the units in which its components are reported when they are calculated separately is removed. Moreover, the product formula depends only once on each common parameter, which therefore assumes a single value. Thus, the computation of the atomic and nuclear parts of the DBD half-life acquires a coherence that has so far received little attention. Finally, we note that the separation of the $ g_{A}^4 $ factor in the half-life expressions also provides advantages. For example, by combining experimental data and information from different DBD isotopes and/or decay modes and transitions, one can reduce the uncertainty of the calculation related to this parameter. 2.   Products of phase space factors and nuclear matrix elements • We define the products as follows: $ P^{2\nu} = G^{2\nu} \; \times \; |m_e c^2 M^{2\nu}|^2, $ $ P^{0\nu} = G^{0\nu}\; \times \; |M^{0\nu}_l|^2, $ therefore, the half-life expressions become: $ \left( T^{2\nu} \right)^{-1} = \left( g^{2\nu}_{A, {\rm eff}} \right)^4 \; \times \; P^{2\nu} , $ $ \left( T^{0\nu} \right)^{-1} = \left( g^{0\nu}_{A,{\rm eff}} \right)^4 \; \times \; P^{0\nu} \; \times \; \langle \eta_l \rangle^2 , $ where $ g_{A,{\rm eff}} $ is the effective value of the $ g_{A} $ constant, which can be different for different nuclei and decay modes, as it can depend on the nuclear medium and on many-body effects. Hence, providing the products $ P^{(2,0)\nu} $ in $ [\rm yr^{-1}] $, one can use them easily for predicting half-lives and/or constraining beyond-SM parameters. The detailed expressions of these products read: $ \begin{split} P^{2\nu} =& \frac{\tilde{A}^{2}\left( G\cos\theta_{C} \right)^{4}}{96\,\pi^{7}\hbar \ln 2}\,|M^{2\nu}|^{2} \times \int_{m_e c^2}^{Q_{\beta\beta}+m_e c^2}{\rm d}\epsilon_1 \int_{m_e c^2}^{Q_{\beta\beta}+2m_e c^2-\epsilon_1}{\rm d}\epsilon_2 \int_{0}^{Q_{\beta\beta}+2m_e c^2-\epsilon_1-\epsilon_2}{\rm d}\omega_1\, f_{11}^{(0)}\,\epsilon_1\epsilon_2\,\omega_1^2\omega_2^2\,(p_1 c)(p_2 c) \\ & \times \left[ \langle K_N\rangle^2 + \langle L_N\rangle^2 + \langle K_N\rangle\langle L_N\rangle \right], \end{split} $ $ P^{0\nu} = \frac{\left( G \cos\theta_C \right)^{4} (m_e c^{2})^{2}\,\hbar c^{2}}{32 \pi^{5} R^{2}\ln 2}\, |M^{0\nu}|^{2} \times \int_{m_e c^2}^{Q_{\beta\beta}+m_e c^2} \epsilon_1\epsilon_2(p_1c)(p_2c)\, f_{11}^{(0)}\left[\langle K_N \rangle-\langle L_N \rangle\right]^2 {\rm d}\epsilon_1 , $ where $ G $ is the Fermi constant, $ \theta_C $ is the Cabibbo angle, $ Q_{\beta\beta} $ denotes the Q-value for the DBD, $ m_e $ denotes the electron mass, and $ \epsilon_{1,2} $ and $ \omega_{1,2} $ are the electron and neutrino energies, respectively. Moreover, $ \tilde{A} = \left[\frac{1}{2}Q_{\beta\beta} + 2m_ec^2+ \langle E_N\rangle -E_I \right], $ where $ \langle E_N \rangle $ is an average energy of the states $ E_I $ in the odd-odd intermediate nucleus that contribute to the decay. $ \langle K_N \rangle $ and $ \langle L_N \rangle $ are quantities that depend on the electron and neutrino energies, as well as on the energies $ \langle E_N \rangle $ and $ E_I $ [21].
$ f_{11}^{(0)} $ denotes the combination of the radial electron functions $ g_k $ and $ f_k $, which are solutions of the Dirac equations [26]. Finally, $ M^{(2,0)\nu} $ is the NME for the $ 2\nu $ and $ 0\nu $ decay modes. To compute the products $ P^{2\nu} $ and $ P^{0\nu} $, we construct numerical codes by taking advantage of our previous codes for separately computing the NME and PSF quantities [13, 26, 27]. The expression of the product $ P^{(2,0)\nu} $ contains factors outside the integrals that stem from the simplification and separate multiplication of the nuclear and kinetic parts. Further, the kinetic part (PSF) and the nuclear part (NME) have the common input parameters $ R_A $, $ \langle E_N \rangle $, and $ g_A $. First, we refer to the $ P^{2\nu} $ computation. The kinetic part is computed following the main lines of the approach developed in our previous work, Refs. [5, 26]. Here, we shortly review the main ingredients of the code and computation. We first use a subroutine where the electron wave functions are obtained as radial solutions ($ g_k $ and $ f_k $), with appropriate asymptotic behavior, of the Dirac equations with a Coulomb-type potential, including the finite nuclear size and electron screening effects. The Coulomb-type potential is deduced from a realistic proton density in the daughter nucleus. To obtain the single-particle densities inside the daughter nucleus, we solve the Schrödinger equation for a spherical Woods-Saxon potential with spin-orbit and Coulomb terms [5, 26]. Subsequently, the PSF part of the code is completed by performing the integrals over the electron phase factors constructed with the Dirac radial functions. The code exhibits improved numerical accuracy in finding the electron w.f. and a better interpolation procedure for integrating the final PSF expressions, particularly at low electron energies. For the NME part, we employ a code similar to that of Ref. [13] for computing the double Gamow-Teller transitions, assuming the following effective nucleon-nucleon interactions: GXPF1A [28] for $ ^{48}{\rm{Ca}} $, JUN45 [29] for $ ^{76}{\rm{Ge}} $ and $ ^{82}{\rm{Se}} $, and gcn50:82 [30] for $ ^{130}{\rm{Te}} $ and $ ^{136}{\rm{Xe}} $. The values of the products $ P^{2\nu} $ are presented in the third column of Table 1 for the five nuclei of experimental interest. With the values of $ g^{2\nu}_{A, {\rm eff}} $, given in the fourth column of the table as first entries, we reproduce the most recent measured half-lives found in the literature, which are displayed in the second column. In the fourth column, we also show the $ g^{2\nu}_{A,{\rm exp}} $ values taken from Refs. [34, 35], which were obtained by comparing the theoretical B(GT) strengths with the experimental ones extracted from charge-exchange reactions. In the last column, we present the relative difference between the $ g_{A,{\rm eff}} $ values obtained within our calculations to reproduce the experimental DBD half-lives and those obtained by fitting the B(GT) experimental data, expressed in percent, $ \epsilon = \left(g^{2\nu}_{A,{\rm exp}}-g^{2\nu}_{A,{\rm eff}}\right)/g^{2\nu}_{A,{\rm exp}} $. As observed, the two sets of values are close to each other, the smallest differences occurring for the $ ^{76}{\rm{Ge}} $ and $ ^{48}{\rm{Ca}} $ nuclei.
| nucleus | $T_{1/2}^{2\nu}$ /yr | $P^{2\nu}$ /yr$^{-1}$ | $g^{2\nu}_{A,{\rm eff}}$ / $g^{2\nu}_{A,{\rm exp}}$ | $\epsilon$ (%) |
| --- | --- | --- | --- | --- |
| $^{48}$Ca | $6.40\times 10^{19}$ [31] | $123.81\times10^{-21}$ | 0.65 / 0.71 [34] | 8.45 |
| $^{76}$Ge | $1.92 \times 10^{21}$ [32] | $5.16 \times 10^{-21}$ | 0.56 / 0.60 [35] | 6.60 |
| $^{82}$Se | $0.92 \times 10^{20}$ [32] | $186.62 \times 10^{-21}$ | 0.49 / 0.60 [35] | 18.33 |
| $^{130}$Te | $8.20 \times 10^{20}$ [33] | $25.26 \times 10^{-21}$ | 0.47 / 0.57 [35] | 17.33 |
| $^{136}$Xe | $2.16 \times 10^{21}$ [31] | $20.30 \times 10^{-21}$ | 0.39 / 0.45 [35] | 13.33 |

Table 1.  Results for the $2\nu\beta\beta$ decay mode.

Subsequently, we calculated the $ P^{0\nu} $ products in the case of the light Majorana neutrino exchange mechanism, with $ \langle\eta_l\rangle = \langle m_{\nu}\rangle/{m_e} $ and the light neutrino parameter defined as: $ \langle m_{\nu} \rangle = \mid \displaystyle\sum _{k = 1}^3 U_{ek}^2 m_k \mid, $ where $ U_{ek} $ denotes the first row elements of the Pontecorvo-Maki-Nakagawa-Sakata neutrino mixing matrix, and $ m_k $ depicts the light neutrino masses [36]. The expression of the nuclear matrix elements can be written as a sum of Gamow-Teller ($ GT $), Fermi ($ F $), and tensor ($ T $) components [9, 27]: $ M^{0 \nu} = M^{0 \nu}_{GT}-\left( \frac{g_V}{g_A} \right)^2 M^{0 \nu}_F + M^{0 \nu}_T \ , $ where $ M^{0 \nu}_{GT} $, $ M^{0 \nu}_F $, and $ M^{0 \nu}_T $ are these components. These are defined as follows: $ M_{\alpha}^{0\nu} = \sum\limits_{m,n} \left\langle 0_{f}^{+} \| \tau_{-m}\tau_{-n} O_{mn}^{\alpha} \| 0_{i}^{+} \right\rangle , $ where $ O^\alpha_{mn} $ are transition operators ($ \alpha = GT, F, T $) and the summation covers all nucleon states. Correspondingly, the two-body transition operators $ O^{\alpha}_{12} $ can be expressed in factorized form as [27]: $ O^{\alpha}_{12} = N_{\alpha} S_{\alpha}^{(k)} \cdot \left[R_{\alpha}^{(k_r)}\times C_{\alpha}^{(k_c)}\right]^{(k)}, $ where $ N_\alpha $ is a numerical factor including the coupling constants, and $ S_\alpha $, $ R_\alpha $, and $ C_\alpha $ are operators acting on the spin, relative, and center-of-mass wave functions of the two-particle states, respectively. Thus, the calculation of the matrix elements of these operators can be decomposed into products of reduced matrix elements within the two subspaces [14]. The expressions of the two-body transition operators are: $\begin{split} &O_{12}^{GT} = \sigma _1 \cdot \sigma _2 H(r), \; \; O_{12}^{F} = H(r) , \\ & O_{12}^{T} = \sqrt{\frac{2}{3}} \left [ \sigma _1 \times \sigma _2 \right ]^2 \cdot \frac{r}{R} H(r) C^{(2)}(\hat r). \end{split}$ The $ O^{\alpha}_{12} $ operators contain three components, namely the spin, center-of-mass, and radial components, and the expectation values of the first two components are easily managed. The radial part is the most difficult to calculate, as it contains the neutrino potentials, written in different approximations, whose expectation values are double integrals over these potentials. Moreover, short-range correlations and finite nucleon size corrections are introduced in this part of the computation. The neutrino potentials depend weakly on the intermediate states, and they are defined by integrals over the momentum carried by the virtual neutrino exchanged between the two nucleons [9].
They include Fermi (F), Gamow-Teller (GT), and tensor (T) components: $ \begin{split} H(r) = & \frac{2R}{\pi} \int^\infty_0 \frac{q {\rm d}q}{q + \langle E_N \rangle} \\ &\times \left[ j_0(qr) \left(h_F(q) + h_{GT}(q)\right) + j_2(qr) h_T(q)\right] ,\end{split} $ where $ R = r_0 A^{1/3} $ fm, with $ r_0 = 1.2\,{\rm fm} $, $ j_{0,2}(qr) $ are the spherical Bessel functions, and the integrals are over the momentum $ q $ of the exchanged neutrino. In our calculations, we use the closure approximation. $ \langle E_N\rangle $, as mentioned above, represents the average energy of the virtual states in the intermediate odd-odd nucleus included in the description of the decay. Furthermore, we note that the factor $ 2R $ is canceled by a similar factor in the denominator of the PSF expression, such that $ P^{0\nu} $ does not depend on the nuclear radius. The expressions of the neutrino potentials $ h_{F,GT,T} $ can be found in many references (see for example [9]). These expressions include FNS effects, taken into account through the vector and axial-vector form factors, $ G_V $ and $ G_A $ [9]: $ G_A \left(q^2 \right) = g_A \left( \frac{\Lambda^2_A}{\Lambda^2_A+q^2} \right)^2, \ G_V \left( q^2 \right) = g_V \left( \frac{\Lambda^2_V}{\Lambda^2_V+q^2} \right)^2. $ The vector and axial-vector cutoff parameters are specified as $ \Lambda_V = 850 \;{\rm{MeV}} $ and $ \Lambda_A = 1086 \;{\rm{MeV}} $ [2]. For computing the radial matrix elements $ \langle nl|H_{\alpha}|n'l'\rangle $, we use the harmonic oscillator (HO) wave functions $ \psi_{nl}(r) $ and $ \psi_{n^\prime l^\prime}(r) $, corrected by a factor $ [1 + f(r)] $ which takes into account the SRC effects induced by the nuclear interaction [27]: $ \psi_{nl}(r) \rightarrow \left[ 1+f(r) \right] \psi_{nl}(r). \ $ For the correlation function, we take the functional form $ f(r) = - c \cdot e^{-ar^2} \left( 1-br^2 \right). \ $ For the $ a $, $ b $, and $ c $ constants we use the parametrization employed in Ref. [37]. Taking into account the HOC and FNS effects, the radial matrix elements of the neutrino potentials become: $\begin{split} \langle nl || H_\alpha(r) || n^{\prime} l^{\prime} \rangle =& \int^\infty_0 r^2 {\rm d}r\, \psi_{nl}(r) \psi_{n^\prime l^\prime}(r)\left[ 1+f(r) \right]^2 \\ & \times \int^\infty_0 q^2 {\rm d}q\, V_{\alpha} (q) j_0 (qr).\end{split} $ We note that in the case of the $ P^{0\nu} $ products, the axial-vector constant also enters the expressions of the neutrino potentials, in addition to the factor $ g_{A,{\rm eff}}^4 $, such that the half-life expression for the $ 0\nu\beta\beta $ decay, Eq. (5), contains a "double" dependence on this constant. Naturally, for coherence, the same value of the $ g_{A,{\rm eff}} $ constant should be assumed in both places, i.e., both in $ P^{0\nu} $ and in the half-life computation. We note that these values may differ from the values of this constant used in the $ 2\nu\beta\beta $ decay mode. Because presently we do not know the correct value of $ g_{A,{\rm eff}} $ for the $ 0\nu\beta\beta $ decay mode, we calculate the $ P^{0\nu} $ products for the free nucleon value (1.273); the $ P^{0\nu} $ values can easily be recomputed for other effective values of the $ g_{A,{\rm eff}} $ constant taken as input parameters. The obtained values of the products $ P^{0\nu} $, in $ [\rm yr^{-1}] $ units, are presented in the third column of Table 2.
| nucleus | $T_{1/2}^{0\nu}$ /yr | $P^{0\nu}$ /yr$^{-1}$ | $\langle m_\nu\rangle$ /eV |
| --- | --- | --- | --- |
| $^{48}$Ca | $> 2.0 \times 10^{22}$ [31] | $7.30 \times 10^{-15}$ | $<26.49$ |
| $^{76}$Ge | $> 8.0 \times 10^{25}$ [38] | $9.95 \times 10^{-15}$ | $<0.29$ |
| $^{82}$Se | $> 3.6\times10^{23}$ [39] | $34.45 \times 10^{-15}$ | $<2.87$ |
| $^{130}$Te | $> 4.0\times10^{24}$ [40] | $71.45 \times 10^{-15}$ | $<0.59$ |
| $^{136}$Xe | $> 1.8\times10^{25}$ [41] | $71.01 \times 10^{-15}$ | $<0.28$ |

Table 2.  Results for the $0\nu\beta\beta$ decay mode.

The values of the products $ P^{(2,0)\nu} $ from Tables 1 and 2, obtained with the approach described above, are very close to the values obtained by multiplying the separately calculated NME and PSF. This is understandable, since in the calculation by the two methods we employed the same values of the input parameters and the same nuclear approximations and parametrizations. The small differences stem from the numerical precision of the codes we used. We emphasize, however, that the importance of our current approach lies in eliminating the incoherence that arises when NME and PSF values, calculated separately with different values of the common nuclear parameters, are combined; this incoherence can introduce significant errors in the evaluation of the $ {\rm NME} \times {\rm PSF} $ product as a whole. The errors in the evaluation of these products can indeed be significant if different values of $ g_A $ are assumed in the computation of the $ {\rm NME} $ and the $ {\rm PSF} $, and if these values are not the same as the value used in the $ g_A^4 $ factor. For example, the errors introduced in the NME computation by the use of a quenched (1.0) or an unquenched (1.27) value of $ g_A $ were analyzed in Ref. [27] for the $ ^{48}{\rm{Ca}} $, $ ^{76}{\rm{Ge}} $, and $ ^{82}{\rm{Se}} $ nuclei, and found to be within 10%-14% (without the factor $ g_A^4 $). The use of different values for the other (common) parameters involved in the calculations, such as the nuclear radius, $ \langle E_N \rangle $, etc., can bring additional uncertainties of the same order. The errors can be amplified by the use of different values of these parameters in the PSF computation. Thus, relevant errors may occur in calculating the products $ {\rm NME} \times {\rm PSF} $ when the $ {\rm NME} $ and $ {\rm PSF} $ values are taken from separate calculations reported in the literature. Subsequently, we revise the limits of the light neutrino mass parameter $ \langle m_{\nu}\rangle $ using our calculations and the most recent experimental limits reported for the $ 0\nu\beta\beta $ decay half-lives. These results are presented in the last column of Table 2. One observes that presently the most stringent constraints on this parameter stem from the nuclei $ ^{76}{\rm{Ge}} $ and $ ^{136}{\rm {Xe}} $, because of both the experimental results (the most stringent half-life limits measured at present for the $ 0\nu\beta\beta $ decays [38], [41]) and the accurate theoretical calculations. An important issue in this case remains the use of a correct value for the $ g_A $ constant. As long as this value remains unknown, the goal for accurate half-life predictions and constraints on beyond-SM parameters is to reduce the errors associated with this constant. One suggestion is to use information from different decay modes and/or from DBD experiments on different nuclei. For example, for a particular nucleus, the ratio of the $ 2\nu $ and $ 0\nu $ half-life expressions reads: $ \left(\frac{T^{2\nu}}{T^{0\nu}}\right) = \left( \frac{g^{0\nu}_{A,{\rm eff}}}{g^{2\nu}_{A,{\rm eff}}} \right)^4 \times \frac{P^{0\nu}}{P^{2\nu}} \times \langle \eta_l \rangle^2. $
As seen from the above formula, any information we obtain regarding the relative magnitude of the $ g_{A, {\rm eff}} $ values for the $ 2\nu $ and $ 0\nu $ decays of the same nucleus can be exploited to improve the constraints on the neutrino mass parameter, once improved calculations of $ P^{(2,0)\nu} $ are available. Further, referring to two different nuclei, denoted $ m $ and $ n $, the relation between their half-lives reads, for the $ 2\nu\beta\beta $ decay mode: $ \left( T^{2\nu} \right)_n = \left( \frac{g^{2\nu}_{A,{\rm eff}}(m)}{g^{2\nu}_{A,{\rm eff}}(n)} \right)^4 \times \left( \frac{P^{2\nu}_m}{P^{2\nu}_n} \right) \times \left( T^{2\nu} \right)_m, $ and for the $ 0\nu\beta\beta $ decay mode: $ \left( T^{0\nu} \right)_n = \left( \frac{g^{0\nu}_{A, {\rm eff}}(m)}{g^{0\nu}_{A, {\rm eff}}(n)} \right)^4 \times \left( \frac{P^{0\nu}_m}{P^{0\nu}_n} \right) \times \left( T^{0\nu} \right)_m. $ Therefore, $ g^{2\nu}_{A,{\rm eff}} $ for one particular nucleus can be deduced if one knows with (more) precision the value of this constant for another nucleus, using the experimental half-lives and the calculated $ P^{2\nu} $ for both nuclei. For example, one can take advantage of the possible experimental determination of this parameter for some particular isotopes, as recently proposed in Ref. [42]. Similar considerations, i.e., the exploitation of data from several experiments, are valid for predicting the $ 2\nu\beta\beta $ decay half-life of a nucleus for which it has not yet been measured, if accurate data for another nucleus and good estimates of the $ g^{2\nu}_{A,{\rm eff}} $ value from other (non-DBD) experimental data are available. For such predictions, information on DBD half-lives not yet measured, obtained from empirical formulas as proposed in Ref. [43], is also valuable. Similarly, for the $ 0\nu\beta\beta $ decay, more information about the effective value of $ g^{0\nu}_A $ can be deduced for a particular nucleus if this value is known for another nucleus. For example, we might know $ g^{0\nu}_{A,{\rm eff}} $ with more precision in the case of nuclei for which the single state dominance (SSD) approximation is valid, where the half-life can be computed with reasonable precision by taking into account only one state in the intermediate odd-odd nucleus (for example, the $ ^{100}{\rm Mo} $ case), and where $ g^{2\nu}_{A,{\rm eff}} $ and $ g^{0\nu}_{A,{\rm eff}} $ might have close values. 3.   Conclusions • We proposed a new approach to calculating the NME and PSF for DBD: directly computing their product. The product as a whole can be computed more consistently, with a unique dependence on parameters that previously entered the NME and PSF expressions separately and which thus assume single values. The values of the product are given in the same units as $ T_{1/2}^{-1} $ (i.e. $[\rm yr^{-1}]$), removing any possible confusion when the theoretical calculations are used for the interpretation of DBD data. The new codes for calculating the $ {\rm NME} \times {\rm PSF} $ product include improved routines used in our previous studies for the separate computation of these two quantities. We provide the values of these products for the $ 2\nu $ and $ 0\nu $ DBD modes for the five nuclei of most experimental interest. Subsequently, using our calculations and the newest half-life limits for the $ 0\nu\beta\beta $ decays reported in the literature, we revise the upper limits on the light neutrino mass parameter.
In the half-life formulas, we separate the strong dependence on the axial-vector constant, i.e., the factor $ g_A^4 $, which brings a large uncertainty into the calculation, and we suggest some ways to reduce/avoid the errors related to the uncertain value of this constant. This can be done by employing ratios of $ g^{(2,0)\nu}_{A, {\rm eff}} $ and $ P^{(2,0)\nu} $ (instead of their individual values) and exploiting data on the same nucleus for different decay modes and/or DBD data from experiments on different nuclei, including the possibility of experimentally determining this constant for some particular isotopes. We hope that our work proves to be a step forward towards more consistent DBD calculations, which will aid in predicting and interpreting experimental data.
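As a worked illustration of how the tabulated products can be used (this sketch is mine, not part of the paper's code), the factorized half-life expressions above can be inverted directly: a measured $2\nu\beta\beta$ half-life together with $P^{2\nu}$ gives $g^{2\nu}_{A,{\rm eff}} = (T^{2\nu}_{\rm exp}\,P^{2\nu})^{-1/4}$, and a $0\nu\beta\beta$ half-life lower limit together with $P^{0\nu}$ bounds $\langle m_\nu\rangle = m_e/\sqrt{T^{0\nu}_{\rm lim}\,g_A^4\,P^{0\nu}}$. The short Python sketch below applies this to a few entries of Tables 1 and 2; the printed numbers approximately reproduce the corresponding table columns (small deviations can arise from rounding of the quoted half-lives).

```python
# Minimal sketch (not the authors' code): invert T^-1 = g_A^4 * P (* eta^2)
# using the products P^(2nu), P^(0nu) tabulated above, given in yr^-1.

M_E_EV = 0.510999e6      # electron mass in eV
G_A_FREE = 1.273         # free-nucleon value of the axial-vector constant

# nucleus: (measured T^2nu in yr, P^2nu in yr^-1), from Table 1
TABLE1 = {"76Ge": (1.92e21, 5.16e-21), "136Xe": (2.16e21, 20.30e-21)}

# nucleus: (T^0nu lower limit in yr, P^0nu in yr^-1), from Table 2
TABLE2 = {"130Te": (4.0e24, 71.45e-15), "136Xe": (1.8e25, 71.01e-15)}

def g_a_eff_2nu(t_exp, p_2nu):
    """Effective g_A that reproduces a measured 2nu half-life."""
    return (t_exp * p_2nu) ** -0.25

def m_nu_limit_ev(t_lim, p_0nu, g_a=G_A_FREE):
    """Upper bound on <m_nu> implied by a 0nu half-life lower limit."""
    eta = (t_lim * g_a**4 * p_0nu) ** -0.5   # eta = <m_nu> / m_e
    return eta * M_E_EV

for nuc, (t, p) in TABLE1.items():
    print(f"{nuc}: g_A,eff(2nu) ~ {g_a_eff_2nu(t, p):.2f}")
for nuc, (t, p) in TABLE2.items():
    print(f"{nuc}: <m_nu> < {m_nu_limit_ev(t, p):.2f} eV")
```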
The Energy That Holds Things Together Matt Strassler [April 27, 2012] In my article on energy and mass and related issues, I focused attention on particles — which are ripples in fields — and the equation that Einstein used to relate their energy, momentum and mass. But energy arises in other places, not just through particles. To really understand the universe and how it works, you have to understand that energy can arise in the interaction among different fields, or even in the interaction of a field with itself. All the structure of our world — protons, atoms, molecules, bodies, mountains, planets, stars, galaxies — arises out of this type of energy. In fact, many types of energy that we talk about as though they are really different — chemical energy, nuclear energy, electromagnetic energy — either are a form of or involve in some way this more general concept of interaction energy. In beginning physics classes this type of energy includes what is called “potential energy”. But both because “potential” has a different meaning in English than it does in physics, and because the way the concept is explained in freshman physics classes is so different from the modern viewpoint, I prefer to use a different name here, to pull the notion away from any pre-conceptions or mis-conceptions that you might already hold. Also, in a previous version of my mass and energy article I called “interaction energy” by a different name, “relationship energy”.  You’ll see why below; but I’ve decided this is a bad idea and have switched over to the new name. Preamble: Review of Concepts In the current viewpoint favored by physicists and validated (i.e. shown to be not false, but not necessarily unique) in many experiments, the world is made from fields. The most intuitive example of a field is the wind: • you can measure it everywhere, • it can be zero or non-zero, and • it can have waves (which we call sound.) Most fields can have waves in them, and those waves have the property, because of quantum mechanics, that they cannot be of arbitrarily small height. • The wave of smallest possible height — of smallest amplitude, and of smallest intensity — is what we call a “quantum”, or more commonly, but in a way that invites confusion, a “particle.” A photon is a quantum, or particle, of light (and here the term `light’ includes both visible light and other forms); it is the dimmest flash of light, the least intense wave in the electric and magnetic fields that you can create without having no flash at all. You can have two photons, or three, or sixty-two; you cannot have a third of a photon, or two and a half. Your eye is structured to account for this; it absorbs light one photon at a time. The same is true of electrons, muons, quarks, W particles, the Higgs particle and all the others. They are all quanta of their respective fields. A quantum, though a ripple in a field, is like a particle in that • it retains its integrity as it moves through empty space • it has a definite (though observer-dependent) energy and momentum • it has a definite (and observer-independent) mass • it can only be emitted or absorbed as a unit. Fig. 1: A sketch of how the presence of a quantum of one field (blue ripple) creates a response in a second field (in green) which is largest near the ripple and fades off at larger distances. 
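A numerical aside to the photon-counting remark above (an illustrative sketch of my own, not part of the original article): a single quantum of visible light carries so little energy that ordinary light sources deliver enormous numbers of them per second, which is why light normally looks continuous even though your eye absorbs it one photon at a time.

```python
# Rough numbers behind "one photon at a time" (illustrative sketch only).
H = 6.626e-34       # Planck constant, J*s
C = 2.998e8         # speed of light, m/s
EV = 1.602e-19      # joules per electron-volt

wavelength = 532e-9                  # green light, in metres
photon_energy = H * C / wavelength   # energy of a single quantum, in joules

print(f"One green photon carries about {photon_energy / EV:.1f} eV")
print(f"A 1 mW green laser emits about {1e-3 / photon_energy:.1e} photons per second")
```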
[Recall how I define mass according to the convention used by particle physicists; E = mc2 only for a particle at rest, while a particle that is moving has E > mc2, with mass-energy mc2 and motion-energy which is always positive. My particle physics colleagues and I do not subscribe to the point of view that it is useful to view mass as increasing with velocity; we view this definition of mass as archaic. We define mass as velocity-independent — what people used to call “rest mass”, we just call “mass”.   I’ll explain why elsewhere, but it is very important to keep this convention in mind while reading the present article.] The Energy of Interacting Fields Now, with that preamble, I want to turn to the most subtle form of energy. A particle has energy through its mass and through its motion. And remember that a particle is a ripple in a field — a well-defined wave. Fields can do many other things, not just ripple. For example, a ripple in one field can cause a non-ripple disturbance in another field with which it interacts. I have sketched this in Figure 1, where in blue you see a particle (i.e. a quantum) in one field, and in green you see the response of a second field. Suppose now there are two particles — for clarity only, let’s make them ripples in two different fields, so I’ve shown one in blue and one in orange in Figure 2 — and both of those fields interact with the field shown in green.  Then the disturbance in the green field can be somewhat more complicated. Again, this is a sketch, not a precise rendition of what is a bit too complicated to show clearly in a picture, but it gives the right idea. Ok, so what is the energy of this system of two particles — two ripples, one in each of two different fields — and a third field with which both interact? The ripples are quanta, or particles; they each have mass and motion energy, both of which are positive. Fig. 2: Compare to Figure 1; with the addition of another quantum (orange ripple) in a third field that also interacts with the second field, the response of the second field becomes more complex. The green field’s disturbance has some energy too; it’s also positive, though often quite small compared to the energy of the particles in a case like this. That’s often called field energy. But there is additional energy in the relationship between the various fields; where the blue and green fields are both large, there is energy, and where the green and orange fields are both large, there is also energy. And here’s the strange part. If you compare Figures 1 and 2, both of them have energy in the region where the blue and green fields are large. But the presence of the ripple in the orange field in the vicinity alters the green field, and therefore changes the energy in the region where the blue field’s ripple is sitting, as indicated in Figure 3. Fig. 3: The presence of the second quantum alters the green field in the vicinity of the blue quantum; the energy stored in that general region (indicated by the blue sphere) changes between Figure 1 and Figure 2. This change in the energy --- the interaction energy --- may be either positive or negative. Depending upon the details of how the orange and green fields interact with each other, and how the blue and green fields interact with each other, the change in the energy may be either positive or negative. This change is what I’m going to call interaction energy.  
The possibility of negative shifts in the energy of the blue and green field’s interaction, due to the presence of the orange ripple (and vice versa) — the possibility that interaction energy can be negative — is the single most important fact that allows for all of the structure in the universe, from atomic nuclei to human bodies to galaxies. And that’s what comes next in this story. The Earth and the Moon The Earth is obviously not a particle; it is a huge collection of particles, ripples in many different fields. But what I’ve just said applies to multiple ripples, not just one, and they all interact with gravitational fields.  So the argument, in the end, is identical. Imagine the Earth on its own. Its presence creates a disturbance in the gravitational field (which in Einstein’s viewpoint is a distortion of the local space and time, but that detail isn’t crucial to what I’m telling you here.) Now put the Moon nearby. The gravitational field is also disturbed by the Moon. And the gravitational field near the Earth changes as a result of the presence of the Moon. The detailed way that gravity interacts with the particles and fields that make up the Earth  assures that the effect of the Moon is to produce a negative interaction energy between the gravitational field and the Earth.  The reverse is also true. And this is why the Moon and Earth cannot fly apart, and instead remain trapped, bound together as surely as if they were attached with a giant cord. Because if the Moon were very, very far from the Earth, the interaction energy of the system — of the Earth, the Moon, and the gravitational field — would be zero, instead of negative. But energy is conserved. So to move the Moon far from the Earth compared to where it is right now, positive energy — a whole lot of it — would have to come from somewhere, to allow for the negative interaction energy to become zero. The Moon and Earth have positive motion-energy as they orbit each other, but not enough for them to escape each other. Fig. 4: In precise analogy to Figure 3, the system of the Earth, Moon and gravitational field has a lower energy (because of a negative interaction energy that is more important than the positive motion-energy of the Moon and Earth) than would be the case if the Earth and Moon were very far apart; and for this reason to move the Moon far away from the Earth would require an input of a large amount of additional positive energy. Images from NASA. Short of flinging another planet into the moon, there’s no viable way to get that kind of energy, accidentally or on purpose, from anywhere in the vicinity; all of humanity’s weapons together wouldn’t even come remotely close. So the Moon cannot spontaneously move away from the Earth; it is stuck here, in orbit, unless and until some spectacular calamity jars it loose. You may know that the most popular theory of how the Earth and Moon formed is through the collision of two planet-sized objects, a larger pre-Earth and a Mars-sized object; this theory explains a lot of otherwise confusing puzzles about the Moon. Certainly there were very high-energy planet-scale collisions in the early solar system as the sun and planets formed over four billion years ago! But such collisions haven’t happened for a long, long, long time. The same logic explains why artificial satellites remain bound to the Earth, why the Earth remains bound to the Sun, and why the Sun remains bound to the Milky Way Galaxy, the city of a trillion stars which we inhabit. 
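To put rough numbers on the Earth-Moon argument (a back-of-the-envelope sketch of my own, using Newtonian gravity, which is an excellent approximation here): the interaction energy is large and negative, the orbital motion-energy is only about half as large, and so the total is negative and the Moon stays bound.

```python
# Newtonian estimate of the Earth-Moon interaction and motion energies.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
M_MOON = 7.342e22    # mass of the Moon, kg
R = 3.844e8          # mean Earth-Moon distance, m

interaction = -G * M_EARTH * M_MOON / R   # negative interaction energy, J
motion = -interaction / 2                 # kinetic energy of a circular orbit (virial theorem), J
total = interaction + motion              # energy relative to "far apart and at rest"

print(f"interaction energy: {interaction:.2e} J")   # about -7.6e28 J
print(f"motion energy:      {motion:+.2e} J")       # about +3.8e28 J
print(f"total:              {total:.2e} J  (negative, so the Moon is bound)")
```

The missing roughly 4 x 10^28 joules is the "whole lot of positive energy" referred to above; as the article says, all of humanity's weapons together would not come remotely close to supplying it.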
The Hydrogen Atom And on a much smaller scale, and with more subtle consequences, the electron and proton that make up a hydrogen atom remain bound to each other, unless energy is put in from outside to change it. This time the field that does the main part of the job is the electric field. In the presence of the electron, the interaction energy between the electric field and the proton (and vice versa) is negative. The result is that once you form a hydrogen atom from an electron and a proton (and you wait for a tiny fraction of a second until they settle down to their preferred configuration, known as the "ground state") the amount of energy that you would need to put in to separate them is about 14 electron-volts. (What's an electron-volt? It's a quantity of energy, very very small by human standards, but useful in atomic physics.) We call this the "binding energy" of hydrogen. Fig. 5: Inside a hydrogen atom, the electron ripple spreads out in something like a cloud around the proton; the interaction energy involving the proton, the electron and the electric field is minus 28 electron-volts, which is partly canceled (mainly by the motion-energy of the electron) to give a binding energy of minus 14 electron-volts. We can measure that the binding energy is -14 electron-volts by shining ultraviolet light (photons with energy a bit too large to be detected by your eyes) onto hydrogen atoms, and seeing how energetic the photons need to be in order to break hydrogen apart. We can also calculate it using the equations of quantum mechanics — and the success of this prediction is one of the easiest tests of the modern theory of quantum physics. But now I want to bring you back to something I said in my mass and energy article, one of Einstein's key insights that he obtained from working out the consequences of his equations. If you have a system of objects, the mass of the system is not the sum of the masses of the objects that it contains. It is not even proportional to the sum of the energies of the particles that it contains. It is the total energy of the system divided by c^2, as viewed by an observer who is stationary relative to the system.  (For an observer relative to whom the system is moving, the system will have additional motion-energy, which does not contribute to the system's mass.)  And that total energy involves • the mass energies of the particles (ripples in the fields), plus • the motion-energies of the particles, plus • other sources of field-energy from non-ripple disturbances, plus • the interaction energies among the fields. What do we learn from the fact that the energy required to break apart hydrogen is 14 electron volts? Well, once you've broken the hydrogen atom apart you're basically left with a proton and an electron that are far apart and not moving much. At that point, the energy of the system is • the mass energies of the particles = electron mass-energy + proton mass-energy = 510,999 electron-volts + 938,272,013 electron-volts • the motion-energies of the particles = 0 • other sources of field-energy from non-ripple disturbances = 0 • the interaction energies among the fields = 0 Meanwhile, we know that before we broke it up, the system of a hydrogen atom had energy that was 14 electron volts less than this.
Now the mass-energy of an electron is always 510,999 electron-volts and the mass-energy of a proton is always 938,272,013 electron-volts, no matter what they are doing, so the mass-energy contribution to the total energy is the same for hydrogen as it is for a widely separated electron and proton. What must be the case is that • the motion-energies of the particles inside hydrogen • PLUS other sources of field-energy from non-ripple disturbances (really really small here) • PLUS the interaction energies among the fields • MUST EQUAL the binding energy of -14 electron volts. In fact, if you do the calculation, the way the numbers work out is (approximately) • the motion-energies of the particles = +14 electron volts • other sources of field-energy from non-ripple disturbances = really really small • the interaction energies among the fields = -28 electron volts, and the sum of these things is -14 electron volts. It's not an accident that the interaction energy is -2 times the motion energy; roughly, that comes from having a 1/r^2 force law for electrical forces. Experts: it follows from the virial theorem. What is the mass of a hydrogen atom, then? It is • the electron mass + the proton mass + (binding energy/c^2) and since the binding energy is negative, thanks to the big negative interaction energy, • m_hydrogen < m_proton + m_electron This is among the most important facts in the universe! Why the hydrogen atom does not decay I'm now going to say these same words back to you in a slightly different language, the language of a particle physicist. Hydrogen is a stable composite object made from a proton and an electron, bound together by interacting with the electric field. Why is it stable? Any object that is not stable will decay; and a decay is only possible if the sum of the masses of the particles to which the initial object decays is less than the mass of the original object. This follows from the conservation of energy and momentum; for an explanation, click here. The minimal things to which a hydrogen atom could decay are a proton and an electron. But the mass of the hydrogen atom is smaller (because of that minus 14 electron volts of binding energy) than the mass of the electron plus the mass of the proton. Let me restate that really important equation: • m_hydrogen < m_proton + m_electron There is nothing else in particle physics to which hydrogen can decay, so we're done: hydrogen cannot decay at all. [This is true until and unless the proton itself decays, which may in fact be possible but spectacularly rare — so rare that we've never actually detected it happening. We already know it is so rare that not a single proton in your body will decay during your lifetime. So let's set that possibility aside as irrelevant for the moment.] The same argument applies to all atoms. Atoms are stable because the interaction energy between electrons and an atomic nucleus is negative; the mass of the atom is consequently less than the sum of the masses of its constituents, and therefore the atom cannot simply fall apart into the electrons and nucleus that make it up. The one caveat: the atom can fall apart in another way if its nucleus can itself decay. And while a proton cannot decay (or perhaps does so, but extraordinarily rarely), this is not true of most nuclei. And this brings us to the big questions. • Why is the neutron, which is unstable when on its own, stable inside some atomic nuclei? • Why are some atomic nuclei stable while others are not?
• Why is the proton stable when it is heavier than the quarks that it contains? To be continued… 103 responses to “The Energy That Holds Things Together” 1. With you so far and pleased to see the wind-field analogy here. I was the giant black rabbit in the front row during your talk in Terra Nova on Saturday but I had to give up at the point you started talking about wind as a field because the sound feed kept breaking up. I am now on the edge of my seat to discover why the neutron is not stable on its own… 2. … or rather why it IS stable in the nucleus (your linked article on conservation of energy and momentum explains the on-its-own neutron instability)…. 3. This is a great article from great MATT, but please do not forget to tell us how non-material abstract statistics control / direct / confine the exact half-life time of all decaying nuclei so that the lump never deviates from that strict value, how the nucleus knows that it cannot decay or it must decay or else the half-life time will change !! This dilemma was mentioned in Arthur Koestler's book (The Roots of Coincidence) but no answer was given. GOD bless you matt. 4. As the minus sign must be related to a fixed agreed-upon datum, what is the datum with which, comparing binding energy, we find that it is less / opposite / the other way around / ……etc. from that datum? 5. Dear Professor, I'm visualizing these things pretty clearly, thank you. I'm ready for the next lesson! 6. Just how heretical is the notion of omnipresent fields? Is space time usefully thought about as a field? If so, is it the only field with broken Lorentz symmetry? • Not heretical in the slightest. It's standard fare; every university in the country with a particle physics program has a Quantum Field Theory course. Space-time isn't quite the field itself — this is a tricky point. That's why I glossed over it. The fields in gravity are a bit more complicated than that. But I don't think I want to try to answer this clearly now. In flat and unchanging space, gravity doesn't break Lorentz symmetry at all. Of course, the universe is expanding, and that defines a preferred sense of time (in the part of the universe that we can see, at least). And that does mean that the gravitational fields in our universe do, on the largest distance scales, break Lorentz symmetry, yes. And no other fields break Lorentz symmetries on the large scale, no. But the answer to your last question depends upon exactly what you meant. Globally across the universe, nothing but gravity breaks Lorentz symmetry (at least no one has ever detected any other source of Lorentz breaking.) In small regions there's all sorts of breaking by all sorts of things. For instance, there are stray magnetic fields in all sorts of places around the universe, and so locally those break Lorentz symmetry. And hey, even the earth breaks Lorentz symmetry (that's why up is different from down, for instance, when you're near the earth.) 7. Is attraction by definition a negative energy? What is its physical meaning? For example, in hydraulics we speak of a negative potential energy of dammed water if we choose the datum line above the water surface, so the water-head is BELOW IT. • attraction results from the fact that the negative interaction energy becomes more and more negative as you bring the earth and moon closer and closer together. • Q: Why is the Moon moving away from the Earth when it should be getting closer and closer to it if gravity holds the two bodies?
I know it's said that in early Earth's history, the Moon was much closer to Earth. Shouldn't the larger body win the tug of war? Sorry for being slightly off the topic but I'm still sorting out the data presented. Huh? I just thought this results from looking at Einstein's classical field equations without taking the detailed interactions between quantum fields into account … ? • Didn't you ever wonder where Einstein's classical field equations come from, given that we live in a quantum world and the earth and moon are made from things that are described by quantum mechanics? But in any case, that's not very relevant — because everything I said is also true of the classical field equations; I just used a quantum language to describe it, but the math is essentially identical. 9. Thank you so much for finally getting around to this topic. 10. Vladimir Kalitvianski "interaction among different fields, or even in the interaction of a field with itself" I wrote a paper to show our errors in guessing the interaction energy (interaction Lagrangian), see here 11. Torbjörn Larsson, OM Glad to see the virial theorem at work on smaller scales than clusters. For us non-experts, the 1/r potential case is done in Wikipedia (for gravity). The meaning of validation is seldom stated clearly; this was a refreshing brief! "Your eye is structured to account for this; it absorbs light one photon at a time." Allow me a more detailed model – when I was doing my PhD work we were preparing a book on sensors (which was never to be) and I made the biological photon sensor chapter. Mind that this is several-decades-old biology and from the top of my head: – It is only at low light intensities (dark adapted) that the eye is counting photons. Not with much quantum efficiency mind, but that is because our eye evolved to be lousy. (For example, cats double dark-adaption efficiency by having layered index tissue mirrors beneath the neural layers – cat's eyes.) – At moderate intensities rods and cones absorb several photons faster than they can transmit nerve impulses. Our eyes have evolved to regulate response by many mechanisms, from non-linear response in the photochemical receptors instantaneously and over time (bleaching, part of light vs dark adaption), over regulation of the cellular cascades resulting in the neural signal and of pigment refresh response to bleaching (both part of light vs dark adaption), to neural tissue feedbacks and feed-forwards (probably also part of light vs dark adaptation). The idea is correct; for some reason or other, pigments absorbing one photon at a time have been utilized in biology many times over. But it is not always (in fact, seldom) the function of the eye. 12. Forgive me for being so dumb, but am I right in saying that the minus sign is ONLY in equations not reality, and what you mean is that part of the motion energy is transformed to binding energy so that part IS TAKEN from the motion energy, so here we see the (-) sign OF PART TAKEN FROM WHOLE with no negative energy in reality?????? • You are right that this is an accounting issue. What I really mean is that the energy of the system has decreased, and it is useful to account for that decrease in a particular way, through what I call interaction energy. In the end the total system has total energy which is positive. 13. Negative energy, eh? I guess this is where the notion of the Theory of Nothingness came from? We are not even close, are we? 🙂 Is gravity the fundamental field of the vacuum?
Is the interaction (potential) energy the energy required to start ("release") the ripple from the vacuum? I can visualize the mass-energy of the ripple as being the defining level for that specific particle and the potential energy as the energy required to keep pushing it in the positive direction (spacetime). Are you confusing the vacuum reference potential with a zero datum? Would quantum tunneling disprove negative energy? • The notion that the universe is a zero-energy object with positive energy in particles and fields and negative energy in overall interaction energy between gravity and fields is not a new one. Gravity is not the fundamental field of the vacuum; indeed your question doesn't mean anything, because the vacuum is empty space, but all fields of a universe are to be found in empty space. "Is the interaction (potential) energy the energy required to start ("release") the ripple from vacuum." No. Particles are created through interactions of particles with each other and with fields. But the reason there are particles to start with is that many of them were created in the Big Bang (we don't know precisely how, because there are many possibilities) and once they start banging into each other and the universe becomes very hot, you can make many more of them. The ones that were left over could coalesce into galaxies, form dark matter halos, stream out as cosmic microwave background, etc. In collisions among these particles, new particles can be created; e.g., fusion inside a star. I don't know what you mean by a "vacuum reference potential" and a "zero datum", so I assume I am not confusing them. Quantum tunneling is a fact and so is negative energy (atoms, nuclei, stars, planets, satellites,…) so there's nothing to your last question, at least, not as you stated it. 14. I will try and tie my line of questioning together to make my point. I don't believe there is "empty" space. I believe the exothermic reaction of the Big Bang created space. Expansion of this energy over the same space it created began "clumping" into variable densities and hence the first field. I call it a field because there would have been a very symmetrical pattern over the "space" as the space expanded. As the expansion continued, thermal gradients would begin to become more pronounced to a point where rotations would begin. These rotations are energy flows turning backwards at some radius to define the speed of light; i.e., it is this radius which is the first constraint of this universe. As mentioned in another post, confinement would take the shape of a sphere, and the smallest spheres of the "space" are what I call the vacuum potential. Hence, the interaction of adjacent spheres would then create the second field, gravity. The mechanism, as I posted before, would be a Newton's cradle in reverse. The repeating collapsing and generation of the spheres would create an attraction force while the thermal cavities between the spheres are the "ripples" we perceive and formulate with the Lagrangian equations. I know that more and more physicists are giving up on a simple unified theory (zero energy?, hologram?, strings? … amazing how fast one can lose himself/herself in the math, lol), but I am willing to bet on Einstein's initial instincts and look for a nice simple solution. I know you will jump on the use of the "exothermic reaction" and yes, I am inferring that this bubble we are living in is one of maybe infinite bubbles that percolate up (down?) and coalesce into the magnificent universe we see.
In this context I will ask again for your intuitive opinion, since your knowledge base is so advanced: what is E, the left side of the equation? In one of your responses to my post you described it as temperature, and said that we don't know where and/or why there was such a high temperature at the Big Bang. Could you answer it by assuming it seeped in through a ruptured space-time of another manifold? Could this E be an entity of one temperature? I refuse to use string or particle, and I don't know if space at one temperature (temperature quanta?) makes any sense either.

• I'm afraid I have no idea what you're trying to suggest.

• How can Nature create a microscopic spherical tornado spinning at the speed of light (v = c) from nothing?
E = (h / 2π) × (c / r)
E = ħ × (L / T) / L
E ~ 1 / T (?!)
In 1932 Dirac offered a precise definition and derivation of the time-energy uncertainty relation in a relativistic quantum theory of “events”. But a better-known, more widely used formulation of the time-energy uncertainty principle was given in 1945 by L. I. Mandelshtam and I. E. Tamm, as follows. For a quantum system in a non-stationary state ψ and an observable B represented by a self-adjoint operator, the following formula holds:
σE × ( σB / | d /dt | ) ≥ ħ
Observation: this simple but elegant relationship tells me that there is a unified theory as simple and elegant as Mandelshtam's and Tamm's interpretation of the time-energy uncertainty. One interpretation is that in the initial state the energy was almost infinite ( E ~ 1 / T ); however, another interpretation is that time (existence) started with a spark, an almost infinite infusion of energy ( T ~ 1 / E ). So my question to today's theoretical physicists is: are the time variable in the uncertainty relationship and the time variable in the Schrödinger equation the same?

• PS: Sorry, some of the text didn't show up … correction:

• Oops, again the observable {B} is missing… try again.
σE × ( σB / | d{B}/dt | ) ≥ ħ
… It is the lifetime of the state ψ with respect to the observable B. In other words, this is the time after which the expectation value {B} changes appreciably.

15. I am caught a bit off-guard by the small inconsistency with regard to mass. For a single particle, you view “mass” as not including motion-energy, but for a system of particles, you *do* consider motion-energy to be part of the mass. I know it's just a definition, but it still causes me to stumble a bit.

• Ah!!! You are right to be caught off guard — I left out an important statement in the text. The motion energy of the particles INSIDE the system is part of the system's mass, but the energy from the SYSTEM'S OVERALL MOTION is not to be included. Thank you for pointing out this error of omission — I will fix it immediately.

• I expected that was what was meant. But I really should have asked two separate questions. When viewing a single particle in motion, you say that it has “mass” plus “motion-energy” which is not considered to be part of the particle's mass. But when viewing a system of particles (to be clear, let's consider ourselves at rest with respect to the center of mass of the system), you say that the internal motion-energy of the particles now *does* count as mass.

• If you try to move the system as a whole, yes, you will discover that the system satisfies the equation E^2 – p^2 c^2 = (M c^2)^2, where E is the total energy of the moving system, p is its momentum, and M, the mass of the system, includes internal motions of the particles that it contains.
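A minimal numerical sketch of this relation (added here for illustration; the particular numbers are invented, not taken from the discussion): two identical particles moving with opposite momenta, so that the system as a whole is at rest.

```python
import math

# Illustrative sketch only. Units: GeV with c = 1, so E^2 = p^2 + m^2 per particle.
m = 0.938            # rest mass of each particle (proton-like), GeV
p = 0.3              # momentum magnitude of each particle, GeV

E_each = math.sqrt(p**2 + m**2)       # energy of each particle
E_total = 2 * E_each                  # total energy of the system
p_total = 0.0                         # the two momenta cancel in this frame

M_system = math.sqrt(E_total**2 - p_total**2)   # from E^2 - p^2 c^2 = (M c^2)^2
print(2 * m, M_system)   # 1.876 vs ~1.970: internal motion-energy adds to the system's mass
```

The system's mass exceeds the sum of the two rest masses by the internal motion-energy; a negative interaction energy, as in an atom or a nucleus, pushes it the other way.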
This is the same equation that a particle of mass M would satisfy. This is verified in great detail in a few cases… and it is essential in explaining why neutrons are stable inside of nuclei. It also follows from the modern (post-1920s? I'm not sure…) theoretical understanding of what special relativity means and how it works mathematically.

• I don't wish to appear to be flogging a deceased equine; I quite agree with what you are saying, and I hope you understand that I am using “you” as a group-inclusive of physicists in general and not in the singular sense. I'm not disagreeing; I just find the applied definitions a bit curious. It is interesting that, when viewing an individual moving particle, you say that it has mass-energy and motion-energy, the latter not counting as mass, by definition. But then if you zoom out the microscope and observe that the particle in question is part of a system of particles (with respect to which we are at rest), then the motion-energy of that same particle *does* count as mass of the system, because it contributes to the inertia of the system. But, alternatively, the motion-energy of the individual particle likewise contributes to the inertia of the particle itself, hence the original distinction of “rest mass” vs. “relativistic mass”. I prefer the new point of view, actually — I just need to get used to thinking that way.

• There's a whole story of mathematics that lies behind the preferences that particle physicists take here. E^2 – p^2 c^2 = (m c^2)^2 is a relationship between two things that are observer-dependent and one thing that isn't. It's like the equation for a circle: x^2 + y^2 = r^2, where r is the radius of the circle and x and y are coordinates; x and y depend on how you draw your coordinate axes, but the radius of the circle doesn't care. So we define mass for a single particle to be an observer-independent quantity. Next, the goal is to define that quantity which is observer-independent for a *system*. And as Einstein showed in one of his early papers on relativity, there was only one consistent answer — the one I gave you. If this weren't true, then for a particle (such as a proton) that later, after further experiments, turns out to be a system of many particles (such as the system of quarks and antiquarks and gluons that make up a proton) there'd be an inconsistency between what you'd mean by its mass if you treated it as a particle and what you'd mean if you viewed it as a composite system. That wouldn't make any sense.

16. Nice! To be read & enjoyed more than once. I would be interested in the inversion of your last question: not “Why is the proton stable when it is heavier than the quarks that it contains?” but “Why is the proton heavier than the quarks it contains?”, given the naive idea that interaction energy as described seems to be typically negative. (I think I know the rough shape of the answer, but you would carve it elegantly.)

17. What is the causal mechanism responsible for converting part of the protons' and neutrons' masses to binding energy? Is it a rule/principle of Q.M. that must be “obeyed”, for which no further explanation exists? Is it a given property of the EM and gluon fields' interaction? Do the equations describe it or explain it?

• I wouldn't say you're converting the proton and neutron masses to binding energy; notice I did not say the atom's binding energy comes from the electron and proton masses. I said it comes from interaction energy involving the electron, proton and the electric field.
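As a rough illustration of that statement for hydrogen (an editorial sketch using standard textbook ground-state numbers, not figures quoted in this thread):

```python
# Hydrogen ground-state bookkeeping, in eV, with c = 1.
# The 13.6 eV average kinetic energy and -27.2 eV interaction energy are the
# standard textbook ground-state averages (virial theorem), assumed here.
m_p = 938.272e6      # proton rest energy, eV
m_e = 0.511e6        # electron rest energy, eV
KE  = 13.6           # electron's average motion-energy, eV
U   = -27.2          # interaction ("potential") energy, eV

M_atom = m_p + m_e + KE + U
print(m_p + m_e - M_atom)   # ~13.6 eV: the atom weighs slightly less than its parts
```

The negative interaction energy outweighs the positive motion-energy, so the bound atom ends up lighter than a free proton plus a free electron by the 13.6 eV binding energy.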
For a nucleus, it is actually a complicated process, but it does arise from the interaction energy involving quarks and gluons in a not entirely dissimilar way. Because the effect is complicated, our equations are less reliable than for atoms, and it is harder to predict the interaction energy for all nuclei. Nuclear physicists are pretty good at it — but it isn't simple.

18. So is it correct to say that the system of interactions among the quark ripples and fields, plus the EM field, plus the gluon field, results in pumping mass from the system, converting it to binding energy?

• There's no pumping going on. You're looking for a deeper explanation of the deep thing itself; the deep thing is that the interaction itself changes the energy of the system. Period. It's not taking energy from something else.

19. m(all protons) + m(all neutrons) – binding energy — in the nucleus — is exactly equal to extracting the last factor from the first and second ones… am I correct? Binding energy must come from somewhere; interactions are energy users, not energy generators… or else I am totally confused.

• Yes, you are confused on this point — interactions are not things that require energy to occur — they do not use energy, the way an engine does. Nor do they produce energy the way a power plant does. The interactions themselves simply occur. Energy (possibly positive, and possibly negative) is present as a result of interactions taking place — but the interaction is not mechanically producing it, or using it.

• People often ask where the energy of the Big Bang came from. I get the feeling from what you're saying that energy isn't anything fundamental… it's not a thing by itself… just a conserved quantity, and that this question isn't meaningful. I take it it is meaningful to ask where the laws and fields came from, however… am I making sense?

• Hmm. I'm far from sure we know what the question is yet. Often in physics (and in other areas of scientific research) the key is to figure out what the right question is. Sometimes, by the time you do that, you already see the answer. So I would say: regarding the right questions to ask about the universe as a whole, I don't know that anyone yet knows what they are.

20. P.S.: So you mean that the interactions REDISTRIBUTE the overall system of mass/energy? … Are the fields designed to do this? I.e., is it a rule, a principle, a fundamental one?

• This is indeed very fundamental to quantum field theory. I'm trying to think about how I can answer this — whether it has an answer that is meaningful, or whether I just have to say: “this is what fields do”. Remember that I explained that energy is (according to Emmy Noether, ) that quantity which is conserved (i.e. does not change with time) because the laws of nature are constant in time. Operationally, one first writes down equations that describe fields that interact with each other. Second, one asks, using Noether's theorem, “what is the energy of the system of fields?” One finds there is energy that is associated with the interaction of the fields, though the amount depends in detail on what the fields are doing. So I think you're imagining that energy comes first, and then you put fields in it. But no, you start with fields and the laws which govern their interactions, and then you ask: what is the conserved quantity associated with this system of fields and interactions? Not the other way round.

21. P.S.
2: You cannot say that B is a result of A but that B is not produced by A, unless A is designed so that its mere existence is always accompanied by B.

• I'm trying to address what I think your confusion is; I might be misinterpreting it. If A and B are sufficiently intertwined it can become impossible to state, in words, how they are related. In equations this would be very easy.

22. I do not agree that “the Moon and Earth cannot fly apart, and instead remain trapped”. There is one additional piece of energy you didn't consider – the rotational energy of the Earth and Moon. The Earth's rotation pulls the Moon away by a few inches per century, and in a couple of billion years the Earth will lose the Moon. However, it is really difficult to imagine how these tidal forces pull the objects apart.

• This is something that I left out, yes. I should probably supplement the article to explain it. As you say, it is tricky.

• Actually, I think your statement isn't correct, or at least there is serious debate. “Given its present-day rate of retreat, the moon eventually would reach synchronous orbit with Earth in about 15 billion years, Zakharian said in an interview. In synchronous orbit, the moon and Earth would orbit together as planet and satellite in fixed position, locked face-to-face, about 560,000 kilometers (336,000 miles) apart. The moon now is about 384,000 kilometers (240,000 miles) away.” There are also statements in the literature that if the sun warms and the earth's oceans boil away, the retreat due to tides will slow. But there seems to be more debate about the precise rates of tidal losses than I realized. Something to learn more about. I also hadn't realized that there was actual data that gave information on tides and the moon's location over geologic time scales.

23. THANKS MATT. I very much accept that… I mean, this is the way our world is designed… this is the way everything is connected to everything; sometimes we have to take it as given. It is a great honor to have a dialogue with such a nice expert as your good self.

24. Hi Matt! At the genesis of the universe, when all the fundamental fields were concentrated into a singularity, then presumably the negative energy would have been of infinite intensity. As the universe came into being and the fundamental fields expanded, then the negative energy within the fields would have dissipated to a point at which it was replaced by mass and motion energy, allowing the formation of stable structures. Would this be a fair summation?

• Well — first of all, we don't know about the very earliest periods of the universe. We don't know there was a singularity (and indeed, since a singularity by definition is a place where our equations break down, there's no reason to think the equations we have right now actually work there.) So if I tried to answer your questions I'd be speculating wildly. Not that this stops theoretical cosmologists — it's their job to speculate. But we do not know why the universe became hot and populated with ordinary matter (and presumably dark matter.) Also, DO NOT visualize the Big Bang as an explosion from a point. That is wrong. The Big Bang is an expansion of space, not an explosion into pre-existing space. But what it looked like when it began to expand was not a point — it may have been infinite to start with, or it may have been a region within a much more complicated pre-existing space-time, or its features may not have been interpretable as space at all.
We certainly do not know the universe’s extreme past. 25. Now is it possible that one day we may discover that fields are ONLY our mathematical representation of what we observe with nice match , but the MOST fundamental ingredient of the world is something we never imagined ? related to this ; is our knowledge as per NOW can confirm that fields ARE the MOST primary ingredient with ultimate final confidence ? • Absolutely it is possible; it is even likely. Science does not provide final confidence; it provides tools for predictions and for consistent thinking. Those tools are always potentially subject to update with new information. The only thing we know is that those updates will preserve successful predictions of the past, as Einstein’s revisions of the laws of motion cleverly maintained all of Newton’s predictions. 26. Hi Matt, Are the disturbances in the field (well behaved or not) changes (fluctuations) in the value of the field? The following questions assume the answer to this one to be “Yes”. But even if it’s “No” you might still be able to see what’s confusing in my mind. Is the quantum limitation a property of the fields (i.e the change in the value of the field cannot be smaller than a quanta)? Does the energy of a general disturbance in the field (not a particle) have the same components as the particles (mass energy and motion energy)? Does a field have energy? Can two fields interact (or have a relationship) in a different way than the one described in the article (i.e a disturbance in one field generates a disturbance in the second field)? Is there energy just because two fields that can iteract have large values (in the same region I suspect) without the need of any disturbance? • So — delayed answer: Disturbances in the field do involve changes in the value of the field, yes. They aren’t changes in the average value of the field over all of space, but rather localized changes. (Caution: this is quantum field theory, so just as we can’t simultaneously know the position and velocity of a particle, we cannot simultaneously know the value of a field and how it is changing… ) The statement that ripples in fields are quantized, however, is NOT a statement that a change in the value of the field cannot be smaller than a quantum. The value of the field can change continuously. It is the statement that a RIPPLE (an up-and-down change that resembles a ripple on a pond, in that the average change in the value of the field is *zero*) cannot have arbitrarily small height (i.e. `amplitude’). For a given field, its ripples (i.e. its particles) all have the same mass. (Small caveat if the particles are very short-lived, but let’s ignore that for the moment.) The electron is a ripple in the electron field; all electrons have the same mass. General disturbances can have any mass. You can, if you want, think of them as having mass-energy and motion-energy. It’s not as useful as for particles, because (unlike a particle) these disturbances tend to fall apart right away (even faster than most unstable particles do.) So they don’t tend to move very far, and if they bounce off of something they will typically emerge with a changed mass — very different from electrons, which can travel macroscopic distances and retain their mass even if they bounce off of something. Fields do have generally have energy, yes; if they are changing over time or over space, they always do. Yes, it is possible for two fields that are non-zero but not disturbed to have energy due to their interaction. 
An example would be if there are two types of Higgs fields in nature rather than one; the average values of the two Higgs fields in nature will be determined by the requirement that their energy of interaction with each other and with themselves be minimized.

27. Elizabeth Maas
About quantum units being dependent upon the “grid lines” of the field in which they are derived: is it conceivable that at an earlier stage of the universe's evolution, these grid lines and quantum units were of a grander scale, or at least of a different scale, than known to us today experimentally? Consider time dilation for a mass, defined by spacetime's absence or field knotting, traveling at near light speed relative to our frame of reference. Although there may or may not be a quantal unit of time, time's arrow is a relativistic constant within its frame of reference. Time's arrow accommodates the frame of reference. Furthermore, there is a fractal nature to the expansion of the universe – as well as a fluid nature with boundary partitions – some fields extend WITHOUT a time component – accounting for quantal entanglement. Some mysteries remain eternal! My point: is it conceivable that the scale of a quantal unit is dependent upon the scale of the field's “grid lines” from which it is derived, so that the fields from which particles are formed have evolved and rescaled simultaneously with cosmic evolution? My answer to myself: “Yes.”

• Sorry, I have no idea what you are talking about. Please define “quantum units” and “grid lines”; most fields could not have “grid lines”, by any definition I can think of, so I don't know what you're talking about. And “quantum units” is a non-standard and ambiguous term.

28. Elizabeth Maas
Of all isotopes, iron-56 has the lowest mass per nucleon. With 8.8 MeV binding energy per nucleon, iron-56 is one of the most tightly bound nuclei. How would you explain this stability? What is it about the nuclear geometry of fields allowed by this number of protons and neutrons that accounts for this energy of mass defect? I have been intrigued by knot theory within and between fields, so I have been exploring for representative phenomena.

29. Hi Matt, have I asked my questions in the wrong way? I was wondering what a ripple in a field is. Is that an oscillation of the value (the property of the field you said can be measured everywhere) of the field? Thank you,

30. You correct Pavel for being out by a factor of 100 using this site, but the site itself is also out by a factor of 100, I believe, with this statement: 'Laser pulses are aimed at the reflector from Earth and bounce quickly back at 3 million meters per second – that's about 186,000 miles per second, so the round trip takes less than three seconds.' It also says there: “As an interesting sidelight, there will be no full moon in February of 2010. There will be fewer in the short month as time goes by because the Moon will take longer to orbit the Earth as it spirals away.” Really? The slow rate of the moon drifting away is more than compensated by the slowing rotation of the earth, making lunar months longer in scientific seconds, but shorter in days. It would seem that if we have enough patience, there will never be a February without a full moon.

• That's why I gave two sites; any given site can either be wrong or have a typo. Of course if a typo gets copied things are correlated. But still — you're not implying that the 2 cm/year number was wrong, are you?
Instead you just mean that that website should say 300 million meters per second. 186,000 miles per second (and three seconds for the round trip) is correct. Looks like a simple typo there. Your point about February is more substantive and seems correct, because isn’t the end result of tidal friction (except for one subtlety) that the earth day and lunar month become equal? So the day becomes very long, the moon maintains a fixed position in the sky, and the moon is new and full every day; some parts of the earth see the moon all the time and some never see it. And I’m not even sure we can talk about February; how many days will there be in a year, by then? The subtlety is that the earth’s oceans may boil away, due to the sun becoming hotter, long before that happens, vastly reducing tidal friction and pushing this end-point off for I have no idea how long, but certainly longer than the earth is likely to survive the sun’s expansion as it reaches old age. 31. Prof. Strassler, Just to be clear: we are restricting ourselves to a flat Minkowski spacetime, are we not? The whole concept of “energy” and its conservation becomes rather problematic in the curved spacetime of general relativity unless some univeral Killing feld is imposed (which violates the general covariance requirement of general relativity). When both time and space can “bend” depending on spacetime’s contents and on the motion of mass-energy-stress through it, the symmetries required for a meaningful definition of energy as a conserved quantity aren’t present. • Edit: That should be “… unless some universal Killing field is imposed …” • What you say is true and not true. We are approximating the curved space picture by assuming only the time-time component of the metric is curved, and representing that as the gravitational field. Any effects that go beyond this approximation are minuscule. Physicists always make useful approximations in order to capture the physical phenomena to the extent possible. You are rejecting the approximation in which energy can be used because it isn’t exactly accurate. You say we should use Einstein’s general relativity to do this correctly. And you talk about the curvature of spacetime instead. But in that case, shouldn’t you worry about the fact that general relativity is also wrong, because it doesn’t account for the fact that the earth and moon’s particles should be described using quantum mechanics? And in fact, that’s not enough, because spacetime itself is quantum mechanical at very short scales. In other words, your picture is incomplete too. One of the hardest things to learn in physics is when subtle effects matter and when they don’t. You’re so focused on getting things exactly right that (a) you have forgotten that you don’t have them exactly right either, and (b) there is a physics point which you are making much more confusing than it needs to be in the process. First let’s make sure we understand energy and why things are bound together; then we can try to understand how, in some circumstances (but not this one), general relativity forces us to account for the fact that this is not a sufficiently good approximation. 32. According to general relativity, the Earth and the Moon are not feeling any “force” of gravity. 
They are both traveling in geodesic orbits around their common center of mass — i.e., they are in free-fall along geodesic paths that are curved due to the curvature of spacetime: both space and time are curved as a result of the presence of their mass-energy-stress within the spacetime. In GR, a body moves along a geodesic (not along a straight line) unless affected by an outside force. There is no Newtonian “gravitational field”, just the dynamic metric of spacetime. So your earth-moon diagram is roughly correct, according to GR, if you substitute “4-dimensional manifold with a semi-Riemannian metric that varies according to Einstein's equation” for “gravitational field”. But that is a quite significant detail, and very different from how you view the problem, and approach it mathematically, in Minkowski spacetime.

• Most of what you say here is wrong… you have Einstein correct, but you have not understood that what I said is also consistent with Einstein. First, I did not say the words “gravitational force” in my article. Nor did I say “Newtonian field”. You put words in my mouth — so why are you criticizing me for using them? You are right that there is no Newtonian gravitational field — however, you are wrong beyond that point. The metric IS associated with the Einsteinian gravitational fields — and in particular, in situations where you have two slowly moving, weakly gravitating objects, the only component of the metric which is significantly different from flat space is the time-time component, and the only components of the Einsteinian gravitational fields which are significantly different from zero are those that are derived from the time-time component of the metric. See Weinberg's book on the weak-gravity limit. (You are perhaps not familiar with the field language, but it works just fine.) The approximation I am making is that the other components of the gravitational field are very small — an approximation whose limitations can be measured with precise techniques, but which is accurate enough that everything I said about binding, and binding energy, gives the correct result. Just as we should not waste our time worrying about the quantum mechanical corrections to the earth-moon system, we should not worry about the components of the Einsteinian gravitational fields that are so small that they do not affect the dynamics of the earth-moon system.

33. A slight correction: the force that the Earth and Moon do feel is a tidal force. Because the curvature of spacetime in which they travel is not uniform, the paths that some parts of these bodies travel are slightly different from the paths that their neighboring regions are trying to travel. This tends to pull them apart. But because they are semi-rigid bodies, these shear forces are of course resisted by the electromagnetic forces holding them together, so the motion of the body as a whole is affected. And that is why the Earth's rotation is slowing down and the Moon's orbital velocity speeding up (causing its orbit to expand) – due to the tidal forces induced by differential curvature of the spacetime which they inhabit.

34. Because spacetime in GR is curved, there is no general definition of parallel vectors, nor of parallel transport. In most spacetimes in general relativity, there can be no global family of inertial observers. That is, spacetime in GR is Lorentz _covariant_ only locally, not globally.
Although energy at a point (or in a sufficiently local region where spacetime curvature is negligible) can be defined, in general, an observer cannot know the energy at an arbitrary distant point. And if that local energy is unbounded from below, or sufficiently negative, spacetime itself becomes unstable. So I was surprised by your use of the Earth-Moon/gravitational system to illustrate a rather semi-classical mechanics view of energy. You seem to have crossed The Line That Should Not Be Crossed – conflating quantum mechanics based on Hamiltonians and Minkowski spacetime with gravitation based on Einstein’s equations and curved spacetime. As with your goal of correcting the common misrepresentation of “particles”, shouldn’t we be careful to use the most accurate, up-to-date description that we have – currently, still Einstein’s General Relativity — while being very explicit as to its limitations? • Oh, come on. This is ridiculous. Please stop talking to me as though I’m an idiot. The most up-to-date description of gravity would treat the earth and the moon as quantum mechanical systems. What’s your argument for not doing that? Are you seriously suggesting that comparing hydrogen to the earth-moon system is so completely wrong that absolutely nothing useful can be learned by doing so? And that it would be better to leave people so confused about hydrogen (AND the earth-moon system) that they cannot understand why structure forms in the universe? If so, I advise you to run your own website, and explain things your own way. 35. Hi Matt, I think I know why you’re not answering my questions. I sincerely apologize for my (childish) behavior. • Calin — the reason I haven’t answered is that your first question was tough to answer without a long reply, and I set it aside. Then I forgot about the restatement. Let me try to get back to it. I’ve had a lot of comments (and a lot of work too) in the last few days. p.s. Now I’ve answered it. 36. 1)so the moon moves 3.8cm per year away or 3.8 metres per century which is about 12 meters extra travelling distance and 12 milli-secs per century for its orbit as the moon travel about 1000 meters/sec But the solar day becomes 1.7 ms longer every century and so a month will last about 51 ms longer /per century. if you can put the 2 figures together then a lunar month should be about about 39 ms shorter per century(51-12). for the moon always to be visible in february, it would have to lose 37 hours so that it would only be 28 days long. as there are about 3,400,000 sets of 39ms in 37 hours, it should take 340,000,000 years until you can be assured of a visible moon in february with the day being 25.6 hours and 342 days in a year. your link says 15 billion times 3.8cm =570,000 km but it only has to travel another 176,000 km to get into synchronous orbit which is about 4.5 billion years. (3)in 15 billion years the earth should take another 71 hours to rotate daily (@1.7 ms longer every century )for a total of 95 hours in a day when the moon reaches synchronous orbit with Earth . currently the moon takes 708 hours to orbit the earth. is that what happens as the moon gets into synchronous orbit, it gets a rapid increase in speed ? • If it’s not too late to follow up on this discussion of tidal locking, let me mention that it was the topic of a summer project I did as an undergraduate, more years ago than I care to admit. 
My job was just to make educational animations, but I had to learn something about the related physics as well (at the Newtonian level, of course). My employer seems to have abandoned his Web site before adding this project, so I went ahead and resurrected it here. There may well be errors, but I hope the animations and accompanying discussion and references will prove educational. In discussions like (1) and (3), we need to carefully distinguish sidereal time and synodic time. A sidereal month is the amount of time it takes for the moon to orbit once around the Earth, with respect to the background stars, currently 656 hours. A synodic (lunar) month is the amount of time between successive new moons from the perspective of an observer on the earth, currently 708 hours. Similarly, a synodic (solar) day is the time from one sunrise to the next, while a sidereal day is the amount of time it takes for the earth to rotate about its axis, with respect to the background stars. (Of course there are further caveats, refinements, and so on, which are unimportant here.) In the synchronous orbit expected in the far future, a sidereal day will last as long as a sidereal month (the number I have in that old project is about 1130 hours), the same point on the Moon will always face the Earth (this is already true today), and the same point on the Earth will always face the Moon. Whether or not an observer sees the moon in February or any other month would depend on where that observer were located on the earth: part of the planet would always see the moon, the rest would never see it. Regarding (2), as the moon moves farther away from the earth, the rate of this motion away from the earth decreases. And (re: 3) as the radius (semi-major axis) of the Moon’s orbit increases, its angular velocity also decreases, as described by Kepler’s third law. 37. Hi Matt Let’s say I superglue two strong magnets together with the south poles touching. This object has some positive interaction energy in addition to just the masses of the original magnets and the glue; so, if I take a very accurate scale, and weigh this object against a similar object but with a south pole glued to the north pole, the first object would actually be heavier? 38. David Schaich | liked your website. does it have the actual mathematical equations though ? 39. Hey Matt, A physics newbie here, trying to wrap my mind around interaction (“potential”) energy and total system energy. I think I understand the gist of what you say in this (great) article about the relationship between mass, interaction energy, and inter-system relationships. I do get that interaction energy is essentially what defines the energetic boundary of a system that keeps it from fragmenting into separate, basically isolated systems (is that the right way of saying it?). But I am still somewhat confused, as I try to explain below. This post gets a bit longer than I meant it to, but I’m not sure how to get my thoughts across any more succinctly, so my appologies for that. =P In my current reference texts, I am being introduced to the idea of attractive forces as having negative interaction energy, such that two isolated systems starting from rest and given even a small attractive force between them, over an infinite distance, will show a net kinetic-interactive energy sum of 0, even as the interaction energy decreases infinitely, and the kinetic energy increases infinitely. 
As the separation distance approaches infinity, the interaction energy approaches zero (as in the case of gravity). And by further reasoning, as the separation distance approaches zero, then the potential energy approaches negative infinity. But my mind trips over the accounting of it, I guess you could say? I am just not quite clear on why the interaction energy is negative, even as I understand the reasoning that leads to the conclusion that the interaction and kinetic energy, summed, must = 0; since both isolated systems started with only their rest energies. I find it mentally far more clear to restate the situation. Because while we are going from two isloated systems to a compound, two-object system, we are “injecting” this new factor, the separation distance, into the behaviour of the system. And wherever there is separation distance, and a force that can act over that distance, there is interaction energy, at least as far as I understand it. So the interaction energy of a system with an infinite separation distance, but some acting attractive force (however small), is in fact *infinite* by this reasoning (even as it is somehow -> 0, which is in fact an increase from infinitely more negative states…). And as separation distance approaches 0, so does interaction energy (just as when one approaches infinity, so does the other). No separation distance, no distance for any attractive force to perform internal work over. So if you have two massive particles that start at rest, *with a separation distance between them*, and assume these particles have no electric/contact forces (only gravity is present), then you basically get a gravitational oscillator, where – relative to one particle – the second particle oscillates through the first along one linear path, up to a maximum distance equalling the initial separation distance. Also, the kinetic energy = 0 at either extreme of the oscillation, since the attractive force has been working across that distance as it moves away from the other particle the entire time, while the interaction energy = 0 at the single instant where both systems occupy the same space, since particle has been accelerating that entire time (kinetic energy = total energy – rest energy), and there is now no distance between the two particles whatsoever. In no instant in this system is the interaction energy ever 0, and the sum of the kinetic and interaction energies of the system is always a constant. Indeed, as far as I can tell, the only reason the interaction energy becomes negative at all is because we define the ‘zero point’ of potential energy to be some point where the separation distance IS NOT zero. And I’m not clear as to why this is a useful assumption. Anyway, having said all that, I am unclear as to how to reconcile this “gravitational oscillator” perspective – where separation distance and interaction energy are both always non-negative, and increase with each other – and the case in which interaction energy and kinetic energy start at 0 from two isolated systems, and then the former decreases without bound as the latter increases without bound. (It is worth noting that the ‘escape energy’ of this sort of system, from the perspective I describe, would be a point at which the potential energy suddenly starts dropping to zero as the attractive forces move ‘out of range’, and the systems become effectively isolated, even as the particle we deem to be ‘moving’ relative to the other retains a nonzero kinetic energy. At least, that’s as far as I can reason it.) 
Hopefully this reasoning isn’t just a giant jumble, and any insight you could perhaps provide would be greatly appreciated. These relationships are just not quite coming together in my head in the way they have been presented to me so far. Thanks again, – Chris • Chris — your confusions are very natural and common, and your reasoning (I admit I didn’t go through every detail) seems sound. As you say, it is an option, when dealing with energy in classical freshman-level physics, to set the zero of energy wherever we like, and either perspective you outline is allowed. Experience, however, will teach you that setting the energy at zero is a more consistent thing to do. For instance, suppose, that you set the zero of energy so that comet #1 has zero interaction (i.e. “potential”) energy at its closest approach to the sun and positive energy further away. Well, now if comet #2 has an orbit that brings it closer to the sun, it will have negative potential energy. So what you’d have to do to describe a whole solar system using only positive interaction energy would be to find the comet with the closest approach to the sun, and set the zero of potential energy there. Or even more appropriately, put the zero of potential energy at the dead center of the sun (where it is finite because the sun is a spread-out sphere.) But you see: to describe this system’s energy you need to know many of its details. This is not convenient, and it is very system-dependent; add one more comet, or make the sun a little more or less dense because of its evolution over time, and you may wish you’d set the point of zero potential energy differently. In contrast, if you always set the zero of potential energy at infinite separation, this is system-independent, and always works (as long as you’re not dealing with a significant fraction of the universe, or something else that renders ordinary classical physics insufficient.) You don’t need to know anything about what’s in the system to do this. And that’s why, with experience, you’ll see this is by far the best choice. The alternatives work in specific situations, but they don’t lead to a useful and general theoretical picture. • Thanks Matt; I almost hope you didn’t go through it all, rambling as it was! Anyway, I was increasingly sure that negative interaction energy had to be a conscious, yet arbitrary choice on the part of the physics community for some reason of conceptual simplicity, but for whatever reason my text (“Matter and Interactions” for the curious, which has worked very well for me so far short of this exception) just didn’t go into the reasoning that lead to that choice in any detail, and I was having difficulty finding other good articles explaining the justification. Your explanation helps clarify that, so thank you very much. I’m not sure I completely grasp your points as to your example of the solar system, though: I do get what you are saying about the interaction energy becoming negative if, in this situation, we set the zero point at any arbitrary distance from the center of the sun, with respect to comet orbits or anything else. The same sort of approach may be applied to objects on Earth’s surface with respect to the surface of the Earth, below which they cannot move, even as the force of gravity still pulls upon them from Earth’s center of mass. So I think I get that. 
If I understand correctly, you’re saying that – assuming a positive interaction energy perspective – if we place the zero of the solar system’s interaction energy at the dead center of the Sun, this basically makes the most physical sense (and this is basically the approach I outlined above re; gravitational oscillator), since that is basically the point to which the gravitational force is always trying to pull all solar objects. You say that we need to know many details of this system; can you give me one or two examples? I can see that we need to know the Sun’s radius, so that we know the closest any object can ever get to the zero-point at the center of the sun. But I’m not clear what other factors would be critically important to bear in mind. One thought that occurs is that while this approach is simple when we assume the Sun is stationary, it becomes far more complicated if our frame of reference sees the sun moving around with respect to us, which basically means the zero-point of the solar interaction energy is wandering around too… You also say that adding one more comet, or changing the density of the Sun, can affect the way we interpret the interaction energy for objects within this system, based on this approach. I am presuming this is because such changes would influence the center of mass of the solar system, and thus the point to which objects are trying to gravitate, and so ‘moves’ our zero-point. Is this reasoning correct? Thanks again for humouring my questions; your assistance in understanding these ideas is very much appreciated. =) • No, it doesn’t change the center of mass much — that wasn’t my point. The changes in the solar system will assure that certain objects will have negative interaction energy despite your best efforts to avoid it. For example: set the energy for an object located at the center of the sun as the zero of energy. Now imagine a comet falls into the sun, making the sun’s mass larger. Well, the interaction energy for an object at the center of the sun just decreased. So in this process, the energy at the center of the sun has now gone negative. This is not very convenient. Or suppose you put the interaction energy of Mercury and the Sun to be zero. Now in comes a comet; it passes Mercury and goes closer to the sun. Now its interaction energy is negative; do you want to redefine where you put the zero just because a new comet came closer than Mercury? Best to put the zero of energy at infinity, and not be affected by these details at all. • Slobodan Nedic Thank you very much for artuculating the apparent inconsistencies in the definition of system’s energy, and relationship of interaction and potential energies. Not only that you mentioned the “Gravitation Oscillator”, on there is an article and supporting material on plausibility of founding the systems’ energy on the minimal work needed to be done for moving of its parts over *closed* paths, whereby it turns out to be non-zero – meaning the essential non-conservativeness of all natural orbital systems, contrary to the common assumption (i.e. energy AND angular momentum conservation) … 40. Hey Matt, thanks again for the clarifications. I think I’ve about got the idea now, between my readings and your responses to my questions. 
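Returning to the choice of zero point discussed a few comments above, here is a small sketch (editorial, with round Earth-Moon numbers): shifting where the interaction energy is defined to vanish changes every value by a constant, but never changes energy differences, which is all the physics cares about.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def U_zero_at_infinity(m1, m2, r):
    # conventional choice: interaction energy -> 0 as r -> infinity
    return -G * m1 * m2 / r

def U_zero_at(m1, m2, r, r0):
    # same physics, with the zero point moved to separation r0
    return U_zero_at_infinity(m1, m2, r) - U_zero_at_infinity(m1, m2, r0)

m_earth, m_moon, r = 5.97e24, 7.35e22, 3.844e8
print(U_zero_at_infinity(m_earth, m_moon, r))    # about -7.6e28 J with the usual convention
# Energy *differences* between two separations agree in either convention:
print(U_zero_at_infinity(m_earth, m_moon, 2 * r) - U_zero_at_infinity(m_earth, m_moon, r))
print(U_zero_at(m_earth, m_moon, 2 * r, r) - U_zero_at(m_earth, m_moon, r, r))
```

The convention with the zero at infinite separation is simply the one that does not need to be redefined every time the system changes, which is the point made in the replies above.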
So, I wanted to ask, for purposes of clarification; Does it make sense to define the “interaction energy” of a multiparticle system, in terms of the individual rest energies, as essentially – in attractive systems – the amount of rest energy that the two particles, when interacting, are able to convert into kinetic energy and, in some form, eject from the interacting-state system (as in the case of a proton-electron pair that ejects a photon/quanta of energy)? Or, in the case of repulsive forces, the amount of kinetic energy that a particle can, by interaction with another particle, convert into additional rest energy within the interacting-state system? If one assumes that all physical particles are always trying to enter states with a lower total rest energy (for whatever reasons I don’t yet grasp), as I understand is essentially the case from my limited experience with the basics of chemistry, then that seems to make sense. That said, I’m wondering if I am connecting dots that aren’t there. Is this an accurate conceptual interpretation of interaction energy, or am I on the wrong track? • I don’t think I’m understanding the way you’re thinking. What do you mean by “individual rest energies”, or by “convert into kinetic energy”? In an atom, the rest energies of the particles are just E_rest = mc^2 for each elementary particle. The interaction energy of a system of elementary particles would be the Mc^2 for the whole multi-particle system, minus the sum of the mc^2 for each elementary particle, minus the kinetic energies of each particle. That is the simplest way to say it. I don’t know what it could mean to say “all physical particles are always trying to enter states with a lower total rest energy”. All electrons have the same rest energy: E_rest = m_electron c^2. And physical particles in a multi-particle system don’t do things independently; the system does things. Within a system, energy is conserved; energy can only be lowered if some energy leaves the system. For example, an atom can fall from an excited state to a less excited state, lowering its total energy and making its interaction energy more negative (though also the electron’s kinetic energy more positive, but not by as much) — but this can only happen if the atom emits a photon, which leaves the atom. So I don’t know where you’re going with this line of thinking, or what you mean. What are the basics of chemistry that you are trying to rely upon? • Slobodan Nedic This post: was primarily intended to Chris Rowlands … 41. Upon binding, say a proton and electron, the loss of potential energy has to go *somewhere*, right? I’m pretty sure a 13.6 eV photon is emitted. The equation m_atom < m_proton + m_electron would be more clearly written, with an addition, m_atom + m_photon = m_proton + m_electron (You know what I mean when I say m_photon… of course the equation would be more correctly written in terms of energies to avoid any confusion about the photon having a rest mass.) 42. Pingback: Courses, Forces, and (w)Einstein | Of Particular Significance 43. I just wanted to say thank you for the clear and informative articles on your site and for taking the time to produce them and answer people’s questions. I’m sure I speak for many others when I say that your work is really appreciated. Long may it continue! 44. Pingback: Page not found | Of Particular Significance 45. Pingback: A Short Break | Of Particular Significance 46. I get pleasure from, cause I found exactly what I used to bee taking a look for. Have a great day. Bye 47. 
Pingback: Quora

48. dr md zakir Hussain
Loved the article… a very beautiful post.

49. Pingback: What Interaction Holds Two Atoms Together | Jemiya1

50. In 2014, Helen Quinn (from SLAC) recommended using the expression “interaction energy” instead of “potential energy” ( ). This expression does not seem to be used commonly — and on this webpage it seems to be a “self-invented” term. Is it possible to say a little about the “origin” or “tradition of use” of this expression “interaction energy”?

51. Pingback: Moving Through Time or Space, Where Does Your Energy Go? « Jim Ritchie-Dunham

52. Hi, Professor Strassler, could you tell me what a non-ripple disturbance in a field is (as said at the beginning of your post)? Kind regards

53. Could you say what a non-ripple disturbance in a field is (said at the beginning of your post)?

54. Pingback: Moving Through Time or Space, Where Does Your Energy Go? – ISC

55. While reading this article on the search for Dark Matter particles, I am reminded of this article on binding energy. Why aren't we accounting for gravitational binding energy as the mass that makes up Dark Matter?
Model for High-Temperature Superconductivity

#1 morozov » Solving a Macroscopic Model for High-Temperature Superconductivity

ORNL researchers settle 20-year dispute in solid-state physics
Thomas Schulthess

The potential impact of a major breakthrough in superconductivity research is difficult to overstate. Currently, electrical utilities must transmit electricity in excess of thousands of volts (despite the fact that actual electrical usage is typically in volts or hundreds of volts) simply because so much energy is otherwise lost during transmission. If we were able to build large-scale power grids with superconducting materials - that is, with materials that can carry a current with zero resistance - we could generate and transmit power over long distances at much smaller voltages. This would eliminate enormous amounts of wasted energy, render entire segments of today's electrical transmission systems unnecessary, allow us to systematically exploit alternative energy sources, and dramatically reduce the cost of energy worldwide.

Of course, this ideal has not yet been achieved, primarily because no materials currently exist that can become superconductive at room temperature. Historically, scientists had to cool conventional conductors close to absolute zero to produce the phenomenon. Then, in the 1980s, researchers discovered a new class of materials that become superconductive at much higher temperatures, kicking off a new era of exploration in the field. Although still requiring very cold temperatures, these new materials could be cooled with liquid nitrogen, which is much easier and less expensive to produce than the liquid helium required by the old materials. Today, many scientists and engineers are engaged in research to develop practical high-temperature superconductors for power transmission and many other applications. However, the phenomenon of high-temperature superconductivity is still poorly understood. While a microscopic theory explaining conventional superconductivity has existed for half a century, most scientists agree that it is not applicable to this new class of materials. The most promising model proposed to describe high-temperature superconductivity, called the two-dimensional (2-D) Hubbard model, has been unproven and controversial for many years.

Now, thanks to new techniques developed by a research team at the National Center for Computational Sciences at Oak Ridge National Laboratory (ORNL), the 2-D Hubbard model has finally been solved. By explicitly proving that the model describes high-temperature superconductivity, we have helped to settle a debate that has raged for two decades, and have opened the door to a much deeper understanding of this phenomenon. Even more significantly, the work may mark an important step toward the development of a canonical set of equations for understanding the behavior of materials at all scales.

#2 morozov » Understanding superconductivity

In normal materials, electrical resistance causes energy loss during conduction of a current. Superconductive materials conduct electricity with no resistance when cooled below a certain temperature.
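A quick illustration of the transmission-loss point made in the post above (an editorial sketch; the resistance value is invented for the example): for a fixed delivered power P, the line current is I = P/V, so the resistive loss I²R falls as 1/V², which is why grids step voltage up for long-distance transmission.

```python
P = 100e6    # power to deliver, watts
R = 0.5      # assumed total line resistance, ohms (illustrative only)
for V in (10e3, 100e3, 500e3):          # transmission voltage, volts
    I = P / V                           # line current
    loss = I**2 * R                     # power dissipated in the line
    print(f"{V/1e3:6.0f} kV  ->  {loss/1e6:8.3f} MW lost")
```

A zero-resistance line would remove that loss term entirely, which is the promise of superconducting transmission.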
Superconductivity is the result of a transition to a new phase in a material - a macroscopic quantum phenomenon - in which a macroscopic number of electrons (on the scale of 10^23) condense into a coherent quantum state. If this state is that of a current flowing through a material, the current will flow, theoretically indefinitely (provided the material is kept in the superconducting state), and the material will be able to transmit electric power with no energy loss.

Conventional superconductors were first discovered in the early 20th century, when Dutch physicist Heike Kamerlingh Onnes cooled mercury to four degrees Kelvin (-269 degrees Celsius) and observed that the material's electrical resistance dropped to zero. Research into conventional superconductors progressed for several decades until 1957, when three American physicists - John Bardeen, Leon Cooper and John Schrieffer - advanced the first strong mathematical explanation for conventional superconductivity, which came to be known as BCS Theory. (This work won the physicists the Nobel Prize in 1972.)

In the 1980s, a new class of superconductive materials was discovered, which become superconductive at much higher temperatures. These new ceramic materials - copper oxides, or cuprates, combined with various other elements - achieve superconductivity at temperatures as high as 138 degrees Kelvin, representing a major jump toward room-temperature superconductors. Physicists quickly recognized that superconductivity in these new materials, while ultimately producing the same effect as conventional superconductivity, was a very different phenomenon. As a result, BCS Theory, which had served as the standard model for describing superconductivity for decades, was simply not adequate to describe it.

#3 morozov » Mystery of high-temperature superconductors

Within a few years of the discovery of high-temperature superconductors, a number of physicists suggested several new mechanisms to describe this phenomenon based on the 2-D Hubbard model. This model is derived from chemical structures, and purports to describe superconductivity with a few microscopically derivable parameters:
- the probability that carriers (electrons or holes) hop from one site to another on a lattice of atoms,
- an energy penalty for two carriers to occupy the same site at once (one can easily imagine that two electrons repel each other), and
- the concentration of carriers.

However, there was disagreement within the scientific community about whether the model encompassed a superconducting state in the typical parameter and temperature range of the cuprate superconductors and, as a result, whether the model was appropriate at all. (If the model did not include a high-temperature superconducting state, then it was not an appropriate description of reality.) At the time, the 2-D Hubbard model was unsolvable due to the scale of the computation required. Since superconductivity is a macroscopic effect, any simulation would need to encompass a lattice on the scale of 10^23 sites. At the same time, since the model also must describe the behavior of individual electrons hopping from site to site and interacting with each other, the simulation also had to include calculations on the scale of a few lattice spacings at a time.
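For readers who want the model written out: the Hamiltonian usually meant by the 2-D Hubbard model (standard textbook form, supplied here for reference; the post above describes it only in words) is

    H = -t \sum_{\langle i,j \rangle, \sigma} ( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} )
        + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
        - \mu \sum_{i,\sigma} n_{i\sigma} ,

where t is the hopping amplitude between neighboring sites \langle i,j \rangle, U is the on-site energy penalty for double occupancy, and the chemical potential \mu fixes the carrier concentration; n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma} counts carriers of spin \sigma on site i. These are exactly the three ingredients listed above.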
In short, the model presented a difficult multi-scale problem, making it extremely computationally complex - if not impossible - to solve numerically. Because of these difficulties, the questions about the 2-D Hubbard model remained unresolved for more than a decade.

#4 morozov » Solving the 2-D Hubbard model

In the 1990s, one of the Center's team members, Mark Jarrell of the University of Cincinnati, finally began to break this deadlock with the development of the dynamical cluster approximation (DCA), an extension of dynamical mean field theory to systematically include non-local correlations. Mean field theory is the standard tool used in statistical physics to describe multi-scale phenomena, such as the interaction between an entire system of particles and an individual particle in the system. Dynamical mean field theory is a quantum version of this theory, allowing the study of quantum fluctuations in one atom that is embedded within many other atoms. Jarrell's innovation was a technique for embedding a cluster of atoms - on which the many-body problem is solved rigorously with quantum Monte Carlo (QMC) simulations - into a mean field, allowing for the description of the cluster's interactions with the macroscopic soup of particles in the system. Using DCA and QMC techniques, the team was finally able to simulate a correlated system encompassing the macroscopic scale, as well as the scale of a small cluster of atoms within the system.

Jarrell worked with Thomas Maier of ORNL to perform simulations using these techniques with the QMC/DCA code to solve for small clusters (four sites) on an infinite lattice, using a super-scalar computing system at ORNL. The results were promising, reproducing the phase diagram of the cuprates and demonstrating that the model does achieve a superconductive state at higher temperatures. However, the work was not conclusive because it also produced a magnetically ordered state at finite temperatures that seemed to be an artifact of the mean field approximation. (Magnetic ordering at finite temperature in the 2-D Hubbard model is forbidden by a mathematical theorem, the Mermin-Wagner theorem.) Since the magnetic ordering was a result of the very small clusters used in the simulations, it remained to be seen if the superconductivity was also an artifact of the small clusters and the mean-field approximation. However, the super-scalar computer system used at the time was simply not adequate to run QMC simulations for larger clusters.

#5 morozov » Employing a vector supercomputer

Frequently, an application cannot take advantage of a massively parallel system because its physical domain is too small to be distributed over a large number of processors. In ORNL's case, the QMC/DCA code (specifically, an individual Markov chain in the Monte Carlo simulation) does not scale well on scalar processors, and the algorithm the team was using became too computationally inefficient when expanded beyond four-site clusters. The algorithm includes a step at which the system must perform an outer product. As the vectors became larger, the cache of the scalar system was not able to keep up, causing the performance to decline drastically.
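A small sketch of why that outer-product step is hard on a cache-based scalar processor (illustrative NumPy, not the team's actual code): a rank-1 update does only about as many arithmetic operations as memory accesses, so it is limited by memory bandwidth, unlike the matrix-matrix (level-3 BLAS) work described in the next post, which reuses loaded data many times and can run near peak.

```python
import numpy as np

n = 2048
G = np.random.rand(n, n)      # stand-in for the matrix being updated
u = np.random.rand(n)
v = np.random.rand(n)

# Level-2-style step: rank-1 (outer-product) update, ~n^2 flops over ~n^2 data,
# so speed is set by how fast memory can feed the processor.
G += np.outer(u, v)

# Level-3-style step: matrix-matrix multiply, ~n^3 flops over ~n^2 data,
# which is why it typically reaches a high fraction of peak performance.
C = G @ G
```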
At that point, adding parallel processors offered no advantages. The team needed a different type of supercomputer. Given the limited scalability of the application, the researchers needed a system with very high performance and high memory bandwidth on the processors, to supply as many floating-point operations as possible on fewer processors. They felt the best approach to this problem was to employ a system with the fastest vector processors available. The team began using the Cray X1 system and later moved to the Cray X1E system at ORNL. The X1E supercomputer is a dual-processor implementation of the X1 system. With the doubling of the number of processors on a board and a higher clock speed, the X1E is able to deliver more than twice the performance in the same package as the X1. It is binary compatible with the X1 system and uses multi-streaming vector processors (MSPs) to achieve very high performance per processor. The multi-streaming architecture gives the programmer yet another level of parallelism to facilitate a higher sustained peak performance. ORNL's Cray X1E used in this study contains 1024 MSP units, each capable of 18 GFLOPS.

It was necessary to perform two major areas of computation: level-3 basic linear algebra subprograms (BLAS), which are highly computationally intensive and typically achieve a very high fraction of peak performance (80 to 90 percent); and level-2 BLAS, which are extremely memory bandwidth-limited. With the ability to fetch and store vector arguments directly into the vector registers, the Cray X1 and Cray X1E systems offered much higher memory bandwidth than typical massively parallel systems. Thanks to the system's vector processors and its high memory bandwidth, calculations could be performed with clusters of up to 30 atoms with only minimal changes to the code. This scale turned out to be enough to show that the magnetic order that had plagued the earlier work disappears monotonically as the clusters grow larger, while the superconductivity effect remains. In short, the team proved for the first time that the 2-D Hubbard model does include a high-temperature superconductive state and is an accurate representation of the phenomenon, settling a 20-year dispute in solid-state physics.

Looking ahead

While solving the 2-D Hubbard model represents an important step forward in understanding the physics of high-temperature superconductivity, we still have much work to do in this area. The next step is to demonstrate that a generalized version of this model can not only describe superconductivity in principle, but also can accurately reflect the behavior of specific materials and explain why different materials become superconductive at different temperatures. One potential outcome of gaining this level of understanding would be the ability to design and produce even higher-temperature superconductive compounds that could be used in a variety of applications, from electric grids to quantum supercomputers. However, putting aside the practical engineering possibilities, we believe this work could have even more profound ramifications by serving as a first step toward a true canonical solution for quantum problems in complex materials.
In the field of computational fluid dynamics, for example, the Navier-Stokes equations provide us with the ability to solve a broad range of engineering problems. For materials sciences, chemistry and nanoscience, the many-body Schrödinger equation plays a similar role. However, in contrast to Navier-Stokes, no canonical approach currently exists for solving the many-body Schrödinger equation. Finding one would change solid-state physics forever, and greatly expand the role of computation in scientific discovery.

Thomas Schulthess is Group Leader, Computational Material Sciences and Nanomaterials Theory Institute, ORNL. He can be reached at
Talk:Canonical commutation relation

Anyone read Dirac's book on QM? I'm missing something on this derivation. -ub3rm4th
Quoted (section 22, page 93): showing that ..., or, with the help of ..., ...

Question, more urgent! Gauge invariant?

"The non-relativistic Hamiltonian for a particle of mass m and charge q in a classical electromagnetic field is $$H = \frac{1}{2m}\left(\mathbf{p} - q\mathbf{A}\right)^2 + q\phi,$$ where A is the three-vector potential and $\phi$ is the scalar potential. This form of the Hamiltonian, as well as the Schrödinger equation, the Maxwell equations and the Lorentz force law, are invariant under the gauge transformation $$\mathbf{A} \to \mathbf{A} + \nabla\Lambda, \qquad \phi \to \phi - \frac{\partial \Lambda}{\partial t},$$ where $\Lambda$ is the gauge function."

It is not true that this H is invariant under a gauge transformation. Substituting the transformed potentials, simple algebra shows that H' is not the expression given in the article. However, the Schrödinger equation, the Maxwell equations and the Lorentz force law are invariant under the gauge transformation. Xxanthippe 04:02, 11 June 2007 (UTC)

Would you mind citing some references for this? I think the text (wikipedia article) is a little bit unclear about whether the Hamiltonian or the Schrödinger equation are gauge invariant. It would be nice if someone reviewed this section.

The Hamiltonian of semi-classical electrodynamics, given above, is, by elementary algebra (consider the potential term), not gauge invariant. The field equation that it is associated with, the Schrödinger equation, is gauge invariant because of the time derivative of the wavefunction that occurs in it. On the other hand, the Hamiltonian and the field equations of quantum electrodynamics are all gauge invariant. Puzzled? The reason is that semi-classical electrodynamics is an incomplete theory, because it does not take account of the internal dynamics of the (electromagnetic) field: it describes the field by externally specified potentials. Quantum electrodynamics is a complete theory in this respect, but it suffers from disadvantages like giving infinite results. However, it can be shown that the probability amplitudes for transitions of the matter field in semi-classical electrodynamics are gauge invariant. So semi-classical electrodynamics is a pretty good theory for most of chemistry and much of physics, so much so that some people don't realise its shortcomings. Xxanthippe (talk) 00:34, 20 June 2010 (UTC).

Proof of Canonical Commutation Relations?

The "uncertainty principle" wikipedia page explains how to derive the uncertainty principle from these commutation relations, but how does one physically prove the canonical commutation relations? i.e. What experiments or logic led to the proposition [x, p_x] = ih/(2*pi) being accepted as true? I ask this because I think the answer would make a great addition to this article.

Good point. It's too bad that it has received no response. For starters, there are two things involved in your question. One is a question of math (and not exactly of logic). I think you will find your answer in Thomas F. Jordan's Quantum Mechanics in Simple Matrix Form. The situation, as well as I can understand it, is that the relationship discussed here came out of the math. If I figure it out I'll try to remember to come back and supply here what I hoped would be provided in the article. The other issue is whether what is mathematically true happens to describe reality. For a long time, Euclidean geometry was taken as a description, indeed the only description, of space.
Euclidean geometry can be made axiomatically valid, but so can other geometries. The question then becomes whether, e.g., at huge distances the sum of the angles of a triangle is indeed equal to 180 degrees. If space itself is curved, then that shows experimentally that some other geometry must be the correct one. There is lots of discussion about how experimental verifications would have to be made in the case of pq not being equal to qp, but I don't know whether the experiments have been pursued. There is another answer, I suspect, in that the lines of a bright-line spectrum are not perfect geometrical lines, as they would be if your laser pointer produced light at exactly 660 nanometers, or if the bright lines of the hydrogen spectrum were made entirely of photons of exactly one wavelength. There are differences in fuzziness depending on the frequencies involved, too. It's all a little fuzzy in my memory, but if and when I run onto that discussion again I'll try to remember to post it here. P0M (talk) 01:54, 22 June 2009 (UTC)

See Jordan, p. 130-132. I think this is where the proof appears. (I wish his English was a little clearer.) P0M (talk) 06:05, 24 June 2009 (UTC)

See also Aitchison, MacManus, and Snyder, "Understanding Heisenberg's 'magical' paper of July 1925," Appendix A. Mehra's three-volume book on the history of quantum mechanics refers over and over again to the enthusiasm and stamina with which people in Heisenberg's circle had been spectrographically investigating the hydrogen spectrum, and that over the course of several years. So it seems likely to me that the data was already there. Planck's constant is a very small number, so they would have had to prepare the experiments very carefully and make many measurements. I have one book, which I can't find at the moment, that has some details (or maybe speculation) on how experiments need to be done in this regard. P0M (talk) 05:20, 25 June 2009 (UTC)

"It is relatively easy to see that two operators satisfying the canonical commutation relations cannot both be bounded." - Can anyone provide a reference for this claim? —Preceding unsigned comment added by (talk) 00:46, 5 March 2009 (UTC)

Self-evident: Trace. Cuzkatzimhut (talk) 01:15, 4 March 2013 (UTC)

Would you like to expand for the benefit of those with less intelligence than yourself? Xxanthippe (talk) 01:22, 4 March 2013 (UTC).

But under no circumstances in the article: less is more. As the aside invitation to take the trace of the commutation relation suggests, the right-hand side is of order N for N-dimensional vector spaces and infinite for the Hilbert space in question. The left-hand side, by the cyclicity of the trace, is 0 for finite-dimensional spaces, and ambiguous for Hilbert space; that is, as N grows infinite, the zero and the emergent delta functions conspire to yield an infinite expression equal to the right-hand side. To sum up, for finite N the commutation relation cannot hold, as seen by thus tracing, but it tends to the one given in the article provided either x or p is unbounded, in our case both. My favorite reference is Template:Cite doi where that limit emerges magnificently, but it would be overkill to foist it on the reader right here. The reader is steered to the Stone-von Neumann theorem, and might go to standard QM texts if interested. I think just the plain statement and the exhortation to take the trace suffice to let the reader in the know recall the fact, and the one who is not to shrug it off....
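A compact way to spell out the trace argument sketched above (treating the relation as if it held for N-by-N matrices): $$\operatorname{Tr}[\hat x,\hat p]=\operatorname{Tr}(\hat x\hat p)-\operatorname{Tr}(\hat p\hat x)=0$$ by cyclicity of the trace, while $$\operatorname{Tr}\left(i\hbar\,\mathbb{1}_N\right)=i\hbar N\neq 0,$$ so the canonical commutation relation has no finite-dimensional realization; in an infinite-dimensional Hilbert space it can only hold if at least one of the two operators is unbounded, which is the statement under discussion (see the Stone-von Neumann pointer above for the precise version).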
One could always cite the canonical derivation of Ch. VIII of Reed and Simon, ISBN 0125850506, based on this commutator, but I fear this is overkill for an elementary introduction; maybe the cite could belong in the wiki on boundedness. Cuzkatzimhut (talk) 12:59, 4 March 2013 (UTC)

Poor writing

Para 2 starts out: "In contrast..." In contrast to what? How is the reader supposed to guess? P0M (talk) 01:40, 22 June 2009 (UTC)

? I can't guess what your question asks. "By contrast" to the two conjugate variables of the preceding paragraph, of course, which fail to commute. What else? Where is the lack of clarity hiding? Cuzkatzimhut (talk) 00:56, 3 December 2012 (UTC)
So I followed this lecture: which starts off with the statement: If you have a Schrödinger equation for an energy eigenstate you have $$-\frac{\hbar}{2m}\frac{d^2}{dx^2}\psi(x) + V(x)\psi(x) = E \psi(x)\tag{1}$$

Question 1: What does it mean to have an energy eigenstate in this context? All eigenstates I ever cared about were the eigenstates and eigenfunctions of Hamiltonians.

Question 2: Is equation (1) a general statement or specific to some conditions? Usually I assumed that the Schrödinger equation is used for time evolution, but this doesn't seem to be the case here.

This is the time-independent Schrödinger equation. It is basically an eigenvalue problem $$\hat{H}\psi=E\psi $$ where $$\hat{H}=-\frac{\hbar^2}{2m}\nabla^2+V(x)$$ is the Hamiltonian of the system. Since you yourself mentioned eigenstates of the Hamiltonian, I'm going to guess you already know why the Hamiltonian has this form. The solutions of this equation are the eigenstates of the Hamiltonian operator, a set of eigenvectors and eigenvalues. Probably, the Schrödinger equation you have in mind is the time-dependent Schrödinger equation $$i\hbar\frac{\partial}{\partial t}|\psi\rangle=\hat{H}|\psi\rangle $$ Why do we need two separate equations? Well, the true equation of motion of the state is the time-dependent version; nevertheless, considering the appearance of the Hamiltonian, it's useful to solve the time-independent one and get the eigenstates. Why? Because once you have some states $|n\rangle$ such that $\hat{H}|n\rangle=E_n|n\rangle$ you can verify that $$ |\psi(t)\rangle=\sum_n a_n\exp \left(-i\frac{E_n t}{\hbar} \right)|n\rangle$$ is a solution to the time-dependent version for some coefficients $a_n$.

• $\begingroup$ First: Thank you very much, that was very helpful! Second: You said that the solutions of the time-independent version are 'energy eigenstates of the Hamiltonian operator, a set of eigenvectors and eigenvalues.' I always assumed that the vectors are the states? At least that's the case in the bracket notation. Am I missing something here? What is the difference between an eigenstate and an eigenvector? $\endgroup$ – CatoMaths Jan 11 '19 at 23:15

• $\begingroup$ @CatoMaths no difference - sorry for using both terms, it can be confusing. "Eigenstate" is just a name we give to an eigenvector of a physical observable. It's just a name we give out of physical interpretation. The solution of the time-independent equation is a set of eigenvectors $|n\rangle$ and eigenvalues $E_n$. The eigenvectors are usually called eigenstates, and the eigenvalues are interpreted as the energies of the corresponding eigenstates. $\endgroup$ – user2723984 Jan 11 '19 at 23:21

• $\begingroup$ Also I should probably not have written "energy eigenstates of the Hamiltonian"; usually one says "energy eigenstates" to refer to the eigenstates of the Hamiltonian, but writing both is redundant and confusing, I edited it. $\endgroup$ – user2723984 Jan 11 '19 at 23:24

• $\begingroup$ Perfect, finally I am starting to understand! Thank you so much. If you don't mind, I have a minor question on this topic. We said that the solution of this equation is the eigenvector and corresponding eigenvalue. Let's call them $\mid \psi_n \rangle$ and $E_n$. I am having a hard time making the connection between the normal notation $\psi_n$ and the bracket notation $\mid \psi_n \rangle$. This may sound stupid, but I am so used to always being in the bracket notation that seeing $\psi_n$ now as a non-vector is somewhat confusing to me.
$\endgroup$ – CatoMaths Jan 11 '19 at 23:37

• $\begingroup$ In bra-ket notation, vectors are denoted by $|\psi\rangle$. If you have an eigenbasis $|x\rangle$ of some observable, for example position, you can decompose $|\psi\rangle$ as $\int dx \langle x|\psi\rangle |x\rangle$, and we denote $\psi(x):=\langle x|\psi\rangle$; this is the wave function. It's not a vector in a Hilbert space, it's a function (or more generally a tempered distribution). Often physicists interchange the two in somewhat confusing ways, see this question of mine from when I was confused too :) $\endgroup$ – user2723984 Jan 12 '19 at 8:31

You have stated that the only eigenstates you care about are those of the Hamiltonian. That equation IS the Hamiltonian for a non-relativistic particle. The differential operator on the left-hand side is H, the Hamiltonian operator; the E on the right-hand side is the energy, a scalar number. Solving this equation for the allowed wavefunctions and energies provides you with a complete set of eigenfunctions and eigenvalues {psi_n, E_n}. By the way, I think you are missing a square on your h_bar. As for the time-dependent equation, its solutions can be built up from the eigenfunctions since they span the Hilbert space. This is a common approach to solving time-dependent PDEs, and is used in acoustics, optics and all other wave mechanics. As for it being a special case? The only things special about the equation you have posted are (1) it is non-relativistic, (2) it is 1-dim (in 3-dim the second derivative would be replaced with the Laplacian operator), and (3) no boundary conditions are explicitly mentioned, e.g. psi(x0) = 0 or psi(infinity) = 0, etc.
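To make the eigenvalue-problem reading of equation (1) concrete, here is a minimal numerical sketch (Python/NumPy, with ħ = m = 1 and a harmonic-oscillator potential chosen purely as a test case, so the numbers are illustrative): diagonalizing a finite-difference Hamiltonian yields approximations to the eigenvalues E_n and eigenvectors ψ_n discussed above, and a time-dependent solution can then be assembled as a sum of a_n ψ_n exp(-i E_n t), exactly as described in the first answer.

```python
import numpy as np

# Solve -1/2 psi'' + V psi = E psi on a grid (hbar = m = 1, illustrative units).
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2                      # harmonic oscillator, used here only as a test case

# central-difference kinetic term gives a tridiagonal Hamiltonian matrix
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

E, psi = np.linalg.eigh(H)          # eigenvalues E_n and eigenvectors psi_n (columns)
print(E[:4])                        # expect roughly 0.5, 1.5, 2.5, 3.5 for this potential
```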
I'm a software developer interested in learning quantum mechanics to simulate chemistry. I know it's a very difficult topic, so I consider it a long-term "someday/maybe" goal, and I'm not sure it's even possible. I've listened to some video lectures in introductory QM courses like Susskind's and Brant Carlson's youtube videos, and the content so far seems far removed from computing "chemistry" things like electron orbital shapes or bond energies. Is it possible to simulate the time evolution of something "simple" like the colliding and reacting molecules in $2\mathrm{H}_2 + \mathrm{O}_2 \rightarrow 2\mathrm{H_2O}$? I mean simulate from first principles - pure quantum mechanics without any estimates like "pretend this atom is a mass on a spring", etc. If it is possible, what is a rough outline of the college courses required to go from point A to B - from intro quantum mechanics to the understanding needed to write code for that simulation? (Maybe it's less about the physics and more about tricky computational techniques of estimating solutions to equations?)

• 2 $\begingroup$ I have no definitive answer, just an observation that $H_2 O$ from the perspective of "pure QM" would probably be modeled as 13 different bodies interacting (2 protons and 2 electrons for $H_2$ and 1 nucleus and 8 electrons for Oxygen)...when you get 2 of those in there, that's 26 bodies you're looking at. Solving the Schroedinger equation for 26 (coupled) bodies simultaneously even numerically seems very difficult to me. $\endgroup$ – enumaris Oct 12 '18 at 20:47

• $\begingroup$ People do this, but usually just for approaches along some symmetry directions, I think. That is difficult enough. To do the complete quantum chemistry for all possible approaches (not just straight lines) - I doubt whether that is attempted. $\endgroup$ – Pieter Oct 12 '18 at 20:57

• 1 $\begingroup$ Even the interaction of electrons with hydrogen is difficult. $\endgroup$ – Keith McClary Oct 13 '18 at 0:43

• $\begingroup$ I found Susskind's lectures (and corresponding book amazon.com/Quantum-Mechanics-Theoretical-Leonard-Susskind/dp/…) a useful introduction to QM. A more thorough treatment of the maths is given by Schuller youtube.com/playlist?list=PLPH7f_7ZlzxQVx5jRjbfRGEzWY_upS5K6 $\endgroup$ – Tom Collinge Oct 13 '18 at 7:32

• $\begingroup$ A definitive answer is Landau vol III, but it is not a "college" textbook. And just the first few chapters won't suffice. You will need the full scattering-theory treatments in the advanced chapters to calculate things like $2\mathrm{H}_2 + \mathrm{O}_2 \rightarrow 2\mathrm{H_2O}$ from first principles. $\endgroup$ – Kostas Oct 13 '18 at 19:59

Yes, it is possible. Working with pure quantum mechanics means you will need to solve the many-body Schrödinger equation, which has no exact solution, so some approximation must be made numerically. Different approaches to solving this equation gave birth to different numerical methods, and some methods are more efficient for solving specific problems, like the one you mentioned. You may want to look for the terms: ab initio, first-principles methods, computational chemistry, Density Functional Theory. I have even seen some dedicated courses on youtube. Some popular software packages used in this field are: Gaussian, VASP, GAMESS, DMol, Quantum Espresso.

Yes, this is possible -- I used to study it in undergrad, actually.
I would say that the prerequisites are probably a few semesters of quantum mechanics -- enough to learn concepts like the Born-Oppenheimer approximation, perturbation theory, and angular momentum theory. A course specifically in atomic and molecular physics would also help. As you say, and as a comment points out, the computational requirements for exact solution of the Schrödinger equation for even something comparatively simple can be immense. There's plenty of computational effort on trying to simplify this problem, and for something as large as what you propose I doubt you'd see an "exact from first principles" treatment; approximations likely enter into it. (My numerical work in undergrad would take days to run, for reactions like F + H2.) The key word is "reactive scattering" -- the process of two molecules colliding and then a different configuration emerging. This seems like a decent review paper, if you can access it.

The other answers hit on the software that exists already and touched on how long "pure" ab initio (from first principles) calculations can take. Note: for interesting systems supercomputers are used (my group has access to one; would recommend). The most important classes to take if you want to write your own code (I'm assuming you know how to code):

Linear Algebra - 99% of all quantum chemistry is matrix math

Physical Chemistry - make sure you understand the underlying concepts

Quantum Physics/Chemistry - either will definitely get you into shouting distance of being able to code, but the focus is going to be on the exact answer that we get for hydrogen, not on coding the modern methods

If you want a course that will actually teach you how to code the modern methods, you will probably have to go to a school that has a focused center for quantum chemistry, or a physics department that is branching into chemistry. Example class titles: Advanced Quantum Chemistry, 'Beyond Hartree-Fock', or Coupled Cluster Methods.

If you have an interest in playing with a quantum chemistry package, you can download one from github: Psi4 - written in C++ and Python with a Python interface for input.
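As a taste of what that looks like in practice, here is a minimal sketch of a Psi4 calculation through its Python interface. The molecule, basis set and method below are illustrative choices rather than recommendations, and option names can vary slightly between Psi4 versions.

```python
import psi4

psi4.set_memory("500 MB")

# water molecule as a simple Z-matrix (bond length in angstroms, angle in degrees)
h2o = psi4.geometry("""
O
H 1 0.96
H 1 0.96 2 104.5
""")

psi4.set_options({"basis": "cc-pvdz"})
e_scf = psi4.energy("scf")          # Hartree-Fock ground-state energy, in hartrees
print("SCF energy:", e_scf)
```

Swapping "scf" for a correlated method string (for example "mp2" or "ccsd(t)") is how one climbs toward the "beyond Hartree-Fock" methods mentioned above, at rapidly growing computational cost.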
mathematical physics

Universal Pattern Explains Why Materials Conduct

Mathematicians have found that materials conduct electricity when electrons follow a universal mathematical pattern.

The movement of electrons inside a conductor is impossible to calculate exactly. In a wire, electrons rebound off each other in such a complicated fashion that there’s no way to follow exactly what’s happening. But over the last 50 years, mathematicians and physicists have begun to grasp that this blizzard of movement settles into elegant statistical patterns. Electron movement takes one statistical shape in a conductor and a different statistical shape in an insulator.

That, at least, has been the hunch. Over the last half-century mathematicians have been searching for mathematical models that bear it out. They’ve been trying to prove that this beautiful statistical picture really does hold absolutely. And in a paper posted online last summer, a trio of mathematicians have come the closest yet to doing so. In that work, Paul Bourgade of New York University, Horng-Tzer Yau of Harvard University, and Jun Yin of the University of California, Los Angeles, prove the existence of a mathematical signature called “universality” that certifies that a material conducts electricity.

“What they show, which I think is a breakthrough mathematically … is that first you have conduction, and second [you have] universality,” said Tom Spencer, a mathematician at the Institute for Advanced Study in Princeton, New Jersey.

The paper is the latest validation of a grand vision for quantum physics set forward in the 1960s by the famed physicist Eugene Wigner. Wigner understood that quantum interactions are too complicated to be described exactly, but he hoped the essence of those interactions would emerge in broad statistical strokes. This new work establishes that, to an extent that might have surprised even Wigner, his hope was well-founded.

Universally Strange

Even seemingly isolated and unrelated events can fall into a predictable statistical pattern. Take the act of murder, for example. The stew of circumstances and emotions that combine to lead one person to kill another is unique to each crime. And yet someone observing crime statistics in the heat of an urban summer can predict with a high degree of accuracy when the next body will fall.

There are many different types of statistical patterns that independent events can follow. The most famous statistical pattern of all is the normal distribution, which takes the shape of a bell curve and describes the statistical distribution of a wide range of uncorrelated events (like heights in a population or scores on the SAT). There’s also Zipf’s law, which describes the relative sizes of the largest numbers in a data set, and Benford’s law, which characterizes the distribution of first digits in the numbers in a data set.

In the 1950s, Wigner confronted a problem and needed the help of a new statistical pattern to solve it. More than a decade after he’d helped instigate the Manhattan Project, he wanted to model interactions between the hundreds of particles inside the uranium nucleus. The problem was too complicated to tackle directly. “A large nucleus is a complicated thing; we have no idea how to understand it from first principles,” Spencer said.
So Wigner simplified the problem: He ignored individual particle interactions, which were too hard to map, and instead focused on the average statistical behavior of the whole system, which was more tractable.

Wigner implemented this picture using a grid of numbers that specify how particles interact. This grid is known as a matrix. It’s like a technical appendix for the Schrödinger equation, which is the equation used to describe the behavior of subatomic particles. By specifying the numbers in the matrix exactly, you specify the interactions exactly. Wigner couldn’t do that, so instead he filled the matrix with random numbers. He hoped this simplification would enable him to proceed with his calculations, while still producing a useful description of the uranium nucleus at the end.

Which it did. Wigner found that he was able to extract a pattern from his “random” matrix. The pattern involved a second layer of numbers called eigenvalues, which are like the DNA of a matrix. Puzzlingly, his random matrix had correlated eigenvalues. On a number line, the eigenvalues seemed to exhibit a somewhat regular spacing — never clustered together nor spread too far apart. It was almost as if they were magnets, pushing each other toward an even spacing.

The resulting distribution is now often referred to as the Wigner-Dyson-Mehta distribution (after the three physicists who contributed to its discovery). It describes a phenomenon called universality.

To get a sense of universality, consider how tall people are. In the real world, if you started plucking people two at a time from a crowd in Times Square, there’s a reasonable chance that you’d find pairs of people with approximately the same height. But if heights in a population followed the Wigner-Dyson-Mehta distribution, you wouldn’t expect two randomly selected people to have similar heights at all. The heights would be correlated in such a way that the first person’s height was always different from the second person’s.

Universality describes many different kinds of things: the frequency and size of avalanches, the timing of buses in decentralized transit systems, and even the spacing of cells in the retina of a chicken. It pertains, in general, to complex, correlated systems.

Scientists have discovered a mysterious pattern that somehow connects a bus system in Mexico and chicken eyes to quantum physics and number theory.

Wigner’s experience modeling the uranium nucleus led him to hypothesize that random matrices should be able to describe any quantum system in which particles are correlated with one another (meaning that all particles influence the others). “Wigner’s great vision was, he believed that you can take any quantum system, and if it’s highly correlated, then its [eigenvalue distribution] will be similar to random matrices,” Yau said. (Later researchers articulated the flip side of the picture: They conjectured that when the particles in the physical system are moving in an uncorrelated way, as they do in an insulator, the eigenvalues should fall into the “Poisson” distribution, which is related to the normal distribution.)

When materials conduct, it’s precisely because their electrons are interacting in an orderly correlated way — moving together as if in lockstep, carrying the electrical current along.
So Wigner’s conjecture showed that if the eigenvalues of a quantum system exhibited universality, it would be proof that the particles within the system were interacting in a correlated way, and thus that the system was a conductor. Mathematicians and physicists began to fill in the details of his vision almost immediately, but it would take a half-century for mathematicians to start to prove facts about the statistics of conductors in real-world settings.

Broken Simplicity

When mathematicians create models of physical systems, they’d like those models to be as realistic as possible. Wigner’s model of the uranium nucleus was not a very realistic one for conductors in one sense: It included the assumption that every particle was equally likely to interact with every other particle. The model made no allowance for the fact that in a material, particles that are closer together are more likely to interact than particles that are farther apart. “Because particles in his system are all tightly confined to this small area called a nucleus, every guy interacts with every other guy, and Wigner did not take into account any spatial structure,” Spencer said.

Physical models that don’t take into account the distance between particles are called “mean field” models. They’re simpler to work with, but more tenuously connected to the physical world. “There are no geometry considerations; we make the huge simplification that all the atoms in your medium interact with everyone in the same way,” Bourgade said.

In two papers published a decade ago, mathematicians proved that eigenvalues for conducting materials follow Wigner’s universal pattern — but the proof only applied to the mean field model. That left open the more physically relevant case of proving universal eigenvalues arise in non-mean-field models, in which particles are only allowed to interact with particles right around them.

This new paper gets almost all the way there. The three authors work with models in which particles interact with more particles than just their immediate neighbors, but not with all the particles in the system. The matrices describing such interactions are called random-band matrices (with “band” referring to the zone around each particle in which interactions occur). “The band matrix has a certain structure where you only talk to your neighbors and interactions aren’t very far away,” Yau said.

The authors proved that eigenvalues in certain random-band matrices — those where the band is a certain minimum width — still follow the distribution Wigner observed in mean-field matrices. This means that even when you restrict electrons to interacting only with other particles in their neighborhood, the entire physical system still maintains the same type of average statistical behavior — the distribution of its eigenvalues — that Wigner found in his more stripped-down framework. “We proved that for the random-band-matrix model, the eigenvalues repulse each other … which means conduction,” Bourgade said.

Bourgade, Yin and Yau would like to extend this work to the full non-mean-field case, cinching the relationship between conduction and its mathematical representation. It’s a tight alignment that might have seemed improbable when Wigner first discovered universally distributed eigenvalues. Now it’s starting to feel inevitable. “I am still amazed that Wigner’s vision is correct,” Bourgade wrote in an email.
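The eigenvalue repulsion the article describes is easy to see numerically. The following sketch is the classic Wigner-style experiment (not the authors' band-matrix analysis): it diagonalizes a random symmetric matrix and compares its nearest-neighbour eigenvalue spacings with independent, Poisson-like levels; the matrix size and thresholds are arbitrary choices, and the rescaling to unit mean spacing is a deliberately crude "unfolding", adequate only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Wigner-style random symmetric matrix (a common GOE normalization)
A = rng.normal(size=(N, N))
H = (A + A.T) / np.sqrt(2 * N)
ev = np.linalg.eigvalsh(H)

# nearest-neighbour spacings from the middle of the spectrum, rescaled to mean 1
mid = ev[N // 4: 3 * N // 4]
s = np.diff(mid)
s = s / s.mean()
print("fraction of small spacings (s < 0.1), random matrix:", np.mean(s < 0.1))

# uncorrelated comparison: independent levels show no repulsion ("Poisson" statistics)
levels = np.sort(rng.uniform(size=mid.size))
sp = np.diff(levels)
sp = sp / sp.mean()
print("fraction of small spacings (s < 0.1), independent :", np.mean(sp < 0.1))
```

The random-matrix spectrum shows almost no very small spacings (the levels "repel"), while the independent levels show many; that contrast is the statistical fingerprint distinguishing conductors from insulators in Wigner's picture.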
An Experiment on a Bird in an Air Pump by Joseph Wright of Derby     Vacuum pump and Bell jar chamber • If Dirac’s idea restores the stability of the spectrum by introducing a stable vacuum where all negative energy states are occupied, the so-called Dirac sea, it also leads directly to the conclusion that a single-particle interpretation of the Dirac equation is not possible. • Luis Álvarez-Gaumé, Miguel Á. Vázquez-Mozo, An Invitation to Quantum Field Theory (2012) Ch. 1 : Why Do We Need Quantum Field Theory After All? • Aristotle believed that the world did not come into being at some time in the past; it had always existed and it would always exist, unchanged in essence for ever. He placed a high premium on symmetry and believed that the sphere was the most perfect of all shapes. Hence the universe must be spherical. ...An important feature of the spherical shape... was the fact that when a sphere rotates it does not cut into empty space where there is no matter and it leaves no empty space behind. ...A vacuum was impossible. It could no more exist than an infinite physical quantity. ...Circular motion was the most perfect and natural movement of all. • John D. Barrow, The Book of Universes: Exploring the Limits of the Cosmos (2011) • These rays, as generated in the vacuum tube, are not homogeneous, but consist of bundles of different wave-lengths, analogous to what would be differences of colour could we see them as light. Some pass easily through flesh, but are partially arrested by bone, while others pass with almost equal facility through bone and flesh. • Andrew Hamilton, "Brains that Click", Popular Mechanics 91 (3), March 1949, (pp. 162 et seq.) p. 258. • With respect to the ultimate constitution of... masses, the same two antagonistic opinions which had existed since the time of Democritus and of Aristotle were still face to face. According to the one, matter was discontinuous and consisted of minute indivisible particles or atoms, separated by a universal vacuum; according to the other, it was continuous, and the finest distinguishable, or imaginable, particles were scattered through the attenuated general substance of the plenum. A rough analogy to the latter case would be afforded by granules of ice diffused through water; to the former, such granules diffused through absolutely empty space. • The real value of the new atomic hypothesis... did not lie in the two points which Democritus and his followers would have considered essential—namely, the indivisibility of the 'atoms' and the presence of an interatomic vacuum—but in the assumption that, to the extent to which our means of analysis take us, material bodies consist of definite minute masses, each of which, so far as physical and chemical processes of division go, may be regarded as a unit—having a practically permanent individuality. ...that smallest material particle which under any given circumstances acts as a whole. • Thomas Henry Huxley, The Advance of Science in the Last Half-Century (1889) • The primitive atomic theory, which has served as the scaffolding for the edifice of modern physics and chemistry, has been quietly dismissed. I cannot discover that any contemporary physicist or chemist believes in the real indivisibility of atoms, or in an interatomic matterless vacuum. Atoms appear to be used as mere names for physico-chemical units which have not yet been subdivided, and 'molecules' for physico-chemical units which are aggregates of the former.
And these individualised particles are supposed to move in an endless ocean of a vastly more subtle matter—the ether. • In general, the rate of evaporation (m) of a substance in a high vacuum is related to the pressure (p) of the saturated vapor by the equation Red phosphorus and some other substances probably form exceptions to this rule. • Irving Langmuir, "The Constitution and Fundamental Properties of Solids and Liquids. Part I. Solids" (September 5, 1916) Journal of the American Chemical Society • All those who maintain a vacuum are more influenced by imagination than by reason. When I was a young man, I also gave in to the notion of a vacuum and atoms; but reason brought me into the right way. ...The least corpuscle is actually subdivided in infinitum, and contains a world of other creatures, which would be wanting in the universe, if that corpuscle was an atom, that is, a body of one entire piece without subdivision. In like manner, to admit a vacuum in nature, is ascribing to God a very imperfect work... space is only an order of things as time also is, and not at all an absolute being. ...Now, let us fancy a space wholly empty. God could have placed some matter in it, without derogating in any respect from all other things: therefore he hath actually placed some matter in that space: therefore, there is no space wholly empty: therefore all is full. The same argument proves that there is no corpuscle, but what is subdivided. ...there must be no vacuum at all; for the perfection of matter is to that of a vacuum, as something to nothing. And the case is the same with atoms: What reason can any one assign for confining nature in the progression of subdivision? These are fictions merely arbitrary, and unworthy of true philosophy. The reasons alleged for a vacuum, are mere sophisms. • To those who maintained the existence of a plenum as... principle, nature's abhorrence of a vacuum was a sufficient reason for imagining an all-surrounding aether, even though every other argument should be against it. ...Descartes ...made ...matter a necessary condition of extension... It is only when we remember the extensive and mischievous influence on science... that we can appreciate the horror of aethers which sober-minded men had during the 18th century, and which... descended even to... John Stuart Mill. ...Newton himself... endeavoured to account for gravitation by differences of pressure in an aether... The only aether which has survived is that which was invented by Huygens to explain the propagation of light. The evidence for... the luminiferous aither has accumulated as additional phenomena of light and other radiations have been discovered; and the properties of this medium... have been found to be... those required to explain electromagnetic phenomena. ...the interplanetary and interstellar spaces are not empty... • Dennis Overbye, Lonely Hearts of the Cosmos (1992) Ref: Edward P. Tryon, "Is the Universe a Vacuum Fluctuation?" Nature (Dec 14, 1973); Robert H. Dicke, Jim Peebles, "The Big Bang Cosmology—Enigmas and Conundrums," Nature (1979) Also see False vacuum. • When the Higgs field froze and symmetry broke, Tye and Guth knew, energy had to be released... Under normal circumstance this energy went into beefing up the masses of particles like the weak force bosons that had been massless before. If the universe supercooled, however, all this energy would remain unreleased... 
according to Einstein, it was the density of matter and energy in the universe that determined the dynamics of space-time. ...The issue of vacuum energy had been a tricky problem for physics ever since Einstein. According to quantum theory, even the ordinary "true" vacuum should be boiling with energy—infinite energy... due to the so-called vacuum fluctuations that produced the transient dense dance of virtual particles. This energy... could exert a repulsive force on the cosmos just like the infamous cosmological constant... quantum theories had reinvented it in the form of vacuum fluctuations. The orderly measured pace of the expansion of the universe suggested strongly that the cosmological constant was zero, yet quantum theory suggested it was infinite. Not even Hawking claimed to understand the cosmological constant problem... a trapdoor deep at the heart of physics. • I have endeavoured to attain this end (viz. the production of a vacuum in the cylinder) in another way. As water has the property of elasticity, when converted into steam by heat, and afterwards of being so completely recondensed by cold, that there does not remain the least appearance of this elasticity, I have thought that it would not be difficult to work machines in which, by means of a moderate heat and at a small cost, water might produce that perfect vacuum which has vainly been sought by means of gunpowder. • Denis Papin, Recueil de diverses Pièces touchant quelques nouvelles Machines (1695) p. 53 as quoted by Dionysius Lardner, The Steam Engine Explained and Illustrated (1840) pp. 45-46. • The first machine of Papin was very similar to the gunpowder-engine... of Huyghens. In place of gunpowder, a small quantity of water is placed at the bottom of the cylinder, A; a fire is built beneath it, "the bottom being made of very thin metal," and the steam formed soon raises the piston, B, to the top where a latch, E, engaging a notch in the piston rod, H, holds it up until it is desired that it shall drop. The fire being removed, the steam condenses, and a vacuum is formed below the piston, and the latch, E, being disengaged, the piston is driven down by the superincumbent atmosphere and raises the weight which has been, meantime, attached to a rope... passing from the piston rod over pulleys... The machine had a cylinder two and a half inches in diameter, and raised 60 pounds once a minute; and Papin calculated that a machine of a little more than two feet diameter of cylinder and of four feet stroke would raise 8,000 pounds four feet per minute—i.e., that it would yield about one horse-power. Thomas Savery's 'Miner's Friend' steam-driven water pump Fig.2 from Thomas Tredgold, The Steam Engine.. • In June, 1699, Captain Savery exhibited a model of his engine before the Royal Society, and the experiments he made with it succeeded to their satisfaction. ...One of the steam vessels being filled with steam, condensation was produced by projecting cold water, from a small cistern E, against the vessel; and into the partial vacuum made by that means, the water, by the pressure of the atmosphere, was forced up the descending main D, from a depth of about twenty feet... • [E]xperiments with a simple little machine, designed to mimic certain elementary features of animal behavior...
Consisting only of two vacuum tubes, two motors, a photoelectric cell and a touch contact, all enclosed in a tortoise-shaped shell, the model was a species of artificial creature which could explore its surroundings and seek out favorable conditions. It was named Machina speculatrix. • There is one topic I was not sorry to skip: the relativistic wave equation of Dirac. It seems to me that the way this is usually presented in books on quantum mechanics is profoundly misleading. Dirac thought that his equation was a relativistic generalization of the non-relativistic time-dependent Schrödinger equation that governs the probability amplitude for a point particle in an external electromagnetic field. For some time after, it was considered to be a good thing that Dirac’s approach works only for particles of spin one half, in agreement with the known spin of the electron, and that it entails negative energy states, states that when empty can be identified with the electron’s antiparticle. Today we know that there are particles like the [W bosons] that are every bit as elementary as the electron, and that have distinct antiparticles, and yet have spin one, not spin one half. The right way to combine relativity and quantum mechanics is through the quantum theory of fields, in which the Dirac wave function appears as the matrix element of a quantum field between a one-particle state and the vacuum, and not as a probability amplitude. • It is quite easy to include a weight for empty space in the equations of gravity. Einstein did so in 1917, introducing what came to be known as the cosmological constant into his equations. His motivation was to construct a static model of the universe. To achieve this, he had to introduce a negative mass density for empty space, which just canceled the average positive density due to matter. With zero total density, gravitational forces can be in static equilibrium. Hubble's subsequent discovery of the expansion of the universe, of course, made Einstein's static model universe obsolete. ...The fact is that to this day we do not understand in a deep way why the vacuum doesn't weigh, or (to say the same thing in another way) why the cosmological constant vanishes, or (to say it in yet another way) why Einstein's greatest blunder was a mistake. • The phase transition paradigm: The standard model of fundamental physics incorporates, as one of its foundational principles, the idea that “empty space” or “vacuum” can exist in different phases, typically associated with different amounts of symmetry. Moreover, the laws of the standard model itself suggest that phase transitions will occur, as functions of temperature. Extensions of the standard model to build in higher symmetry (gauge unification or especially supersymmetry) can support effective vacua with radically different properties, separated by great distance or by domain walls. That would be a form of failure of universality, in our sense, whose existence is suggested by the standard model. Pneumatica (c. 50) Hero of Alexandria, as quoted in The Pneumatics of Hero von Alexandria (1851) Tr. Bennet Woodcroft, unless otherwise noted. • Some assert that there is absolutely no vacuum; others that, while no continuous vacuum is exhibited in nature, it is to be found distributed in minute portions through air, water, fire and all other substances: and this latter opinion, which we will presently demonstrate to be true from sensible phenomena, we adopt.
• Vessels which seem to most men empty are not empty, as they suppose, but full of air. Now the air, as those who have treated of physics are agreed, is composed of particles minute and light, and for the most part invisible. If, then, we pour water into an apparently empty vessel, air will leave the vessel proportioned in quantity to the water which enters it. This may be seen from the following experiment. Let the vessel which seems to be empty be inverted, and, being carefully kept upright, pressed down into water; the water will not enter it even though it be entirely immersed: so that it is manifest that the air, being matter, and having itself filled all the space in the vessel, does not allow the water to enter. Now, if we bore the bottom of the vessel, the water will enter through the mouth, but the air will escape through the hole. Again, if, before perforating the bottom, we raise the vessel vertically, and turn it up, we shall find the inner surface of the vessel entirely free from moisture, exactly as it was before immersion. • The particles of the air are in contact with each other, yet they do not fit closely in every part, but void spaces are left between them, as in the sand on the sea shore: the grains of sand must be imagined to correspond to the particles of air, and the air between the grains of sand to the void spaces between the particles of air. Hence, when any force is applied to it, the air is compressed, and, contrary to its nature, falls into the vacant spaces from the pressure exerted on its particles: but when the force is withdrawn, the air returns again to its former position from the elasticity of its particles, as is the case with horn shavings and sponge, which, when compressed and set free again, return to the same position and exhibit the same bulk.     Cupping Vessel Ancient Egypt • Thus, if a light vessel with a narrow mouth be taken and applied to the lips, and the air be sucked out and discharged, the vessel will be suspended from the lips, the vacuum drawing the flesh towards it that the exhausted space may be filled. It is manifest from this that there was a continuous vacuum in the vessel. The same may be shown by means of the egg-shaped cups used by physicians, which are of glass, and have narrow mouths. When they wish to fill these with liquid, after sucking out the contained air, they place the finger on the vessel's mouth and invert them into the liquid; then, the finger being withdrawn, the water is drawn up into the exhausted space, though the upward motion is against its nature. Very similar is the operation of cupping-glasses, which, when applied to the body, not only do not fall though of considerable weight, but even draw the contiguous matter toward them through the apertures of the body. • They... who assert that there is absolutely no vacuum may invent many arguments on this subject, and perhaps seem to discourse most plausibly though they offer no tangible proof. If, however, it be shewn by an appeal to sensible phenomena that there is such a thing as a continuous vacuum, but artificially produced; that a vacuum exists also naturally, but scattered in minute portions ; and that by compression bodies fill up these scattered vacua, those who bring forward such plausible arguments in this matter will no longer be able to make good their ground. Commentarius in VIII Libros Physicorum Aristoteles (c. 
1230-1235) Robert Grosseteste (title translates as Commentary on Aristotle's 8 Books of Physics) • Vacuum stands and remains a mathematical space. A cube placed in a vacuum would not displace anything, as it would displace air or water in a space already containing those fluids. • In a vacuum which is imagined as infinite there cannot be local differences, both on account of its infinity, and also because of the fact that the vacuum, if it exists, would have no nature but a privation, and therefore it can have no natural differences. • The space of the real physical world must be considered full, that is a plenum, because a vacuum could have no physical existence. Pascal's Life, Writings, and Discoveries (1844) The North British Review Vol. 1 (August, 1844) p. 285, Art. I.—Lettres écrites à un Provincial par Blaise Pascal, précédées d'un Eloge de Pascal, par M. Bordas Demoulin, Discours qui a remporté le Prix décerné par l'Académie Française, le 30 Juin 1842, et suivies d'un Essai sur les Provinciales et le style de Pascal. Par Francois de Neufchateau. Paris, 1843. See also "Life, Genius, and Scientific Discoveries of Pascal", The Provincial Letters of Blaise Pascal (1866) ed. O. W. Wight, pp. 15-63. Hand suction pump • When the engineers of Cosmo de Medicis wished to raise water higher than thirty-two feet by means of a sucking-pump, they found it impossible to take it higher than thirty-one feet. Galileo, the Italian sage, was applied to in vain for a solution of the difficulty. It had been the belief of all ages that the water followed the piston, from the horror which nature had of a vacuum, and Galileo improved the dogma by telling the engineers that this horror was not felt, or at least not shown, beyond heights of thirty one feet! At his desire, however, his disciple Toricelli investigated the subject. He found, that when the fluid raised was mercury, the horror of a vacuum did not extend beyond 30 inches, because the mercury would not rise to a greater height; and hence he concluded that a column of water 31 feet high, and one of mercury 30 inches, exerted the same pressure upon the same base, and that the antagonist force which counterbalanced them must in both cases be the same; and having learned from Galileo that the air was a heavy fluid, he concluded, and he published the conclusion in 1645, that the weight of the air was the cause of the rise of water to 31 feet and of mercury to 30 inches. Pascal repeated these experiments in 1646, at Rouen before more than 500 persons, among whom were five or six Jesuits of the College, and he obtained precisely the same results as Toricelli. The explanation of them, however, given by the Italian philosopher, and with which he was unacquainted, did not occur to him; and though he made many new experiments on a large scale with tubes of glass 50 feet long, they did not conduct him to any very satisfactory results. He concluded that the vacuum above the water and the mercury contained no portion of either of these fluids, or any other matter appreciable by the senses; that all bodies have a repugnance to separate from a state of continuity, and admit a vacuum between them; that this repugnance is not greater for a large vacuum than a small one; that its measure is a column of water 31 feet high, and that beyond this limit, a great or a small vacuum is formed above the water with the same facility, provided no foreign obstacle prevents it.
These experiments and results were published by our author in 1647, under the title of Nouvelles Experiences touchant le Vuide; but no sooner had they appeared, than they experienced, from the Jesuits, and the followers of Aristotle, the most violent opposition.     Toricellian tube • To these objections Pascal replied in two letters, addressed to [Stephen] Noel; but though he had no difficulty in overturning the contemptible reasoning of his antagonist, he found it necessary to appeal to new and more direct experiments. The explanation of Toricelli had been communicated to him a short time after the publication of his work; and assuming that the mercury in the Toricellian tube was suspended by the weight or pressure of the air, he drew the conclusion that the mercury would stand at different heights in the tube, if the column of air was more or less high. These differences, however, were too small to be observed under ordinary circumstances; and he therefore conceived the idea of observing the mercury at Clermont, a town in Auvergne... and on the top of the Puy de Dome, a mountain 500 toises above Clermont The state of his own health did not permit him to undertake a journey... but in a letter dated the 15th November 1647, he requested his brother-in-law, M. Perier, to go... M. Perier began the experiment by pouring into a vessel sixteen pounds of quicksilver which he had rectified... He then took two [straight] glass tubes, four feet long, of the same bore, and hermetically sealed at one end, and open at the other; and making the ordinary experiment of a vacuum with both, he found that the mercury stood in each of them at the same level... This experiment was repeated twice with the same result. One of these... was left under the care of M. Chastin... who undertook to observe and mark any changes... and the party... set out, with the other tube, for the summit of the Puy de Dome... Upon arriving there, they found that the mercury stood at the height of 23 inches, and 2 lines—no less than 3 inches and 1½ lines lower... The party was "struck with admiration and astonishment at this result;" and "so great was their surprise, that they resolved to repeat the experiment under various forms." During their descent of the mountain, they repeated the experiment at Lafond de l'Arbre, an intermediate station... and they found the mercury to stand at the height of 25 inches, a result with which the party was greatly pleased, as indicating the relation between the height of the mercury and the height of the station. Upon reaching the Minimes, they found that the mercury had not changed its height... • Pascal's Treatise [De la Pesanteur de la Masse de l'Air] on the weight of the whole mass of air forms the basis of the modern science of Pneumatics. In order to prove that the mass of air presses by its weight on all the bodies which it surrounds, and also that it is elastic and compressible, he carried a balloon half filled with air to the top of the Puy de Dome. It gradually inflated itself as it ascended, and when it reached the summit it was quite full, and swollen, as if fresh air had been blown into it; or what is the same thing, it swelled in proportion as the weight of the column of air which pressed upon it was diminished. When again brought down, it became more and more flaccid, and when it reached the bottom, it resumed its original condition. 
...[H]e shews that all the phenomena and effects hitherto ascribed to the horror of a vacuum arise from the weight of the mass of air; and after explaining the variable pressure of the atmosphere in different localities, and in its different states, and the rise of water in pumps, he calculates that the whole mass of air round our globe weighs 8,983,889,440,000,000,000 French pounds. A Short Story of Thomas Newcomen (1904) by Dwight Goddard. Newcomen's atmospheric steam engine. • Newcomen's invention was radically different from that of Savery or any other single person. Papin invented the cylinder and piston as a means for transforming energy into motion. At first he used the explosive force of gunpowder, and later the use of the expansive force of steam, to raise the piston, and then by removing the fire to cause it to fall again. He made no further use of this principle. Savery discovered that the sudden condensation of steam made a vacuum that he utilized to draw up water. His pumps were actually used to drain mines, but were never satisfactory. They had to be placed within the mine to be drained, not over forty feet from the bottom, and then could be used to force up water an additional height of perhaps 100 feet. Beyond this the process must be repeated. ... Newcomen used Papin's cylinder and piston, and Savery's principle of the condensation of steam to produce a vacuum. But unlike Papin he used the expansive force of steam to do his work, and unlike Savery he used a cylinder and piston actuated by alternate expansion and condensation of steam to transform heat into mechanical motion. • At first [Newcomen] made a double cylinder, using the space between for condensing water. This was not very satisfactory. The vacuum was secured very slowly and imperfectly. ...One day the engine made two or three motions quickly and powerfully. Newcomen immediately examined the cylinder and found a small hole, through which a small jet from the water that was on top of the piston to make it steam tight, was spurting into the cylinder. He... dispensed with the outer water jacket and injected the water for condensation, through a small pipe in the bottom of the cylinder. It... increased the speed of the engine from eight to fifteen strokes a minute, besides getting the advantage of a good vacuum. The Book of Nothing (2009) by John D. Barrow • The spooky ether was persistent. It took an Einstein to remove it from the Universe. ...Gradually, over the last twenty years, the vacuum has turned out to be more unusual, more fluid, less empty, and less intangible than even Einstein could have imagined. Its presence is felt on the very smallest and largest dimensions over which the forces of Nature act. • Preface • The physicist's concept of nothing—the vacuum... began as empty space—the void... turned into a stagnant ether through which all the motions of the Universe swam, vanished in Einstein's hands, then re-emerged in the twentieth-century quantum picture of how Nature works. • Chapter nought "Nothingology—Flying to Nowhere" • The quantum revolution showed us why the old picture of a vacuum as an empty box was untenable. ...Gradually, this exotic new picture of quantum nothingness succumbed to experimental exploration... in the form of vacuum tubes, light bulbs and X-rays. Now the 'empty' space itself started to be probed. ...There was always something left: a vacuum energy that permeated every fibre of the Universe.
• Chapter nought "Nothingology—Flying to Nowhere"

• Einstein showed us that the Universe might contain a mysterious form of vacuum energy. ...Last year, two teams of astronomers used Earth's most powerful telescopes... to gather persuasive evidence for the reality of the cosmic vacuum energy. Its effects are dramatic. It is accelerating the expansion of the Universe.
• Chapter nought "Nothingology—Flying to Nowhere"
Online Publications Physics
by Arnold Neumaier

Below are abstracts and downloadable preprints of my published (and some unpublished) papers in physics and chemistry. (online versions of my mathematical publications - complete list of my publications) For manuscripts with an e-print number, you can also get the latex source (of some version of the paper) from an e-print archive. For the published version, see the references given. I do not send out paper copies of my manuscripts. If you have problems downloading, decoding, or printing a file, look at downloading/printing problems?

A. Neumaier, A theoretical physics FAQ (begun 2004, continually growing)
Contains over 240 sections with explanations answering questions from theoretical physics, collected from my answers to postings to various physics discussion groups. Most topics are related to quantum mechanics, quantum field theory, renormalization, the measurement problem, randomness, and philosophical issues in physics.

A. Neumaier, Classical models for quantum light, Slides of a lecture given on April 7, 2016 at the University of Linz. pdf file (350K)
In this lecture, a timeline is traced from Huygens' wave optics to the modern concept of light according to quantum electrodynamics. The lecture highlights the closeness of classical concepts and quantum concepts to a surprising extent. For example, it is shown that the modern quantum concept of a qubit was already known in 1852 in fully classical terms.

A. Neumaier, Classical models for quantum light II, Slides of a lecture given on April 8, 2016 at the University of Linz. pdf file (425K)
In this lecture the results of the historical review given in my lecture ''Classical models for quantum light'' are utilized to reassess the meaning of observables and stochastic processes for the classical and quantum description of light. In particular we discuss the description of partially coherent, fluctuating light through classical stochastic Maxwell equations (with uncertainty in the initial conditions only), and look at a generalization that works for all quantum aspects of arbitrary quantum systems.

A. Neumaier, Misconceptions about Virtual Particles, PhysicsForums Insights (April 2016). Insight Article
This article goes through a number of wikipedia pages and comments on their misleading statements about virtual particles and Feynman diagrams. In addition, the article discusses some textbooks on quantum field theory and the extent to which they contain problematic formulations about virtual particles.

A. Neumaier and U.K. Deiters, The characteristic curves of water, Int. J. Thermophysics, published online July 23, 2016. DOI: 10.1007/s10765-016-2098-1 pdf file (365K)
In 1960, E.H. Brown defined a set of characteristic curves (also known as ideal curves) of pure fluids, along which some thermodynamic properties match those of an ideal gas. These curves are used for testing the extrapolation behaviour of equations of state. This work is revisited, and an elegant representation of the first-order characteristic curves as level curves of a master function is proposed. It is shown that Brown's postulate - that these curves are unique and dome-shaped in a double-logarithmic p,T representation - may fail for fluids exhibiting a density anomaly. A careful study of the Amagat curve (Joule inversion curve) generated from the IAPWS-95 reference equation of state for water reveals the existence of an additional branch.

A. Neumaier, The Physics of Virtual Particles, PhysicsForums Insights (March 2016).
Insight Article In discussions on the internet (including a number of wikipedia pages) and in books and articles for non-experts in particle physics, there is considerable confusion about various notions around the concept of particles of subatomic size, and in particular about the notion of a virtual particle. This is partly due to misunderstandings in the terminology used, and partly due to the fact that subatomic particles manifest themselves only indirectly, thus leaving a lot of leeway for the imagination to equip these invisible particles with properties, some of which sound very magical. The aim of this Insight article is a definition of physical terms essential for an informed discussion of which of these properties have a solid basis in physics, and which of these are gross misconceptions or exaggerations that shouldn't be taken seriously. U.K. Deiters and A. Neumaier, Computer simulation of the characteristic curves of pure fluids, J. Chem. Eng. Data (2016), DOI: 10.1021/acs.jced.6b00133. pdf file (313K) Brown's characteristic curves (also known as ideal curves) describe states at which one thermodynamic property of a real pure fluid matches that of an ideal gas; these curves can be used for testing the extrapolation behaviour of equations of state. In this work, some characteristic curves are computed directly - without recourse to an equation of state - for some pair potentials by Monte Carlo computer simulation. The potentials used are an ab-initio potential for argon, the 1-center Lennard-Jones potential, and a softer pair potential whose short-range part is in accordance with quantum mechanical predictions. The influence of the short-distance repulsion on the characteristic curves is found to be significant even in the 10-100 MPa pressure range. A. Neumaier, Causal Perturbation Theory, PhysicsForums Insights (June 2015). Insight Article Relativistic quantum field theory is notorious for the occurrence of divergent expressions that must be renormalized by recipes that on first sight sound very arbitrary and counterintuitive. This Insight article shows that it doesn't have to be this way! A. Neumaier, Analytic representation of critical equations of state, J. Statist. Phys. 155 (2014), 603-624. arXiv:1401.0291 pdf file (460K) We propose a new form for equations of state (EOS) of thermodynamic systems in the Ising universality class. The new EOS guarantees the correct universality and scaling behavior close to critical points and is formulated in terms of the scaling fields only -- unlike the traditional Schofield representation, which uses a parametric form. Close to a critical point, the new EOS expresses the square of the strong scaling field as an explicit function of the thermal scaling field and the dependent scaling field. A numerical expression is derived, valid close to critical points. As a consequence of the construction it is shown that the dependent scaling field can be written as an explicit function of the relevant scaling fields without causing strongly singular behavior of the thermodynamic potential in the one-phase region. Augmented by additional scaling correction fields, the new EOS also describes the state space further away from critical points. It is indicated how to use the new EOS to model multiphase fluid mixtures, in particular for vapor-liquid-liquid equilibrium (VLLE) where the traditional revised scaling approach fails. A. Neumaier, Phenomenological thermodynamics in a nutshell. Manuscript (2014). 
arXiv:1404.5273 phenTherm.pdf (207K) This paper gives a concise, mathematically rigorous description of phenomenological equilibrium thermodynamics for single-phase systems in the absence of chemical reactions and external forces. From the formulas provided, it is an easy step to go to various examples and applications discussed in standard textbooks (such as those by Callen or Reichl). A full discussion of global equilibrium would also involve the equilibrium treatment of multiple phases and chemical reactions. Since their discussion offers no new aspects compared with traditional textbook treatments, they are not treated here. The present phenomenological approach is similar to that of Callen, who introduces in his well-known thermodynamics book the basic concepts by means of a few postulates from which everything else follows. The present setting is a modified version designed to match the more fundamental approach based on statistical mechanics. By specifying the kinematical properties of states outside equilibrium, his informal thermodynamic stability arguments (which depend on a dynamical assumption close to equilibrium) can be replaced by rigorous mathematical arguments. A. Neumaier, A multi-phase, multi-component critical equation of state, Manuscript (2013). arXiv:1307.8391 pdf file (193K) Realistic equations of state valid in the whole state space of a multi-component mixture should satisfy at least three important constraints: (i) The Gibbs phase rule holds. (ii) At low densities, one can deduce a virial equation of state with the correct multicomponent structure. (iii) Close to critical points, plait points, and consolute points, the correct universality and scaling behavior is guaranteed. This paper discusses semiempirical equations of state for mixtures that express the pressure as an explicit function of temperature and the chemical potentials. In the first part, expressions are derived for the most important thermodynamic quantities. The main result of the second part is the construction of a large family of equations of state with the properties (i)--(iii). A. Neumaier and D. Westra, Classical and Quantum Mechanics via Lie algebras. Manuscript (2008, enlarged revision 2011) pdf file (3165K) The goal of this book is to present classical mechanics, quantum mechanics, and statistical mechanics in an almost completely algebraic setting, thereby introducing mathematicians, physicists, and engineers to the ideas relating classical and quantum mechanics with Lie algebras and Lie groups. The book emphasizes the closeness of classical and quantum mechanics, and the material is selected in a way to make this closeness as apparent as possible. Much of the material covered here is not part of standard textbook treatments of classical or quantum mechanics (or is only superficially treated there). For physics students who want to get a broader view of the subject, this book may therefore serve as a useful complement to standard treatments of quantum mechanics. Almost without exception, this book is about precise concepts and exact results in classical mechanics, quantum mechanics, and statistical mechanics. The structural properties of mechanics are discussed independent of computational techniques for obtaining quantitatively correct numbers from the assumptions made. 
The standard approximation machinery for calculating from first principles explicit thermodynamic properties of materials, or explicit cross sections for high energy experiments can be found in many textbooks and is not repeated here. A. Neumaier, Renormalization without infinities - an elementary tutorial, Manuscript (2011). pdf file (362K) Renormalization is an indispensable tool for modern theoretical physics. At the same time, it is one of the least appealing techniques, especially in cases where naive formulations result in divergences that must be cured - a step that is often done in a mathematically dubious way. In this paper, it is shown how the renormalization procedure works both in singular cases where it removes naive divergences and in regular cases where a naive approach is possible but renormalization improves the quality of perturbation theory. In fact, one can see immediately that the singular situation is simply a limiting case of the regular situation. After discussing generalities, the paper introduces a large family of toy examples, defined by special perturbations of an arbitrary Hamiltonian with a discrete spectrum. The examples show explicitly many of the renormalization effects arising in realistic quantum field theories such as quantum chromodynamics: logarithmic divergences, running couplings, asymptotic freedom, dimensional transmutation, the renormalization group, and renormalization scheme dependent results at any order of perturbation theory. Unlike in more realistic theories, everything is derived rigorously and nonperturbatively in terms of simple explicit formulas. Thus one can understand in detail how the infinities arise (if they arise) - namely as an unphysical infinitely sensitive dependence on the bare coupling constants. One also sees that all spurious infinities are cured automatically by the same renormalization process that gives robust physical results in the case where no infinities occur. A. Neumaier, Optical models for quantum mechanics, Slides of a lecture given on February 16, 2010 at the Institute for Theoretical Physics, University of Giessen. pdf file (154K) This lecture (the second of three) discusses work towards a new, classical view of quantum mechanics. It is based on an analysis of polarized light, of the meaning of quantum ensembles in a field theory, of classical simulations of quantum computing algorithms, and resulting optical models for the simulation of quantum mechanics. In particular, it is shown that classical second-order stochastic optics is precisely the quantum mechanics of a single photon, with all its phenomenological bells and whistles. A. Neumaier, Classical and quantum field aspects of light, Slides of a lecture given on January 29, 2009 at the Institute of Quantum Optics and Quantum Information of the Austrian Academy of Sciences, Vienna. pdf file (376K) This lecture (the first of three) discusses foundational problems on the nature of light revealed by 1. attempts to define a probability concept for photons, 2. quantum models for photons on demands (and their realization through laser-induced emission by a calcium ion in a cavity), 3. models explaining the photo effect, and 4. Bell-type experiments for single photon nonlocality. A. Neumaier, A simple hidden variable experiment, arXiv:0706.0155 ps.gz file (170K), pdf file (96K) downloading/printing problems? 
An experiment is described which proves, using single photons only, that the standard hidden variables assumptions (commonly used to derive Bell inequalities) are inconsistent with quantum mechanics. The analysis is very simple and transparent. In particular, it demonstrates that a classical wave model for quantum mechanics is not ruled out by experiments demonstrating the violation of the traditional hidden variable assumptions. A. Neumaier, On the foundations of thermodynamics, arXiv:0705.3790 ps.gz file (324K), pdf file (587K) downloading/printing problems? On the basis of new, concise foundations, this paper establishes the four laws of thermodynamics, the Maxwell relations, and the stability requirements for response functions, in a form applicable to global (homogeneous), local (hydrodynamic) and microlocal (kinetic) equilibrium. The present, self-contained treatment needs very little formal machinery and stays very close to the formulas as they are applied by the practicing physicist, chemist, or engineer. From a few basic assumptions, the full structure of phenomenological thermodynamics and of classical and quantum statistical mechanics is recovered. Care has been taken to keep the foundations free of subjective aspects (which traditionally creep in through information or probability). One might describe the paper as a uniform treatment of the nondynamical part of classical and quantum statistical mechanics ``without statistics'' (i.e., suitable for the definite descriptions of single objects) and ``without mechanics'' (i.e., independent of microscopic assumptions). When enriched by the traditional examples and applications, this paper may serve as the basis for a course on thermal physics. A. Neumaier, Collapse challenge for interpretations of quantum mechanics, quant-ph/0505172 dvi.gz file (7K), ps.gz file (61K), pdf file (62K) living online version (html) downloading/printing problems? The collapse challenge for interpretations of quantum mechanics is to build from first principles and your preferred interpretation a complete, observer-free quantum model of the described experiment (involving a photon and two screens), together with a formal analysis that completely explains the experimental result. The challenge is explained in detail, and discussed in the light of the Copenhagen interpretation and the decoherence setting. A. Neumaier, Mathematik, Physik und Ewigkeit (mit einem Augenzwinkern betrachtet) Professorenforum-Journal 6 (2005), No. 3, 37--43. pdf file (116K) U. Leonhardt and A. Neumaier, Explicit effective Hamiltonians for general linear quantum-optical networks, J. Optics B: Quantum Semiclass. Opt. 6 (2004), L1-L4. quant-ph/0306123 dvi.gz file (15K), ps.gz file (60K), pdf file (114K), downloading/printing problems? Linear optical networks are devices that turn classical incident modes by a linear transformation into outgoing ones. In general, the quantum version of such transformations may mix annihilation and creation operators. We derive a simple formula for the effective Hamiltonian of a general linear quantum network, if such a Hamiltonian exists. Otherwise we show how the scattering matrix of the network is decomposed into a product of three matrices that can be generated by Hamiltonians. A. Neumaier, Quantum field theory as eigenvalue problem, gr-qc/0303037 dvi.gz file (46K), ps.gz file (139K), pdf file (281K), downloading/printing problems? 
A mathematically well-defined, manifestly covariant theory of classical and quantum field is given, based on Euclidean Poisson algebras and a generalization of the Ehrenfest equation, which implies the stationary action principle. The theory opens a constructive spectral approach to finding physical states both in relativistic quantum field theories and for flexible phenomenological few-particle approximations. In particular, we obtain a Lorentz-covariant phenomenological multiparticle quantum dynamics for electromagnetic and gravitational interaction which provides a representation of the Poincaré group without negative energy states. The dynamics reduces in the nonrelativistic limit to the traditional Hamiltonian multiparticle description with standard Newton and Coulomb forces. The key that allows us to overcome the traditional problems in canonical quantization is the fact that we use the algebra of linear operators on a space of wave functions slightly bigger than traditional Fock spaces. P. Frantsuzov, A. Neumaier and V.A. Mandelshtam, Gaussian resolutions for equilibrium density matrices, Chem. Phys. Letters 381 (2003), 117-122. quant-ph/0306124 ps.gz file (145K), pdf file (193K), downloading/printing problems? A Gaussian resolution method for the computation of equilibrium density matrices rho(T) for a general multidimensional quantum problem is presented. The variational principle applied to the ``imaginary time'' Schroedinger equation provides the equations of motion for Gaussians in a resolution of rho(T) described by their width matrix, center and scale factor, all treated as dynamical variables. The method is computationally very inexpensive, has favorable scaling with the system size and is surprisingly accurate in a wide temperature range, even for cases involving quantum tunneling. Incorporation of symmetry constraints, such as reflection or particle statistics, is also discussed. A. Neumaier, Ensembles and experiments in classical and quantum physics, Int. J. Mod. Phys. B 17 (2003), 2937-2980. quant-ph/0303047 dvi.gz file (71K), ps.gz file (169K), pdf file (333K) downloading/printing problems? A philosophically consistent axiomatic approach to classical and quantum mechanics is given. The approach realizes a strong formal implementation of Bohr's correspondence principle. In all instances, classical and quantum concepts are fully parallel: the same general theory has a classical realization and a quantum realization. Extending the `probability via expectation' approach of Whittle to noncommuting quantities, this paper defines quantities, ensembles, and experiments as mathematical concepts and shows how to model complementarity, uncertainty, probability, nonlocality and dynamics in these terms. The approach carries no connotations of unlimited repeatability; hence it can be applied to unique systems such as the universe. Consistent experiments provide an elegant solution to the reality problem, confirming the insistence of the orthodox Copenhagen interpretation on that there is nothing but ensembles, while avoiding its elusive reality picture. The weak law of large numbers explains the emergence of classical properties for macroscopic systems. A. Neumaier, Effective Schrödinger equations for nonlocal and/or dissipative systems, hep-th/0201085 dvi.gz file (35K), ps.gz file (108K), pdf file (237K), downloading/printing problems? 
The projection formalism for calculating effective Hamiltonians and resonances is generalized to the nonlocal and/or nonhermitian case, so that it is applicable to the reduction of relativistic systems (Bethe-Salpeter equations), and to dissipative systems modeled by an optical potential. It is also shown how to recover all solutions of the time-independent Schrödinger equation in terms of solutions of the effective Schrödinger equation in the reduced state space and a Schrödinger equation in a reference state space. For practical calculations, it is important that the resulting formulas can be used without computing any projection operators. This leads to a modified coupled reaction channel/resonating group method framework for the calculation of multichannel scattering information. V.A. Mandelshtam and A. Neumaier, Further generalization and numerical implementation of pseudo-time Schrödinger equations for quantum scattering calculations, J. Theor. Comput. Chemistry 1 (2002), 1-15. physics/0204049 ps.gz file (128K), pdf file (330K), downloading/printing problems? We review and further develop the recently introduced numerical approach for scattering calculations based on a so called pseudo-time Schrödinger equation, which is in turn a modification of the damped Chebyshev polynomial expansion scheme. The method utilizes a special energy-dependent form for the absorbing potential in the time-independent Schrödinger equation, in which the complex energy spectrum E_k is mapped to u_k inside the unit disk, where u_k are the eigenvalues of some explicitly known sparse matrix U. Most importantly for the numerical implementation, all the physical eigenvalues u_k are extreme eigenvalues of U, which allows one to extract these eigenvalues very efficiently by harmonic inversion of a pseudo-time autocorrelation function using the filter diagonalization method. The computation of 2T steps of the autocorrelation function requires only T sparse real matrix-vector multiplications. We describe and compare different schemes, effectively corresponding to different choices of the energy-dependent absorbing potential, and test them numerically by calculating resonances of the HCO molecule. Our numerical tests suggest an optimal scheme that provide accurate estimates for most resonance states. A. Neumaier and V.A. Mandelshtam, Pseudo-time Schrödinger equation with absorbing potential for quantum scattering calculations, Phys. Rev. Lett. 86 (2001), 5031-5034. physics/0101032 dvi.gz file (62K), ps.gz file (159K), pdf file (589K) downloading/printing problems? The Schrödinger equation with an energy-dependent complex absorbing potential, associated with a scattering system, can be reduced for a special choice of the energy-dependence to a harmonic inversion problem of a discrete pseudo-time correlation function. An efficient formula for Green's function matrix elements is also derived. Since the exact propagation up to time 2t can be done with only t real matrix-vector products, this gives an unprecedently efficient scheme for accurate calculations of quantum spectra for possibly very large systems. A. Neumaier, Bohmian mechanics contradicts quantum mechanics, quant-ph/0001011 dvi.gz file (17K), ps.gz file (61K), pdf file (157K) downloading/printing problems? It is shown that, for a harmonic oscillator in the ground state, Bohmian mechanics and quantum mechanics predict values of opposite sign for certain time correlations. 
The discrepancy can be explained by the fact that Bohmian mechanics has no natural way to accomodate the Heisenberg picture, since the local expectation values that define the beables of the theory depend on the Heisenberg time being used to define the operators. Relations to measurement are discussed, too, and are shown to leave no loophole for claiming that Bohmian mechanics reproduces all predictions of quantum mechanics exactly. G. Zauner, Quantendesigns - Grundzüge einer nichtkommutativen Designtheorie, Dissertation, Institut für Mathematik, Universität Wien, Wien 1999. (in German; English title: Quantum designs - foundations of a non-commutative theory of designs) ps.gz file (202K), pdf file (557K), downloading/printing problems? Quantum designs are sets of subspaces, or equivalent sets of orthogonal projection matrices, in complex finite dimensional vector spaces with certain properties. These structures are generalizations of classical t-designs (the special case of pairwise commuting matrices), spherical designs, complex t-designs and equi-isoclinic subspaces. All elements of quantum design theory have a natural interpretation in terms of quantum theory. Apart from general theory (e.g., absolute and special bounds), constructions are given for two classes of quantum designs which generalize the classical balanced incomplete block designs and affine designs. One of them gives rise to the first known class of infinitely many complex 2-designs. Also new tight complex 2-designs are constructed. The constructions have a close analogy to formalisms of quantum theory in infinite-dimensional vector spaces. A. Neumaier, On a realistic interpretation of quantum mechanics, quant-ph/9908071 dvi.gz file (31K), ps.gz file (84K), pdf file (192K), downloading/printing problems? The best mathematical arguments against a realistic interpretation of quantum mechanics - that gives definite but partially unknown values to all observables - are analysed and shown to be based on reasoning that is not compelling. This opens the door for an interpretation that, while respecting the indeterministic nature of quantum mechanics, allows to speak of definite values for all observables at any time that are, however, only partially measurable. The analysis also suggests new areas where the foundations of quantum theory need to be tested. A. Neumaier, On the Many-Worlds-Interpretation, Manuscript (1999) ASCII text file These comments intend to show that quantum paradoxes are not resolved by the "many-worlds" interpretation or metatheory of quantum mechanics; instead, the latter is full of home-made puzzles and ambiguities. A. Neumaier, W. Huyer and E. Bornberg-Bauer, Hydrophobicity Analysis of Amino Acids, WWW-Document (1999). html file Based on a principal component analysis of 47 published attempts to quantify hydrophobicity in terms of a single scale, we define a representation of the 20 amino acids as points in a 3-dimensional hydrophobicity space and display it by means of a minimal spanning tree. The dominant scale is found to be close to two scales derived from contact potentials. A. Neumaier, S. Dallwig, W. Huyer and H. Schichl, New techniques for the construction of residue potentials for protein folding, pp. 212-224 in: Algorithms for Macromolecular Modelling (P. Deuflhard et al., eds.), Lecture Notes Comput. Sci. Eng. 4, Springer, Berlin 1999. pdf file (448K), ps.gz file (159K), downloading/printing problems? A smooth empirical potential is constructed for use in off-lattice protein folding studies. 
Our potential is a function of the amino acid labels and of the distances between the C(alpha) atoms of a protein. The potential is a sum of smooth surface potential terms that model solvent interactions and of pair potentials that are functions of a distance, with a smooth cutoff at 12 Å. Techniques include the use of a fully automatic and reliable estimator for smooth densities, of cluster analysis to group together amino acid pairs with similar distance distributions, and of quadratic programming to find appropriate weights with which the various terms enter the total potential. For nine small test proteins, the new potential has minima within 1.3-4.7Å of the PDB geometry, with one exception that has an error of 8.5Å. Moreover, a nonuniqueness theorem is given that shows that no set of equilibrium geometries can determine the true effective potential energy function. A. Neumaier, Molecular modeling of proteins and mathematical prediction of protein structure, SIAM Rev. 39 (1997), 407-460. ps.gz file (220K), pdf file (490K), downloading/printing problems? This paper discusses the mathematical formulation of and solution attempts for the so-called protein folding problem. The static aspect is concerned with how to predict the folded (native, tertiary) structure of a protein, given its sequence of amino acids. The dynamic aspect asks about the possible pathways to folding and unfolding, including the stability of the folded protein. From a mathematical point of view, there are several main sides to the static problem: - the selection of an appropriate potential energy function; - the parameter identification by fitting to experimental data; and - the global optimization of the potential. The dynamic problem entails, in addition, the solution of (because of multiple time scales very stiff) ordinary or stochastic differential equations (molecular dynamics simulation), or (in case of constrained molecular dynamics) of differential-algebraic equations. A theme connecting the static and dynamic aspect is the determination and formation of secondary structure motifs. The present paper gives a self-contained introduction to the necessary background from physics and chemistry and surveys some of the literature. It also discusses the various mathematical problems arising, some deficiencies of the current models and algorithms, and possible (past and future) attacks to arrive at solutions to the protein folding problem. A. Neumaier, Experiments: Preparation and Measurement, Manuscript (1996). ps.gz file (49K), downloading/printing problems? Measurements can be adequately described without reference to ``the collapse of the wave function'' (or to wave functions at all). The collapse, as far as it occurs (i.e., the convergence of the density matrix to one that commutes with the Hamiltonian of the system), is a natural consequence of the reduced description of macroscopic systems in the thermodynamic limit since that leads to a dissipative dynamics. However, in the presence of spin, there is no complete collapse: macroscopic polarization phenomena remain that need 2-state quantum physics, a fact that seems to have escaped notice before. Since polarization is well-understood as a macroscopic phenomenon (no one ever talked about philosophical problems related to macroscopic polarization!), there is no reason to consider the microscopic world as essentially different from the macroscopic world. A. Neumaier, From thermodynamics to quantum theory. Part I: Equilibrium. Manuscript (1995). 
dvi.gz file (75K), ps.gz file (165K), pdf file (457K), downloading/printing problems?
In this paper, an elementary and self-contained axiomatic treatment is given of equilibrium thermodynamics including fluctuations. Among other things, this leads to a natural explanation of the Hilbert space underlying quantum physics, using only a simple quantization condition related to the third law of thermodynamics.

T. Rage, A. Neumaier and C. Schlier, Rigorous verification of chaos in a molecular model, Phys. Rev. E 50 (1994), 2682-2688. ps.gz file (81K), downloading/printing problems?
The Thiele-Wilson system, a simple model of a linear, triatomic molecule, has been studied extensively in the past. The system exhibits complex molecular dynamics including dissociation, periodic trajectories and bifurcations. In addition, it has for a long time been suspected to be chaotic, but this has never been proved with mathematical rigor. In this paper, we present numerical results that, using interval methods, rigorously verify the existence of transversal homoclinic points in a Poincaré map of this system. By a theorem of Smale, the existence of transversal homoclinic points in a map rigorously proves its mixing property, i.e., the chaoticity of the system.

A. Neumaier and T. Rage, Rigorous chaos verification in discrete dynamical systems, Physica D 67 (1993), 327-346. scanned text
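The two chaos-verification entries above rest on interval methods, which replace floating-point numbers by enclosing intervals so that every computed bound is mathematically guaranteed. A minimal sketch of the idea in Python (purely illustrative, not the authors' code; real verification work additionally needs directed rounding and far more machinery):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            # Sum of two intervals encloses all sums of their members.
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            # Product interval: take the extremes over all endpoint products.
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

    # Enclose f(x) = x*x + x over x in [-1, 1]:
    x = Interval(-1.0, 1.0)
    print(x * x + x)   # Interval(lo=-2.0, hi=2.0): a guaranteed, if crude,
                       # enclosure of the true range [-0.25, 2]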
Emergentism

From Wikipedia, the free encyclopedia

In philosophy, emergentism is the belief in emergence, particularly as it involves consciousness and the philosophy of mind, and as it contrasts (or not) with reductionism. A property of a system is said to be emergent if it is a new outcome of some other properties of the system and their interaction, while it is itself unexpected and different from them.[1] Emergent properties are not identical with, reducible to, or deducible from the other properties. The different ways in which this independence requirement can be satisfied lead to variant types of emergence.

Forms of emergentism

All varieties of emergentism strive to be compatible with physicalism, the theory that the universe is composed exclusively of physical entities, and in particular with the evidence relating changes in the brain with changes in mental functioning. Many forms of emergentism, including those advanced by proponents of complex adaptive systems, do not hold a material but rather a relational or processual view of the universe. Furthermore, they view mind–body dualism as a conceptual error insofar as mind and body are merely different types of relationships. As a theory of mind (which it is not always), emergentism differs from idealism, eliminative materialism, identity theories, neutral monism, panpsychism, and substance dualism, whilst being closely associated with property dualism. It is generally not obvious whether an emergent theory of mind embraces mental causation or must be considered epiphenomenal.

Some varieties of emergentism are not specifically concerned with the mind-body problem, and instead suggest a hierarchical or layered view of the whole of nature, with the layers arranged in terms of increasing complexity, each requiring its own special science. Typically physics (mathematical physics, particle physics, and classical physics) is basic, with chemistry built on top of it, then biology, psychology and social sciences. Reductionists respond that the arrangement of the sciences is a matter of convenience, and that chemistry is derivable from physics (and so forth) in principle, an argument which gained force after the establishment of a quantum-mechanical basis for chemistry.[2]

Other varieties see mind or consciousness as specifically and anomalously requiring emergentist explanation, and therefore constitute a family of positions in the philosophy of mind. Douglas Hofstadter summarises this view as "the soul is more than the sum of its parts". A number of philosophers have offered the argument that qualia constitute the hard problem of consciousness, and resist reductive explanation in a way that all other phenomena do not. In contrast, reductionists generally see the task of accounting for the possibly atypical properties of mind and of living things as a matter of showing that, contrary to appearances, such properties are indeed fully accountable in terms of the properties of the basic constituents of nature and therefore in no way genuinely atypical.

Intermediate positions are possible: for instance, some emergentists hold that emergence is neither universal nor restricted to consciousness, but applies to (for instance) living creatures, or self-organising systems, or complex systems. Some philosophers hold that emergent properties causally interact with more fundamental levels, an idea known as downward causation. Others maintain that higher-order properties simply supervene over lower levels without direct causal interaction.
All the cases so far discussed have been synchronic, i.e. the emergent property exists simultaneously with its basis. Yet another variation operates diachronically. Emergentists of this type believe that genuinely novel properties can come into being, without being accountable in terms of the preceding history of the universe. (Contrast with indeterminism, where it is only the arrangement or configuration of matter that is unaccountable.) These evolution-inspired theories often have a theological aspect, as in the process philosophy of Alfred North Whitehead and Charles Hartshorne.

The concept of emergence has been applied to the theory of literature and art, history, linguistics, cognitive sciences, etc. by the teachings of Jean-Marie Grassin at the University of Limoges (v. esp.: J. Fontanille, B. Westphal, J. Vion-Dury, éds. L'Émergence—Poétique de l'Émergence, en réponse aux travaux de Jean-Marie Grassin, Bern, Berlin, etc., 2011; and the article "Emergence" in the International Dictionary of Literary Terms (DITL)).

Relationship to vitalism

A refinement of vitalism may be recognized in contemporary molecular histology in the proposal that some key organising and structuring features of organisms, perhaps including even life itself, are examples of emergent processes; those in which a complexity arises, out of interacting chemical processes forming interconnected feedback cycles, that cannot fully be described in terms of those processes since the system as a whole has properties that the constituent reactions lack.[3][4] Whether emergent system properties should be grouped with traditional vitalist concepts is a matter of semantic controversy.[5] In a light-hearted millennial vein, Kirschner and Mitchison call research into integrated cell and organismal physiology "molecular vitalism."[6]

According to Emmeche et al. (1997): "On the one hand, many scientists and philosophers regard emergence as having only a pseudo-scientific status. On the other hand, new developments in physics, biology, psychology, and crossdisciplinary fields such as cognitive science, artificial life, and the study of non-linear dynamical systems have focused strongly on the high level 'collective behaviour' of complex systems, which is often said to be truly emergent, and the term is increasingly used to characterize such systems."[7]

Emmeche et al. (1998) state that "there is a very important difference between the vitalists and the emergentists: the vitalist's creative forces were relevant only in organic substances, not in inorganic matter. Emergence hence is creation of new properties regardless of the substance involved." "The assumption of an extra-physical vitalis (vital force, entelechy, élan vital, etc.), as formulated in most forms (old or new) of vitalism, is usually without any genuine explanatory power. It has served altogether too often as an intellectual tranquilizer or verbal sedative—stifling scientific inquiry rather than encouraging it to proceed in new directions."[8]

The first emergentist theorists used the example of water having a new property when hydrogen, H, and oxygen, O, combine to form H2O (water). In this example there emerge such new properties as liquidity under standard conditions. (Analogous hydrides of the oxygen family, such as hydrogen sulfide, are gases.)
However, a better and more recent example of an emergent phenomenon, one provided by physicist Erwin Schrödinger, is found in the case of families of molecules known as isomers, which are made up of precisely the same atoms, differently arranged, which nevertheless have different physical properties. Similarly, enantiomers are molecules made up of precisely the same atoms but in mirror image arrangement: they exist in "right-handed" and "left-handed" forms which have different properties when interacting with other molecules.

Biologists Ursula Goodenough and Terrence Deacon in their 2006 essay The Sacred Emergence of Nature have assembled a range of examples of physical and biological emergent properties that provide the evidential basis for emergentism as a philosophy that comports with a modern scientific understanding of how complexity arises in the natural world, and as a philosophy that supports religious naturalism. A longer compilation of emergent forms in nature is the 2004 book by biologist Harold J. Morowitz: The Emergence of Everything.

In the game of Go, the rules stipulate various constraints on the placement and removal of playing pieces. As a consequence of this, an "emergent" pattern is that groups of pieces with two eyes are "alive" and can never be removed. This is a vital part of the game, without which it cannot be played or understood; but is not part of the rules. Similarly, in John Conway's Game of Life, some patterns of cells have striking properties — such as the ability to move or reproduce — which are not explicitly coded into the rules.

Although examples of higher level properties which are not identical to lower order properties are easy to find, examples where they are not reducible to or predictable from their bases are more controversial.

John Stuart Mill

Mill believed that various chemical reactions (poorly understood in his time) could provide examples of emergent properties, although some critics believe that modern physical chemistry has shown that these reactions can be given satisfactory reductionist explanations. For instance, it has been claimed that the whole of chemistry is, in principle, contained in the Schrödinger equation.[9]

C. D. Broad

Broad defined emergence as follows:

Put in abstract terms the emergent theory asserts that there are certain wholes, composed (say) of constituents A, B, and C in a relation R to each other; that all wholes composed of constituents of the same kind as A, B, and C in relations of the same kind as R have certain characteristic properties; that A, B, and C are capable of occurring in other kinds of complex where the relation is not of the same kind as R; and that the characteristic properties of the whole R(A, B, C) cannot, even in theory, be deduced from the most complete knowledge of the properties of A, B, and C in isolation or in other wholes which are not of the form R(A, B, C).

This definition amounted to the claim that mental properties would count as emergent if and only if philosophical zombies were metaphysically possible. Many philosophers take this position to be inconsistent with some formulations of psychophysical supervenience.

C. Lloyd Morgan and Samuel Alexander

Samuel Alexander's views on emergentism, argued in Space, Time, and Deity (1920), were inspired in part by the ideas in psychologist C. Lloyd Morgan's Emergent Evolution.
Alexander believed that emergence was fundamentally inexplicable, and that emergentism was simply a "brute empirical fact". Despite the causal and explanatory gap between the phenomena on different levels, Alexander held that emergent qualities were not epiphenomenal. His view can perhaps best be described as a form of nonreductive physicalism (NRP) or supervenience theory.

Ludwig von Bertalanffy

Jaegwon Kim

Addressing emergentism (under the guise of non-reductive physicalism) as a solution to the mind-body problem, Jaegwon Kim has raised an objection based on causal closure and overdetermination. Emergentism strives to be compatible with physicalism, and physicalism, according to Kim, has a principle of causal closure according to which every physical event is fully accountable in terms of physical causes. This seems to leave no "room" for mental causation to operate. If our bodily movements were caused by the preceding state of our bodies and our decisions and intentions, they would be overdetermined. Mental causation in this sense is not the same as free will, but is only the claim that mental states are causally relevant. If emergentists respond by abandoning the idea of mental causation, their position becomes a form of epiphenomenalism.

In detail: he proposes (using the chart on the right) that M1 causes M2 (these are mental events) and P1 causes P2 (these are physical events). P1 realises M1 and P2 realises M2. However, M1 does not causally affect P1 (i.e., M1 is a consequent event of P1). If P1 causes P2, and M1 is a result of P1, then M2 is a result of P2. He says that the only alternatives to this problem are to accept dualism (where the mental events are independent of the physical events) or eliminativism (where the mental events do not exist).

References

1. ^ O'Connor, Timothy and Wong, Hong Yu, "Emergent Properties", The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2015/entries/properties-emergent/>.
2. ^ Crane, T. The Significance of Emergence.
3. ^ Schultz, SG (1998). "A century of (epithelial) transport physiology: from vitalism to molecular cloning". The American Journal of Physiology. 274 (1 Pt 1): C13–23. PMID 9458708.
4. ^ Gilbert, SF; Sarkar, S (2000). "Embracing complexity: organicism for the 21st century". Developmental Dynamics. 219 (1): 1–9. doi:10.1002/1097-0177(2000)9999:9999<::AID-DVDY1036>3.0.CO;2-A. PMID 10974666.
5. ^ See "Emergent Properties" in the Stanford Encyclopedia of Philosophy (online at Stanford University) for explicit discussion; briefly, some philosophers see emergentism as midway between traditional spiritual vitalism and mechanistic reductionism; others argue that, structurally, emergentism is equivalent to vitalism. See also Emmeche C (2001) "Does a robot have an Umwelt?" Semiotica 134: 653-693.
6. ^ Kirschner, M; Gerhart, J; Mitchison, T (2000). "Molecular 'vitalism'". Cell. 100 (1): 79–88. doi:10.1016/S0092-8674(00)81685-2. PMID 10647933.
7. ^ Emmeche C (1997). "Explaining emergence: towards an ontology of levels". Journal for General Philosophy of Science. Available online.
8. ^ Dictionary of the History of Ideas.
9. ^ Laloë, Franck (2012). Do We Really Understand Quantum Mechanics?. Cambridge University Press. p. 292ff. ISBN 9781107025011.

Further reading

• Beckermann, Ansgar, Hans Flohr, and Jaegwon Kim, eds., Emergence Or Reduction? Essays on the Prospects of Nonreductive Physicalism (1992).
• Cahoone, Lawrence, The Orders of Nature (2013).
• Clayton, Philip and Paul Davies, eds., The Re-emergence of Emergence: The Emergentist Hypothesis from Science to Religion. Oxford University Press (2008).
• Gregersen, Niels H., ed., From Complexity to Life: On Emergence of Life and Meaning. Oxford University Press (2013).
• Jones, Richard H., Analysis & the Fullness of Reality: An Introduction to Reductionism & Emergence. Jackson Square Books (2013).
• Laughlin, Robert B., A Different Universe (2005).
• MacDonald, Graham and Cynthia, Emergence in Mind. Oxford University Press (2010).
• Morowitz, Harold J., The Emergence of Everything: How the World Became Complex. Oxford University Press (2002).

External links

• Emergentism in the Stanford Encyclopedia of Philosophy, 2007.
• Emergentism in the Dictionary of Philosophy of Mind, 2007.
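As an aside on the Game of Life example mentioned in the article above, the following sketch (illustrative only, not part of the article) encodes the two local rules, birth with exactly three live neighbours and survival with two or three, and checks the emergent behaviour of the glider, which shifts itself diagonally every four generations even though nothing in the rules mentions motion:

    from collections import Counter

    def step(live):
        """One Game of Life generation; `live` is a set of (x, y) cells."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A cell is alive next step if it has 3 live neighbours,
        # or 2 live neighbours and is already alive.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for _ in range(4):
        cells = step(cells)

    print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True: moved by (1, 1)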
Friday, March 31, 2006

Quantum Probability

I took part in a brief discussion over at antimeta which reminded me that I ought to get back to a document I started writing on quantum mechanics for dummies. One of my pet peeves is that I believe there to be a little bit of a conspiracy to make quantum mechanics seem less accessible to people. Not a deliberate conspiracy - but people maintaining an aura of mystery about it that puts people off the subject. All of the fuzzy talk about quantum mechanics in the popular science press does nothing to help the situation. In particular, there is a core of quantum mechanics that I believe requires few prerequisites beyond elementary probability theory, vector spaces and complex numbers.

Anyway, I did some more digging on the web and found this course by Greg Kuperberg. The opening paragraphs almost take the words I wanted to say out of my mouth. In particular, despite the mystical mumbo-jumbo that is often written on the subject, the rules of quantum mechanics are "rigorous and clear" and "The precepts of quantum mechanics are neither a set of physical forces nor a geometrical model for physical objects. Rather, they are a variant, and ultimately a generalization, of classical probability theory." Most of all "...more mathematicians could and should learn quantum mechanics...". You don't even have to understand F=ma to get started with quantum mechanics and get to the point where you can really and truly get to grips, directly, with the so-called paradoxes of quantum mechanics such as the Bell Paradox.

The strange thing is that you won't find words like this in most of the quantum mechanics textbooks. They throw you into physical situations that require finding tricky solutions to the Schrödinger equation while completely failing to give any insight into the real subject matter of quantum mechanics. Most QM books I know are really introductions to solving partial differential equations. (Remark to physicists: I bet you didn't know you could get the simultaneous eigenvalues for the energy and angular momentum operators for the hydrogen atom by a beautifully simple method that doesn't require even looking at a differential equation...) The best thing about the newly appearing field of quantum computing is that it's slowly forcing people to think about quantum mechanics separately from the mechanics.

So even though I haven't read that course myself yet, I'm recommending that everyone read it :-) And some time I might get back to the even more elementary introduction I hope to put together.

ansobol said...
Seems that the link to Kuperberg's course should be

ansobol said...
Gosh, I made the same mistyping (although the href attribute of the link in the previous comment is set right). Sorry.

david said...
How about Kindergarten QM if you're looking for simplicity?

sigfpe said...
Great link, thanks. Seems like I'm also not the only one to think it's weird that the almost trivial sequence of linear operations that goes into quantum teleportation took so long to discover. I don't think this is just an example of saying something is easy with hindsight - it really is easy. There are no new concepts required beyond the most elementary features of QM. Same goes for interaction-free measurement. I'm almost tempted to say that QM sits in some kind of collective blind-spot that makes us bad at reasoning about it.

tzut said...
In the English-speaking world the wave and hence differential equation-based approach to QM dominates, it is true, but in France the algebraic approach is dominant: see the textbook by Cohen-Tannoudji.

sigfpe said...
That's interesting. I didn't know the French studied QM differently.
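As a small illustration of the post's point that this core of quantum mechanics is just linear algebra over the complex numbers, the sketch below (Python with NumPy; it is not taken from Kuperberg's course) computes the CHSH combination of Bell correlations for a spin singlet and recovers the quantum value 2*sqrt(2), above the classical bound of 2:

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def spin(theta):
        """Spin observable along angle theta in the x-z plane (eigenvalues +/-1)."""
        return np.cos(theta) * Z + np.sin(theta) * X

    # Singlet state of two qubits: (|01> - |10>)/sqrt(2)
    psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

    def E(a, b):
        """Correlation <psi| A(a) tensor B(b) |psi>."""
        return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

    a, a2 = 0.0, np.pi / 2              # Alice's two measurement settings
    b, b2 = np.pi / 4, 3 * np.pi / 4    # Bob's two measurement settings

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))   # ~2.828..., i.e. 2*sqrt(2) > 2

No differential equations appear anywhere; the whole "paradox" is a statement about expectation values of 4x4 matrices.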
Philosophy in physics: returning to measurement

It has often been said, and certainly not without justification, that the man of science is a poor philosopher. Why then should it not be the right thing for the physicist to let the philosopher do the philosophizing? Such might indeed be the right thing to do at a time when the physicist believes he has at his disposal a rigid system of fundamental concepts and fundamental laws which are so well established that waves of doubt can't reach them; but it cannot be right at a time when the very foundations of physics itself have become problematic as they are now. At a time like the present, when experience forces us to seek a newer and more solid foundation, the physicist cannot simply surrender to the philosopher the critical contemplation of theoretical foundations; for he himself knows best and feels more surely where the shoe pinches. In looking for a new foundation, he must try to make clear in his own mind just how far the concepts which he uses are justified, and are necessities. A knowledge of the historical and philosophical background gives a kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is–in my opinion–the mark of distinction between a mere artisan or specialist and a real seeker after truth.

Albert Einstein

The above-linked Physics Today article gives a fascinating portrait of a philosophically-informed scientist. Even at the time, one notes some defensiveness about spending time thinking about the meaning of one's theories instead of doing what many in physics would call "real work". Most physicists probably didn't read all of Kant's works as teenagers; it will surprise no one to learn that Einstein was exceptional in many ways. Nevertheless, Professor Howard shows that a solid historical and philosophical background was much more expected and cultivated in physicists a hundred years ago than it is now.

Today, physicists and biologists feel qualified to mouth off on philosophical and religious questions like never before, and yet the exposure of most of them to philosophy–its classics, methodology, and current trends–has dwindled to next to nothing. How did this happen? One reason is probably the shift in the center of scientific activity from Europe (Germany, in particular) to America. We do things differently here. We don't worry about what our theories mean; we just want to know how to calculate with them. Cultural expectations no doubt play a role here. In Germany a century ago, educated people were expected to have opinions on Hegel and Schopenhauer. American science is also organized, to a much greater degree than European science was in Einstein's day, around big projects, each of which mobilizes hundreds of researchers. This way of doing things is better for addressing certain questions than others.

The other big change–which Einstein lived through and was not at all comfortable with–is the rise of quantum mechanics, whose bizarreness we've all learned to live with by not thinking about it. There's a definite prejudice among American physicists that only losers worry about what quantum mechanics means, all that "Schroedinger's cat" stuff.
The discomfort is understandable, since all the major interpretive schemes involve weirdness and extravagances of their own: a wavefunction collapse that at least seems very different from its ordinary Schroedinger-equation evolution, a multitude of parallel universes splitting off from each other, nonlocal hidden variables for which we have no other evidence. Rather than seeing this as an exciting challenge, we've largely punted the whole issue to the philosophers, who presumably have nothing better to do.

It's a shame, because the interpretation of quantum mechanics is important, really important. You should all care about it. Here's physicist and First Things contributor Stephen Barr arguing that quantum mechanics disproves materialism. Barr reviews the measurement problem: quantum mechanics naturally leads to superpositions of states, but our knowledge is always of definite states. Barr accepts the Copenhagen interpretation–when we make measurements on microscopic systems, the state vector "jumps" to a definite state with probabilities given by the Born rule–and he accepts the view of Wigner and a few others that "we observe" necessarily means a conscious observer. In this version of the Copenhagen interpretation, minds somehow transcend the usual laws of nature. The alternative is to accept the Everett interpretation ("Many Worlds"), which is downright crazy.

Read it and see what you think. I personally think that the materialist isn't in quite so bad a position as Barr makes out, even in the Copenhagen world. After all, there are a lot of differences between subatomic particles and human beings besides consciousness. It could be something else that triggers wavefunction collapse/state vector reduction. One possibility, advocated by Roger Penrose and others, is gravity. According to the (totally untested so far) "one graviton" rule, it's things that are massive enough to put a dent in spacetime that can't remain in indeterminate states. There's also complexity–humans have many more internal degrees of freedom. This is basically how decoherence explains the lack of macroscopic entanglement effects (although it doesn't, I believe, solve the measurement problem itself). There are intriguing toy models (for example by Grigorenko and by Pearle–I went to a colloquium by Philip Pearle as an undergraduate and was very excited by it) of nonlinear modifications to the Schroedinger equation leading to wavefunction collapse, a sign that the problem shouldn't be entirely left to the philosophers.

Coincidentally, I found this on the Wall Street Journal at about the same time:

"Vain is the word of that philosopher which does not heal any suffering of man." By the lights of this maxim, taken from the fourth-century B.C. Greek philosopher Epicurus, contemporary philosophy looks awfully vain. Upon opening the field's most prestigious journal, one finds little that looks capable of healing and much that promises the opposite: an article titled "On the Supposed Inconceivability of Absent Qualia Functional Duplicates"; another that defends the "applicability of Bayesian confirmation theory to the Everett interpretation of quantum mechanics." Epicurus, by contrast, taught his followers how to eliminate anxiety and irrational desires in order to lead a life of happiness.

I disagree. The Everett interpretation is the most serious challenge to our sense of personal identity coming from the physical sciences today. Its truth or falsity is indeed a source of anxiety for me.
The viability of this Many Worlds idea hangs on its ability or inability to explain the Born rule for identifying wavefunction amplitudes with probabilities. I think there's a good argument that it doesn't and can't correctly make this identification–which gives me a reason other than philosophical repugnance for rejecting this multiplicity of worlds and multiplicity of mes–but the arguments that it can all hinge on the meaning given to probabilities. In other words, that paper the author quotes is important, or at least it could be.

One of these days, I'd like to do a series of posts on quantum mechanics and its interpretation. That might be pushing my readers' indulgence a bit too far, though.

32 thoughts on "Philosophy in physics: returning to measurement"

• I remember pointedly asking Bonald for it six months ago. QM is where all the atheists are at these days. Everything about it, we're told, disproves God. This is important stuff and reactionaries need a counter to it. A counter to it, preferably, put out by a sympathetic and intelligent SME.

• Really, they've migrated to quantum physics? And here I thought evolutionary biology was where all the atheist action was at. I wrote a whole blasted essay about it. They have a big advantage always getting to choose the field of battle.

1. No, you should definitely do some posts on the interpretation of QM. Those of us who are not interested can just skip over them.

Re the difference between the educated man of today versus his counterpart of a century ago, I was permanently humbled and ashamed when I read in Schroedinger's autobiography about how when he was 16 years old, and after the end of hostilities in WWI, which had imposed a halt of several years on his studies, he wandered back on foot across the countryside to his university from the field, only to find it almost completely deserted. He scrounged some food and waited several weeks for the academic community to recollect itself. While he waited, he passed the time sitting in the blessed sunshine on the roof of one of the colleges and reading through the works of Plato. Again. In Greek.

2. I found your post and Barr's article fascinating. I'm a humanities professor who has always wanted to understand science better (and frankly wishes that he had been forced as an undergraduate to undergo a more rigorous introduction to science). Recently on a math colleague's recommendation I've been working with a text by Commenetz on calculus and have made some progress. It takes a conceptual and somewhat philosophical approach but also includes proofs at every step. Could you recommend an introduction to physics that perhaps takes a similar approach? This is also a long way of saying please go ahead and post the more challenging article you have in mind.

• Hello Professor Smith,

My favorite physics books for the general public are "QED: the strange theory of light and matter" by Richard Feynman and "The Emperor's New Mind" by Roger Penrose. (The former's "Lectures on Physics" are rewarding for the very committed reader.) Both of those focus on topics in modern physics. I think there could be a fascinating book for the public covering classical phenomena (hydrodynamics, electromagnetism, stuff like that) that are closer to the level of day-to-day experience. Maybe it even exists, but I don't know about it. If I get tenure, maybe someday I'll write it.

3. Hi Bonald, I second Kristor's comment that you should do a series of posts on quantum mechanics.
For one, I would personally find it very interesting, and for two, I am surrounded by smug tin-pot atheists who think that science has disproved God. Also, I wanted to thank you for your writings: I was introduced to traditionalism through your original Throne and Altar site (which I discovered by a googling accident). I read your essay on Monarchy the day I discovered your site, and it all but persuaded me on the spot. (Speaking of posts that you'd like to get around to doing, did you ever do the series of posts on neofeudalism that was promised at the following link?

4. Count me among those who would like to read such posts . . . I am thoroughly ignorant of quantum mechanics and would like to know more. I wish that there were more articles like Barr's — it is intelligible by the "uninitiated" without being condescending. It is hard to find well written explanatory articles about math, science, or philosophy that speak toward a broad, educated audience outside the field. We have become a society of insular experts.

Also, I do not see what is so alarming about the "many worlds" theory. I have wondered about it, though not from knowing any discussion of it in physics — and certainly not to defend materialism. Rather, I wonder if our thinking that our world is "the world" is similar to our thinking that our present is "the present." I call the latter temporal chauvinism. For our "now" is not God's now any more than the moment when Heloise and Abelard first discovered their great love or some occasion in the thirty-fourth century of the Christian era. For God is beyond time, and thus past and future are causal directions in cosmic history relative to a given "present" moment on that timeline. Maybe, the same holds for multiple worlds. God surely knows every possible world, of which ours is one. But is it "the one" or simply one for us? Perhaps the structure of modal logic actually reveals something about reality — wherein the principle of plenitude may hold.

Moreover, wouldn't the many worlds theory itself discredit materialism? For if certain features/elements/entities show up in many worlds, and if there is an identity among them, then what is that very identity? If a particular baseball exists in so many different worlds, what explains the correspondence? Any decent answer will eventually have to resort to the non-material — form, an assembly of certain qualities (again, form), a relationship of the parts to the whole (formal structure yet again), and so on. Of course, we need not many worlds to see the same argument against materialism (thinking about atoms will suffice), but I find it queer that materialists would latch onto such an obvious refutation of their world view as a defense for that view.

• Hi Joseph,

The idea that my consciousness might bifurcate is one that I find troubling. Either all the bonald-states with nonzero amplitude are one person or they're separate people. Either way, I would have to weaken my concept of unity and self-identity to either

A) something that could simultaneously hold two incompatible conscious states

B) something nontransitive, in which (bonald after measuring result a) = (bonald before measurement) and (bonald after measuring result b) = (bonald before measurement), but (bonald after measuring a) =/= (bonald after measuring b)

I find both alternatives distasteful.

• There's a far deeper problem with MWI.
If everything that can happen does happen in one world or another, it can make no sense to wonder why one thing happens, rather than another. But this eliminates the data upon which science – formal or not – operates. I.e., it eliminates science, including QM. By definition, one can do physics on the natural history of just one world – the world in which the physicist finds that he exists. MWI is not therefore strictly speaking a hypothesis in physics. Which is fortunate. It would be odd if physics were to show that physics is impossible per se.

• I think this is another way of saying my main objection to MWI: it doesn't really establish the connection between wavefunction amplitudes and probabilities. On the other hand, there are a lot of smart people who think it can and does, so it would really need a full post to back up my claim.

• I've actually thought about this problem a bit, so here's a suggestion: perhaps not one universe is created per state, but several for each, and the number of universes for each state scaling to the amplitude for that state, with an equal probability of entering any one universe, but the number of universes skewing the balance. This assumes that there is some quantum for probability, but to my knowledge this does not exist. Without the quantum, I'm not sure if it would still work though (how much is 34% of infinity?). And of course, I have no idea why on Earth this would actually happen, only that it might explain the results of QM.

5. Bonald,

Is the question of justification of universal wave function (UW) no longer interesting to the physical community? I had recently interacted with Prof Barr regarding the postulate of UW but he rather casually likened the extrapolation to the extrapolation of assuming a uniform set of physical laws on earth and mars. But I do not think this comparison valid. The QM was originally formulated for interaction of a small physical system with a macroscopic measuring apparatus. There is no reason to assume an automatic extrapolation to the whole universe. The question of wavefunction is linked with that of measurement, and what kind of measurement makes sense with the wave function of the entire universe?

• Hi vishmehr24,

I find it interesting, but I have higher-than-normal sympathy for out-of-the-mainstream work in this field. Barr is certainly right that a single state vector is *the* correct way to treat multiparticle systems according to quantum mechanics as we know it. He would correctly point to the case of indistinguishable particles–especially indistinguishable fermions–where if you try giving each particle an independent state vector and don't (anti)symmetrize, you get definitely wrong answers. And these fermionic effects have macro effects on macro objects–white dwarf stars, to take the most dramatic example. If you want to screw with quantum mechanics (and I'm all for trying), you've got to do it in such a way as to not mess up QM's proven successes.

• Bonald,

Are there experimental results from White Dwarfs? Also, a wave function for a white dwarf is still qualitatively different from the wave function for the Universe. My basic point is that the very concept of the wave function requires its collapse and thus an external measurement device or consciousness. So, what could be this device for the universal wave function? How do we define the measurement process and the collapse?
Prof Barr suggested that the consciousnesses of persons in the universe be treated as external to the physical universe even though the bodies of the persons are within the universe and are part of the universal wave function. This could work, I suppose, but then I asked what credence he is willing to put on the results obtained by quantum cosmology such as Hawking's work. But I don't have his answer to that.

• There's a lot of evidence for the Pauli exclusion principle (the main consequence of requiring antisymmetrized fermion wavefunctions): the periodic table and the properties of metals, for instance. I gave white dwarfs only as the most extremely macroscopic case, but even for those objects we have a lot of data: masses, radii, and g-mode frequencies, to name three.

6. Pingback: Philosophy in physics: returning to measurement « The Orthosphere | Philosophical wanderings |

7. Hi bonald, I'm a frequent reader of this wonderful website and like a previous poster would like to mention how much I appreciate and value that series of articles you did on Throne and Altar. Though my religion is Islam they enhanced my development and set me more securely in that path. "Truth has been made clear from error. Whoever rejects false worship and believes in Allah has grasped the most trustworthy handhold that never breaks." I believe this verse to refer to all monotheists and indeed I think there is some brotherhood or at least overlap between them all.

It's quite vexing to deal with atheists who have a shaky grasp of science themselves and seem to insist that Quantum Mechanics or String Theory effectively disproves God. While I don't think anyone can be convinced who is not already receptive (after all, I'm sure many here had "awakenings" from hardcore liberalism plus maybe one or two traditional values or an upbringing which led to initial questioning) I think it would help people to gain a workable grasp of a subject that is probably the defining one of our age.

• It is off topic but your gracious reply reminds me of these Sufi couplets by al-Shebli:

Whatever house Thou tak'st for thine
No lamp is needed there to shine.
Upon the day that men shall bring
Their proofs before the Judge and King,
Our proof shall be, in that dread place,
The longed-for beauty of Thy face.

• Hello Luqman,

Thank you very much for introducing yourself. It's actually very surprising to me that anybody does say that quantum mechanics, string theory, or anything of the sort disproves God's existence. They simply have nothing to do with the arguments for or against.

8. bonald,

What are your thoughts on Wolfgang Smith's attempt to interpret quantum mechanics (and modern physics more generally) from a Thomistic perspective? He explores this at some length in his books The Quantum Enigma and The Wisdom of Ancient Cosmology. I found his works very interesting as a fellow physicist, although I think he is much less compelling when he makes (empirical) criticisms of evolutionary theory. His opposition to materialism and reductionism is one of the most forceful I have seen, even from other scientifically-inclined Thomists.

9. Pingback: Shamanic Healing: Why it Works « Luce – Healing by Cathartic Shamanism

10. I realize I am late to this conversation. A few speculations:

1) The Schrödinger equation may not point to multiple physical worlds at all, instead it is a peek into what the Greeks called the Noetic Cosmos (Mind of God/Platonic Forms).
The function of the Nous is to describe the infinite, the one, God – resulting in an infinite number of forms. In my mind, Multiple World Theory does not apply to the physical emanation, rather it points to the necessary existence of the nous, this world of forms: the mind of God.

2) If the observer causes the collapse of wave function into a definite event, it suggests a brain/mind duality whereupon the brain functions as an antenna for consciousnesses through which consciousness as metaphysical entity experiences the physical, resulting in the individualized mind. Collapsed wave functions resulting in physical events are merely artifacts of our hardware.

• This is more or less the way I have parsed it, except for this: So far as I can tell, this leaves the work of the conscious mind superfluous: the material hardware does the collapsing that realizes the material hardware. The mind is then epiphenomenal.

It makes more sense to me to think of the psyche as being the catalyst of the collapse, thus determining the state of the brain (and by extension of the world in general) that follows upon that decision. But the psyche is not itself material, or actual; the feeling of being a conscious mind is "what it is like" to be catalyzing a material collapse.

Granted, the conscious mind apprehends the actual past – i.e., the already having been collapsed world – as the datum of its collapse. And in so doing it pays particular attention to the immediately past state of the brain (although there is no reason in principle it could not likewise directly apprehend past events far away in time or space). But it apprehends also as datum the mind of God (who is neither only past, nor only future, but always, and always present), and it is from his storehouse of formal possibilities for the world that a novel instance of collapse procures such novelty of configuration as it introduces to history.

And until it has completed its assembly of data from the actual past and from God into a new synthesis, an instant of collapse – of becoming one thing, rather than another – is not actual, but rather incipiently actual. Until its synthetic collapse is complete, it does not actually exist, to have any causal effects; for its nature and character are not yet fully specified, until at last its career of becoming is complete and it has become what it is, rather than all the other things it might have become.

Once an instance of becoming has settled fully into being, then the relevant brain state that reflects and signifies that decision comes into being with the same step. Brain states that reflect a given mental event, then, are not material states of affairs until and after, and by virtue of, the completion of an immaterial mental procedure of decision about the way the collapse shall have occurred.

11. I agree with this. I had not intended the second point to imply that the mind is epiphenomenal. It was primarily speculation. I struggle coming to a conclusion as to whether the brain itself is the physical emanation of the archetype of individualized consciousness; or a receiver of sorts whose sophistication accounts for the varying degrees of mentation present between and within species.

12. Pingback: Cartesian meditations on the social sciences | The Orthosphere

13. This is a wonderful article, Bonald, but the two misused apostrophes—both in it's when you meant its—were so jarring that they made me physically uncomfortable.
Proper punctuation is something that our late, lamented friend Lawrence Auster was a stickler for, and with good reason. I won't go into the arguments; you know them already. But I do have a suggestion: whenever you write it's, stop for a moment and expand it to its full form, it is. You'll know instantly whether or not you've used the right spelling. (Some people find the possessive rule of thumb more useful (yours not "your's"; ours not "our's"; etc.).) I mean this in a spirit of friendship and helpfulness. You have many useful and interesting things to say. I just don't want anyone to be put off from what you write by something like misused apostrophes.
I'm new to quantum physics (and to this site), so please bear with me. I know that quantum mechanics allows particles to appear in regions that are classically forbidden; for example, an electron might pass through a potential barrier even though its energy is classically too low. In fact its wave function never decays to zero, meaning there is a non-zero probability of finding it very far away.

But I've seen a lot of people take quantum tunneling and the uncertainty principle to their logical extremes and say that, for instance, it's possible in theory for a human being to walk right through a concrete wall (though the probability of this happening is of course so close to zero so as to be negligible). I don't necessarily question that such things are possible, but I want to know what the limitations are. Naively one might claim that "anything" is possible: if we assume that every particle has a non-zero wave function (almost) everywhere, then any configuration of the particles of the universe is possible, and that leads to many ridiculous scenarios indeed. They will all come to pass, given infinite time. However, this relies on the assumption that any particle can appear anywhere. I'd like to know if this is true.

Does the wave function always approach zero asymptotically, for any particle, at large distances?

Yes. Wave functions are taken to be in the Hilbert space of square integrable functions; these necessarily go to zero at infinity. – ZachMcDargh Jan 6 '14 at 5:33

@ZachMcDargh Consider the case of a particle whose probability of being found at points not at infinity is zero. – Torsten Hĕrculĕ Cärlemän Jan 6 '14 at 5:46

Possible duplicate: – Qmechanic Jan 6 '14 at 12:51

From a pure mathematical point of view the answer is negative. As you probably know, wavefunctions are all of the functions $\psi$ from, say, $R$ to $C$ such that $|\psi(x)|^2$ has finite (Lebesgue) integral, namely $\psi$ belongs to the Hilbert space $L^2(R)$. One can simply construct functions that belong to $L^2(R)$ and that oscillate with larger and larger oscillations as soon as $|x|\to\infty$ but the oscillations are supported in smaller and smaller sets in order to preserve the $L^2$ condition. (It is possible to arrange everything in order to keep the normalization $\int |\psi(x)|^2 dx =1$.) These wavefunctions do not vanish asymptotically.

From the physical viewpoint it seems however very difficult to prepare a system in such a state, even if I do not know any impossibility proof.

What about wave functions that are identically zero beyond a certain distance, as in the case of the infinite potential well? Do such wave functions actually occur in reality? – Andreas Jan 6 '14 at 10:07

@Andreas A lot of the mathematical toy systems used in exercises are (almost surely) completely fictional. In reality every point in space is affected by it's entire past lightcone, so we simplify it a bit when we calculate things. Similar fictionalities include: Dirac distribution or perfect sine-wave wavefunctions (single particular momentum) and the collapse simplification of decoherence. – Karl Damgaard Asmussen Jan 6 '14 at 11:08

@KarlDamgaardAsmussen Is it then reasonable to claim that any wave function will be non-zero throughout all of space, except possibly at discrete points?
– Andreas Jan 6 '14 at 11:33

@Andreas I do not think that such general propositions make sense; you are taking mathematical objects too literally, as already stressed by Karl Damgaard Asmussen. Also notice that wavefunctions as elements of $L^2$ are defined up to measure zero sets, so any statement about a generic wavefunction in a point does not make sense, theoretically speaking. – Valter Moretti Jan 6 '14 at 11:54

Here is a bit of a dog's breakfast of provisos and tidbits to go with VM9's rather definitive answer.

Another Pathological Example

One can construct even more pathological examples than VM9's: consider the function $$\psi(x) = \left\{\begin{array}{cl}1& x\in\mathbb{Q}\\0& x\in\mathbb{R}-\mathbb{Q}\end{array}\right.$$ However, this function is generally thought of, for the purposes of $\mathbf{L}^2(\mathbb{R})$, as being the same as the function $\psi:\mathbb{R}\to\mathbb{C};\;\psi(x) = 0$: one wontedly considers, strictly speaking, equivalence classes of functions where we deem $\psi_1\sim\psi_2$ if the Lebesgue measure $\mu(\{x\in\mathbb{R}: \psi_1(x) \neq \psi_2(x)\})$ of the set $\{x\in\mathbb{R}: \psi_1(x) \neq \psi_2(x)\}$ is nought. We call functions which are not equal but equivalent by the relation $\sim$ "equal almost everywhere". We must interpret the Hilbert space's completeness this way: the Hermite function expansion of my pathological $\psi$ is the same as the expansion for $\psi:\mathbb{R}\to\mathbb{C};\;\psi(x) = 0$. Moreover, this equivalence makes perfect sense physically: there is zero probability of observing a particle in a subset $U\subset\mathbb{R}$ if a wavefunction $\psi$ is nought almost everywhere in that subset, i.e. equal to nought there aside from within a set $V\subset U$ of measure nought ($\mu(V)=0$).

Nonnormalisable States

Often we also want to think of nonnormalisable states: for example the state $\psi(x) = e^{i\,k\,x}$ in position co-ordinates, the "momentum eigenstate" with precisely known momentum $\hbar\,k$ but totally delocalised in position co-ordinates, or for a second important example $\psi(x) = \delta(x)$: the Dirac delta, in the distributional sense. We can then build normalisable states up from continuous superpositions of these states: the Fourier transform, by definition, resolves any $\psi\in\mathbf{L}^2(\mathbb{R})$ into a superposition of delocalised momentum eigenstates $e^{i\,k\,x}$. To reason with and wield these nonnormalised states properly, we must call on the idea of rigged Hilbert space and think about the class of tempered distributions rather than $\mathbf{L}^2(\mathbb{R})$; see my answer here and also this one here for some more details.

Conditions for Normalisability

I bring to your attention the conditions whereunder we get normalisable eigenstates. See QMechanic's pithy summary here; normalisable eigenstates needfully correspond to the discrete spectrum of an operator and this often boils down to whether an operator can in some way be construed as acting on a compact phase space. Another neat family of normalisable eigenstates comes from, of all places, the field of optical waveguide theory, where one looks for bound eigenmodes (normalisable states) of the Helmholtz equation $(\nabla^2 + k^2 n(x,y,z)^2)\psi = \beta^2 \psi$. Here $n(x,y,z)$ is the refractive index profile of a waveguide. We have the following result - I can't put my hand on the proof right now but I shall look for it.
Theorem: Suppose the refractive index profile $n:\mathbb{R}^3\to \mathbb{R}$ is such that $n(x,y,z)\to n_0$ as $\sqrt{x^2+y^2+z^2}\to\infty$. Then there are bound modes $\psi\in\mathbf{L}^2(\mathbb{R}^3)$ of the equation $(\nabla^2 + k^2 n(x,y,z)^2)\psi = \beta^2 \psi$ if and only if: $$\int_{\mathbb{R}^3} (n(x,y,z)^2 - n_0^2) \,{\rm d}x\,{\rm d} y\, {\rm d} z> 0$$ and the discrete eigenvalues $\beta$ lie in the interval $[n_0,\,\max(k \,n(x,y,z))]\qquad\square$.

Analogous results hold for $\mathbf{L}^2(\mathbb{R}^2)$ (i.e. for a translationally invariant waveguide, where we apply the theorem to the 2D transverse profile $n(x,y)$).

This theorem can clearly immediately be applied to the finding of discrete eigenfunctions of the Schrödinger equation: we simply replace $k^2 n(x,y)^2$ by $E_{max} - V(x,y,z)$, where $E_{max}$ is an overbound for energy of the energy eigenvalues we seek. The discrete eigenvalues will lie in between $V(\infty)$ and $E_{max}$ if and only if $E_{max}$ is big enough that $\int_{\mathbb{R}^3} (E_{max} - V(x,y,z)) \,{\rm d}x\,{\rm d} y\, {\rm d} z> 0$. So the theorem can give us sufficient conditions for normalisable eigenstates and find lower bounds for the ground state energy.

Similar results show that if $V(x,y,z)$ is finite for all $\mathbb{R}^3$ but $V\to+\infty$ as $\sqrt{x^2+y^2+z^2}\to\infty$, then there are countably many normalisable eigenstates and they span the whole Hilbert space $\mathbf{L}^2(\mathbb{R}^3)$. The quantum harmonic oscillator falls into this category (as does the upside-down parabolic refractive index profile optical fibre in optical waveguide theory).

Behaviour in Classically "Forbidden" Regions

Here it might be enlightening to consider the behaviour of the wavefunction in classically forbidden regions. A good system to solve is to find the normalisable energy eigenstates for the finite square well potential. Look at the Wiki page and take particular heed that the wavefunction is always evanescent in classically forbidden regions: that is, it is nonzero there, but it decays exponentially with increasing depth into the classically forbidden region. What this means practically is that the probability to find the particle any distance into the classically forbidden region swiftly becomes fantastically small after a few wavelengths' penetration. Practically, the classically forbidden region is also pretty much quantum forbidden too, unless we are talking really small penetrations. This is the reason for the wide range of radioactive half lives amongst the elements: a truly tiny potential barrier to decay means decay can happen swiftly, but even a modest increase in the potential barrier causes a many orders of magnitude slump in the decay rate.
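A rough numerical sketch of that evanescent suppression, assuming a made-up case of an electron whose energy lies 1 eV below the top of a wide barrier (the numbers are illustrative, not taken from any specific system): inside the forbidden region the wavefunction decays roughly as $e^{-\kappa x}$ with $\kappa = \sqrt{2m(V-E)}/\hbar$, so the probability density falls as $e^{-2\kappa x}$.

import math

# Illustrative (assumed) numbers: electron, barrier top 1 eV above its energy.
hbar = 1.054571817e-34      # J s
m_e  = 9.1093837015e-31     # kg
eV   = 1.602176634e-19      # J per eV

deficit = 1.0 * eV                                  # V - E, hypothetical 1 eV
kappa = math.sqrt(2.0 * m_e * deficit) / hbar       # inverse decay length, 1/m
print("decay length 1/kappa = %.3g nm" % (1e9 / kappa))

for d_nm in (0.1, 0.5, 1.0, 5.0):
    suppression = math.exp(-2.0 * kappa * d_nm * 1e-9)
    print("relative probability density %.1f nm into the barrier: %.2e" % (d_nm, suppression))

With these assumed numbers the decay length is about 0.2 nm, and half a nanometre of barrier already suppresses the probability density by several orders of magnitude, which is the quantitative sense in which the classically forbidden region is "pretty much quantum forbidden too".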
The Structure Lacuna

Jan C. A. Boeyens 1,* and Demetrius C. Levendis 2

1 Unit for Advanced Scholarship, University of Pretoria, Lynnwood Road, Pretoria 0002, South Africa
2 Molecular Sciences Institute, School of Chemistry, University of the Witwatersrand, Jan Smuts Avenue, Johannesburg 0001, South Africa; E-Mail: [email protected]
* Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +27-12-420-4528; Fax: +27-12-362-5288.

International Journal of Molecular Sciences (Int. J. Mol. Sci., ISSN 1422-0067), Molecular Diversity Preservation International (MDPI). Int. J. Mol. Sci. 2012, 13(7), 9081-9096; doi:10.3390/ijms13079081. Received: 6 June 2012; in revised form: 10 July 2012; accepted: 12 July 2012; published: 20 July 2012. © 2012 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland.

Abstract: Molecular symmetry is intimately connected with the classical concept of three-dimensional molecular structure. In a non-classical theory of wave-like interaction in four-dimensional space-time, both of these concepts and traditional quantum mechanics lose their operational meaning, unless suitably modified. A required reformulation should emphasize the importance of four-dimensional effects like spin and the symmetry effects of space-time curvature that could lead to a fundamentally different understanding of molecular symmetry and structure in terms of elementary number theory. Isolated single molecules have no characteristic shape and macro-biomolecules only develop robust three-dimensional structure in hydrophobic response to aqueous cellular media.

Keywords: golden ratio; molecular symmetry; spin function

1. Introduction

Most quantum-theoretical concepts of chemistry have their origin in spectroscopy. The atomic spectroscopy of Kirchhoff, Bunsen, Fraunhofer and others resulted in the formulation of Balmer's mysterious formula, the direct stimulus for the development of quantum mechanics. From molecular spectroscopy, which also dates from the 19th century, developed the concept of molecular symmetry, based on point-group theory. The concept of molecular structure was conjectured to bridge the gap between the two branches of spectroscopy. It is noted in passing that the discipline of quantum molecular theory is based entirely on this postulated, but unproven, concept of molecular structure.

The analysis of molecular crystals by X-ray diffraction is routinely accepted as final justification of the molecular-structure hypothesis, on assuming that individual molecules maintain their integrity and structure in the solid state. This assumption is demonstrably unjustified. The molecular unit projected from crystallographic analysis is best described as a rigid three-dimensional framework that connects point atoms and is stabilized by crystal-packing interactions. In the absence of such constraints the crystallographic point group no longer represents the shape of the more flexible molecule. Related techniques such as gas-phase electron diffraction define no more than a radial distribution function. Spectroscopists are well aware of the complications associated with defining the symmetry group of non-rigid molecules, but the chemist naïvely accepts that their primitive concept of molecular structure is consistent with the most fundamental theories of physics. It is not. A recently published history of quantum chemistry [1] never mentions the concept "molecular structure".
The classic example of a non-rigid molecule is ammonia, which appears as a bipyramidal arrangement of interconverting polyhedra that reflects by tunneling through the plane of the central N atom [2]. It is pointed out [2] that:

As far as the quantum-mechanical formalism is concerned, ammonia is by far no exception. Similar situations can be constructed for all molecules. They do not fit into the molecular-structure theme of traditional chemistry and are considered as "strange" states from a chemical point of view. Neither chemical structures nor chemical bonds do exist any more.

On the other extreme, molecules such as the adamantane-derived bispidine ligands have been demonstrated in synthetic studies to operate as rigid units with apparently well-defined classical structure [3]. We contend that the difference between ammonia and bispidine is one of degree only. In both cases the equilibrium molecular geometry is essentially non-classical. Only in the case of bispidine is there a striking correspondence with the structure predicted by classical concepts of valency.

Molecular symmetry is a classical concept based on the hypothetical model of point atoms held together by rigid chemical bonds. There is no experimental evidence for the existence of either point atoms or chemical bonds. Both of these concepts are postulates to support the assumption of molecules with three-dimensional geometrical structure and molecular symmetry is the mathematical description of this structure. Alternatively molecular symmetry could be defined in line with more recent views on atomic and molecular structure. A more appropriate theory, starting from the nature of electrons and atomic nuclei as wave-like distortions of four-dimensional space-time, characterizes a molecule as a 4D standing-wave structure in which individual particles do not have independent existence. The only meaningful symmetry of such a construct must be four- and not three-dimensional. It will be argued that classical molecular structure is the three-dimensional projection of non-classical four-dimensional molecular geometry.

2. Electron Theory

The point atom with mass $m_n$ is a remnant of the classically assumed ultimate philosophical limit to the subdivision of matter. It has been used in this form by Newton, Dalton, Mendeleév and others, until the end of the 19th century. Following Nagaoka, Rutherford and Bohr the concept of a structureless atom was abandoned and replaced by the planetary model of point-like electrons, with mass $m_e$, in orbit around a point-like heavy nucleus. With the advent of quantum mechanics the Bohr–Sommerfeld orbits gave way to chemical shells of probabilistic charge density—now defining the electron as a point particle! To account for the behaviour of electrons in diffraction experiments the kabbalistic notion of wave-particle duality was concocted, notwithstanding the wave-mechanical alternative proposed by Schrödinger. As the particle model found its way into textbooks the electronic wave structure was completely forgotten, although the quantum-mechanical model never made any logical sense. In order to rationalize the probability function as representing a single electron it is necessary to integrate over all space. The electron associated with a given hydrogen atom, although likely to occupy the ground-state, has a finite probability to be on the moon. The chemists of the world have been content to live with this assumption for a hundred years.
They have been brainwashed so thoroughly as not to tolerate any alternative suggestion that makes logical sense. The most bothersome aspect of electronic quantum mechanics is the failure to account for the most prominent attribute of an electron, known as its spin. Efforts to artificially add a spin function to the three-dimensional state function of an electron have only been partially successful. A common remedy is often proposed through the assertion that spin is a relativistic phenomenon, but a conclusive proof has never been formulated. What has been demonstrated is that the spin function, known as a spinor, occurs naturally as an inherent feature of four-dimensional motion. In order to define an electron with spin it is therefore necessary to describe it as an entity in four-dimensional space-time. This would automatically rule out its definition as a point particle, noting that a point in four-dimensional space-time has not only zero extension but also zero duration. It cannot exist. With this realization the interminable debate about particle or wave becomes redundant and the remaining option is to consider the electron as a four-dimensional wave structure. In fact, Schrödinger's equation is a modification of the general wave equation. It is fashionable to state [4] that

… analogies between geometrical optics and classical mechanics on the one hand, and wave optics and quantum mechanics on the other hand … can only make the Schrödinger equation seem plausible; they cannot be used to derive or prove this equation.

Statements like this are standard textbook fare that derives from the Copenhagen campaign to discredit Schrödinger's wave interpretation [5]. The general wave equation reads

$$\frac{\partial^2\Phi}{\partial x^2} + \frac{\partial^2\Phi}{\partial y^2} + \frac{\partial^2\Phi}{\partial z^2} - \frac{1}{c^2}\frac{\partial^2\Phi}{\partial t^2} = 0 \qquad \left(\nabla^2\Phi = \frac{1}{c^2}\frac{\partial^2\Phi}{\partial t^2}\right) \qquad (1)$$

By defining $x_0 = ict$, etc., this equation rearranges into

$$\frac{\partial^2\Phi}{\partial x_1^2} + \frac{\partial^2\Phi}{\partial x_2^2} + \frac{\partial^2\Phi}{\partial x_3^2} + \frac{\partial^2\Phi}{\partial x_0^2} = 0 \qquad \left(\nabla^2\Phi = 0\right) \qquad (2)$$

The solution of Equation 2 has the hypercomplex form

$$\Phi = a + ib + jc + kd$$

known as a quaternion or the spin function. The coefficients i, j, k are generalizations of $\sqrt{-1}$ with the rule of composition $i^2 = j^2 = k^2 = ijk = -1$. In order to obtain Schrödinger's solution it is necessary to separate the space and time variables on solving Equation 1. The result of this is to describe the electron as a three-dimensional object without spin. The consequence is that the three-dimensional angular momentum defined by Schrödinger's equation is not a conserved quantity and hence inadequate as a basis on which to construct molecular symmetry functions. The conserved quantity is traditionally defined as $J = L + S$, a solution of Equation 2.

In the quaternion representation an electron can no longer be considered as an entity that moves through space, but rather as a distortion of space-time, or the vacuum. A vivid visualization was proposed by Weingard [6]

… conceiving of the motion of particles on analogy with the motion of the letters on the illuminated news sign in Times Square. The letters move, not by anything moving, but by the sequential blinking of the lights of which the sign is composed. Similarly, we can conceive of the motion of a particle as the motion of a geometrical property. Regions of space do not move, but the geometry of adjacent regions change in such a way that a pattern of geometrical properties—the pattern we identify with a certain particle—moves.

In quaternion terminology such a geometrical pattern may be described as a standing four-dimensional spin wave.
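As a concrete check of the composition rule quoted above, a small illustrative sketch (not part of the original argument) represents the quaternion units by 2 × 2 complex matrices built from the Pauli matrices; the assertions below verify Hamilton's rule numerically.

import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# A standard matrix representation of the quaternion units:
# 1 -> I, i -> -i*sigma_x, j -> -i*sigma_y, k -> -i*sigma_z
i_q, j_q, k_q = -1j * sx, -1j * sy, -1j * sz

# Hamilton's composition rule: i^2 = j^2 = k^2 = ijk = -1
assert np.allclose(i_q @ i_q, -I2)
assert np.allclose(j_q @ j_q, -I2)
assert np.allclose(k_q @ k_q, -I2)
assert np.allclose(i_q @ j_q @ k_q, -I2)
assert np.allclose(i_q @ j_q, k_q)    # ij = k
assert np.allclose(j_q @ i_q, -k_q)   # ji = -k: the algebra is non-commutative
print("quaternion relations verified")

The same 2 × 2 matrices are the spin-1/2 algebra, which is the sense in which the quaternion solution of Equation 2 carries spin as an inherent feature rather than an add-on.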
Such a wave packet is flexible, by definition, and adapts to the geometry of its local environment. The size of a wave packet in space-time is restricted by the limited time component that prevents indefinite extension. The hydrogen electron can no longer be on the moon. The topology of the wave packet manifests, in addition to spin, also as characteristic charge and mass. On interaction with a proton, a spin wave of opposite charge and high mass, the electron envelops the proton in a hyperspherical shroud. Assuming that waves of similar topology readily coalesce into larger composite waves, the formation of heavier atoms appears as a natural process. All interactions and rearrangements consist of finding an equilibrium among merging patterns. The only feature to explain is the origin of the elementary patterns such as electron, proton and neutrino. The answer is provided by the theory of general relativity.

We note in passing that the Lorentz transformation that defines special relativity amounts to a complex rotation in four-dimensional space-time, with the same structure as the quaternion spin function [7]. Wave mechanics and relativity collapse into a single theory [8]. The additional consideration that creates general relativity is formulation of the theory in non-Euclidean space-time. The resulting field equations are of the form

$$G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2}g_{\mu\nu}R = kT_{\mu\nu} \qquad (3)$$

In words, the curvature tensor $G_{\mu\nu}$ balances the energy-stress tensor $T_{\mu\nu}$, which may be interpreted as distortions (matter) in space-time. This means that Euclidean (flat) space-time has $T_{\mu\nu} = 0$ and hence contains no energy or matter. As space-time curves, elementary wrinkles develop and coalesce into atomic matter and eventually into macroscopic massive objects that cause increased local curvature of space-time, recognized as a gravitational field.

3. Atomic Structure

The empirical Periodic Table of the Elements is widely accepted as the major contribution of chemistry to science, and correctly so. Whereas the Bohr–Sommerfeld atomic model could be demonstrated to be consistent with the periodic model by invoking the empirical exclusion principle of Pauli, the Schrödinger wave-mechanical model can only partially account for elemental periodicity. Two reasons for the discrepancy are the neglect of spin and the assumption of zero interaction with the environment as in Euclidean space-time. A few years ago it was demonstrated [9] that, on specifying the local curvature of space-time by the ubiquitous golden parameter, $\tau = \frac{1}{2}(\sqrt{5}-1)$, details not only of elemental periodicity but also of the general periodicity of all stable nuclides may be mapped by elementary number theory as a function of space-time curvature. It is of special interest to note that this periodic function extends in a natural way to include antimatter in an involuted arrangement that strongly supports a projective topology of space-time. Of more immediate relevance is that the appearance of the golden mean indicates a self-similar relationship between atomic structure, botanical growth, patterns in the solar system and spiral galaxies. In response it could be shown [10,11] that extranuclear electron density in all atoms is simulated by golden logarithmic-spiral optimization. The result is a spherical distribution of charge in the form of a standing wave around the nucleus.
The distribution mirrors the best quantum-mechanical models in the form of the arrangement predicted by Thomas–Fermi statistics [12] as well as self-consistent-field Hartree–Fock calculations [13] of atomic structure. 4. Chemical Interaction The classical straight-line representation of a chemical bond poorly describes the mode of interaction between spherical wave structures. A more likely model is pictured in Figure 1 as the idealized interference pattern of two wave systems. The consecutive frames may be interpreted as schematic representations of homonuclear covalent interactions of increasing order. To distinguish between different modes of overlap, frames 1 and 3 could represent integer and frame 2 half-integer order. The relative nuclear positions implied by the two-dimensional drawing of Figure 1 are not necessarily fixed, as the atomic waves may rotate around each other in all directions, without disturbing the interference pattern. This conclusion is consistent with the standard assumption of spherical molecules in the study of gas kinetics [14]. There are two possible modes of rotation, shown schematically in Figure 2. Either one sphere rotates, by rolling around the other, or the spheres rotate together like mating gears. The geared rotation does not change the relative disposition of the spheres and corresponds to the classical model of a vibrating diatomic molecule. The other mode of rotation creates a situation without a fixed axis of symmetry. It amounts to rotation around a point as described by the quaternion spin function. It requires a rotation of 4π to restore the initial configuration and represents an element of four-dimensional symmetry, which does not occur in three dimensions. In this mode the cyclic disposition of atomic cores fluctuates with the spin function, consistent with the interaction between wave structures being subject to spin pairing. The hyperspherical molecular symmetry may be visualized as the interpenetration of two wave packets in spherical rotation such that the equilibrium separation of their centres of mass defines a vibrating interatomic distance. Generalization of this pattern suggests that atomic cores in multi-atomic molecules are confined, with restricted freedom, within a sea of valence electrons. In the case of heteronuclear interaction the interference patterns must obviously be of lower symmetry, but in four dimensions it has the same hyperspherical symmetry as before. As more atoms participate in the formation of larger molecules a unique holistic interference pattern characterizes each equilibrium arrangement. The removal or displacement of any atom in the molecule from/by another affects the entire wave pattern, and hence the molecular symmetry, in a non-trivial way. 4.1. The Valence State Atoms do not interact spontaneously to form molecules under all circumstances. Chemical theory assumes that interaction in a reacting system only commences under specific thermodynamic conditions that define a characteristic valence state for the system. This happens when a single activated electron becomes decoupled from its atomic core and free to interact with a similarly activated electron from another atom, as shown in Figure 1. The wave patterns of Figure 1 therefore only refer to a single pair of activated valence electrons and not the composite electronic wave of the core. 
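The 4π periodicity claimed above for rotation about a point can likewise be checked with a short illustrative sketch (again an addition for this discussion, not part of the source text): the spin-1/2 rotation operator about any axis, e.g. $R(\theta) = \cos(\theta/2)\,I - i\sin(\theta/2)\,\sigma_z$, returns to minus the identity after a full turn of 2π and only to the identity itself after 4π.

import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def spin_half_rotation(theta):
    # Rotation of a spin-1/2 (spinor) state by angle theta about the z-axis
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sz

assert np.allclose(spin_half_rotation(2 * np.pi), -I2)   # one full turn flips the sign
assert np.allclose(spin_half_rotation(4 * np.pi), I2)    # only 4*pi restores the spinor
print("2*pi rotation gives -1, 4*pi rotation gives +1")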
The decoupled valence electron behaves like an elementary charge, which is smeared out uniformly over a sphere of radius $r_0$, known as the characteristic ionization radius (not to be confused with ionic radius) [15] of the atom. Electronic charge density as calculated by the spherical wave model of an atom [11] is distributed in the form of annular segments, separated by nodal surfaces. In order to specify the ionization sphere of an atom, the total charge in the outermost segment is normalized over a sphere of constant charge density, with a radius defined as $r_0$. The valence electron in the activated stationary state with spherical symmetry has neither kinetic nor classical potential energy and its energy of

$$E_g = \frac{h^2}{8mr_0^2}$$

which represents quantum-potential energy (or chemical potential of the valence state) of

$$V_q = \frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}$$

is simply related to the classical electronegativity [16], $\chi = \sqrt{E_g}$, with $E_g$ in eV. All covalence interaction parameters such as interatomic distance, bond order and dissociation energy should therefore be simple functions of $r_0$. Steric distortion of the equilibrium molecular arrangement is resisted by the interference pattern that requires either integer or half-integer bond orders. Such resistance should correlate with spectroscopically measured harmonic stretching force constants. In principle all characteristics of covalent interaction could therefore be calculated directly from atomic ionization radii as the only parameter.

4.2. Covalence

Although the definition of bond order as the extent of overlap between interfering waves is completely different from the classical concept, interatomic distances predicted by the wave model are remarkably similar to experimental estimates of empirically assumed bond orders. In terms of the wave model all interatomic distances, d, of homonuclear diatomic interactions of the same order are linear functions of ionization radius,

$$d = k_b r_0$$

where $k_b$ is characteristic for order b. In the same way that atomic electron distribution, as a standing wave, is predicted by logarithmic-spiral optimization, bond-order coefficients, $k_b$, are specified in the same way, as shown in Figure 3, from [17]. It is instructive to note that $k_b$ varies from unity to τ for b = 0 to 4. For heteronuclear interaction the relationship between interatomic distance and bond order is more complicated, but in all cases the golden ratio is involved. By the use of predicted bond orders and interatomic distance, dissociation energy and stretching force constants are next calculated directly from ionization radii and integral powers, $\tau^n$, of the golden ratio.

Other alternative approaches to the problem of covalence have appeared in the recent literature [18–22]. The main thrust of this research is to find an alternative to molecular-orbital theory in terms of variables such as chemical action and chemical hardness, based on their relationship with electronegativity and defined by various techniques, including semi-classical path integrals and second quantization, culminating in the definition of bondons to represent the electron pairs of covalent interaction. Final synthesis of these ideas into a working model of covalence seems to converge towards an understanding based on the role of the quantum potential and the golden ratio [23].
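As a minimal numerical sketch of these relations, assuming a hypothetical ionization radius of 1.8 Å (a round illustrative value, not a tabulated one for any specific element), the valence-state energy and the electronegativity-like quantity that follows from it can be evaluated directly from the formulas above.

import math

h   = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg
eV  = 1.602176634e-19     # J per eV

r0 = 1.8e-10              # hypothetical ionization radius, 1.8 angstrom

Eg_eV = h**2 / (8.0 * m_e * r0**2) / eV   # E_g = h^2 / (8 m r0^2), in eV
chi   = math.sqrt(Eg_eV)                  # electronegativity-like value, chi = sqrt(E_g)
print("E_g = %.2f eV" % Eg_eV)            # ~11.6 eV for this choice of r0
print("chi = %.2f" % chi)                 # ~3.4, of the order of familiar electronegativity scales

# An interatomic distance for a given bond order b would then follow as
# d = k_b * r0, with k_b read off the golden-spiral construction of Figure 3
# (numerical k_b values are not reproduced here).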
To cut a long story short we repeat that the characteristics of electrons, atoms and molecules are intimately related to the golden parameter and in order to understand molecular structure and symmetry we find it necessary to establish why this should be the case. At the most fundamental level we suspect the golden ratio to function as a critical characteristic of the general curvature of space-time. In order to explore this possibility it is necessary to consider a few relevant aspects of cosmology. We start from classical cosmology with a known link to the golden ratio, which Johannes Kepler, on cosmological grounds, referred to as the divine proportion.

5. Cosmic Self-Similarity

The cosmology of Pythagoras [24,25] (and of Kepler), based on natural numbers, can be summarized briefly as follows: The cosmic unit is polarized into two antagonistic halves (male and female) which interact through a third irrational diagonal component that contains the sum of the first male and female numbers (3 + 2), divides the four-element (earth, water, fire, air) world in the divine proportion of $\tau = \sqrt{5/4} - 1/2$ and sounds the music of the spheres.

Translated into modern terms [26]: The cosmic whole consists of a world and an anti-world, separated by an interface that creates all matter as it curves through four-dimensional space-time, with an involution, described by the golden ratio τ, which puts the universe into harmonious perspective of projective geometry. For the benefit of prospective readers not familiar with projective geometry, a short introductory primer summarizes some details and Kepler's contribution in the Appendix.

It seems that a relationship between cosmic structure and the golden ratio has been surmised for millennia without understanding its cause. Projective cosmology now suggests that the common factor is space-time curvature—a complete unknown. However, the prominent role of the golden spiral (Figure 3) could provide this insight. By construction, the golden spiral is made up of circular segments that fit diagonally into touching squares with side lengths in golden ratio. In each square a measure of the curvature follows from the ratio of arc length ($\pi r/2$) to the length of the diagonal chord ($\sqrt{2}\,r$), i.e., $\pi/(2\sqrt{2}) = 1.111 \approx \sqrt{5}/2$. The agreement is not exact, but seductive. (It is equally close to the ratio of 1.1115 said [27] to "keep popping up more frequently than coincidence would seem to allow". The king's chamber in the great pyramid is a 1:2 rectangle with its ceiling at the height of half its long floor diagonal, $\sqrt{5}/2$ [28].) It provides sufficient grounds to assume that cosmic self-similarity is a manifestation of general space-time curvature, resulting from cosmic projective geometry, of $\sqrt{5}/2$, with respect to Euclidean tangent space.

We emphasize the similitude between Pythagorean and projective cosmologies to highlight the importance of self-similarity for the understanding of molecular symmetry. Noting that [29] … space itself has a structure that influences the shape of every existing thing, molecules, like space-time itself, would be shaped in four dimensions. This is referred to as non-classical molecular shape as distinct from the traditionally assumed three-dimensional classical shape. The closest accord between classical and non-classical molecular structures occurs in close confinement as in a crystal lattice. The biggest difference is in free interstellar space where any stable molecule is of highly symmetrical shape.
As such a molecule finds itself in more crowded environments, as in a molecular cloud, each interaction reduces its symmetry according to Goldstone’s theorem [30]. Likewise, reduced temperatures and increased pressure cause further symmetry breakdown until the molecule in the crystalline state at 0 K has the least symmetry. It becomes problematic to decide which of these is the true molecular symmetry. The 0 K classical structure which is used invariably in spectroscopic analysis is probably the worst possible choice. The reason why it is a good practical model is because spectroscopic analysis relies mainly on internal parameters such as interatomic distance, described correctly by the classical structure. Molecular shape and symmetry are not predicted by molecular spectra. Without this knowledge the vibrational symmetry of molecules becomes as meaningless as the feckless caricature of statically rigid three-dimensional molecules. 6. Molecular Symmetry Nowhere in nature is there exact symmetry, only equilibrium. The reason is that the symmetry of any object depends on the symmetry of its environment, which is nowhere rigorously isotropic. It is safe to conclude that there are no symmetrical molecules except at 0 K, which is unreachable. As a mathematical concept however, symmetry is precisely defined by group theory and we hear that [31] … the number and kinds of energy levels which an atom or molecule may have are rigorously and precisely determined by the symmetry of the molecule or of the environment of the atom. This statement can only refer to a hypothetical molecule considered as a static array of point atoms. Knowing the number and kinds of energy levels of such a molecule is of no chemical interest. In fact, there is only the single ground-state energy level. In systems of chemical significance group theory has no function unless molecular symmetry is already known empirically. One may ask, as in the classic hamburger commercial: “ where is the beef?” Without any knowledge of molecular shape or symmetry all that remains is the connectivity pattern which is directly commensurate with the 0 K classical structure and symmetry via the techniques of classical molecular mechanics. In principle it is possible to progress from here to situations of chemical importance by modelling the effects of thermodynamics on classical bond parameters. Considerable progress has been achieved by simulations of this type, known as molecular dynamics. However, commentators [32] who proclaim the power of quantum mechanics to predict the structure, symmetry and chemical properties of non-existent chemical compounds can safely be ignored. Chemistry has a long way to go towards an understanding of the four-dimensional shape and symmetry of molecules. A useful first step would be to admit the inadequacy of three-dimensional quantum mechanics, which cannot deal with four-dimensional wave phenomena such as the electromagnetic field and the solutions of Equation 2. It is encouraging to note that pedagogical trends in this direction [33,34] are gaining momentum. These authors object to the way in which chemical bonding is understood and taught, although unaware of the root cause of their irritation. The irksome orbital concept and the very idea of a chemical bond are relics of extravagant efforts to develop quantum-mechanical arguments from classical molecular structures. The result is a mess. 
As remarked by Sheldon Goldstein [35]: … it is not unusual when it comes to quantum philosophy, to find the very best physicists and mathematicians making sharp emphatic claims, almost of a mathematical character, that are trivially false and profoundly ignorant. The performance of chemists is even worse—they accept the claims of the physicist at face value, then insert their classical model as a quantum concept, by the simple device of describing complex functions as real hybrid orbitals [10]. The implied violation of the exclusion principle is considered a small enough price to pay for the recognition of classical molecular structure and symmetry in quantum chemistry. It means that in reality there is no conceptual support for molecular symmetry, the way it features as a pillar of theoretical chemistry. The whole scheme is self-contradictory—in order to construct sp hybrids the orthogonality of s and p functions, which means zero overlap, is simply ignored. Equally remarkable is the way in which molecular spectroscopists keep up the pretence of a quantum theory, despite the well-known fact that the assignment of group frequencies is independent of the assumed molecular structure and symmetry.

Without classical structure there is precious little left of molecular quantum theory and symmetry. To fill this hiatus would require a complete reformulation of the problem. It is, first of all, necessary to appreciate the four-dimensional nature of molecules. The symmetry of such a molecule is to be understood, by analogy with the three-dimensional VSEPR model, as resulting from the equilibrium arrangement of various wave structures that minimizes the spin function. Molecules with non-zero residual angular momentum are optically active. All others project into three dimensions with spherical symmetry. It should be clear that this 3D projection cannot be static as the time coordinate of space-time, as perceived in projection, would be non-zero. This means that the shape of a free achiral molecule represents the average of a spherically pulsating configuration. The spherical symmetry could break down if the molecule, when set into axial rotation, as in a microwave field, exhibits a dipole moment. In an applied electric field Stark modulation permits spectroscopic measurement of the dipole moment. Infrared and Raman spectra arise from molecular vibrations induced by further breakdown of symmetry. Molecular spectra simply do not reveal the structure or symmetry that molecules do not possess. Any symmetry argument that enters into the interpretation of molecular spectra arises from structure due to interaction with electromagnetic fields in the environment, becoming more pronounced in condensed phases, including solutions.

Three-dimensional structures inferred from NMR spectra and X-ray crystallography are purely classical and confirm no more than chemical connectivity patterns. The frequent occurrence of molecular rearrangements shows that these patterns are not necessarily invariant. It is of interest to note that the complete molecular eigenstate, even of the Born–Oppenheimer Hamiltonian of complex molecules, signifies spherical symmetry [36]. Structured molecules are therefore undefined also in three-dimensional wave mechanics. What is popularly known as molecular structure and symmetry are purely classical concepts. This means that the fashionable proclivity to relate chemical property to molecular structure is a sterile pursuit.
Apart from its structure, a classical molecule has no other properties and a free molecule, which exhibits the full range of chemical properties, has no structure. Molecular Shape Before the ideas outlined here can be incorporated into a meaningful theory of chemistry it is necessary to contemplate the nature of a non-classical four-dimensional molecule. For most chemists this would imply a major paradigm shift; away from their traditional view of rigid three-dimensional molecular quantum objects. The enormity of the proposed paradigm shift is underlined by statements from leading theoreticians such as [37]: There is no such thing as spacetime in the real world of physics … It is an approximation idea, an extremely good approximation under most circumstances, but always only an approximation. The first obstacle to overcome is the counter-intuitive notion of space-time entanglement that implies a temporal component to space variables. Projection into three-dimensional space separates space and time coordinates to create the illusion of an object in isotropic three-dimensional motion. This means that a hyperspherical object appears as a pulsating spheroid in three-dimensional space, if all environmental effects are ignored. This model describes a molecule in intergalactic space. By analogy with Rydberg atoms [38] it should behave as a Rydberg molecule, in which the distance scale between interacting units is highly inflated. This implies an increase in atomic (and molecular) size, ionization radius and interatomic distance, with concomitantly decreased dissociation energy and molecular stability. Not surprisingly only the most stable diatomic molecules H2, CO, N2, etc., have been observed [39]. Macromolecules would simply fall apart in intergalactic space. In interstellar dark molecular clouds (e.g., the horsehead nebula) with high concentrations of molecular material and dust, more complex species like simple amino acids have been recorded. To develop these ideas into useful computational models is not a straightforward exercise; for the time being it is difficult to see how it could be developed beyond the classical model of pairwise interaction. The only topic of immediate interest appears to be the spontaneous folding of macromolecules with biological function into characteristic single conformations. As revealed by crystallographic analyses the overall structure of proteins is remarkably compact, at about the same level as the crystals of small organic molecules [40], which means that a sensible relationship between their function and classical molecular structure could be established theoretically. However, the simulation of the reversible folding and unfolding of protein molecules has been conspicuously unsuccessful. Probably the most remarkable feature of protein folding is the observation [41] that the unfolding transitions are well established to be generally two-state at equilibrium, with only the fully folded and random unfolded states populated substantially. Partially-folded intermediates are relatively unstable. As remarked by Thomas Creighton [40]: … simple estimates of the number of conformations possible with even a small polypeptide chain of, say, only 50 amino acid residues imply that an astronomical length of time, many orders of magnitude longer than the age of the universe, would be required for all of them to be sampled randomly. 
Consequently, the folding process must be directed in some way … In our biased opinion the curvature of space-time is one possible factor that could direct such specific action. It is noted that in many proteins the primary polypeptide chain pursues a moderately straight course across the entire breadth of the structure and then turns to the other side, without getting knotted or entangled. Domains of uniform secondary structure (α–helix, β–sheet, etc.) may be predicted from the primary sequence and embedded structural elements that cause reverse turns in the surface of the protein ensure the eventual globular structure. However, since all biomolecules operate in aqueous environments, the influence of space-time curvature on molecular shape is effectively masked by hydrophobic interaction, which means that all non-polar amino-acid side chains are buried in the interior of the globule, with polar groups exposed in the hydrophilic surface; in exact analogy with the formation of micelles in soap solutions. The crucial consideration towards elucidation of protein folding therefore is the distribution of amino acids with the alternation of polar and non-polar regions required for the formation of globular micelles. The key to this is encoded in the primary amino-acid sequence, as generated by the evolution of life. Similar conclusions, from the opposite point of view [42], are summarized by the statement: Water is an integral part of biomolecular structural organization, and is central to the assembly and three-dimensional shape of proteins and nucleic acids. 7. Conclusions We have to conclude that the shape and symmetry of free molecules cannot be related to space-time topology. Molecules, small enough to persist in free space, are too small to develop a characteristic shape and macromolecules, large enough to exhibit three-dimensional structure, adopt their characteristic shape in response to interaction in condensed phases. It is only in those cases where such molecules are incorporated as building blocks in biological growth structures that the familiar Fibonacci patterns, characteristic of space-time topology, become noticeable [10,29]. The question of a robust structure for medium-sized molecules remains an open one and constitutes a serious lacuna in the interpretation of atomic and molecular spectra. A. Projective Geometry Any mathematical proof proceeds by way of deductive reasoning, which means that any fallacious conclusion or self-contradiction can be traced back to some fallacious axiomatic assumption. In Euclidean geometry one deals with one-dimensional straight lines and flat two-dimensional planes. A straight line is axiomatically created at the intersection of two flat planes, whereas the intersection of two straight lines defines a zero-dimensional point. By another axiom it is assumed that parallel straight lines or planes do not intersect. Certain straight lines or zero-dimensional points remain undefined by this reasoning, unless one of the axioms is abandoned. This dilemma was appreciated by Euclid, but no obvious resolution could be reached at the time. With the advent of non-Euclidean geometries came the realization that, by giving up the axiom of parallel lines and planes a more general geometry, without exceptions could be formulated. The chosen remedy is to assume that two lines, which appear to be parallel, when extended indefinitely, intersect at infinity. 
As the two lines can be extended in opposite directions they must obviously intersect twice, at plus and minus infinity, which again contradicts the axiom that two straight lines intersect in a single point. The only way to avoid this conclusion is by assuming the two points of intersection to coincide and define a single point at infinity. This basic assumption of projective geometry can clearly not be satisfied in a Euclidean plane, even when extended indefinitely. The two points can only be brought into coincidence in a curved, so-called non-Euclidean plane. Since the argument is valid for parallel lines in all directions, the implied curved surface, which defines projective topology, cannot be mapped in three dimensions without intersecting itself.

For better visualization of the situation consider the equivalent of straight lines between two points in the surface of a sphere, which is defined as a segment of the great circle that connects the points. Now there is another anomaly—two great circles intersect twice, at antipodal points. To avoid this problem the two points are identified as one to produce a geometrical construct with the same projective topology as before, known as a projective plane. The projective plane is hard to visualize because it cannot be defined in three dimensions without intersecting itself. An artist's impression of a model, known as Boy's surface, as a visualization of the projective plane is shown schematically in Figure A1a. A more familiar visualization is provided by a Möbius strip, which is a segment sliced from the projective surface. Figure A1b shows two Möbius bands in the surface of a sphere, intersecting at the north pole. Adding more of the same, all intersecting in the same way, the entire spherical surface will eventually be covered on both sides by a single continuous surface that defines the projective plane. Like a single Möbius band, the projective surface is created by a twist, known as an involution.

Although the projective plane cannot be embedded in three dimensions it is the most likely topology for a closed four-dimensional universe. Human beings are evolutionarily conditioned to interpret the world in terms of three-dimensional Euclidean space, which may be considered as tangent to the underlying four-dimensional curved space-time. Those readers who are interested in the origins of projective geometry are referred to Stillwell's [44] elementary discussion of Kepler's seminal views on conic sections as projections of the circle. It is instructive to note that analysis of the group structure of the hypersphere [45] defines not only projective space, but also quaternion multiplication and spherical rotation, referred to as the "plate trick".

Figure A1. (a) Drawing of Boy's surface [43]; (b) intersecting Möbius bands.

References

1. Gavroglu, K.; Simões, A. Neither Physics nor Chemistry; MIT Press: Cambridge, MA, USA, 2012.
2. Amann, A. Can quantum mechanics account for chemical structures? In Fundamental Principles of Molecular Modeling; Plenum: New York, NY, USA, 1996.
3. Comba, P.; Kerscher, M. Structure correlation in bispidine coordination complexes. Cryst. Eng. 2003, 6, 197-211. doi:10.1016/j.cryseng.2004.04.002.
4. Levine, I.N. Quantum Chemistry, 4th ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 1991.
5. Schrödinger, E. Collected Papers on Wave Mechanics, 2nd ed.; Shearer, J.F.; Deans, W.M.; Chelsea: New York, NY, USA, 1978.
6. Weingard, R. Making everything out of nothing. In The Philosophy of Vacuum; Clarendon Press: Oxford, UK, 1991.
7. Morse, P.M.; Feshbach, H. Methods of Theoretical Physics; McGraw-Hill: New York, NY, USA, 1956.
8. Boeyens, J.C.A. Chemistry in four dimensions. Struct. Bond. 2012, in press.
9. Boeyens, J.C.A.; Levendis, D.C. Number Theory and the Periodicity of Matter; Springer: Berlin/Heidelberg, Germany, 2008.
10. Boeyens, J.C.A. A molecular-structure hypothesis. Int. J. Mol. Sci. 2010, 11, 4267-4284. doi:10.3390/ijms11114267.
11. Boeyens, J.C.A. Calculation of atomic structure. Struct. Bond. 2012, in press.
12. Condon, E.U.; Odabaşi, H. Atomic Structure; University Press: Cambridge, UK, 1980.
13. Mann, J.B. Atomic Structure Calculations II; Los Alamos Scientific Report LA-3691; 1968.
14. Hirshfelder, J.O.; Curtis, C.F.; Bird, R.B. Molecular Theory of Gases and Liquids; Wiley: New York, NY, USA, 1954.
15. Boeyens, J.C.A. Ionization radii of compressed atoms. J. Chem. Soc. Faraday Trans. 1994, 90, 3377-3381. doi:10.1039/ft9949003377.
16. Boeyens, J.C.A. The periodic electronegativity table. Z. Naturforsch. 2008, 63b, 199-209.
17. Boeyens, J.C.A. Covalent interaction. Struct. Bond. 2012, in press.
18. Putz, M.V. Systematic formulation of electronegativity and hardness and their atomic scales within density functional softness theory. Int. J. Quantum Chem. 2006, 106, 361-389. doi:10.1002/qua.20787.
19. Putz, M.V. Semiclassical electronegativity and chemical hardness. J. Theor. Comput. Chem. 2007, 6, 33-47. doi:10.1142/S0219633607002861.
20. Putz, M.V. Chemical action and chemical bonding. J. Mol. Struct. (Theochem) 2009, 900, 64-70. doi:10.1016/j.theochem.2008.12.026.
21. Putz, M.V. Electronegativity: Quantum observable. Int. J. Quantum Chem. 2009, 109, 733-738. doi:10.1002/qua.21957.
22. Putz, M.V. The bondons: The quantum particles of the chemical bond. Int. J. Mol. Sci. 2010, 11, 4227-4256. doi:10.3390/ijms11114227.
23. Putz, M.V. Personal Communication; IEEE Communications Society: New York, NY, USA, 2012.
24. Plato. Timaeus and Critias; Wilder Publications: Radford, VA, USA, 2010.
25. Schwaller de Lubicz, R.A. Le Temple de l'Homme (critical summary in English); Quest Books: Wheaton, IL, USA, 1979.
26. Boeyens, J.C.A. Chemical Cosmology; Springer: Berlin/Heidelberg, Germany, 2010.
27. Maddox, J. The temptations of numerology. Nature 1983, 304, 11.
28. Lemesurier, P. The Great Pyramid; Element Books: Shaftesbury, Dorset, UK, 1987.
29. Stevens, P. Patterns in Nature; Little, Brown and Co.: Boston, MA, USA, 1976.
30. Goldstone, J. Field theories with "superconducting" solutions. Il Nuovo Cimento 1961, 19, 154-164. doi:10.1007/BF02812722.
31. Cotton, F.A. Chemical Applications of Group Theory, 2nd ed.; Wiley-Interscience: New York, NY, USA, 1971.
32. Kaku, M.; Thompson, J. Beyond Einstein; University Press: Oxford, UK, 1999.
33. Grushow, A. Is it time to retire the hybrid atomic orbital? J. Chem. Ed. 2011, 88, 860-862. doi:10.1021/ed100155c.
34. Pritchard, H.O. We need to update the teaching of valence theory. J. Chem. Ed. 2012, 89, 301-303. doi:10.1021/ed2004752.
35. Goldstein, S. Quantum philosophy: The flight from reason in science. Ann. N. Y. Acad. Sci. 1996, 775, 119-125.
36. Claverie, P. Classical molecular structure and the puzzle of "classical limit" in quantum theory. Stud. Phys. Theor. Chem. 1983, 23, 13-22.
37. Wheeler, J.A. From relativity to mutability. In The Physicist's Conception of Nature; Reidel: Boston, MA, USA, 1973.
38. Haken, H.; Wolf, H.C. The Physics of Atoms and Quanta, 4th ed.; Springer-Verlag: Berlin/Heidelberg, Germany, 1994.
39. Rehder, D. Chemistry in Space; Wiley-VCH: Weinheim, Germany, 2010.
40. Creighton, T.E. The problem of how and why proteins adopt folded conformations. J. Phys. Chem. 1985, 89, 2452-2459. doi:10.1021/j100258a006.
41. Creighton, T.E. Pathways and mechanisms of protein folding. Adv. Biophys. 1984, 18, 1-20. doi:10.1016/0065-227X(84)90004-2.
42. Chaplin, M. Do we underestimate the importance of water in cell biology? Nat. Rev. Mol. Cell Biol. 2006, 7, 861-866. doi:10.1038/nrm2021.
43. Francis, G.K. A Topological Picturebook; Springer-Verlag: New York, NY, USA, 1987.
44. Stillwell, J. Numbers and Geometry; Springer-Verlag: New York, NY, USA, 1998.
45. Stillwell, J. Geometry of Surfaces; Springer-Verlag: New York, NY, USA, 1992.

Figures (captions): Interactions of different order. Spherical and geared rotation of spheres. Golden-spiral optimization of bond order.
quantum electrodynamics quantum electrodynamics Quantum theory of the interactions of charged particles with the electromagnetic field. It describes the interactions of light with matter as well as those of charged particles with each other. Its foundations were laid by P. A. M. Dirac when he discovered an equation describing the motion and spin of electrons that incorporated both quantum mechanics and the theory of special relativity. The theory, as refined and developed in the late 1940s, rests on the idea that charged particles interact by emitting and absorbing photons. It has become a model for other quantum field theories. Learn more about quantum electrodynamics (QED) with a free trial on Quantum electrodynamics (QED) is a relativistic quantum field theory of electrodynamics. QED was developed by a number of physicists, beginning in the late 1920s. It basically describes how light and matter interact. More specifically it deals with the interactions between electrons, positrons and photons. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons. It has been called "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron, and the Lamb shift of the energy levels of hydrogen. In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. The word 'quantum' is Latin, meaning "how much" (neut. sing. of quantus "how great"). The word 'electrodynamics' was coined by André-Marie Ampère in 1822. The word 'quantum', as used in physics, i.e. with reference to the notion of count, was first used by Max Planck, in 1900 and reinforced by Einstein in 1905 with his use of the term light quanta. Quantum theory began in 1900, when Max Planck assumed that energy is quantized in order to derive a formula predicting the observed frequency dependence of the energy emitted by a black body. This dependence is completely at variance with classical physics. In 1905, Einstein explained the photoelectric effect by postulating that light energy comes in quanta later called photons. In 1913, Bohr invoked quantization in his proposed explanation of the spectral lines of the hydrogen atom. In 1924, Louis de Broglie proposed a quantum theory of the wave-like nature of subatomic particles. The phrase "quantum physics" was first employed in Johnston's Planck's Universe in Light of Modern Physics. These theories, while they fit the experimental facts to some extent, were strictly phenomenological: they provided no rigorous justification for the quantization they employed. Modern quantum mechanics was born in 1925 with Werner Heisenberg's matrix mechanics and Erwin Schrödinger's wave mechanics and the Schrödinger equation, which was a non-relativistic generalization of de Broglie's(1925) relativistic approach. Schrödinger subsequently showed that these two approaches were equivalent. In 1927, Heisenberg formulated his uncertainty principle, and the Copenhagen interpretation of quantum mechanics began to take shape. Around this time, Paul Dirac, in work culminating in his 1930 monograph finally joined quantum mechanics and special relativity, pioneered the use of operator theory, and devised the bra-ket notation widely used since. In 1932, John von Neumann formulated the rigorous mathematical basis for quantum mechanics as the theory of linear operators on Hilbert spaces. 
This and other work from the founding period remains valid and widely used. Quantum chemistry began with Walter Heitler and Fritz London's 1927 quantum account of the covalent bond of the hydrogen molecule. Linus Pauling and others contributed to the subsequent development of quantum chemistry. The application of quantum mechanics to fields rather than single particles, resulting in what are known as quantum field theories, began in 1927. Early contributors included Dirac, Wolfgang Pauli, Weisskopf, and Jordan. This line of research culminated in the 1940s in the quantum electrodynamics (QED) of Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga, for which Feynman, Schwinger and Tomonaga received the 1965 Nobel Prize in Physics. QED, a quantum theory of electrons, positrons, and the electromagnetic field, was the first satisfactory quantum description of a physical field and of the creation and annihilation of quantum particles. QED involves a covariant and gauge invariant prescription for the calculation of observable quantities. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. The renormalization procedure for eliminating the awkward infinite predictions of quantum field theory was first implemented in QED. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus". (Feynman, 1985: 128) QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1975 work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on the pioneering work of Schwinger, Peter Higgs, Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. Physical interpretation of QED In classical optics, light travels over all allowed paths and their interference results in Fermat's principle. Similarly, in QED, light (or any other particle like an electron or a proton) passes over every possible path allowed by apertures or lenses. The observer (at a particular location) simply detects the mathematical result of all wave functions added up, as a sum of all line integrals. For other interpretations, paths are viewed as non physical, mathematical constructs that are equivalent to other, possibly infinite, sets of mathematical expansions. According to QED, light can go slower or faster than c, but will travel at velocity c on average. Physically, QED describes charged particles (and their antiparticles) interacting with each other by the exchange of photons. The magnitude of these interactions can be computed using perturbation theory; these rather complex formulas have a remarkable pictorial representation as Feynman diagrams. QED was the theory to which Feynman diagrams were first applied. These diagrams were invented on the basis of Lagrangian mechanics. Using a Feynman diagram, one decides every possible path between the start and end points. 
Each path is assigned a complex-valued probability amplitude, and the actual amplitude we observe is the sum of all amplitudes over all possible paths. The paths with stationary phase contribute most (due to lack of destructive interference with some neighboring counter-phase paths) — this results in the stationary classical path between the two points. QED doesn't predict what will happen in an experiment, but it can predict the probability of what will happen in an experiment, which is how it is experimentally verified. Predictions of QED agree with experiments to an extremely high degree of accuracy: currently about 10^{-12} (and limited by experimental errors); for details see precision tests of QED. This makes QED one of the most accurate physical theories constructed thus far.

Near the end of his life, Richard P. Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter, a classic non-mathematical exposition of QED from the point of view articulated above.

Mathematically, QED is an abelian gauge theory with the symmetry group U(1). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field. The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field is given by the real part of

$$\mathcal{L} = \bar\psi\left(i\gamma^\mu D_\mu - m\right)\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}\,,$$

where
$\gamma^\mu$ are Dirac matrices;
$\psi$ is a bispinor field of spin-1/2 particles (e.g., the electron–positron field);
$\bar\psi \equiv \psi^\dagger\gamma_0$, called "psi-bar", is sometimes referred to as the Dirac adjoint;
$D_\mu = \partial_\mu + ieA_\mu$ is the gauge covariant derivative;
$e$ is the coupling constant, equal to the electric charge of the bispinor field;
$A_\mu$ is the covariant four-potential of the electromagnetic field;
$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field tensor.

Euler–Lagrange equations

To begin, substituting the definition of $D_\mu$ into the Lagrangian gives us

$$\mathcal{L} = i\bar\psi\gamma^\mu\partial_\mu\psi - e\bar\psi\gamma_\mu A^\mu\psi - m\bar\psi\psi - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}\,.$$

Next, we can substitute this Lagrangian into the Euler–Lagrange equation of motion for a field,

$$\partial_\mu\left(\frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)}\right) - \frac{\partial\mathcal{L}}{\partial\psi} = 0\,,\qquad(2)$$

to find the field equations for QED. The two terms from this Lagrangian are then

$$\partial_\mu\left(\frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)}\right) = \partial_\mu\left(i\bar\psi\gamma^\mu\right),\qquad
\frac{\partial\mathcal{L}}{\partial\psi} = -e\bar\psi\gamma_\mu A^\mu - m\bar\psi\,.$$

Substituting these two back into the Euler–Lagrange equation (2) results in

$$i\,\partial_\mu\bar\psi\,\gamma^\mu + e\bar\psi\gamma_\mu A^\mu + m\bar\psi = 0\,,$$

with complex conjugate

$$i\gamma^\mu\partial_\mu\psi - e\gamma_\mu A^\mu\psi - m\psi = 0\,.$$

Bringing the middle term to the right-hand side transforms this second equation into

$$i\gamma^\mu\partial_\mu\psi - m\psi = e\gamma_\mu A^\mu\psi\,.$$

The left-hand side is like the original Dirac equation and the right-hand side is the interaction with the electromagnetic field. One further important equation can be found by substituting the Lagrangian into another Euler–Lagrange equation, this time for the field $A^\mu$:

$$\partial_\nu\left(\frac{\partial\mathcal{L}}{\partial(\partial_\nu A_\mu)}\right) - \frac{\partial\mathcal{L}}{\partial A_\mu} = 0\,.\qquad(3)$$

The two terms this time are

$$\partial_\nu\left(\frac{\partial\mathcal{L}}{\partial(\partial_\nu A_\mu)}\right) = \partial_\nu\left(\partial^\mu A^\nu - \partial^\nu A^\mu\right),\qquad
\frac{\partial\mathcal{L}}{\partial A_\mu} = -e\bar\psi\gamma^\mu\psi\,,$$

and these two terms, when substituted back into (3), give us

$$\partial_\nu F^{\nu\mu} = e\bar\psi\gamma^\mu\psi\,.$$

In pictures

The part of the Lagrangian containing the electromagnetic field tensor describes the free evolution of the electromagnetic field, whereas the Dirac-like equation with the gauge covariant derivative describes the free evolution of the electron and positron fields as well as their interaction with the electromagnetic field.

See also

Further reading

• Feynman, Richard Phillips (1998). Quantum Electrodynamics. Westview Press; New Ed edition.
• Cohen-Tannoudji, Claude; Dupont-Roc, Jacques; Grynberg, Gilbert (1997). Photons and Atoms: Introduction to Quantum Electrodynamics. Wiley-Interscience.
• De Broglie, Louis (1925). Recherches sur la théorie des quanta [Research on quantum theory]. France.
• Jauch, J.M.; Rohrlich, F. (1980). The Theory of Photons and Electrons. Springer-Verlag.
• Miller, Arthur I. (1995). Early Quantum Electrodynamics: A Sourcebook. Cambridge University Press.
• Schweber, Silvan S. (1994). QED and the Men Who Made It. Princeton University Press.
• Schwinger, Julian (1958). Selected Papers on Quantum Electrodynamics. Dover Publications.
• Greiner, Walter; Bromley, D.A.; Müller, Berndt (2000). Gauge Theory of Weak Interactions. Springer.
• Kane, Gordon L. (1993). Modern Elementary Particle Physics. Westview Press.
• Dudley, J.M.; Kwan, A.M. "Richard Feynman's popular lectures on quantum electrodynamics: The 1979 Robb Lectures at Auckland University," American Journal of Physics 64 (June 1996) 694-698.
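As a small numerical aside (my addition, not part of the article above): the $\gamma^\mu$ that appear in the Lagrangian are only defined up to a choice of representation, and everything in the derivation rests on the anticommutation relation $\{\gamma^\mu,\gamma^\nu\} = 2\eta^{\mu\nu}\mathbf{1}$. The sketch below builds the standard Dirac representation from Pauli matrices and checks that relation with metric signature (+, −, −, −).

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Standard (Dirac) representation of the gamma matrices
gamma0 = np.block([[I2, Z2], [Z2, -I2]])
def gamma_k(sk):
    return np.block([[Z2, sk], [-sk, Z2]])

gammas = [gamma0, gamma_k(sx), gamma_k(sy), gamma_k(sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric, signature (+,-,-,-)

# Check the Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} * identity
for mu in range(4):
    for nu in range(4):
        anticomm = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anticomm, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra verified for the Dirac representation.")
```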
Section 12.2: The Quantum-mechanical Harmonic Oscillator

The solution of the one-dimensional quantum harmonic oscillator problem begins with the time-independent Schrödinger equation

[−(ħ²/2m)(d²/dx²) + (1/2)mω²x²] ψ(x) = Eψ(x) ,   (12.4)

where k/2 = mω²/2. We can put this equation into a more standard form as

[d²/dx² − m²ω²x²/ħ² + 2mE/ħ²] ψ(x) = 0 .   (12.5)

We can further simplify the differential equation by defining the following: β² = mω/ħ and ξ² = β²x², which when substituted into the differential equation gives

[d²/dξ² − ξ² + 2E/(ħω)] ψ(ξ) = 0 .   (12.6)

Even though this differential equation looks nothing like our original differential equation, the three terms are just the kinetic, potential, and total energies, respectively. Consider the two special limiting cases of Eq. (12.6):

Case I: −ξ² + 2E/(ħω) ≈ −ξ². This situation results when the potential energy at a given position in the well is much greater than the total energy, V >> E. This is forbidden classically. We explicitly solve the differential equation [d²/dξ² − ξ²] ψ(ξ) = 0, which for large ξ yields the solutions ψ(ξ) = exp(±ξ²/2).

Case II: −ξ² + 2E/(ħω) ≈ 2E/(ħω). This corresponds to the case where the total energy is much greater than the potential energy, E >> V, which occurs near x = 0. We explicitly solve the differential equation [d²/dξ² + 2E/(ħω)] ψ(ξ) = 0, which yields the real solutions ψ(ξ) = Acos(Kξ) + Bsin(Kξ), where K² = 2E/(ħω).

What does the total solution look like? In the region where E >> V we have an oscillating solution. However, in the classically forbidden region V >> E, we have some leakage of the energy eigenfunction into the classically forbidden region. The well-behaved solution in this region is exp(−ξ²/2), as it goes to zero for large ξ → ±∞. Now that we have an idea of what the bound states should look like, we can find the entire solution to Eq. (12.6). This solution can be written as Hermite polynomials weighted by a factor of exp(−ξ²/2) = exp(−β²x²/2). These solutions can be written in terms of β and x as

ψₙ(x) = Aₙ Hₙ(βx) exp(−β²x²/2) ,   (12.7)

where n = 0, 1, 2, 3, 4, …. The term Hₙ(βx) refers to the Hermite polynomial of order n and Aₙ is the normalization factor, which is equal to

Aₙ = ((mω)^(1/2)/(2ⁿ n! (ħπ)^(1/2)))^(1/2) .   (12.8)

In terms of the argument ξ, the first four Hermite polynomials are

H₀(ξ) = 1
H₁(ξ) = 2ξ
H₂(ξ) = 4ξ² − 2
H₃(ξ) = 8ξ³ − 12ξ .

In the animations, the energy eigenfunctions for a quantum harmonic oscillator are shown. The animation uses ħ = 2m = 1 and ω = 2. Note that in the well, the energy eigenfunction's amplitude and curviness change with position. These changes are due to the fact that the potential energy function itself changes with position. The energy eigenfunction is curvier and has a smaller amplitude nearer the center of the well as compared to the positions closer to the classical turning point.

By brute force integration we can use these energy eigenfunctions to calculate the expectation values.² We find that ⟨x⟩ₙ = 0 and ⟨p⟩ₙ = 0, as expected, and

⟨x²⟩ₙ = (ħ/2mω)(2n + 1) ,   (12.9)
⟨p²⟩ₙ = (mħω/2)(2n + 1) ,   (12.10)

and therefore

⟨E⟩ₙ = ⟨p²⟩ₙ/2m + mω²⟨x²⟩ₙ/2 = (n + 1/2)ħω .   (12.11)

This uniformity in the spacing of the harmonic oscillator energies is shown in the "spectrum" animation. To see the other bound states simply click-drag in the energy level diagram on the left to select a level. The selected level will turn red. Since we have chosen ω = 2 and ħ = 2m = 1, the energy spectrum is just Eₙ = (2n + 1).
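The evenly spaced spectrum quoted above is easy to check numerically. The following sketch (an addition of mine, not part of the original page) discretizes the Hamiltonian on a grid using the same conventions as the animation, ħ = 2m = 1 and ω = 2, for which H reduces to −d²/dx² + x² and the exact eigenvalues are Eₙ = 2n + 1.

```python
import numpy as np

# Units as in the animation: hbar = 2m = 1, omega = 2  =>  H = -d^2/dx^2 + x^2
N, L = 1200, 8.0                          # grid points, half-width of the box
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# Second-derivative operator by central finite differences (hard walls at +-L)
lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2
H = -lap + np.diag(x**2)

energies = np.linalg.eigvalsh(H)[:5]
print(np.round(energies, 3))              # ~ [1. 3. 5. 7. 9.], i.e. E_n = 2n + 1
```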
²There is an easier way. This method uses operators, called raising and lowering operators, to write the harmonic oscillator Hamiltonian. This method was pioneered by Dirac and is the easiest way to calculate expectation values.
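To make the footnote concrete, here is a short sketch (again my own addition) that represents the lowering operator a as a truncated matrix, builds H = ħω(a†a + 1/2) and x = (ħ/2mω)^(1/2)(a + a†), and recovers both the spectrum of Eq. (12.11) and the ⟨x²⟩ₙ values of Eq. (12.9) without any integration.

```python
import numpy as np

hbar, m, omega = 1.0, 0.5, 2.0            # same units as the animation: hbar = 2m = 1, omega = 2
N = 12                                    # truncate the infinite matrices at 12 basis states

n = np.arange(1, N)
a = np.diag(np.sqrt(n), 1)                # lowering operator:  a|n> = sqrt(n)|n-1>
adag = a.T                                # raising operator
H = hbar * omega * (adag @ a + 0.5 * np.eye(N))

x = np.sqrt(hbar / (2 * m * omega)) * (a + adag)
x2_diag = np.diag(x @ x)                  # <x^2>_n sits on the diagonal in the number basis

for k in range(4):
    expected = hbar / (2 * m * omega) * (2 * k + 1)     # Eq. (12.9)
    print(k, H[k, k], x2_diag[k], expected)
# Energies come out as (n + 1/2) hbar omega = 2n + 1 in these units,
# and <x^2>_n matches (hbar / 2 m omega)(2n + 1).
```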
Section 7.9: Exploring Eigenvalue Equations

The time-independent Schrödinger equation is an energy-eigenvalue equation. What does this mean? Symbolically (this symbolic notation is called Dirac notation) the act of an operator acting on a state can be expressed by

A |α> = β |β> ,   (7.17)

where A is an operator, β is typically a number, and |α> and |β> are two different state vectors. In general, therefore, the action of an operator on a state produces a different state. For special states, however, the final state is the original state and we have

A |a> = a |a> ,   (7.18)

where |a> is an eigenvector or eigenstate of the operator A with eigenvalue a. This equation is called an eigenvalue equation. To find energy eigenfunctions of the time-independent Schrödinger equation, we solve Hψ = Eψ and apply the boundary conditions.

We can demonstrate the idea of operators, eigenvalues, and eigenvectors by using a 2 × 2 matrix to represent the operator,

A = | A₁₁  A₁₂ |
    | A₂₁  A₂₂ | ,   (7.19)

and a two-element column vector,

|a> = | a₁ |
      | a₂ | ,   (7.20)

to represent the state vector. An eigenstate of the operator A has the property that the application of A to the eigenvector results in a new vector that has the same direction as the original eigenvector and a magnitude that is a constant times the eigenvector's original magnitude. In other words: eigenvectors of A are stretched, not rotated, when the operator A is applied to them.

Select a matrix to begin. Drag the red vector around in the animation. The result of the matrix (operator) acting on the column vector (the state vector) is shown as the green vector. To make things easier, you may set the red vector's length equal to 1. What are the two eigenvectors and eigenvalues of each operator?
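For readers without the animation, the same exploration can be done numerically. The sketch below is my addition; the matrix is an arbitrary example, not necessarily one of the matrices offered in the animation. It asks numpy for the eigenpairs of a 2 × 2 operator and confirms that the operator merely stretches its eigenvectors.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                # an arbitrary symmetric 2x2 operator (example only)

eigenvalues, eigenvectors = np.linalg.eig(A)
for a_val, vec in zip(eigenvalues, eigenvectors.T):
    stretched = A @ vec                   # applying the operator to an eigenvector...
    print(a_val, np.allclose(stretched, a_val * vec))   # ...only rescales it: A|a> = a|a>

# For this matrix the eigenvalues are 3 and 1, with eigenvectors along (1, 1) and (1, -1).
```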
Sign up Here's how it works: 1. Anybody can ask a question 2. Anybody can answer Why do we consider evolution of a wave function and why is the evolution parameter taken as time, in QM. If we look at a simple wave function $\psi(x,t) = e^{kx - \omega t}$, $x$ is a point in configuration space and $t$ is the evolution parameter, they both look the same in the equation, then why consider one as an evolution parameter and other as configuration of the system. My question is why should we even consider the evolution of the wave function in some parameter (it is usually time)?. Why can't we just deal with $\psi(\boldsymbol{x})$, where $\boldsymbol{x}$ is the configuration of the system and that $|\psi(\boldsymbol{x})|^2$ gives the probability of finding the system in the configuration $\boldsymbol{x}$? (Added) (I had drafted but missed while copy pasting) One may say, "How to deal with systems that vary with time?", and the answer could be, "consider time also as a part of the configuration space". I wonder why this could not be possible. Clarification (after answer by Alfred Centauri) My question is why consider the evolution at all (What ever the case may be and what ever the parameter may be, time or proper time or whatever). My motivation here is to study the nature of the theory of quantum mechanics as a statistical model. I am looking at it from that angle. share|cite|improve this question Related: – Emilio Pisanty Jul 19 '12 at 12:47 So do I understand you to be asking for a block world formulation of quantum theory? For which you could use the Wightman axioms (albeit they're not close to the successes of Lagrangian QFT). They introduce a single Hilbert space that supports a representation of the Poincaré group, and time is not privileged over space (except for the 1+3 signature). Lagrangian QFT somewhat obscures a block world perspective, insofar as it focuses on a Hilbert space at a single time, corresponding to phase space observables, however a block world perspective of Lagrangian QFT is possible. – Peter Morgan Jul 19 '12 at 20:25 @RajeshD: The Heisenberg formulation takes your point of view, the wavefunction is time independent, but the observables depend on time. This just means that the interaction with the particle at different times is by different operators. – Ron Maimon Jul 20 '12 at 5:24 up vote 2 down vote accepted I think the main reason is practical, but it might be related to a theoretical reason. The main reason is that we almost never use the time-dependent Schroedinger equation because if the state wasn't stationary, its rate of change would be, at the usual atomic scales, so fast that we couldn't measure it or study it empirically with laboratory-sized apparatus. Similarly, what governs the observable properties of macroscopic bodies, such as their chemical bonds and colours, involves stationary states. If the states weren't stationary, the body would not persist long enough for us to consider it as having a property. It is striking how little direct empirical support the time-dependent Schroedinger equation has, and how little use it finds. We don't even use it to study scattering events (which, admittedly, for a very brief time occur very rapidly). This might be related to a deeper theoretical reason one finds in statistical mechanics. 
In statistical mechanics, it is often pointed out that measurements made with laboratory-sized equipment necessarily involve a practically infinite time average such as $$\lim_{T\rightarrow\infty}\frac1T\int_0^T f(t)g(t)dt.$$ Well, in Quantum Mechanics, measurement has something similar about it, in that it always involves amplification of something microscopic up to the macroscopic scale so we can observe it (an observation made by many, including Feynman), and the main way to do this seems to be to let the microscopic event trigger the change from a meta-stable state to a stable equilibrium state of the laboratory-sized apparatus (H.S. Green Observation in Quantum Mechanics, Nuovo Cimento vol. 9, pp. 880--889, posted at , and many others since). Once again, this involves a long-time, stable, equilibrium as in Statistical Mechanics. But the relation to the practical reason is not completely clear. That said, in theory it is sometimes possible to rephrase the time-dependent Schroedinger evolution equation into a space-evolution equation, even though no one ever does this since it has no earthly use. Consider the Klein--Gordon equation (which is the relativistic version of Schroedinger's equation), $$({\partial\over \partial x}^2-{\partial\over \partial t}^2 + V )\psi = 0.$$ Obviously, we can get isolate either $x$ or $t$, and under certain conditions take the square root of the operator to get $$ {\partial\over \partial x} \psi = \sqrt{ ({\partial\over \partial t}^2 - V)}\psi .$$ Under the usual physical assumptions of flat space--time and no field-theoretic effects, one could do this to isolate $t$ and get the time evolution because we assume that energy is always positive, so we can indeed take the square root (all the eigenvalues of the Hamiltonian are positive). This may not always be true when, as here, we try to isolate $x$ and get the space-evolution. Now, as to the question of why consider any evolution at all, why not just consider $\psi(x,y,z,t)$ in a relativistically timeless fashion, the main answer is that it wreaks havoc with the idea of measurement, observable. and the justification of the Born interpretation. Dirac tried to write a Quantum Mechanics textbook your way, but gave up even after the fifth chapter, where he remarks that the notion of observable in not relativistic, and for the rest of the book he proceeds non-relativistically (until he get to the Dirac Equation at the end). The second edition abandons the attempt to be relativistic, is more traditional and uses the time evolution point of view from the start. He remarked, famously, The main change has been brought about by the use of the word «state» in a three-dimensional non-relativistic sense. It would seem at first sight a pity to build up the theory largely on the basis of nonrelativistic concepts. The use of the non-relativistic meaning of «state», however, contributes so essentially to the possibilities of clear exposition as to lead one to suspect that the fundamental ideas of the present quantum mechanics are in need of serious alteration at just tbis point, and that an improved theory would agree more closely ' with the development here given than with a development which aims at preserving the relativistic meaning of «state» throughout. And in fact Relativistic Quantum Mechanics, as opposed to field theory, is, like many-particle Relativistic (classical) mechanics, not theoretically very well developed. 
There seem to be so many problems, people prefer to jump right to Quantum Field Theory in spite of the divergences and need for renormalisation and everything. Furthermore, relativistic QM is restricted to the low energy regime since with high energies, particle pair production is possible, yet the equations of QM hold the number of particles as fixed and do not allow for pair production.

Thanks for the nice answer. It was a joy reading it. You really got the spirit of the question. – Rajesh Dachiraju Jun 7 '13 at 22:39

(1) In the Heisenberg picture, the wavefunction does not evolve with time, the operators do. (2) For relativistic covariance, $t$ ought to be a coordinate with proper time $\tau$ as the evolution parameter. (3) In QFT, which is relativistically covariant, $t$ is a coordinate. If these don't begin to address your question, please re-edit your question to clarify.

I have edited with a clarification in view of your answer. – Rajesh Dachiraju Jul 19 '12 at 13:10

It's an empirical fact that time exists, and states evolve in time. Or is that really the case, or does it just seem so? Interesting question. Anyway, with Feynman path integrals there is no such problem.

Sorry I missed a crucial part of the question while copy pasting the draft. Now I have added it. I hope you excuse this. – Rajesh Dachiraju Jul 19 '12 at 12:39

You can, sort of. You can take $\psi(x)$ to satisfy the time-independent Schrödinger equation, for some eigenvalue $E_n$ of the Hamiltonian operator that appears in the time-dependent Schrödinger equation. However I would take that to make the time-independent formalism less fundamental. It's also possible for the time-dependent state to be in a superposition of different energy states, which doesn't play well with the time-independent formalism.

I think you have digressed a bit from what I had in mind. I do not suggest to consider the time-independent Schrödinger equation. I am not interested in that and that is not the only choice. My question is just why consider evolution of the wave function at all? – Rajesh Dachiraju Jul 19 '12 at 13:00
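A small illustration of point (1) in the answer above (my own sketch, with an arbitrarily chosen two-level Hamiltonian, not anything from the thread): evolving the state with a fixed operator (Schrödinger picture) and evolving the operator with a fixed state (Heisenberg picture) give identical expectation values.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[0.0, 1.0], [1.0, 0.0]])       # arbitrary two-level Hamiltonian (sigma_x)
A = np.array([[1.0, 0.0], [0.0, -1.0]])      # observable to track (sigma_z)
psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state |0>

for t in [0.0, 0.5, 1.0, 2.0]:
    U = expm(-1j * H * t / hbar)
    # Schroedinger picture: evolve the state, keep the operator fixed
    psi_t = U @ psi0
    schrodinger = np.vdot(psi_t, A @ psi_t).real
    # Heisenberg picture: keep the state fixed, evolve the operator
    A_t = U.conj().T @ A @ U
    heisenberg = np.vdot(psi0, A_t @ psi0).real
    print(t, schrodinger, heisenberg)        # the two columns agree (cos(2t) for this choice)
```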
Sign up Here's how it works: 1. Anybody can ask a question 2. Anybody can answer Could someone experienced in the field tell me what the minimal math knowledge one must obtain in order to grasp the introductory Quantum Mechanics book/course? I do have math knowledge but I must say, currently, kind of a poor one. I did a basic introductory course in Calculus, Linear algebra and Probability Theory. Perhaps you could suggest some books I have to go through before I can start with QM? share|cite|improve this question It's easier to learn something if you have a need for it, so you might use your interest in QM to inspire yourself to learn the math. – Mike Dunlavey Dec 15 '11 at 1:39 Related Math.SE question: – Qmechanic Apr 20 '14 at 7:16 There are many different mathematical levels at which one can learn quantum mechanics. You can learn quantum mechanics with nothing more than junior high school algebra; you just won't be learning it at the same level of mathematical depth and sophistication. – Ben Crowell Sep 24 '14 at 23:11 up vote 18 down vote accepted I depends on the book you've chosen to read. But usually some basics in Calculus, Linear Algebra, Differential equations and Probability theory is enough. For example, if you start with Griffiths' Introduction to Quantum Mechanics, the author kindly provides you with the review of Linear Algebra in the Appendix as well as with some basic tips on probability theory in the beginning of the first Chapter. In order to solve Schrödinger equation (which is (partial) differential equation) you, of course, need to know the basics of Differential equations. Also, some special functions (like Legendre polynomials, Spherical Harmonics, etc) will pop up in due course. But, again, in introductory book, such as Griffiths' book, these things are explained in detail, so there should be no problems for you if you're careful reader. This book is one of the best to start with. share|cite|improve this answer +1 for the book recommendation. This was the one I was taught with and it provided an excellent starting point. – qubyte Dec 15 '11 at 16:56 You don't need any probability: the probability used in QM is so basic that you pick it up just from common sense. You need linear algebra, but sometimes it is reviewed in the book itself or an appendix. QM seems to use functional analysis, i.e., infinite dimensional linear algebra, but the truth is that you will do just fine if you understand the basic finite dimensional linear algebra in the usual linear algebra course and then pretend it is all true for Hilbert Spaces, too. It would be nice if you had taken a course in ODE but the truth is, most ODE courses these days don't do the only topic you need in QM, which is the Frobenius theory for eq.s with a regular singular point, so most QM teachers re-do the special case of that theory needed for the hydrogen atom anyway, sadly but wisely assuming that their students never learned it. An ordinary Calculus II course covers ODE basics like separation of variables and stuff. Review it. I suggest using Dirac's book on QM! It uses very little maths, and a lot of physical insight. The earlier edition of David Park is more standard and easy enough and can be understood with one linear algebra course and Calc I, CalcII, and CalcIII. share|cite|improve this answer Dirac's book is readable with no prior knowledge, +1, and it is still the best, but it has no path integral, and the treatment of the Dirac equation (ironically) is too old fasioned. 
I would recommend learning matrix mechanics, which is reviewed quickly on Wikipedia. The prerequisite is Fourier transforms. Sakurai and Gottfried are good, as is Mandelstam/Yourgrau for path integrals. – Ron Maimon Dec 6 '11 at 22:37 There is a story about Dirac. When it was proved that parity was violated, someone asked him what he thought about that. He replied "I never said anything about it in my book." The things you mention that are left out of his book are things it is a good idea to omit. Path integrals are ballyhooed but are just a math trick and give no physical insight, in fact, they are misleading. Same for matrix mechanics. Those are precisely why I still recommend Dirac for beginners... I would not even be surprised if his treatment of QED in the second edition proved more durable than Feynman's..... – joseph f. johnson Dec 7 '11 at 0:38 Matrix mechanics is good because it gives you intuition for matrix elements, for example, you immediately understand that an operator with constant frequency is a raising/lowering operator. You also understand the semiclassical interpretation of off-diagonal matrix elements, they are just stunted Fourier transforms of classical motions. You also understand why the dipole matrix element gives the transition rate without quantizing the photon field, just semiclassically. These are all important intuitions, which have been lost because Schrodinger beat Heisenberg in mass appeal. – Ron Maimon Dec 7 '11 at 5:20 Your comment about path integrals is silly. The path integral gives a unification of Heisenberg and Schrodinger in one formalism, that is automatically relativistic. It gives analytic continuation to imaginary time, which gives results like CPT, relativistic regulators, stochastic renormalization, second order transitions, Fadeev Popov ghosts, supersymmetry, and thousands of other things that would be practically impossible without it. The particle path path integral is the source of the S-matrix formulation and string theory, of unitarity methods, and everything modern. – Ron Maimon Dec 7 '11 at 5:29 @RonMaimon I have had to teach stochastic processes and integrals to normal, untalented folks. IMHO, stohastic processes count as probability theory, one of the trickiest parts, and path integrals are no help for beginners here either. It is still better for the beginning student to not take a course in probability and let what they learn about the physics of QM be their introduction to stochastic processes...I mean, besides what they already learned about stochastic processes from playing Snakes and Ladders. This is part of my theme: learn the physics first, and mathematical tricks later – joseph f. johnson Dec 15 '11 at 17:52 Try these two lectures from Leonard: Also more at PS:I dont have any physics and math background except a few basics. so I cant comment on if these were too basic for you.. share|cite|improve this answer There is a nice book with an extremely long title: Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles. It does the basics pretty well. Griffith's would be the next logical step. After that there is Shankar. share|cite|improve this answer Try Schaum's Outlines: Quantum Mechanics, ISBN 0-07-054018-7. You'll see the math there, but you'll need to do the deep background studies on all the math from Chapter 2. share|cite|improve this answer nice and cheap book – igael May 31 '15 at 10:26 protected by Qmechanic Oct 6 '15 at 17:34 Would you like to answer one of these unanswered questions instead?
Erwin Rudolf Josef Alexander Schrödinger (August 12, 1887 – January 4, 1961) was an Austrian-Irish physicist who achieved fame for his contributions to quantum mechanics, especially the Schrödinger equation, for which he received the Nobel Prize in 1933. In 1935, after extensive correspondence with personal friend Albert Einstein, he proposed Schrödinger's cat as an imaginary thought experiment to magnify the Copenhagen Interpretation of quantum experiments to the patent absurdity of a cat that is both full-of-life and stone-cold-dead at the same time, in a superposition of states.

Early years

In 1887, Schrödinger was born in Erdberg, Vienna, to Rudolf Schrödinger (cerecloth producer, botanist) and Georgine Emilia Brenda (daughter of Alexander Bauer, Professor of Chemistry, k.u.k. Technische Hochschule Vienna). His father was a Catholic and his mother was a Lutheran. In 1898, he attended the Akademisches Gymnasium. Between 1906 and 1910, Schrödinger studied in Vienna under Franz Serafin Exner (1849-1926) and Friedrich Hasenöhrl (1874-1915). He also conducted experimental work with Friedrich Kohlrausch. In 1911, Schrödinger became an assistant to Exner.

Middle years

In 1914, Erwin Schrödinger achieved Habilitation (venia legendi). Between 1914 and 1918, he participated in war work as a commissioned officer in the Austrian fortress artillery (Gorizia, Duino, Sistiana, Prosecco, Vienna). On April 6, 1920, Schrödinger married Annemarie Bertel. The same year, he became the assistant to Max Wien, in Jena, and in September 1920, he attained the position of ao. Prof. (Ausserordentlicher Professor), roughly equivalent to Reader (UK) or associate professor (U.S.), in Stuttgart. In 1921, he became O. Prof. (Ordentlicher Professor, that is, full professor), in Breslau (now Wrocław, Poland).
In 1918, he made up his mind to abandon physics for philosophy, but the city he had hoped to obtain a post in was ceded to Austria in the peace treaties ending World War I. Schrödinger, therefore, remained a physicist. In 1922, he attended the University of Zürich. In January 1926, Schrödinger published in the Annalen der Physik, the paper, "Quantisierung als Eigenwertproblem" (tr. Quantisation as an Eigenvalue Problem_ on wave mechanics and what is now known as the Schrödinger equation. In this paper, he gave a "derivation" of the wave equation for time independent systems, and showed that it gave the correct energy eigenvalues for the hydrogen-like atom. This paper has been universally celebrated as one of the most important achievements of the twentieth century, and created a revolution in quantum mechanics, and indeed of all physics and chemistry. A second paper was submitted just four weeks later that solved the quantum harmonic oscillator, the rigid rotor, and the diatomic molecule, and gives a new derivation of the Schrödinger equation. A third paper in May showed the equivalence of his approach to that of Heisenberg and gave the treatment of the Stark effect. A fourth paper in this most remarkable series showed how to treat problems in which the system changes with time, as in scattering problems. These papers were the central achievement of his career and were at once recognized as having great significance by the physics community. In 1927, he joined Max Planck at the Friedrich Wilhelm University in Berlin. In 1933, however, Schrödinger decided to leave Germany; he disliked the Nazis' antisemitism. He became a Fellow of Magdalen College at the University of Oxford. Soon after he arrived, he received the Nobel Prize together with Paul Adrien Maurice Dirac. His position at Oxford did not work out; his unconventional personal life (Schrödinger lived with two women) was not met with acceptance. In 1934, Schrödinger lectured at Princeton University; he was offered a permanent position there, but did not accept it. Again, his wish to set up house with his wife and his mistress may have posed a problem. He had the prospect of a position at the University of Edinburgh but visa delays occurred, and in the end he took up a position at the University of Graz in Austria in 1936. Later years In 1938, after Hitler occupied Austria, Schrödinger had problems because of his flight from Germany in 1933 and his known opposition to Nazism. He issued a statement recanting this opposition. (He later regretted doing so, and he personally apologized to Einstein). However, this did not fully appease the new dispensation and the university dismissed him from his job for political unreliability. He suffered harassment and received instructions not to leave the country, but he and his wife fled to Italy. From there he went to visiting positions in Oxford and Ghent Universities. In 1940, he received an invitation to help establish an Institute for Advanced Studies in Dublin, Ireland. He became the Director of the School for Theoretical Physics and remained there for 17 years, during which time he became a naturalized Irish citizen. He wrote about 50 further publications on various topics, including his explorations of unified field theory. In 1944, he wrote What is Life? which contains a discussion of Negentropy and the concept of a complex molecule with the genetic code for living organisms. According to James D. 
Watson's memoir, DNA, The Secret of Life, Schrödinger's book gave Watson the inspiration to research the gene, which led to the discovery of the DNA double helix structure. Similarly, Francis Crick, in his autobiographical book What Mad Pursuit, described how he was influenced by Schrödinger's speculations about how genetic information might be stored in molecules. Schrödinger stayed in Dublin until retiring in 1955. During this time, he remained committed to his particular passion. Scandalous involvements with students occurred and he fathered two children by two different Irish women. In 1956, he returned to Vienna (chair ad personam). At an important lecture during the World Energy Conference, he refused to speak on nuclear energy because of his skepticism about it and gave a philosophical lecture instead. During this period, Schrödinger turned from mainstream quantum mechanics' definition of wave-particle duality and promoted the wave idea alone, causing much controversy. Personal life Schrödinger decided, in 1933, that he could not live in a country in which persecution of Jews had become a national policy. Alexander Frederick Lindemann, the head of physics at Oxford University, visited Germany in the spring of 1933, to try to arrange positions in England for some young Jewish scientists from Germany. He spoke to Schrödinger about posts for one of his assistants and was surprised to discover that Schrödinger himself was interested in leaving Germany. Schrödinger asked for a colleague, Arthur March, to be offered a post as his assistant. The request for March stemmed from Schrödinger's unconventional relationships with women. His relations with his wife had never been good and he had had many lovers with his wife's knowledge. Anny had her own lover for many years, Schrödinger's friend Hermann Weyl. Schrödinger asked for March to be his assistant because, at that time, he was in love with March's wife, Hilde. Many of the scientists who had left Germany spent the summer of 1933 in Alto Adige/Südtirol. Here, Hilde became pregnant with Schrödinger's child. On November 4, 1933, Schrödinger, his wife, and Hilde March arrived in Oxford. Schrödinger had been elected a fellow of Magdalen College. Soon after they arrived in Oxford, Schrödinger heard that, for his work on wave mechanics, he had been awarded the Nobel prize. In the spring of 1934, Schrödinger was invited to lecture at Princeton University and while there he was made an offer of a permanent position. On his return to Oxford, he negotiated about salary and pension conditions at Princeton but in the end he did not accept. It is thought that the fact that he wished to live at Princeton with Anny and Hilde both sharing the upbringing of his child was not found acceptable. The fact that Schrödinger openly had two wives, even if one of them was married to another man, was not well received in Oxford either. Nevertheless, his daughter, Ruth Georgie Erica was born there on May 30, 1934.[1] On January 4, 1961, Schrödinger died in Vienna of tuberculosis at the age of 73. He left a widow, Anny (born Annamaria Bertel on Dec. 3, 1896, died Oct. 3, 1965), and was buried in Alpbach (Austria). 
Color science
• "Theorie der Pigmente von größter Leuchtkraft," Annalen der Physik, (4), 62, (1920), 603-622 (Theory of pigments of greatest luminosity)
• "Grundlinien einer Theorie der Farbenmetrik im Tagessehen," Annalen der Physik, (4), 63, (1920), 397-426; 427-456; 481-520 (Outline of a theory of color measurement for daylight vision)
• "Farbenmetrik," Zeitschrift für Physik, 1, (1920), 459-466 (Color measurement)

Legacy: The equation

In the quantum view, an atom is a quantum probability wave with the particles jittering about in it over time. Schrödinger's enduring contribution to the development of science was to describe the atom in terms of the wave and particle aspects of the electron established by the pioneers of quantum mechanics. His insight, while derived by considering the electron, applies equally to the quarks and all other particles discovered after his time. In particle physics, the electron, like all fundamental particles, is a unified entity of wave and particle (wave-particle duality). The wavefunction tells the particle what to do over time, while the interactions of the particle tell the wave how to develop and resonate. The wave aspect of the atomic electron is atom-size, while the particle aspect is point-like even at scales thousands of times smaller than the proton. The electron jitters about in the wave so rapidly that, over even fractions of a second, it behaves as a 'solid' cloud with the shape of the wave.

Applying the classical wave equation to the electron, Schrödinger derived an equation—which now bears his name—that gave the shape of the wave and thus the shapes of the atoms. The wavefunction, unlike classical waves that can be measured with real numbers, is measured with complex numbers involving the square root of minus one. In English, his equation states that the negative rate of change in the value of the wavefunction at a point at a distance from the tiny central nucleus equals the product of 1. the value of the wavefunction at that point, 2. the difference in total and potential energy at that point, and 3. the inertia of the particle—its rest mass-energy. This requirement determines the shape of the wavefunction, or orbital, and hence the shape of the atom. In math symbols, a simple expression of the Schrödinger equation is

$-\frac{\hbar^2}{2m}\nabla^2\psi = (E - V)\,\psi$,

where the Greek letter psi, ψ, is the complex value of the wavefunction at a distance from the nucleus, m is the mass, E is the total energy, V is the potential energy at that distance from the nucleus, and the h-bar squared—Planck's Constant divided by 2π—simply converts the mass and energy measured in human units—such as grams and ergs—into natural units. The electron density over time at any point equals the absolute square of the complex value of the wavefunction, |ψ|², which is always a real number. While this equation, remarkably enough, explains everything about the nature of atoms, it can be solved exactly only for the simplest, one-electron (hydrogen-like) atoms; perturbation theory and related methods are used to generate approximate—and quite accurate—solutions for helium and the more complex atoms. Max Born, one of the quantum founding fathers, stated his opinion, in 1926, that: "The Schrödinger Equation enjoys in modern physics the same place as in classical physics do the equations derived by Newton, Lagrange, and Hamilton."[2] Born's opinion of the legacy and fame was correct; the Schrödinger equation is the very foundation of all atomic physics and chemistry.
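The foundational role just described can be made concrete with a small numerical sketch (not part of the original article): discretizing the radial Schrödinger equation for the hydrogen atom on a grid and diagonalizing the resulting matrix reproduces the hydrogen energy levels E_n = -0.5/n² hartree (about -13.6/n² eV). The grid size, box radius and the use of atomic units are illustrative choices.

```python
import numpy as np

# Radial Schrödinger equation for hydrogen (l = 0), in atomic units:
#   -1/2 u''(r) - u(r)/r = E u(r),   u(0) = u(r_max) = 0
# Discretize u'' with a three-point central difference on a uniform grid
# and diagonalize the resulting tridiagonal Hamiltonian matrix.

n_points = 2000        # number of grid points (illustrative choice)
r_max = 60.0           # box radius in bohr (illustrative choice)
r = np.linspace(r_max / n_points, r_max, n_points)
h = r[1] - r[0]

diag = 1.0 / h**2 - 1.0 / r              # kinetic diagonal plus Coulomb potential
off = np.full(n_points - 1, -0.5 / h**2)  # kinetic off-diagonal elements
hamiltonian = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(hamiltonian)   # eigenvalues, sorted ascending
hartree_to_ev = 27.211

for n, e in enumerate(energies[:3], start=1):
    print(f"n={n}: {e:.4f} hartree = {e * hartree_to_ev:.2f} eV "
          f"(exact: {-0.5 / n**2:.4f} hartree)")
```

The lowest few eigenvalues come out close to -0.500, -0.125 and -0.056 hartree, the familiar Bohr series, with the small deviations set by the grid spacing.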
Books by Erwin Schrödinger
• Schrödinger, Erwin. 1996. Nature and the Greeks, and Science and Humanism. Cambridge: Cambridge University Press. ISBN 0521575508
• Schrödinger, Erwin. 1995. The Interpretation of Quantum Mechanics. Woodbridge, CT: Ox Bow Press. ISBN 1881987094
• Schrödinger, Erwin. 1989. Statistical Thermodynamics. New York: Dover Publications. ISBN 0486661016
• Schrödinger, Erwin. 1984. Collected Papers. Braunschweig, DE: Friedr. Vieweg & Sohn. ISBN 3700105738
• Schrödinger, Erwin. 1983. My View of the World. Woodbridge, CT: Ox Bow Press. ISBN 0918024307
• Schrödinger, Erwin. 1956. Expanding Universes. Cambridge: Cambridge University Press.
• Schrödinger, Erwin. 1950. Space-Time Structure. Cambridge: Cambridge University Press. ISBN 0521315204
• Schrödinger, Erwin. 1946. What is Life?. New York: Macmillan.
Notes
1. Walter John Moore, Schrödinger: Life and Thought (Cambridge University Press, 1992). ISBN 0521437679
2. L.I. Ponomarev, The Quantum Dice (Moscow: Mir Publishers, 1988). ISBN 5030002162
References
• Asimov, Isaac. 1964. Asimov's Biographical Encyclopedia of Science and Technology: The Living Stories of More than 1000 Great Scientists from the Age of Greece to the Space Age. Garden City, NY: Doubleday. ISBN 20144414
• Cropper, William H. 2001. Great Physicists: The Life and Times of Leading Physicists from Galileo to Hawking. Oxford: Oxford University Press. ISBN 0195137485
• Grosse, Harald, and André Martin. 2005. Particle Physics and the Schrödinger Equation. Cambridge: Cambridge University Press. ISBN 0521017785
• Moore, Walter John. 1992. Schrödinger: Life and Thought. Cambridge: Cambridge University Press. ISBN 0521437679
Atomic units

The atomic units have been chosen such that the fundamental electron properties are all equal to one atomic unit: m_e = 1, e = 1, ħ = h/2π = 1, a_0 = 1, and the potential energy in the hydrogen atom e²/a_0 = 1. (For dipole moments, 1 D = 3.33564·10^-30 C·m.) The use of atomic units also simplifies Schrödinger's equation. For example, the Hamiltonian for an electron in the hydrogen atom becomes H = -½∇² - 1/r.

Other fundamental constants: Boltzmann's constant k = 1.38066·10^-23 J/K; Avogadro's number N_A = 6.02205·10^23 mol^-1; Rydberg constant R_∞ = 1.097373·10^7 m^-1; Compton wavelength of the electron λ_C = 2.426309·10^-12 m; Stefan-Boltzmann constant σ = 5.67032·10^-8 W/(m^2 K^4).

Other frequently used energy units: 1 a.u. = 27.212 eV = 627.51 kcal/mol = 2.1947·10^5 cm^-1; 1 kcal/mol = 4.184 kJ/mol.

Basic concepts, techniques and notations of molecular quantum mechanics:
• structure of many-electron operators (e.g. the Hamiltonian)
• form of many-electron wave-functions (Slater determinants, and linear combinations of them)
• the Hartree-Fock (HF) approximation
• more sophisticated approaches which use the HF method as a starting point

The electronic problem

The non-relativistic time-independent Schrödinger equation is HΦ = EΦ, with (in atomic units, in a molecular coordinate system)

H = -Σ_i ½∇_i² - Σ_A ∇_A²/(2M_A) - Σ_{i,A} Z_A/r_{iA} + Σ_{i<j} 1/r_{ij} + Σ_{A<B} Z_A Z_B/R_{AB},   (1)

where M_A is the ratio of the mass of nucleus A to the mass of an electron and Z_A is the atomic number of nucleus A. The successive terms are T_e, the operator for the kinetic energy of the electrons; T_n, the operator for the kinetic energy of the nuclei; V_en, the operator for the Coulomb attraction between electrons and nuclei; V_ee, the operator for the repulsion between electrons; and V_nn, the operator for the repulsion between nuclei. Equation (1) represents the general problem, to be separated into two parts: the electronic and the nuclear problem.

Born-Oppenheimer approximation

The nuclei are much heavier than the electrons, so they move much more slowly; the nuclei can therefore be considered frozen in a single arrangement (molecular conformation), and the electrons can respond almost instantaneously to any change in the nuclear positions.
► The electrons in a molecule are moving in the field of fixed nuclei.
► The second term in (1) can be neglected.
► The fifth term in (1) is a constant.
The electronic Hamiltonian, H_elec = -Σ_i ½∇_i² - Σ_{i,A} Z_A/r_{iA} + Σ_{i<j} 1/r_{ij}   (2), describes the motion of N electrons in the field of M point charges. In the electronic Schrödinger equation, the electronic wave-function (4) describes the motion of the electrons: it depends explicitly on the electronic coordinates and parametrically on the nuclear coordinates. Parametric dependence means that the nuclear coordinates do not appear explicitly in Φ_elec.
A different wave-function is defined for each nuclear configuration:
H_elec|Φ_elec⟩ = E_elec|Φ_elec⟩   (3)
Φ_elec = Φ_elec({r_i};{R_A})   (4)
E_elec = E_elec({R_A})   (5)
The total energy for fixed nuclei is E_tot = E_elec + Σ_{A<B} Z_A Z_B/R_{AB}   (6).
Equations (2)-(6) constitute the electronic problem. If the electronic problem is solved, we can solve for the motion of the nuclei. Since the electrons move much faster than the nuclei, we can replace the electronic coordinates by their average values (averaged over the electronic wave-function); the nuclear Hamiltonian then describes the motion of the nuclei in the average field of the electrons.
• Nuclear Schrödinger equation: H_nucl|Φ_nucl⟩ = E|Φ_nucl⟩
• Φ_nucl describes the vibration, rotation and translation of a molecule.
• E is the total energy of the molecule (in the Born-Oppenheimer approximation); it includes the electronic, vibrational, rotational and translational energy.

Potential energy surface (PES). [Figure: schematic illustration of a potential energy surface.] The equilibrium conformation of the molecule corresponds to the minimum of the surface.

Total wave-function in the Born-Oppenheimer approximation: Φ({r_i};{R_A}) = Φ_elec({r_i};{R_A})·Φ_nucl({R_A}). The Born-Oppenheimer approximation is usually a good approximation, but a bad approximation for excited states and for degenerate or quasidegenerate states.

The Antisymmetry or Pauli Exclusion Principle

Electron spin: α(ω) and β(ω) are spin functions (complete and orthonormal); the electron is described by spatial (r) and spin (ω) coordinates, x = {r,ω}. A many-electron wave-function must be antisymmetric with respect to the interchange of the coordinates x (both space and spin) of any two electrons:
Φ(x_1, x_2, ..., x_i, ..., x_j, ..., x_N) = -Φ(x_1, x_2, ..., x_j, ..., x_i, ..., x_N)

Hartree approximation (Hartree, 1928)

Ψ^HP(x_1,...,x_N) = Φ_1(x_1)Φ_2(x_2)···Φ_N(x_N), where the Φ_i are spin orbitals. The form of Ψ^HP suggests the independence of the Φ_i: the probability density given by Ψ^HP is equal to the product of the monoelectronic probability densities. This is true only if each electron is completely independent of the other electrons; Ψ^HP is an independent-electron model. (Card analogy: P_A = 1/13, P_♥ = 1/4, P_A♥ = 1/52 = P_A·P_♥, so P_A is uncorrelated, i.e. independent, with P_♥.) Uncorrelated versus correlated probabilities: in an n-electron system the motions of the electrons are correlated due to the Coulomb repulsion (electron one will avoid regions of space occupied by electron two).

The electronic Hamiltonian can be rewritten as a sum of monoelectronic operators h_i plus the bielectronic repulsion V_ee, where v_i is the monoelectronic term of the external potential. In the Hartree product, h_i acts only on the wave-function corresponding to the i-th electron. However, V_ee depends on pairs of electrons, so we cannot separate the variables in the Schrödinger equation.

Hartree approximation: the electrons do not interact explicitly with each other, but each electron interacts with the average (medium) potential given by the other electrons. Using the variational method one obtains the energy of the system in terms of core monoelectronic integrals and Coulomb bielectronic integrals, which represent the classical repulsion energy between two charge densities described by Φ_i and Φ_j. The bielectronic potential 1/r_12 felt by electron 1, due to the instantaneous position of electron 2, is replaced by a monoelectronic potential V_i(1) obtained by averaging the interaction between the two electrons over the spatial and spin coordinates of electron 2.
Summing over j≠i one obtains the average potential acting on the electron in Φ_i due to the other N-1 electrons. The Coulomb operator represents the local average potential felt by electron 1 due to the electron described by Φ_j. Using the method of Lagrange multipliers one obtains the Hartree equations, whose eigenvalues are the energies of the molecular orbitals, and from them the total electronic energy. In order to find Φ_i we already need the Φ_i themselves, which leads to the SCF (self-consistent field) procedure.

SCF procedure in the framework of the Hartree approximation: from the electronic density corresponding to the i-th electron and the total electronic density, each electron interacts with an electronic density obtained by subtracting its own density from the total density. The V_ee potential can then be written in terms of g_i(r), the interaction energy of the point charge (the considered individual electron) with the other electrons represented as an electronic density, and the Hartree equations follow.

Determinantal wave-functions: Hartree-Fock approximation

Ψ^HP does not satisfy the Pauli principle and gives a non-zero probability for two electrons to be exactly at the same point in space (Fock, Slater, 1930). Ψ^SD is the antisymmetrized sum of Hartree products with all the possible distributions of the electrons in the molecular orbitals (with a shorthand notation). Using the variational method of Ritz, in addition to the terms of the Hartree approximation an exchange integral appears, with a corresponding exchange operator - a non-local operator, because its result depends on the value of Φ_i over all space and not only on the value of Φ_i at the location of electron 1. Minimizing the energy by varying the spin orbitals leads to the Hartree-Fock equations. Defining the Fock operator gives the molecular orbital energies, the total electronic energy and the total energy.

In the Hartree approximation, if we use the spatial orbitals Φ_1(x) = φ_1(r)α(ω) and Φ_2(x) = φ_1(r)β(ω), we are in the framework of the RHF approximation, with the corresponding molecular orbital energies. The Hartree-Fock equations are an alternative Schrödinger equation in which the exact Hamiltonian has been replaced by an approximate Fock operator: the Coulomb operator has been replaced by an operator which describes the interaction of each electron with the average field due to the other electrons.

RHF and UHF formalisms

Given a set of k orthonormal spatial orbitals (MOs) {φ_i}, i = 1,...,k, one obtains 2k spin-orbitals Φ_i, i = 1,...,2k. Restricted MOs give a restricted wave-function, for example the restricted wave-function for the Li atom. But the exchange integrals satisfy K_{1s(α)2s(α)} ≠ 0 and K_{1s(β)2s(α)} = 0, so the 1s(α) and 1s(β) electrons will experience different potentials, and it is more convenient to describe the two kinds of electrons by different wave-functions: the unrestricted wave-function for the Li atom. Usually the two sets of spatial orbitals use the same basis set. UHF wave-functions are not eigenfunctions of the S² operator, which leads to spin contamination: with |2⟩ an exact doublet state, |4⟩ an exact quartet state and |6⟩ an exact sextet state, a UHF wave-function is only approximately a singlet or approximately a doublet. For a UHF wave-function the expectation value of S² differs from the exact value, and spin projection procedures can be applied (as in Gaussian).
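The restricted versus unrestricted discussion above is easy to try out directly. The sketch below assumes the PySCF package is available (an assumption of this illustration, not something the slides rely on; any Hartree-Fock program would do) and reuses the hartree-to-eV conversion factor quoted on the first slide. It runs a restricted open-shell and an unrestricted Hartree-Fock calculation on the Li atom and prints ⟨S²⟩ for the UHF wave-function, whose deviation from the exact doublet value 0.75 measures the spin contamination.

```python
# Restricted vs. unrestricted Hartree-Fock for the Li atom (doublet).
# Assumes the external PySCF package; the STO-3G basis is an arbitrary choice.
from pyscf import gto, scf

mol = gto.M(atom="Li 0 0 0", basis="sto-3g", spin=1)  # one unpaired electron

rohf = scf.ROHF(mol)       # same spatial orbital for alpha and beta (restricted)
e_rohf = rohf.kernel()

uhf = scf.UHF(mol)         # different spatial orbitals for alpha and beta
e_uhf = uhf.kernel()

s_squared, multiplicity = uhf.spin_square()   # <S^2> and 2S+1 for the UHF solution

hartree_to_ev = 27.212     # conversion factor quoted in the slides
print(f"ROHF energy: {e_rohf:.6f} hartree")
print(f"UHF  energy: {e_uhf:.6f} hartree ({e_uhf * hartree_to_ev:.3f} eV)")
print(f"UHF  <S^2> : {s_squared:.4f}  (exact doublet value: 0.75)")
```

For the Li atom the contamination is very small, but the same comparison becomes informative for stretched bonds or radicals, where the UHF ⟨S²⟩ can deviate noticeably from the pure-spin value.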
Friday, April 29, 2011 Fun with an Argon Atom Photon-recoil bilocation experiment at Heidelberg A recent experiment on Argon atoms by Jeri Tomkovic and five collaborators at the University of Heidelberg has demonstrated once again the subtle and astonishing reality of the quantum world. Erwin Schrödinger, who devised the Schrödinger equation that governs quantum behavior, also demonstrated the preposterousness of his own equation by showing that under certain special conditions quantum theory seemed to allow a cat (Schrödinger's Cat) to be alive and dead at the same time. Humans can't yet do this to cats, but clever physicists are discovering how to put larger and larger systems into a "quantum superposition" in which a single entity can comfortably dwell in two distinct (and seemingly contradictory) states of existence. The Heidelberg experiment with Argon atoms (explained popularly here, in the physics arXiv here and published in Nature here) dramatically demonstrates two important features of quantum reality: 1) if it is experimentally impossible to tell whether a process went one way or the other, then, in reality, IT WENT BOTH WAYS AT ONCE (like a Schrödinger Cat); 2) quantum systems behave like waves when not looked at--and like particles when you look. The Heidelberg physicists looked at laser-excited Argon atoms which shed their excitation by emitting a single photon of light. The photon goes off in a random direction and the Argon atom recoils in the opposite direction. Ordinary physics so far. But Tomkovic and pals modified this experiment by placing a gold mirror behind the excited Argon atom. Now (if the mirror is close enough to the atom) it is impossible for anyone to tell whether the emitted photon was emitted directly or bounced off the mirror. According to the rules of quantum mechanics then, the Argon atom must be imagined to recoil IN BOTH DIRECTIONS AT ONCE--both towards and away from the mirror. But this paradoxical situation is present only if we don't look. Like Schrödinger's Cat, who will be either alive or dead (if we look) but not both, the bilocal Argon atom (if we look) will always be found to be recoiling in only one direction--towards the mirror (M) or away from the mirror (A) but never both at the same time. To prove that the Argon atom was really in the bilocal superposition state we have to devise an experiment that involves both motions (M and A) at once. (Same to verify the Cat--we need to devise a measurement that looks at both LIVE and DEAD cat at the same time.) To measure both recoil states at once, the Heidelberg guys set up a laser standing wave by shining a laser directly into a mirror and scattered the bilocal Argon atom off the peaks and troughs of this optical standing wave. Just as a wave of light is diffracted off the regular peaks and troughs of a matter-made CD disk, so a wave of matter (Argon atoms) can be diffracted from a regular pattern of light (a laser shining into a mirror). When an Argon atom encounters the regular lattice of laser light, it is split (because of its wave nature) into a transmitted (T) and a diffracted (D) wave. The intensity of the laser is adjusted so that the relative proportion of these two waves is approximately 50/50. In its encounter with the laser lattice, each state (M and A) of the bilocated Argon atom is split into two parts (T and D), so now THE SAME ARGON ATOM is traveling in four directions at once (MT, MD, AT, AD). Furthermore (as long as we don't look) these four distinct parts act like waves. 
This means they can constructively and destructively interfere depending on their "phase difference". The two waves MT and AD are mixed and the result sent to particle detector #1. The two waves AT and MD are mixed and sent to particle detector #2. For each atom only one count is recorded--one particle in, one particle out. But the PATTERN OF PARTICLES in each detector will depend on the details of the four-fold experience each wavelet has encountered on its way to a particle detector. This hidden wave-like experience is altered by moving the laser mirror L which shifts the position of the peaks of the optical diffraction grating. In quantum theory, the amplitude of a matter wave is related to the probability that it will trigger a count in a particle detector. Even though the unlooked-at Argon atom is split into four partial waves, the looked-at Argon particle can only trigger one detector. The outcome of the Heidelberg experiment consists of counting the number of atoms detected in counters #1 and #2 as a function of the laser mirror position L. The results of this experiment show that, while it was unobserved, a single Argon atom was 1) in two places at once because of the mirror's ambiguisation of photon recoil, then 2) four places at once after encountering the laser diffraction grating, 3) then at last, only one place at a time when it is finally observed by either atom counter #1 or atom counter #2. The term "Schrödinger Cat state" has come to mean ANY MACROSCOPIC SYSTEM that can be placed in a quantum superposition. Does an Argon atom qualify as a Schrödinger Cat? Argon is made up of 40 nucleons, each consisting of 3 quarks. Furthermore each Argon atom is surrounded by 18 electrons for a total of 138 elementary particles--each "doing its own thing" while the atom as a whole exists in four separate places at the same time. Now a cat surely has more parts than a single Argon atom, but the Heidelberg experiment demonstrates that, with a little ingenuity, a quite complicated system can be coaxed into quantum superposition. Today's physics students are lucky. When I was learning quantum physics in the 60s, much of the quantum weirdness existed only as mere theoretical formalism. Now in 2011, many of these theoretical possibilities have become solid experimental fact. This marvelous Heidelberg quadralocated Argon atom joins the growing list of barely believable experimental hints from Nature Herself about how She routinely cooks up the bizarre quantum realities that underlie the commonplace facts of ordinary life. Saturday, April 23, 2011 Philip Wagner Phil Wagner at Ben Lomond Art Center Colorful denizen of the Late BistroScene, founder of the Santa Cruz Emerald Street Poets, moral philosopher and  intellectual sparring partner, Irish fiddle player and Santa Cruz good old boy, Phil Wagner shares here a tale from his youth. On Sunday, the A-bomb doesn't fall but rain does, drops the shape of grapes that burst themselves on the city. On the kitchen wall,  a photo of uncle Jim in uniform waving from a pontoon bridge  over the Kumho River on his way to shoot Koreans. On Monday, I go into the third grade to open page one of the Catechism, "Who is God?" then the picture book about how to hide when the A-bomb falls and how God and President Truman would save our town and me, under my desk repeating "Our Father, whoartinheaven . . ." until I die or until President Truman sounds the all-clear siren so I can walk home and study page two of the Catechism. 
On the way home I wade the edge of the creek that rages near the school grounds that this same water can find its way into a grape, then a tear and, after I push a large stone into the stream, can so easily leap out of its trench to wash away a playground. How fragile history must be when a single well-timed and placed stone can kill a giant or re-route a river. Tuesday, and no bombs blast in spite of our sins but more rain arrives and a photo of "Uncle Jim holding his ground." Dad explains, "Wave after wave of Koreans attack him and his machine gun." I go to school anyway. "God loves me," I sing, as my friend Zimmer leads the charge, splashing into the re-routed river that washes away layer after layer of what remains of the school grounds. "How much ground was lost?" Father McLaughlin's eyes screw shut, ". . . one maybe two years of instruction." he tells Sister Gerard, "The devil just got loose." Zimmer's mom goes into the principal's office and comes out holding Zimmer by the hand. He waves good-bye and I'll never see my friend again-- nor Uncle Jim. On Thursday more rain. We turn to page three of the Catechism. Sunday, April 17, 2011 Quantum Tantra At The Tannery Nick befriending some atoms The Oldest Tannery in the West (Salz Tannery in Santa Cruz, CA) closed its doors in 2001 and was converted into a space for artists to live, work and perform. The former tannery is located on Highway 9, just inside Santa Cruz's northern limits on the San Lorenzo River. Old-time residents will remember that, during the transformation from leather factory to artist's lofts, the site was enlivened by dozens of plywood cows. The Tannery Arts Center has been up and running for several years and has now invited local physicist Nick Herbert to appear on April 21 (Room 117, 7-9 PM) as part of their Thursday Evening Lecture Series. Nick will talk about the deepest unsolved problem in physics, the "quantum reality question"-- the awkward situation that the most powerfully predictive theory humans have ever possessed comes at the price of renouncing a picture of reality. We cannot really say what atoms really look like. Nor can we say "what happens" when we measure an atom. But we can predict (sometimes with an accuracy of 11 decimal places) the results of these not-very-well-understood measurements. Physicists do not fully understand what they are doing when they do quantum mechanics but whatever it is they are doing they do it very successfully. In the second part of his presentation Nick will introduce "quantum tantra" his latest obsession: an attempt to develop a more intimate way of connecting with nature using clues and technologies from quantum physics. The Tannery Arts Center is located near the intersection of Highway 9 and Highway 1 in Santa Cruz (close to Costco). Admission is free. Copies of Nick's Quantum Reality and Physics On All Fours will be available for sale and signing. Works by Stephen Lynch and Diana Hobson will also be on exhibit. Friday, April 15, 2011 A Frog Named Jagger In alternate universe K-9, a rogue artificial intelligence was successfully violating the Second Law of Thermodynamics through quantum currency manipulation. As part of his plan to bring this silicon-based outlaw to justice, Ferdinand Feghoot was posing as the manager of a walk-in bank on the planet Jeffbezos.
The chief teller of Feghoot's bank was an Earth I female named Patricia Whack, a distant relative of the Whacks who run Whack and Crusher, the demolition company that changed history by unmasking the real scoundrels responsible for the destruction of America's World Trade Center. Just before closing time, a frog walks into Feghoot's bank and approaches Patricia Whack's desk. "Ms Whack," he says," I'd like to get a $30,000 loan to take a holiday." Very confused, Patty explains that she'll have to consult with the bank manager and disappears into the back office. She finds Feghoot behind his desk playing Romulan solitaire and says, "There's a frog called Kermit Jagger out there who claims to know you and wants to borrow $30,000, and he wants to use this as collateral." Ferdinand Feghoot looks back at her and says: "It's a knickknack, Patty Whack. Give the frog a loan. His old man's a Rolling Stone." Tuesday, April 12, 2011 The Bologna E-Cat Four Bologna E-Cats: nearest one covered in insulation. A remarkable new source of energy has been demonstrated at the University of Bologna. It appears to be a type of cold fusion in which nuclear reactions are initiated at temperatures of a few hundred degrees. At present the mechanism is unknown but the device produces substantial quantities of power for several hours. In a preliminary demonstration in January 2011, the inventor Andrea Rossi and his scientific collaborator Professor Sergio Focardi at the University of Bologna showed off a version of their device, called "E-Cat" for "energy catalyzer", which produced 12 kW for a number of hours. The energy emerges from the device in the form of hot steam. The E-Cat's fuel consists of Nickel powder and Hydrogen gas plus a "secret Italian sauce" that is necessary for the reaction to proceed. Rossi apparently discovered this secret catalyst though a long process of trial and error. On March 29, 2011, the Bologna scientists performed a second demonstration of a smaller "more stable" version of the E-Cat (pictured above) that produced 4.4 kW for a period of 6 hours resulting in a total energy output of 24 kWh. Besides the "secret sauce" the reactor was fueled with 50 grams of Nickel and 1.1 grams of Hydrogen. I infer from their report that the reaction was terminated before the fuel was exhausted so we do not as yet know the ultimate capacity of this new Bologna energy source. The second demonstration was witnessed by two members of a Swedish skeptics society who were allowed complete access to the E-Cat at all stages of its operation. The Swedish team could not discover any covert sources of energy and concluded from the facts available that some new form of nuclear reaction was involved.  According to physicist Brian Josephson, who has followed the cold fusion effort more closely than I, the Bologna research is self-funded but a Greek company Defkalion Green Technologies has contracted Rossi and Focardi to build a 1 MegaWatt reactor in Athens, Greece, which they hope to achieve by linking together 300 of the 4.4 kW demonstration model E-Cats. Rossi and Focardi are also carrying out experiments to determine the nature of the reaction or reactions that power this device. For a nuclear scientist the most puzzling feature of the Rossi-Focardi E-Cat is that it produces substantially no gamma rays and no radioactive byproducts. A cursory study of possible reaction mechanisms between protons (Hydrogen) and Nickel suggests that both gamma rays and radioactive isotopes of Nickel should be produced. 
Because these expected products seem to be absent, the answer to the question: "What is the mechanism for the Rossi-Focardi reaction?" will probably turn out to be highly unconventional. Until a plausible (and experimentally verifiable) nuclear mechanism is put forth, scientists are wise to suspend their belief. One obvious experiment that begs to be done is to run a small E-Cat to exhaustion while carrying out isotopic analysis at various stages of the process. I am hoping that Rossi and Focardi will publish soon the results of such an experiment. Nuclear Data Table on the Ni/Cu region. Above is the nuclear data table for the Ni/Cu region of nuclides. First number under the element sign is that element's natural abundance and/or half life. For instance the natural abundance of Ni58 is 68.3% and the half-life of Ni59 is 80,000 years. When a nuclide absorbs a proton it makes a pawn-like move one square upwards in the chart. We can use this table, for instance, to see what happens when the most abundant Nickel isotope absorbs a proton. Ni58 + p --> Cu59. (We see from the table that Cu59 has a half-life of 81.8 sec and decays into Ni59 by emitting a positron and gamma rays.) Since this reaction produces gamma rays, both directly and through the annihilation of the resulting positron, and also produces a radioactive residue of Ni59, this hypothetical reaction cannot be responsible for the E-Cat's energy production. So if it's a nuclear reaction, which reaction is it? Using this table, can you devise a plausible cold-fusion scheme that 1) emits no gamma rays and 2) does not create a radioactive residue of Ni59, Ni63 or Ni65? Whatever the origin of its power, this new Bologna power source is extremely light (10 pounds) and compact, producing 4.4 kW for at least 6 hours (and probably much longer) using 50 grams of Nickel (an American Nickel coin weighs exactly 5 grams) and 1.1 grams of Hydrogen (an amount that would fill a seven-foot balloon). For comparison purposes let's consider the properties of the largest home electric generator sold by Honda--the Honda Eu6500. The Honda Eu6500 (pictured below) weighs 250 pounds and produces 6.5 kW of power (compared to one E-Cat's 4.4 kW). The Honda is fueled by gasoline and will run 5 hours on its 4.7 gallon tank producing about 30 kWh of energy compared to the 25 kWh produced by one E-Cat during its most recent demonstration. Quantitatively the Bologna E-Cat's output is comparable to the Honda generator but there is one important difference. The Honda outputs its energy in the form of electricity while the E-Cat in its present stage of development produces its energy in the form of heat. Congratulations, Rossi and Focardi! May your secret Italian sauce transform the world. Honda Eu6500 Portable Generator Monday, April 11, 2011 Lunar Backside Lunar Farside: Astronomy Picture of the Day (APOD) Mendeleev, Landau, Lebedev, Lemonosov, Lobachevski, Popov, Chebyshev, Evdokimov. Why so many Slavic names On hidden lunar farside craters? Cause Soviet cameras saw them first NASA's lenses got there later. Gagarin, Feoktisov, Golitsyn, Zhukovski, Kurchatov, Numerov, Rasumnov, Tsilokovski. Fair planetoid that shapes our tides Twenty-two hundred miles wide Here's where they're immortalized Russians on the moon's backside.
Dynamics and Thermodynamics of Systems with Long Range. Springer, 2002 edition (February 12, 2003), 503 pages. ISBN 3540443150.

Or, in quantum language, we say the wave function has collapsed. This quantization gives rise to electron orbitals of a series of integer primary quantum number. Unfortunately, this logical positivist view to retain the point particle and vector force fields has been the root cause of the many paradoxes and mysteries surrounding quantum theory. In advanced mechanics the total momentum is called the canonical momentum and the kinetic momentum is the ordinary momentum. Conveniently, these happen to be the wavelengths used for DVD and CD reading, respectively. The unit costs only $3500. [24] The laser directly delivers energy (photons) and electrons directly to cells. So how many dimensions you have is how many ways you can move your hands, but they are linearly independent. But if you have a function, and you can change it, you can change it here, you can change it there, or you can change it there, and those are all orthogonal directions. When you find the critical point, you should imagine that you're plotting a function that is not a one-dimensional or two-dimensional function, but an infinite-dimensional function. According to Helge Kragh in his 2000 article in Physics World magazine, "Max Planck, the Reluctant Revolutionary," "If a revolution occurred in physics in December 1900, nobody seemed to notice it. Planck was no exception …" Planck's equation also contained a number that would later become very important to the future development of QM; today, it's known as "Planck's Constant." A wavefront is a line or surface, in the path of a wave motion, where all the displacements at any point have the same phase. A point source leads to circular wavefronts; at large distances from the source they are straight lines. Other mode of energy transfer: radiation. Highlighting modes of vibration by sinusoidal excitation: fundamental mode, harmonics, quantification of their frequency. Free oscillations of a plucked or struck string: interpretation of the sound emitted by the superposition of these modes. Highlighting modes of vibration by sinusoidal excitation. Simplified model of excitation of a column of air through a reed or a bevel: selection of the frequencies emitted by the length of the air column. Where the crests of those waves intersect, they form a larger wave; where a crest intersects with a trough, the fluid is still. A bank of pressure sensors struck by the waves would register an "interference pattern"—a series of alternating light and dark bands indicating where the waves reinforced or canceled each other. A concept related to the spacetime interval is the proper time τ. The proper time between the two events A and C in figure 4.4 is defined by the equation τ² = T² − X²/c² (4.5). Notice that I and τ are related by τ² = −I²/c² (4.6), so the spacetime interval and the proper time are not independent concepts.
And he observed that one can easily arrange, for example by including a cat in the system, "quite ridiculous cases" with the ψ-function of the entire system having in it the living and the dead cat (pardon the expression) mixed or smeared out in equal parts. The ultimate sub-photonic particle is the elementary particle of light, the single EM oscillation. At exam time, you may find that you need extra information to help you revise. Schrödinger unsuccessfully tried to account for these. He tried to visualize the electron as 'wave packets' made up of many small waves so that these wave packets would behave in the same way as a particle in classical mechanics. But in the last time period, this part of the rope had some type of an upward velocity. And even after that point, even though this left-hand point starts getting pulled down, this point right here still has some upward momentum. Assume that the wavelength of light used is 5 × 10^-7 m. (Hint: What is the spreading angle of a beam of light passing through an opening the size of the lens?) 6. The great refractor telescope of Yerkes Observatory in Wisconsin (see figure 3.19) has a primary lens D = 1.02 m in diameter with a focal length of L = 19.4 m. Any solution is, up to a phase, equal to a real solution. Maybe we can use more of this blackboard. Why would any solution be, up to a phase, equal to a real solution? Modern physics is dominated by the concepts of Quantum Mechanics. This page aims to give a brief introduction to some of these ideas. Until the closing decades of the last century the physical world, as studied by experiment, could be explained according to the principles of classical (or Newtonian) mechanics: the physics of everyday life. In the above examples, the waves are very narrow, and are confined to the spring or the string that they are travelling down. Clearly a single wave on the sea, for example, can be hundreds of metres wide as it moves along. Water waves are often used to demonstrate the properties of waves because the wavefront of a water wave is easy to see. A wavefront is the moving line that joins all the points on the crest of a wave. What was the material in which they spread? For a while, physicists believed in the existence of an invisible all-pervading "aether," and electromagnetic waves were waves in the aether, which jiggled like a jelly. But the properties of this "aether" seemed strange.
In this picture, the offshore wave is 10 ft high with a 15 sec wave period. An analogue signal would be sent as a continuously variable light intensity, but pulsed signals are more common. Serial rather than parallel data transfer keeps the number of fibres to a minimum. Because a negative light intensity is not possible, a sine wave must either modulate an otherwise constant light intensity (the carrier) or be converted to a digital signal and be transmitted as pulses of some sort. This 1926 paper was enthusiastically endorsed by Einstein, who saw the matter-waves as an intuitive depiction of nature, as opposed to Heisenberg's matrix mechanics, which he considered overly formal. [20] The Schrödinger equation details the behavior of Ψ but says nothing of its nature. In the non-relativistic limit, the dispersion relation for free waves thus becomes ω = µ(1 + k²c²/µ²)^(1/2) ≈ µ + k²c²/(2µ). Planck applied Boltzmann's statistics of many particles to radiation and derived the distribution of radiation at different frequencies (or wavelengths) just as James Clerk Maxwell and Boltzmann had derived the distribution of velocities (or energies) of the gas particles. Current graduate students interested in working with Center faculty should contact those faculty directly. Prospective graduate students interested in working with Center faculty in areas of gravitational wave science should apply in the usual way to the relevant department. If, as it claims, the Bible is the word of God, then we should expect its view of reality to fit with the reality we see around us. A blunt body is not, however, the only possible way to get back from space. Below it, the surface remains smooth except for the waves produced by the bouncing droplet. The closer one tunes the vibrations to that threshold, the more robust and long-lived the generated pilot waves will be. The magnitude of the velocity of the object is v, so the object moves a distance v∆t during the time interval. On the other hand, event O cannot be connected to events B and C by a world line, since this would imply a velocity greater than the speed of light. Schrödinger developed wave mechanics. It became the second formulation of quantum mechanics. The first formulation, called matrix mechanics, was developed by Werner Heisenberg. Schrödinger's wave equation (or Schrödinger equation) is one of the most basic equations of quantum mechanics. It bears the same relation to the mechanics of the atom as Newton's equations of motion bear to planetary astronomy. This would be another good time to back up, maybe re-read the section, and think it over.
Stenger was emeritus professor of physics and astronomy at the University of Hawaii and Visiting Fellow in Philosophy at the University of Colorado.
Thursday, 24 April 2014
Quantum Mechanics as Gift from God More Intelligent than Man
• Quantum mechanics is, with relativity, the essence of the big conceptual revolution of the physics of the 20th century.
• Now, do we really understand quantum mechanics?
• It is probably safe to say that we understand its machinery pretty well; in other words, we know how to use its formalism to make predictions in an extremely large number of situations, even in cases that may be very intricate.
• Heinrich Hertz, who played such a crucial role in the understanding of electromagnetic waves in the 19th century (Hertzian waves), remarked that, sometimes, the equations in physics are "more intelligent than the person who invented them" [182].
• The remark certainly applies to the equations of quantum mechanics, in particular to the Schrödinger equation, or to the superposition principle: they probably contain much more substance than any of their inventors thought, for instance in terms of unexpected types of correlations, entanglement, etc.
• It is astonishing to see that, in all known cases, the equations have always predicted exactly the correct results, even when they looked completely counter-intuitive.
• Conceptually, the situation is less clear.
• Nevertheless, among all intellectual constructions of the human mind, quantum mechanics may be the most successful of all theories since, despite all efforts of physicists to find its limits of validity (as they do for all physical theories), and many sorts of speculation, no one for the moment has yet been able to obtain clear evidence that they even exist. The future will tell us if this is the case; surprises are always possible!
Laloe illuminates the fact that modern physicists (and nobody else) do not understand the modern physics of quantum mechanics, and do not even pretend to do so, regarding it as a conceptual revolution away from classical physics based on understanding. The argument is that the linear Schrödinger equation must be more intelligent than Schrödinger, since Schrödinger admittedly could not understand it and nobody else has ever claimed to understand it either.
If the difference between science and religion is that science is all about understanding, while religion leaves understanding to divinity, modern physics appears to be more religion than science. But it is hard to understand that an equation that cannot be solved always predicts exactly the correct results! It is easier to believe that any observation made can be claimed to fit exactly with the equation, since checking is impossible. It would be more convincing if observation was somewhat different from theory.
1 comment:
1. Some aspects of quantum mechanics are often quoted in climate discussions, but the real issue there is thermodynamics ... please read ....
Wednesday, 10 December 2014
The Radiating Atom 7: Quantum Electro Dynamics Without Infinities?
The interaction between matter in the form of an atom and light as an electro-magnetic wave is supposedly described by Quantum Electro Dynamics (QED) as a generalization of quantum mechanics into the "jewel of physics", according to Feynman, its main creator. However, QED was from the start loaded with infinities requiring "renormalization", which made the value of the jewel as a "strange theory" questionable, according to Feynman himself.
Let us see what we can say from the experience of the present series of posts on The Radiating Atom, leading to the following Schrödinger equation for a radiating Hydrogen atom subject to exterior forcing:
• $\dot\psi + H\phi -\gamma\dddot\phi = f$,       (1)
• $-\dot\phi + H\psi -\gamma\dddot\psi = g$,      (2)
where $\psi = \psi (x,t)$ and $\phi = \phi (x,t)$ are real-valued functions of space-time coordinates $(x,t)$ (as the real and imaginary parts of Schrödinger's complex-valued electronic wave function $\psi +i\phi$), $\dot\psi =\frac{\partial\psi}{\partial t}$,
• $H=-\frac{h^2}{2m}\Delta + V(x)$ is the Hamiltonian with $\Delta$ the Laplacian with respect to $x$, $V(x)=-\frac{1}{\vert x\vert}$ the kernel (Coulomb) potential, $m$ the electron mass and $h$ Planck's constant; $-\gamma\dddot\phi$ is an Abraham-Lorentz radiation recoil force with corresponding radiation energy $\gamma\ddot\phi^2$, with $\gamma$ a small positive radiation coefficient, and $f=f(x,t)$ and $g=g(x,t)$ express exterior forcing.
Note that here the electron wave function is coupled to radiation and forcing through a radiative damping modeled by $(-\gamma\dddot\phi ,-\gamma\dddot\psi )$ and the right hand side $(f,g)$, and not through a time-dependent potential connecting an incoming electric field to an electronic dipole moment, which is a common alternative. An advantage of the above more phenomenological model is simpler mathematical analysis, since the potential is kept independent of time.
The system (1)-(2) can be viewed as a generalized harmonic oscillator with small radiative damping subject to exterior forcing, similar to the system analyzed in Mathematical Physics of Black Body Radiation. The essence of this analysis is a balance of forcing and radiation (cf. PS5 below):
• $R \equiv\int\gamma (\ddot\psi^2 +\ddot\phi^2)dxdt\approx \int (f^2 + g^2)dxdt$,
which can be viewed to express that $output \approx input$.
A radiating atom with wave function $(\psi ,\phi )$ can be viewed to interact with an electromagnetic field $(E,B)$ through the charge density
• $\rho (x,t) =\psi^2(x,t) + \phi^2(x,t)$,
according to Maxwell's equations:
• $\dot B + \nabla\times E = 0$, $\nabla\cdot B =0$,
• $-\dot E + \nabla\times B = J$, $\nabla\cdot E =\rho$,
with $J$ a corresponding current. For a superposition of two pure eigen-states with eigenvalues $E_1$ and $E_2$, the charge density varies in time with frequency $\omega =(E_2 -E_1)/h$ and then, as an electrical dipole, generates outgoing radiation
• $P\sim\omega^4$,
which is balanced by the radiation damping in Schrödinger's equation
• $R=\int\gamma (\ddot\psi^2 +\ddot\phi^2)dxdt\sim\omega^4$.
The above QED model, combining Schrödinger's equation for an atom with Maxwell's equations for an electro-magnetic field, thus explains the physics of 1. an electron configuration as a superposition of two pure eigen-states of different energies, 2. which generates a time variable charge/electrical dipole, 3. which generates an electro-magnetic field, 4.
which generates outgoing radiation, 5. under exterior forcing. (A small numerical sketch illustrating points 1 and 2 follows the comment thread below.)
The analysis in Mathematical Physics of Black Body Radiation shows that in this system
• $P \approx R\approx \int (f^2 + g^2)dxdt$, that is,
• outgoing radiation $\approx$ radiative damping $\approx$ exterior forcing.
The fact that outgoing radiation $\approx$ exterior forcing makes it possible to reverse the physics, from (1) an atom generating outgoing radiation as an electromagnetic field (emission) into (2) a model of the reaction of an atom subject to an incoming electro-magnetic field (absorption). This is the same reversal that can be made to use a loudspeaker as a microphone (or that an antenna reradiates about half what it absorbs, allowing Swedish Television agents to detect individual watchers and check if the TV-license has been paid). Note that the physics of (1) may be easier to explain/understand than (2), since outgoing radiation/emission can be observed, while atomic absorption of incoming electro-magnetic waves is hidden from inspection. On the other hand, if (2) is just the other side of (1), then explaining/understanding (1) may be sufficient.
The analysis thus offers an explanation of self-interaction without a catastrophe of acoustic feedback between loudspeaker and microphone, which may be at the origin of the infinities troubling Feynman's jewel of physics, QED, with photons being emitted and possibly directly being reabsorbed in a form of catastrophic photonic feedback.
PS1 The radiation damping $-\gamma\dddot\psi$ may alternatively take the form $\gamma \vert\dot\rho\vert^2\dot\psi$, with again $R\sim \omega^4$ for a superposition of eigen-states, and $R=0$ for a pure eigen-state with $\dot\rho =0$. Compare PS5 below.
PS2 The basic conservation laws built into (1)-(2) with $f=g=0$ are (with PS1)
• $\frac{d}{dt}\int\rho (x,t)dx =0$   (conservation of charge),
• $\frac{d}{dt}\int (\psi H\psi +\phi H\phi)dx = -\int\gamma\vert\dot\rho\vert^2(\dot\psi^2+\dot\phi^2)dx$   (radiative damping of energy).
PS3 Feynman states in the above book:
• It is very important to know that light behaves like particles, especially for those of you who have gone to school, where you were probably told something about light behaving like waves. I am telling you the way it does behave - like particles. ...every instrument (photomultiplier) that has been designed to be sensitive enough to detect weak light has always ended up discovering the same thing: light is made of particles.
We read that Feynman concludes that because the output of a light detector/photo-multiplier under decreasingly weak light input changes from a continuous signal to an intermittent signal to no signal, light must also be intermittent, as if composed of a stream of isolated particles. But this is a weak argument because it draws a general conclusion about the normal nature of light from an extreme situation, where blips on a screen or sound clicks are taken as evidence that what causes the blips must also be blip-like, that is, must be particles. But to draw conclusions about normality by only observing extremity or non-normality is to stretch normal scientific methodology beyond reason. In particular, the infinities troubling QED seem to originate from particle self-interaction. With light and atom instead in the form of waves, and their interaction consisting of interference of waves, self-interaction does not seem to be an issue.
PS4 The book Atoms and Light Interactions presents what its author J. D.
Dodd refers to as a semi-classical view of the interaction of electromagnetic radiation and atoms, thus as waves and not particles (which is also my view): • It may well be that the semiclassical view falls down at some stage and is unable to predict correctly certain phenomena; my own view is that it succeeds much more widely than it is given credit for. Even if it is not justified from the point of view of many physicists, i is still useful for another reason. Even if the quantum nature of radiation (QED) is required, the underlying physics needs a firm understanding of its classical basis.   Yes, it may well by that also atomistic physics is a form of wave mechanics and thus a form of classical continuum physics, as expressed by Zeh: • There are no quantum jumps and nor are there any particles. PS5 The analysis of Mathematical Physics of Black Body Radiation is more readily applicable if (1)-(2) is formulated as a second order in time wave equation of the form • $\ddot\psi +H^2\psi + \gamma\dot\rho^2\dot\psi = F$, with the following tentative main result as an extension of the analysis from radiative damping $-\gamma\dddot\psi$ to $\gamma\dot\rho^2\dot\psi$ (with $\gamma >0$ constant): • $\int\gamma\dot\rho^2\dot\psi^2dxdt\approx\int F^2dxdt$. Here $\gamma$ may have a dependence on $\psi$ to guarantee charge conservation under forcing. 21 kommentarer: 1. Are there any attempts to calculate the hydrogen spectrum? If so, are you then using gamma as a fitting parameter, or how do you else calculate it? 2. The spectrum as the difference of eigenvalues of the Hamiltonian/h, does not depend on gamma, but the presence of a positive gamma with corresponding radiative damping is what makes it possible to observe the spectrum as outgoing radiation. The precise value of gamma is insignificant. 3. Regarding your objections against Feynmans argument under PS3. Although Feynman uses an example of a photomultiplier, that is not the only argument for the point he is making. And I'm quite positive Feynman knew that. But remember that the book is an adaption of a set of lectures he gave, aimed at the GENERAL PUBLIC (in the late 70s). There is no need to give any deeper arguments than that to give the big picture. Starting in the 70s, and continued until today, there is a tremendous amount of different kinds of experiments that really nails the coffin for any kind of semiclassical (that is a theory of only waves) description of light, since it would be astronomically improbable for such a theory to work, regarding the experimental data. I urge you to read the published paper Observing the quantum behavior of light in an undergraduate laboratory for a concrete example. This, and similar experiments, is what you need to argue against if you want to refute the photon model. Any objections? 4. This article states that the photoelectric and Compton effect do not give evidence of the existence of photons, and I agree. Their experimental evidence very difficult to understand, and certainly beyond undergraduate level. To say that something is impossible, like saying that a complicated experiment cannot possibly be explained by wave theory, is easy to do but probably impossible to prove. To show that something is possible is possible by just doing it, while showing impossibility is virtually impossible. 5. I strongly disagree with you. The experiment is not at all difficult to understand. The theory behind it is highly accessible after finishing a modern introductory quantum mechanical course. 
And on the master level this is not hard at all. Coincidence techniques and beam splitters are not rocket science, and the element of field theory is pretty straight forward. So I need to ask you, difficulties aside (I don't think the universe cares if undergraduates find the theory difficult or not ;-) ), you don't seem to object to the experimental result? Any theory containing classical waves predicts a g^(2) of 1 or larger, so the experimental evidence says that a classical theory can't account for the data. If you disagree, why so? 6. The evidence is negative: It is claimed that a certain phenomenon is not readily explained by wave theory, but how can we be so sure that an explanation is impossible? Photoelectricity used to be evidence of particle nature of light, but the authors of the article do not buy that argument. To claim that something is impossible favors ignorance and inability, and that is dangerous. 7. It is claimed that a certain phenomenon is not readily explained by wave theory Not really. From any theory with a classical EM-field, there is a prediction on the correlation function in question. Since the observed effect is lower then this prediction (by 377 standard deviations!), the classical theory must be seen as falsified. This doesn't mean that a semi-classical description can't be used. The question then is, when is it justified to use a semi-classical description as an approximation? There is no generality in a semi-classical description, this can be justified to a really high precision judging from experimental data. There are no quantum jumps and nor are there any particles. I would say that you misjudge this title. We do know from quantum mechanics that there are no particles. That doesn't mean that there are waves. It is a false dichotomy. The general quantity in quantum mechanics is the quantum field, that is neither a wave nor a particle. I don't know if you read Zeh's paper in detail, either way, you should really re/read the paragraph in detail. It thus appears becoming evident that our classical concepts describe mere shadows on the wall of Plato's cave in which we are living. Using them for describing reality must lead to 'paradoxes'. 8. Ok, so we agree that there are no particles and then what remains is waves. To say neither particle nor wave but something else without telling what is not constructive, to me at least. 9. I did write what, an excitation in a quantum field. 10. And what is it then? In physical terms? 11. That is kind of a silly question. Classically speaking, what is an electro-magnetic field? In physical terms? 12. A distributed function of space and time satisfying Maxwell's wave equation that is a wave, not a particle. 13. A distributed function [...] But that doesn't say what the electromagnetic field really is, physically. You say here how to describe the field mathematically. From a classical point of view the classical electromagnetic field is fundamental, so you can't describe it in much more detail than you do here. It is the same with a quantum field. If you want the mathematical formalism, see for instance An Introduction To Quantum Field Theory If you accept the classical method of defining an electromagnetic field (I assert that you do, since you just used it that way), I can't see how you wouldn't accept the exact same method for the quantum field. At the same time, the quantum field must be a more fundamental description, for two (at least) reasons. First, it contains the classical fields and equations as a subset. 
Second, it can be used to account for more experimental data (as discussed above).
14. We are talking about waves or particles. I say electromagnetic fields are waves, as real-valued functions of space and time, not particles, which satisfy Maxwell's equations and thus can be understood by many. QFT is loaded with infinities and not understood by many, if any.
15. "We are talking about waves or particles." Not really; you are talking about waves OR particles. I tried to mention earlier that doing so is a false dichotomy, false dilemma, false duality, or whatever you want to call it. Nonetheless, it is a logical fallacy, since you miss the situation where we have something that is more fundamental than our notions of particles and waves. Heck, you even linked to a paper that had that as a main theme (the Zeh paper and its reference to a wall in Plato's cave in the conclusions). It is a great misfortune that the name wave-particle duality still remains. "[...] which satisfy Maxwell's equations and thus can be understood by many." I would call this a fallacy as well. Humans' ability to understand a theory can of course not have any impact on how well that theory describes reality. Maybe I misunderstand you here. If so, what do you really mean?
16. Yes, a theory which cannot be understood is not useful, like the special and general theories of relativity. I have the impression that QFT falls in the same category, but in this case I may be wrong. Anyway, I am seeking a continuum model for the radiating atom which can make sense and thus be understood.
17. In what way can special and general relativity not be understood? The theoretical frameworks are not that complicated, especially in the case of special relativity. And I don't know of any inconsistencies either. So I must admit that I don't understand what you mean here.
18. I got a bit curious as to what you think cannot be understood about the theory of relativity, looked around on this blog and found your other blog, The World As Computation, and more specifically the post Questioning Relativity 2: Unphysical Lorentz Transformation. And I do see a little of where your confusion about the theory originates. You write there: "However, the figure is misleading: The x'-axis defined by t' = 0 is not parallel to the x-axis, since it is given by the line t = vx which is tilted with respect to the x-axis." And you then conclude that the transformation must be unphysical. This looks really strange, and I cannot see what this has to do with relativity at all. In relativity, the primed coordinate system is not given by an equation; it is just another inertial system, predefined, or given if you so wish. The Lorentz transformation, then, is just the passive (no physical change) transformation that relates the physical event (x,t) in one coordinate system to another coordinate system that describes the SAME PHYSICAL event as (x',t'), given the constraint that both systems should agree numerically on a measurement of the speed of light. To be honest, this is really simple stuff. General relativity gets trickier, but is not impossible.
19. If you think that special relativity is a physical theory, then you are fooling yourself. Yes, as a mathematical theory it is simple, even trivial, because it is just a simple linear transformation, but it has no physical meaning, and thus the simplicity you perceive is just an illusion. A meaningless theory cannot be viewed as simple.
20. "but it has no physical meaning" I must ask you to be more specific about what you mean.
What criteria do you use to call a theory physical and meaningful?
21. I explain this in detail in Many Minds Relativity. Take it or leave it.
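The g^(2) criterion invoked in the thread above admits a compact numerical illustration. The sketch below is an editorial addition, not part of the original post; the choice of states, mean photon number, and Fock-space truncation are illustrative assumptions. It computes the normalized second-order correlation g2(0) = <n(n-1)>/<n>^2 for three standard single-mode photon-number distributions. Classical (waves-only) field models obey g2(0) >= 1, while a heralded single-photon state gives g2(0) close to 0, which is the kind of result reported in the undergraduate experiment cited in the thread.

```python
import numpy as np
from math import factorial

N = 40                       # Fock-space truncation (ample for mean photon number ~2)
n = np.arange(N)

def g2(p):
    """g2(0) = <n(n-1)> / <n>^2 for a photon-number distribution p(n)."""
    mean_n = np.sum(n * p)
    return np.sum(n * (n - 1) * p) / mean_n**2

nbar = 2.0
fock1 = np.zeros(N); fock1[1] = 1.0                                       # single photon |1>
coherent = np.array([np.exp(-nbar) * nbar**int(k) / factorial(int(k)) for k in n])  # Poissonian
thermal = (1.0 / (1.0 + nbar)) * (nbar / (1.0 + nbar))**n                 # Bose-Einstein

for name, p in [("single photon", fock1), ("coherent", coherent), ("thermal", thermal)]:
    print(f"{name:13s}  g2(0) = {g2(p):.3f}")
# single photon -> 0.0, coherent -> 1.0, thermal -> 2.0
# Any classical (waves-only) model of the field requires g2(0) >= 1, so a measured
# g2(0) well below 1 rules out that entire class of descriptions.
```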
Schrödinger equation
The Schrödinger equation is used to find the allowed energy levels of quantum mechanical systems (such as atoms, or transistors). The associated wavefunction gives the probability of finding the particle at a certain position.
[Figure caption] Each of these three rows is a wave function which satisfies the time-dependent Schrödinger equation for a harmonic oscillator. Left: the real part (blue) and imaginary part (red) of the wave function. Right: the probability distribution of finding the particle with this wave function at a given position. The top two rows are examples of stationary states, which correspond to standing waves. The bottom row is an example of a state which is not a stationary state. The right column illustrates why stationary states are called "stationary".
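A minimal numerical sketch of the statement above, that the Schrödinger equation yields the allowed energy levels and the position probabilities, for the same harmonic oscillator used in the figure. This is an editorial illustration rather than part of the original page; it assumes dimensionless units (hbar = m = omega = 1) and a simple finite-difference grid, both of which are arbitrary choices made here for brevity.

```python
import numpy as np

# Time-independent Schrödinger equation for the harmonic oscillator,
#   -(1/2) psi'' + (1/2) x^2 psi = E psi   (units with hbar = m = omega = 1),
# discretized on a grid and solved by diagonalizing the Hamiltonian matrix.
N, L = 1200, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

kinetic_diag = np.full(N, 1.0 / dx**2)        # from -(1/2) d^2/dx^2, central differences
kinetic_off = np.full(N - 1, -0.5 / dx**2)
H = (np.diag(kinetic_diag + 0.5 * x**2)
     + np.diag(kinetic_off, 1)
     + np.diag(kinetic_off, -1))

energies, states = np.linalg.eigh(H)
print(energies[:4])            # ~ [0.5, 1.5, 2.5, 3.5]: the allowed energy levels

# The squared wavefunction gives the probability density of finding the particle
# at each position; for a stationary state this density does not change in time.
ground = states[:, 0] / np.sqrt(dx)           # normalize so that sum |psi|^2 dx = 1
print(np.sum(ground**2) * dx)                 # ~ 1.0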
Quantum Theory Without Observers
What is this business about observers? What are the other quantum theories without observers? How do they compare to each other? Why should we care?
Standard textbook quantum mechanics has observers at the very heart of the theory. It discusses what happens when a measurement is performed. This is fine as a phenomenological prescription, but it is meaningless for a fundamental theory, a theory which is to explain the existence of those experiments. John Bell, in "Quantum mechanics for cosmologists" (Speakable and Unspeakable in Quantum Mechanics), eloquently and directly presents the problem:
"It would seem that the theory is exclusively concerned with 'results of measurements' and has nothing to say about anything else. When the 'system' in question is the whole world where is the 'measurer' to be found? Inside, rather than outside, presumably. What exactly qualifies some subsystems to play this role? Was the world wave function waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer for some more highly qualified measurer — with a Ph.D.? If the theory is to apply to anything but idealized laboratory operations, are we not obliged to admit that more or less 'measurement-like' processes are going on more or less all the time more or less everywhere? Is there ever then a moment when there is no jumping and the Schrödinger equation applies?"
Bohmian mechanics is one theory that satisfies the need to avoid measurement and observers in the formulation of the theory. But there are others.
What is the theory about? Decide this before equations and deductions are made. What is the ontology of Bohmian mechanics?
GRW and Spontaneous Collapse: Can we modify the wave evolution so that collapse occurs independently of measurement? What is the ontology of such a theory?
Everett and Many Worlds: Is it possible not to collapse the wave function and to avoid using particles with positions? What is the ontology of such a theory?
Decoherent Histories: How much can we do by focusing on theories as information? What is the ontology of such a theory?
Other QTWOs: Other quantum theories without observers may exist. What are they?
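To make the ontology question above concrete, here is a minimal sketch of the one idea that defines Bohmian mechanics: actual particle positions Q(t), guided by the wave function through dQ/dt = Im(psi'/psi) (in units with hbar = m = 1). This is an editorial illustration, not material from the page above; the choice of wave function (a superposition of the two lowest harmonic-oscillator states, weighted so that its node sits far out in the tail), the starting positions, and the Euler integrator are all assumptions made for the sake of a short example.

```python
import numpy as np

def psi_and_dpsi(x, t):
    """Superposition 2*psi_0 + psi_1 of harmonic-oscillator eigenstates
    (hbar = m = omega = 1), including their time phases exp(-i E_n t)."""
    g = np.exp(-x**2 / 2)
    psi0, dpsi0 = g, -x * g                     # ground state, E_0 = 1/2
    psi1, dpsi1 = x * g, (1 - x**2) * g         # first excited state, E_1 = 3/2
    psi = 2 * psi0 * np.exp(-0.5j * t) + psi1 * np.exp(-1.5j * t)
    dpsi = 2 * dpsi0 * np.exp(-0.5j * t) + dpsi1 * np.exp(-1.5j * t)
    return psi, dpsi

def trajectory(q0, t_max=12.0, dt=1e-3):
    """Integrate the guidance equation dQ/dt = Im(psi'/psi) with Euler steps."""
    q, path = q0, [q0]
    for step in range(int(t_max / dt)):
        psi, dpsi = psi_and_dpsi(q, step * dt)
        q = q + np.imag(dpsi / psi) * dt
        path.append(q)
    return np.array(path)

for q0 in (-1.0, 0.3, 1.5):
    path = trajectory(q0)
    print(f"Q(0) = {q0:5.2f}  ->  Q(t) stays within [{path.min():.2f}, {path.max():.2f}]")
# The particle always has a definite position; |psi|^2 enters only as the
# statistical distribution of those positions over an ensemble.
```

Each trajectory oscillates with the beat period of the superposition, which is one way of seeing what "the theory is about" before any talk of measurement: positions first, the wave function as the thing that guides them.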
Archive for the ‘Metaphysical Spouting’ Category Wednesday, June 3rd, 2015 Reading the essays and speculative fiction of Scott Alexander, as they’ve grown in awesomeness even just within the past half-year, has for me been like witnessing the birth of a new Asimov.  (For more Alexandery goodness, check out Universal Love, Said the Cactus Person.)  That this nerd-bard, this spinner of stupid Internet memes into reflections on eternity, came to my attention by way of his brilliantly defending me, is almost immaterial at this point; I don’t think it plays any role in my continuing admiration for his work.  Whatever you do, just keep writing, other Scott A. The End of Suffering? Monday, June 1st, 2015 A computer science undergrad who reads this blog recently emailed me about an anxiety he’s been feeling connected to the Singularity—not that it will destroy all human life, but rather that it will make life suffering-free and therefore no longer worth living (more Brave New World than Terminator, one might say). As he puts it: This probably sounds silly, but I’ve been existentially troubled by certain science fiction predictions for about a year or two, most of them coming from the Ray Kurzweil/Singularity Institute types … What really bothers me is the idea of the “abolition of suffering” as some put it. I just don’t see the point. Getting rid of cancer, premature death, etc., that all sounds great. But death itself? All suffering? At what point do we just sit down and ask ourselves, why not put our brains in a jar, and just activate our pleasure receptors for all eternity? That seems to be the logical conclusion of that line of thinking. If we want to reduce the conscious feeling of pleasure to the release of dopamine in the brain, well, why not? I guess what I think I’m worried about is having to make the choice to become a cyborg, or to upload my mind to a computer, to live forever, or to never suffer again. I don’t know how I’d answer, given the choice. I enjoy being human, and that includes my suffering. I really don’t want to live forever. I see that as a hedonic treadmill more than anything else. Crazy bioethicists like David Pearce, who want to genetically re-engineer all species on planet Earth to be herbivores, and literally abolish all suffering, just add fuel to my anxiety. … Do you think we’re any closer to what Kurzweil (or Pearce) predicted (and by that I mean, will we see it in our lifetimes)? I want to stop worrying about these things, but something is preventing me from doing so. Thoughts about the far flung (or near) future are just intrusive for me. And it seems like everywhere I go I’m reminded of my impending fate. Ernst Jünger would encourage me to take up an attitude of amor fati, but I can’t see myself doing that. My father says I’m too young to worry about these things, and that the answer will be clear when I’ve actually lived my life. But I just don’t know. I want to stop caring, more than anything else. It’s gotten to a point where the thoughts keep me up at night. I don’t know how many readers might have had similar anxieties, but in any case, I thought my reply might be of some interest to others, so with the questioner’s kind permission, I’m reproducing it below. 1. An end to suffering removing the meaning from life? As my grandmother might say, “we should only have such problems”! I believe, alas, that suffering will always be with us, even after a hypothetical technological singularity, because of basic Malthusian logic. 
I.e., no matter how many resources there are, population will expand exponentially to exploit them and make the resources scarce again, thereby causing fighting, deprivation, and suffering. What’s terrifying about Malthus’s logic is how fully general it is: it applies equally to tenure-track faculty positions, to any extraterrestrial life that might exist in our universe or in any other bounded universe, and to the distant post-Singularity future. But if, by some miracle, we were able to overcome Malthus and eliminate all suffering, my own inclination would be to say “go for it”! I can easily imagine a life that was well worth living—filled with beauty, humor, play, love, sex, and mathematical and scientific discovery—even though it was devoid of any serious suffering. (We could debate whether the “ideal life” would include occasional setbacks, frustrations, etc., even while agreeing that at any rate, it should certainly be devoid of cancer, poverty, bullying, suicidal depression, and one’s Internet connection going down.) 2. If you want to worry about something, then rather than an end to suffering, I might humbly suggest worrying about a large increase in human suffering within our lifetimes. A few possible culprits: climate change, resurgent religious fundamentalism, large parts of the world running out of fresh water. 3. It’s fun to think about these questions from time to time, to use them to hone our moral intuitions—and I even agree with Scott Alexander that it’s worthwhile to have a small number of smart people think about them full-time for a living.  But I should tell you that, as I wrote in my post The Singularity Is Far, I don’t expect a Singularity in my lifetime or my grandchildrens’ lifetimes. Yes, technically, if there’s ever going to be a Singularity, then we’re 10 years closer to it now than we were 10 years ago, but it could still be one hell of a long way away! And yes, I expect that technology will continue to change in my lifetime in amazing ways—not as much as it changed in my grandparents’ lifetimes, probably, but still by a lot—but how to put this? I’m willing to bet any amount of money that when I die, people’s shit will still stink. “Is There Something Mysterious About Math?” Wednesday, April 22nd, 2015 When it rains, it pours: after not blogging for a month, I now have a second thing to blog about in as many days.  Aeon, an online magazine, asked me to write a short essay responding to the question above, so I did.  My essay is here.  Spoiler alert: my thesis is that yes, there’s something “mysterious” about math, but the main mystery is why there isn’t even more mystery than there is.  Also—shameless attempt to get you to click—the essay discusses the “discrete math is just a disorganized mess of random statements” view of Luboš Motl, who’s useful for putting flesh on what might otherwise be a strawman position.  Comments welcome (when aren’t they?).  You should also read other interesting responses to the same question by Penelope Maddy, James Franklin, and Neil Levy.  Thanks very much to Ed Lake at Aeon for commissioning these pieces. Update (4/22): On rereading my piece, I felt bad that it didn’t make a clear enough distinction between two separate questions: 1. Are there humanly-comprehensible explanations for why the mathematical statements that we care about are true or false—thereby rendering their truth or falsity “non-mysterious” to us? 2. Are there formal proofs or disproofs of the statements? Interestingly, neither of the above implies the other.  
Thus, to take an example from the essay, no one has any idea how to prove that the digits 0 through 9 occur with equal frequency in the decimal expansion of π, and yet it's utterly non-mysterious (at a "physics level of rigor") why that particular statement should be true.  Conversely, there are many examples of statements for which we do have proofs, but which experts in the relevant fields still see as "mysterious," because the proofs aren't illuminating or explanatory enough.  Any proofs that require gigantic manipulations of formulas, "magically" terminating in the desired outcome, probably fall into that class, as do proofs that require computer enumeration of cases (like that of the Four-Color Theorem). But it's not just that proof and explanation are incomparable; sometimes they might even be at odds.  In this MathOverflow post, Timothy Gowers relates an interesting speculation of Don Zagier, that statements like the equidistribution of the digits of π might be unprovable from the usual axioms of set theory, precisely because they're so "obviously" true—and for that very reason, there need not be anything deeper underlying their truth.  As Gowers points out, we shouldn't go overboard with this speculation, because there are plenty of other examples of mathematical statements (the Green-Tao theorem, Vinogradov's theorem, etc.) that also seem like they might be true "just because"—true only because their falsehood would require a statistical miracle—but for which mathematicians nevertheless managed to give fully rigorous proofs, in effect formalizing the intuition that it would take a miracle to make them false. Zagier's speculation is related to another objection one could raise against my essay: while I said that the "Gödelian gremlin" has remained surprisingly dormant in the 85 years since its discovery (and that this is a fascinating fact crying out for explanation), who's to say that it's not lurking in some of the very open problems that I mentioned, like π's equidistribution, the Riemann Hypothesis, the Goldbach Conjecture, or P≠NP?  Conceivably, not only are all those conjectures unprovable from the usual axioms of set theory, but their unprovability is itself unprovable, and so on, so that we could never even have the satisfaction of knowing why we'll never know. My response to these objections is basically just to appeal yet again to the empirical record.  First, while proof and explanation need not go together and sometimes don't, by and large they do go together: over thousands of years, mathematicians learned to seek formal proofs largely because they discovered that without them, their understanding constantly went awry.  Also, while no one can rule out that P vs. NP, the Riemann Hypothesis, etc., might be independent of set theory, there's very little in the history of math—including in the recent history, which saw spectacular proofs of (e.g.) Fermat's Last Theorem and the Poincaré Conjecture—that lends concrete support to such fatalism. So in summary, I'd say that history does present us with "two mysteries of the mathematical supercontinent"—namely, why do so many of the mathematical statements that humans care about turn out to be tightly linked in webs of explanation, and also in webs of proof, rather than occupying separate islands?—and that these two mysteries are very closely related, if not quite the same.
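Since the equidistribution of π's digits comes up twice in the post above, here is a tiny empirical check (an editorial illustration, not from the essay), assuming the mpmath library is available: it counts the first 100,000 decimal digits of π. The frequencies all come out near 0.1, which is exactly the "physics level of rigor" evidence the post alludes to, while an actual proof remains completely out of reach.

```python
from collections import Counter
from mpmath import mp      # arbitrary-precision arithmetic

mp.dps = 100_010                                   # working precision, in decimal digits
digits = mp.nstr(mp.pi, 100_000).replace(".", "")  # "3141592653..." as a string

counts = Counter(digits)
for d in "0123456789":
    print(d, counts[d] / len(digits))              # each frequency is close to 0.1
```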
The ultimate physical limits of privacy Wednesday, March 11th, 2015 Somewhat along the lines of my last post, the other day a reader sent me an amusing list of questions about privacy and fundamental physics.  The questions, and my answers, are below. 1. Does the universe provide us with a minimum level of information security? I’m not sure what the question means. Yes, there are various types of information security that are rooted in the known laws of physics—some of them (like quantum key distribution) even relying on specific aspects of quantum physics—whose security one can argue for by appealing to the known properties of the physical world. Crucially, however, any information security protocol is only as good as the assumptions it rests on: for example, that the attacker can’t violate the attack model by, say, breaking into your house with an ax! 2. For example, is my information safe from entities outside the light-cone I project? Yes, I think it’s safe to assume that your information is safe from any entities outside your future light-cone. Indeed, if information is not in your future light-cone, then almost by definition, you had no role in creating it, so in what sense should it be called “yours”? 3. Assume that there are distant alien cultures with infinite life spans – would they always be able to wait long enough for my light cone to spread to them, and then have a chance of detecting my “private” information? First of all, the aliens would need to be in your future light-cone (see my answer to 2). In 1998, it was discovered that there’s a ‘dark energy’ pushing the galaxies apart at an exponentially-increasing rate. Assuming the dark energy remains there at its current density, galaxies that are far enough away from us (more than a few tens of billions of light-years) will always recede from us faster than the speed of light, meaning that they’ll remain outside our future light-cone, and signals from us can never reach them. So, at least you’re safe from those aliens! For the aliens in your future light-cone, the question is subtler. Suppose you took the only piece of paper on which your secrets were written, and burned it to ash—nothing high-tech, just burned it. Then there’s no technology that we know today, or could even seriously envision, that would piece the secrets together. It would be like unscrambling an egg, or bringing back the dead from decomposing corpses, or undoing a quantum measurement. It would mean, effectively, reversing the Arrow of Time in the relevant part of the universe. This is formally allowed by the Second Law of Thermodynamics, since the decrease in entropy within that region could be balanced by an increase in entropy elsewhere, but it would require a staggering level of control over the region’s degrees of freedom. On the other hand, it’s also true that the microscopic laws of physics are reversible: they never destroy information. And for that reason, as a matter of principle, we can’t rule out the possibility that some civilization of the very far future, whether human or alien, could piece together what was written on your paper even after you’d burned it to a crisp. Indeed, with such godlike knowledge and control, maybe they could even reconstruct the past states of your brain, and thereby piece together private thoughts that you’d never written anywhere! 4. Does living in a black hole provide privacy? Couldn’t they follow you into the hole? No, I would not recommend jumping into a black hole as a way to ensure your privacy. 
For one thing, you won’t get to enjoy the privacy for long (a couple hours, maybe, for a supermassive black hole at the center of a galaxy?) before getting spaghettified on your way to the singularity. For another, as you correctly pointed out, other people could still snoop on you by jumping into the black hole themselves—although they’d have to want badly enough to learn your secrets that they wouldn’t mind dying themselves along with you, and also not being able to share whatever they learned with anyone outside the hole. But a third problem is that even inside a black hole, your secrets might not be safe forever! Since the 1970s, it’s been thought that all information dropped into a black hole eventually comes out, in extremely-scrambled form, in the Hawking radiation that black holes produce as they slowly shrink and evaporate. What do I mean by “slowly”? Well, the evaporation would take about 1070 years for a black hole the mass of the sun, or about 10100 years for the black holes at the centers of galaxies. Furthermore, even after the black hole had evaporated, piecing together the infalling secrets from the Hawking radiation would probably make reconstructing what was on the burned paper from the smoke and ash seem trivial by comparison! But just like in the case of the burned paper, the information is still formally present (if current ideas about quantum gravity are correct), so one can’t rule out that it could be reconstructed by some civilization of the extremely remote future. The flow of emails within the block inbox Saturday, March 7th, 2015 As a diversion from the important topics of shaming, anti-shaming, and anti-anti-shaming, I thought I’d share a little email exchange (with my interlocutor’s kind permission), which gives a good example of what I find myself doing all day when I’m not blogging, changing diapers, or thinking about possibly doing some real work (but where did all the time go?). Dear Professor Aaronson, I would be very pleased to know your opinion about time.  In a letter of condolence to the Besso family, Albert Einstein wrote: “Now he has departed from this strange world a little ahead of me. That means nothing. People like us, who believe in physics, know that the distinction between past, present and future is only a stubbornly persistent illusion.” I’m a medical doctor and everyday I see time’s effect over human bodies. Is Einstein saying time is an illusion?  For who ‘believe in physics’ is death an illusion?  Don’t we lose our dears and will they continue to live in an ‘eternal world’? Is time only human perceptive illusion (as some scientists say physics has proved)? Dear [redacted], I don’t read Einstein in that famous quote as saying that time itself is an illusion, but rather, that the sense of time flowing from past to present to future is an illusion. He meant, for example, that the differential equations of physics can just as easily be run backward (from future to past) as forward (from past to future), and that studying physics can strongly encourage a perspective—which philosophers call the “block universe” perspective—where you treat the entire history of spacetime as just a fixed, 4-dimensional manifold, with time simply another dimension in addition to the three spatial ones (admittedly, a dimension that the laws of physics treat somewhat differently than the other three). 
And yes, relativity encourages this perspective, by showing that different observers, moving at different speeds relative to each other, will divide up the 4-dimensional manifold into time slices in different ways, with two events judged to be simultaneous by one observer being judged to happen at different times by another. But even after Einstein is read this way, I'd personally respond: well, that's just one perspective you can take. A perfectly understandable one, if you're Einstein, and especially if you're Einstein trying to comfort the bereaved. But still: would you want to say, for example, that because physics treats the table in front of you as just a collection of elementary particles held together by forces, therefore the table, as such, doesn't "exist"? That seems overwrought. Physics deepens your understanding of the table, of course—showing you what its microscopic constituents are and why they hold themselves together—but the table still "exists."  In much the same way, physics enormously deepened our understanding of what we mean by the "flow of time"—showing how the "flow" emerges from the time-symmetric equations of physics, combined with the time-asymmetric phenomena of thermodynamics, which increase the universe's entropy as we move away from the Big Bang, and thereby allow for the creation of memories, records, and other irreversible effects (a part of the story that I didn't even get into here). But it feels overwrought to say that, because physics gives us a perspective from which we can see the "flow of time" as emerging from something deeper, therefore the "flow" doesn't exist, or is just an illusion. Hope that helps! (followup question) Dear Professor, I've been thinking about the "block universe" and it seems to me that in it past, present and future all coexist.  So on the basis of Einstein's theory, do all exist eternally, and why do we perceive only the present? But you don't perceive only the present!  In the past, you perceived what's now the past (and which you now remember), and in the future, you'll perceive what's now the future (and which you now look forward to), right?  And as for why the present is the present, and not some other point in time?  Well, that strikes me as one of those questions like why you're you, out of all the possible people who you could have been instead, or why, assuming there are billions of habitable planets, you find yourself on earth and not on any of the other planets.  Maybe the best answer is that you had to be someone, living somewhere, at some particular point in time when you asked this question—and you could've wondered the same thing regardless of what the answer had turned out to be. "Could a Quantum Computer Have Subjective Experience?" Monday, August 25th, 2014 Author's Note: Below is the prepared version of a talk that I gave two weeks ago at the workshop Quantum Foundations of a Classical Universe, which was held at IBM's TJ Watson Research Center in Yorktown Heights, NY.  My talk is for entertainment purposes only; it should not be taken seriously by anyone.  If you reply in a way that makes clear you did take it seriously ("I'm shocked and outraged that someone who dares to call himself a scientist would … [blah blah]"), I will log your IP address, hunt you down at night, and force you to put forward an account of consciousness and decoherence that deals with all the paradoxes discussed below—and then reply at length to all criticisms of your account.
If you’d like to see titles, abstracts, and slides for all the talks from the workshop—including by Charles Bennett, Sean Carroll, James Hartle, Adrian Kent, Stefan Leichenauer, Ken Olum, Don Page, Jason Pollack, Jess Riedel, Mark Srednicki, Wojciech Zurek, and Michael Zwolak—click here.  You’re also welcome to discuss these other nice talks in the comments section, though I might or might not be able to answer questions about them.  Apparently videos of all the talks will be available before long (Jess Riedel has announced that videos are now available). (Note that, as is probably true for other talks as well, the video of my talk differs substantially from the prepared version—it mostly just consists of interruptions and my responses to them!  On the other hand, I did try to work some of the more salient points from the discussion into the text below.) Thanks so much to Charles Bennett and Jess Riedel for organizing the workshop, and to all the participants for great discussions. I didn’t prepare slides for this talk—given the topic, what slides would I use exactly?  “Spoiler alert”: I don’t have any rigorous results about the possibility of sentient quantum computers, to state and prove on slides.  I thought of giving a technical talk on quantum computing theory, but then I realized that I don’t really have technical results that bear directly on the subject of the workshop, which is how the classical world we experience emerges from the quantum laws of physics.  So, given the choice between a technical talk that doesn’t really address the questions we’re supposed to be discussing, or a handwavy philosophical talk that at least tries to address them, I opted for the latter, so help me God. I like to tell this story when people ask me whether the interpretation of quantum mechanics has any empirical consequences. Look, I understand the impulse to say “let’s discuss the measure problem, or the measurement problem, or derivations of the Born rule, or Boltzmann brains, or observer-counting, or whatever, but let’s take consciousness off the table.”  (Compare: “let’s debate this state law in Nebraska that says that, before getting an abortion, a woman has to be shown pictures of cute babies.  But let’s take the question of whether or not fetuses have human consciousness—i.e., the actual thing that’s driving our disagreement about that and every other subsidiary question—off the table, since that one is too hard.”)  The problem, of course, is that even after you’ve taken the elephant off the table (to mix metaphors), it keeps climbing back onto the table, often in disguises.  So, for better or worse, my impulse tends to be the opposite: to confront the elephant directly. Having said that, I still need to defend the claim that (a) the questions we’re discussing, centered around quantum mechanics, Many Worlds, and decoherence, and (b) the question of which physical systems should be considered “conscious,” have anything to do with each other.  Many people would say that the connection doesn’t go any deeper than: “quantum mechanics is mysterious, consciousness is also mysterious, ergo maybe they’re related somehow.”  But I’m not sure that’s entirely true.  One thing that crystallized my thinking about this was a remark made in a lecture by Peter Byrne, who wrote a biography of Hugh Everett.  Byrne was discussing the question, why did it take so many decades for Everett’s Many-Worlds Interpretation to become popular?  
Of course, there are people who deny quantum mechanics itself, or who have basic misunderstandings about it, but let's leave those people aside.  Why did people like Bohr and Heisenberg dismiss Everett?  More broadly: why wasn't it just obvious to physicists from the beginning that "branching worlds" is a picture that the math militates toward, probably the simplest, easiest story one can tell around the Schrödinger equation?  Even if early quantum physicists rejected the Many-Worlds picture, why didn't they at least discuss and debate it? Here was Byrne's answer: he said, before you can really be on board with Everett, you first need to be on board with Daniel Dennett (the philosopher).  He meant: you first need to accept that a "mind" is just some particular computational process.  At the bottom of everything is the physical state of the universe, evolving via the equations of physics, and if you want to know where consciousness is, you need to go into that state, and look for where computations are taking place that are sufficiently complicated, or globally-integrated, or self-referential, or … something, and that's where the consciousness resides.  And crucially, if following the equations tells you that after a decoherence event, one computation splits up into two computations, in different branches of the wavefunction, that thereafter don't interact—congratulations!  You've now got two consciousnesses. And if everything above strikes you as so obvious as not to be worth stating … well, that's a sign of how much things changed in the latter half of the 20th century.  Before then, many thinkers would've been more likely to say, with Descartes: no, my starting point is not the physical world.  I don't even know a priori that there is a physical world.  My starting point is my own consciousness, which is the one thing besides math that I can be certain about.  And the point of a scientific theory is to explain features of my experience—ultimately, if you like, to predict the probability that I'm going to see X or Y if I do A or B.  (If I don't have prescientific knowledge of myself, as a single, unified entity that persists in time, makes choices, and later observes their consequences, then I can't even get started doing science.)  I'm happy to postulate a world external to myself, filled with unseen entities like electrons behaving in arbitrarily unfamiliar ways, if it will help me understand my experience—but postulating other versions of me is, at best, irrelevant metaphysics.  This is a viewpoint that could lead you to Copenhagenism, or to its newer variants like quantum Bayesianism. Of course, there are already tremendous difficulties here, even if we ignore quantum mechanics entirely.  Ken Olum went over much of this ground in his talk yesterday (see here for a relevant paper by Davenport and Olum).  You've all heard the ones about, would you agree to be painlessly euthanized, provided that a complete description of your brain would be sent to Mars as an email attachment, and a "perfect copy" of you would be reconstituted there?  Would you demand that the copy on Mars be up and running before the original was euthanized?  But what do we mean by "before"—in whose frame of reference? Some people say: sure, none of this is a problem!  If I'd been brought up since childhood taking family vacations where we all emailed ourselves to Mars and had our original bodies euthanized, I wouldn't think anything of it.  But the philosophers of mind are barely getting started.
You might say, sure, maybe these questions are puzzling, but what’s the alternative?  Either we have to say that consciousness is a byproduct of any computation of the right complexity, or integration, or recursiveness (or something) happening anywhere in the wavefunction of the universe, or else we’re back to saying that beings like us are conscious, and all these other things aren’t, because God gave the souls to us, so na-na-na.  Or I suppose we could say, like the philosopher John Searle, that we’re conscious, and the lookup table and homomorphically-encrypted brain and Vaidman brain and all these other apparitions aren’t, because we alone have “biological causal powers.”  And what do those causal powers consist of?  Hey, you’re not supposed to ask that!  Just accept that we have them.  Or we could say, like Roger Penrose, that we’re conscious and the other things aren’t because we alone have microtubules that are sensitive to uncomputable effects from quantum gravity.  But neither of those two options ever struck me as much of an improvement. Yet I submit to you that, between these extremes, there’s another position we can stake out—one that I certainly don’t know to be correct, but that would solve so many different puzzles if it were correct that, for that reason alone, it seems to me to merit more attention than it usually receives.  (In an effort to give the view that attention, a couple years ago I wrote an 85-page essay called The Ghost in the Quantum Turing Machine, which one or two people told me they actually read all the way through.)  If, after a lifetime of worrying (on weekends) about stuff like whether a giant lookup table would be conscious, I now seem to be arguing for this particular view, it’s less out of conviction in its truth than out of a sense of intellectual obligation: to whatever extent people care about these slippery questions at all, to whatever extent they think various alternative views deserve a hearing, I believe this one does as well. Before I go further, let me be extremely clear about what this view is not saying.  Firstly, it’s not saying that the brain is a quantum computer, in any interesting sense—let alone a quantum-gravitational computer, like Roger Penrose wants!  Indeed, I see no evidence, from neuroscience or any other field, that the cognitive information processing done by the brain is anything but classical.  The view I’m discussing doesn’t challenge conventional neuroscience on that account. Secondly, this view doesn’t say that consciousness is in any sense necessary for decoherence, or for the emergence of a classical world.  I’ve never understood how one could hold such a belief, while still being a scientific realist.  After all, there are trillions of decoherence events happening every second in stars and asteroids and uninhabited planets.  Do those events not “count as real” until a human registers them?  (Or at least a frog, or an AI?)  The view I’m discussing only asserts the converse: that decoherence is necessary for consciousness.  (By analogy, presumably everyone agrees that some amount of computation is necessary for an interesting consciousness, but that doesn’t mean consciousness is necessary for computation.) Thirdly, the view I’m discussing doesn’t say that “quantum magic” is the explanation for consciousness.  
It’s silent on the explanation for consciousness (to whatever extent that question makes sense); it seeks only to draw a defensible line between the systems we want to regard as conscious and the systems we don’t—to address what I recently called the Pretty-Hard Problem.  And the (partial) answer it suggests doesn’t seem any more “magical” to me than any other proposed answer to the same question.  For example, if one said that consciousness arises from any computation that’s sufficiently “integrated” (or something), I could reply: what’s the “magical force” that imbues those particular computations with consciousness, and not other computations I can specify?  Or if one said (like Searle) that consciousness arises from the biology of the brain, I could reply: so what’s the “magic” of carbon-based biology, that could never be replicated in silicon?  Or even if one threw up one’s hands and said everything was conscious, I could reply: what’s the magical power that imbues my stapler with a mind?  Each of these views, along with the view that stresses the importance of decoherence and the arrow of time, is worth considering.  In my opinion, each should be judged according to how well it holds up under the most grueling battery of paradigm-cases, thought experiments, and reductios ad absurdum we can devise. So, why might one conjecture that decoherence, and participation in the arrow of time, were necessary conditions for consciousness?  I suppose I could offer some argument about our subjective experience of the passage of time being a crucial component of our consciousness, and the passage of time being bound up with the Second Law.  Truthfully, though, I don’t have any a-priori argument that I find convincing.  All I can do is show you how many apparent paradoxes get resolved if you make this one speculative leap. For starters, if you think about exactly how our chunk of matter is going to amplify microscopic fluctuations, it could depend on details like the precise spin orientations of various subatomic particles in the chunk.  But that has an interesting consequence: if you’re an outside observer who doesn’t know the chunk’s quantum state, it might be difficult or impossible for you to predict what the chunk is going to do next—even just to give decent statistical predictions, like you can for a hydrogen atom.  And of course, you can’t in general perform a measurement that will tell you the chunk’s quantum state, without violating the No-Cloning Theorem.  For the same reason, there’s in general no physical procedure that you can apply to the chunk to duplicate it exactly: that is, to produce a second chunk that you can be confident will behave identically (or almost identically) to the first, even just in a statistical sense.  (Again, this isn’t assuming any long-range quantum coherence in the chunk: only microscopic coherence that then gets amplified.) It might be objected that there are all sorts of physical systems that “amplify microscopic fluctuations,” but that aren’t anything like what I described, at least not in any interesting sense: for example, a Geiger counter, or a photodetector, or any sort of quantum-mechanical random-number generator.  You can make, if not an exact copy of a Geiger counter, surely one that’s close enough for practical purposes.  And, even though the two counters will record different sequences of clicks when pointed at identical sources, the statistical distribution of clicks will be the same (and precisely calculable), and surely that’s all that matters. 
 So, what separates these examples from the sorts of examples I want to discuss? What separates them is the undisputed existence of what I’ll call a clean digital abstraction layer.  By that, I mean a macroscopic approximation to a physical system that an external observer can produce, in principle, without destroying the system; that can be used to predict what the system will do to excellent accuracy (given knowledge of the environment); and that “sees” quantum-mechanical uncertainty—to whatever extent it does—as just a well-characterized source of random noise.  If a system has such an abstraction layer, then we can regard any quantum noise as simply part of the “environment” that the system observes, rather than part of the system itself.  I’ll take it as clear that such clean abstraction layers exist for a Geiger counter, a photodetector, or a computer with a quantum random number generator.  By contrast, for (say) an animal brain, I regard it as currently an open question whether such an abstraction layer exists or not.  If, someday, it becomes routine for nanobots to swarm through people’s brains and make exact copies of them—after which the “original” brains can be superbly predicted in all circumstances, except for some niggling differences that are traceable back to different quantum-mechanical dice rolls—at that point, perhaps educated opinion will have shifted to the point where we all agree the brain does have a clean digital abstraction layer.  But from where we stand today, it seems entirely possible to agree that the brain is a physical system obeying the laws of physics, while doubting that the nanobots would work as advertised.  It seems possible that—as speculated by Bohr, Compton, Eddington, and even Alan Turing—if you want to get it right you’ll need more than just the neural wiring graph, the synaptic strengths, and the approximate neurotransmitter levels.  Maybe you also need (e.g.) the internal states of the neurons, the configurations of sodium-ion channels, or other data that you simply can’t get without irreparably damaging the original brain—not only as a contingent matter of technology but as a fundamental matter of physics. (As a side note, I should stress that obviously, even without invasive nanobots, our brains are constantly changing, but we normally don’t say as a result that we become completely different people at each instant!  To my way of thinking, though, this transtemporal identity is fundamentally different from a hypothetical identity between different “copies” of you, in the sense we’re talking about.  For one thing, all your transtemporal doppelgängers are connected by a single, linear chain of causation.  For another, outside movies like Bill and Ted’s Excellent Adventure, you can’t meet your transtemporal doppelgängers and have a conversation with them, nor can scientists do experiments on some of them, then apply what they learned to others that remained unaffected by their experiments.) So, on this view, a conscious chunk of matter would be one that not only acts irreversibly, but that might well be unclonable for fundamental physical reasons.  If so, that would neatly resolve many of the puzzles that I discussed before.  So for example, there’s now a straightforward reason why you shouldn’t consent to being killed, while your copy gets recreated on Mars from an email attachment.  
Namely, that copy will have a microstate with no direct causal link to your “original” microstate—so while it might behave similarly to you in many ways, you shouldn’t expect that your consciousness will “transfer” to it.  If you wanted to get your exact microstate to Mars, you could do that in principle using quantum teleportation—but as we all know, quantum teleportation inherently destroys the original copy, so there’s no longer any philosophical problem!  (Or, of course, you could just get on a spaceship bound for Mars: from a philosophical standpoint, it amounts to the same thing.) Similarly, in the case where the simulation of your brain was run three times for error-correcting purposes: that could bring about three consciousnesses if, and only if, the three simulations were tied to different sets of decoherence events.  The giant lookup table and the Earth-sized brain simulation wouldn’t bring about any consciousness, unless they were implemented in such a way that they no longer had a clean digital abstraction layer.  What about the homomorphically-encrypted brain simulation?  That might no longer work, simply because we can’t assume that the microscopic fluctuations that get amplified are homomorphically encrypted.  Those are “in the clear,” which inevitably leaks information.  As for the quantum computer that simulates your thought processes and then perfectly reverses the simulation, or that queries you like a Vaidman bomb—in order to implement such things, we’d of course need to use quantum fault-tolerance, so that the simulation of you stayed in an encoded subspace and didn’t decohere.  But under our assumption, that would mean the simulation wasn’t conscious. Now, it might seem to some of you like I’m suggesting something deeply immoral.  After all, the view I’m considering implies that, even if a system passed the Turing Test, and behaved identically to a human, even if it eloquently pleaded for its life, if it wasn’t irreversibly decohering microscopic events then it wouldn’t be conscious, so it would be fine to kill it, torture it, whatever you want. But wait a minute: if a system isn’t doing anything irreversible, then what exactly does it mean to “kill” it?  If it’s a classical computation, then at least in principle, you could always just restore from backup.  You could even rewind and not only erase the memories of, but “uncompute” (“untorture”?) whatever tortures you had performed.  If it’s a quantum computation, you could always invert the unitary transformation U that corresponded to killing the thing (then reapply U and invert it again for good measure, if you wanted).  Only for irreversible systems are there moral acts with irreversible consequences. This is related to something that’s bothered me for years in quantum foundations.  When people discuss Schrödinger’s cat, they always—always—insert some joke about, “obviously, this experiment wouldn’t pass the Ethical Review Board.  Nowadays, we try to avoid animal cruelty in our quantum gedankenexperiments.”  But actually, I claim that there’s no animal cruelty at all in the Schrödinger’s cat experiment.  And here’s why: in order to prove that the cat was ever in a coherent superposition of |Alive〉 and |Dead〉, you need to be able to measure it in a basis like {|Alive〉+|Dead〉,|Alive〉-|Dead〉}.  But if you can do that, you must have such precise control over all the cat’s degrees of freedom that you can also rotate unitarily between the |Alive〉 and |Dead〉 states.  
(To see this, let U be the unitary that you applied to the |Alive〉 branch, and V the unitary that you applied to the |Dead〉 branch, to bring them into coherence with each other; then consider applying U⁻¹V.)  But if you can do that, then in what sense should we say that the cat in the |Dead〉 state was ever "dead" at all?  Normally, when we speak of "killing," we mean doing something irreversible—not rotating to some point in a Hilbert space that we could just as easily rotate away from. (There followed discussion among some audience members about the question of whether, if you destroyed all records of some terrible atrocity, like the Holocaust, everywhere in the physical world, you would thereby cause the atrocity "never to have happened."  Many people seemed surprised by my willingness to accept that implication of what I was saying.  By way of explaining, I tried to stress just how far our everyday, intuitive notion of "destroying all records of something" falls short of what would actually be involved here: when we think of "destroying records," we think about burning books, destroying the artifacts in museums, silencing witnesses, etc.  But even if all those things were done and many others, still the exact configurations of the air, the soil, and photons heading away from the earth at the speed of light would retain their silent testimony to the Holocaust's reality.  "Erasing all records" in the physics sense would be something almost unimaginably more extreme: it would mean inverting the entire physical evolution in the vicinity of the earth, stopping time's arrow and running history itself backwards.  Such 'unhappening' of what's happened is something that we lack any experience of, at least outside of certain quantum interference experiments—though in the case of the Holocaust, one could be forgiven for wishing it were possible.) OK, so much for philosophy of mind and morality; what about the interpretation of quantum mechanics?  If we think about consciousness in the way I've suggested, then who's right: the Copenhagenists or the Many-Worlders?  You could make a case for either.  The Many-Worlders would be right that we could always, if we chose, think of decoherence events as "splitting" our universe into multiple branches, each with different versions of ourselves, that thereafter don't interact.  On the other hand, the Copenhagenists would be right that, even in principle, we could never do any experiment where this "splitting" of our minds would have any empirical consequence.  On this view, if you can control a system well enough that you can actually observe interference between the different branches, then it follows that you shouldn't regard the system as conscious, because it's not doing anything irreversible. In my essay, the implication that concerned me the most was the one for "free will."  If being conscious entails amplifying microscopic events in an irreversible and unclonable way, then someone looking at a conscious system from the outside might not, in general, be able to predict what it's going to do next, not even probabilistically.  In other words, its decisions might be subject to at least some "Knightian uncertainty": uncertainty that we can't even quantify in a mutually-agreed way using probabilities, in the same sense that we can quantify our uncertainty about (say) the time of a radioactive decay.  And personally, this is actually the sort of "freedom" that interests me the most.
I don’t really care if my choices are predictable by God, or by a hypothetical Laplace demon: that is, if they would be predictable (at least probabilistically), given complete knowledge of the microstate of the universe.  By definition, there’s essentially no way for my choices not to be predictable in that weak and unempirical sense!  On the other hand, I’d prefer that my choices not be completely predictable by other people.  If someone could put some sheets of paper into a sealed envelope, then I spoke extemporaneously for an hour, and then the person opened the envelope to reveal an exact transcript of everything I said, that’s the sort of thing that really would cause me to doubt in what sense “I” existed as a locus of thought.  But you’d have to actually do the experiment (or convince me that it could be done): it doesn’t count just to talk about it, or to extrapolate from fMRI experiments that predict which of two buttons a subject is going to press with 60% accuracy a few seconds in advance. But since we’ve got some cosmologists in the house, let me now turn to discussing the implications of this view for Boltzmann brains. (For those tuning in from home: a Boltzmann brain is a hypothetical chance fluctuation in the late universe, which would include a conscious observer with all the perceptions that a human being—say, you—is having right now, right down to false memories and false beliefs of having arisen via Darwinian evolution.  On statistical grounds, the overwhelming majority of Boltzmann brains last just long enough to have a single thought—like, say, the one you’re having right now—before they encounter the vacuum and freeze to death.  If you measured some part of the vacuum state toward which our universe seems to be heading, asking “is there a Boltzmann brain here?,” quantum mechanics predicts that the probability would be ridiculously astronomically small, but nonzero.  But, so the argument goes, if the vacuum lasts for infinite time, then as long as the probability is nonzero, it doesn’t matter how tiny it is: you’ll still get infinitely many Boltzmann brains indistinguishable from any given observer; and for that reason, any observer should consider herself infinitely likelier to be a Boltzmann brain than to be the “real,” original version.  For the record, even among the strange people at the IBM workshop, no one actually worried about being a Boltzmann brain.  The question, rather, is whether, if a cosmological model predicts Boltzmann brains, then that’s reason enough to reject the model, or whether we can live with such a prediction, since we have independent grounds for knowing that we can’t be Boltzmann brains.) At this point, you can probably guess where this is going.  If decoherence, entropy production, full participation in the arrow of time are necessary conditions for consciousness, then it would follow, in particular, that a Boltzmann brain is not conscious.  So we certainly wouldn’t be Boltzmann brains, even under a cosmological model that predicts infinitely more of them than of us.  We can wipe our hands; the problem is solved! I find it extremely interesting that, in their recent work, Kim Boddy, Sean Carroll, and Jason Pollack reached a similar conclusion, but from a completely different starting point.  They said: look, under reasonable assumptions, the late universe is just going to stay forever in an energy eigenstate—just sitting there doing nothing.  
It’s true that, if someone came along and measured the energy eigenstate, asking “is there a Boltzmann brain here?,” then with a tiny but nonzero probability the answer would be yes.  But since no one is there measuring, what licenses us to interpret the nonzero overlap in amplitude with the Boltzmann brain state, as a nonzero probability of there being a Boltzmann brain?  I think they, too, are implicitly suggesting: if there’s no decoherence, no arrow of time, then we’re not authorized to say that anything is happening that “counts” for anthropic purposes. Let me now mention an obvious objection.  (In fact, when I gave the talk, this objection was raised much earlier.)  You might say, “look, if you really think irreversible decoherence is a necessary condition for consciousness, then you might find yourself forced to say that there’s no consciousness, because there might not be any such thing as irreversible decoherence!  Imagine that our entire solar system were enclosed in an anti de Sitter (AdS) boundary, like in Greg Egan’s science-fiction novel Quarantine.  Inside the box, there would just be unitary evolution in some Hilbert space: maybe even a finite-dimensional Hilbert space.  In which case, all these ‘irreversible amplifications’ that you lay so much stress on wouldn’t be irreversible at all: eventually all the Everett branches would recohere; in fact they’d decohere and recohere infinitely many times.  So by your lights, how could anything be conscious inside the box?” My response to this involves one last speculation.  I speculate that the fact that we don’t appear to live in AdS space—that we appear to live in (something evolving toward) a de Sitter space, with a positive cosmological constant—might be deep and important and relevant.  I speculate that, in our universe, “irreversible decoherence” means: the records of what you did are now heading toward our de Sitter horizon at the speed of light, and for that reason alone—even if for no others—you can’t put Humpty Dumpty back together again.  (Here I should point out, as several workshop attendees did to me, that Bousso and Susskind explored something similar in their paper The Multiverse Interpretation of Quantum Mechanics.) Does this mean that, if cosmologists discover tomorrow that the cosmological constant is negative, or will become negative, then it will turn out that none of us were ever conscious?  No, that’s stupid.  What it would suggest is that the attempt I’m now making on the Pretty-Hard Problem had smacked into a wall (an AdS wall?), so that I, and anyone else who stressed in-principle irreversibility, should go back to the drawing board.  (By analogy, if some prescription for getting rid of Boltzmann brains fails, that doesn’t mean we are Boltzmann brains; it just means we need a new prescription.  Tempting as it is to skewer our opponents’ positions with these sorts of strawman inferences, I hope we can give each other the courtesy of presuming a bare minimum of sense.) Another question: am I saying that, in order to be absolutely certain of whether some entity satisfied the postulated precondition for consciousness, one might, in general, need to look billions of years into the future, to see whether the “decoherence” produced by the entity was really irreversible?  Yes (pause to gulp bullet).  I am saying that.  On the other hand, I don’t think it’s nearly as bad as it sounds.  
After all, the category of “consciousness” might be morally relevant, or relevant for anthropic reasoning, but presumably we all agree that it’s unlikely to play any causal role in the fundamental laws of physics.  So it’s not as if we’ve introduced any teleology into the laws of physics by this move. Let me end by pointing out what I’ll call the “Tegmarkian slippery slope.”  It feels scientific and rational—from the perspective of many of us, even banal—to say that, if we’re conscious, then any sufficiently-accurate computer simulation of us would also be.  But I tried to convince you that this view depends, for its aura of obviousness, on our agreeing not to probe too closely exactly what would count as a “sufficiently-accurate” simulation.  E.g., does it count if the simulation is done in heavily-encrypted form, or encoded as a giant lookup table?  Does it matter if anyone actually runs the simulation, or consults the lookup table?  Now, all the way at the bottom of the slope is Max Tegmark, who asks: to produce consciousness, what does it matter if the simulation is physically instantiated at all?  Why isn’t it enough for the simulation to “exist” mathematically?  Or, better yet: if you’re worried about your infinitely-many Boltzmann brain copies, then why not worry equally about the infinitely many descriptions of your life history that are presumably encoded in the decimal expansion of π?  Why not hold workshops about how to avoid the prediction that we’re infinitely likelier to be “living in π” than to be our “real” selves? From this extreme, even most scientific rationalists recoil.  They say, no, even if we don’t yet know exactly what’s meant by “physical instantiation,” we agree that you only get consciousness if the computer program is physically instantiated somehow.  But now I have the opening I want.  I can say: once we agree that physical existence is a prerequisite for consciousness, why not participation in the Arrow of Time?  After all, our ordinary ways of talking about sentient beings—outside of quantum mechanics, cosmology, and maybe theology—don’t even distinguish between the concepts “exists” and “exists and participates in the Arrow of Time.”  And to say we have no experience of reversible, clonable, coherently-executable, atemporal consciousnesses is a massive understatement. Of course, we should avoid the sort of arbitrary prejudice that Turing warned against in Computing Machinery and Intelligence.  Just because we lack experience with extraterrestrial consciousnesses, doesn’t mean it would be OK to murder an intelligent extraterrestrial if we met one tomorrow.  In just the same way, just because we lack experience with clonable, atemporal consciousnesses, doesn’t mean it would be OK to … wait!  As we said before, clonability, and aloofness from time’s arrow, call severely into question what it even means to “murder” something.  So maybe this case isn’t as straightforward as the extraterrestrials after all. At this point, I’ve probably laid out enough craziness, so let me stop and open things up for discussion. Integrated Information Theory: Virgil Griffith opines Wednesday, June 25th, 2014 Remember the two discussions about Integrated Information Theory that we had a month ago on this blog?  
You know, the ones where I argued that IIT fails because “the brain might be an expander, but not every expander is a brain”; where IIT inventor Giulio Tononi wrote a 14-page response biting the bullet with mustard; and where famous philosopher of mind David Chalmers, and leading consciousness researcher (and IIT supporter) Christof Koch, also got involved in the comments section? OK, so one more thing about that.  Virgil Griffith recently completed his PhD under Christof Koch at Caltech—as he puts it, “immersing [him]self in the nitty-gritty of IIT for the past 6.5 years.”  This morning, Virgil sent me two striking letters about his thoughts on the recent IIT exchanges on this blog.  He asked me to share them here, something that I’m more than happy to do: Reading these letters, what jumped out at me—given Virgil’s long apprenticeship in the heart of IIT-land—was the amount of agreement between my views and his.  In particular, Virgil agrees with my central contention that Φ, as it stands, can at most be a necessary condition for consciousness, not a sufficient condition, and remarks that “[t]o move IIT from talked about to accepted among hard scientists, it may be necessary for [Tononi] to wash his hands of sufficiency claims.”  He agrees that a lack of mathematical clarity in the definition of Φ is a “major problem in the IIT literature,” commenting that “IIT needs more mathematically inclined people at its helm.”  He also says he agrees “110%” that the lack of a derivation of the form of Φ from IIT’s axioms is “a pothole in the theory,” and further agrees 110% that the current prescriptions for computing Φ contain many unjustified idiosyncrasies. Indeed, given the level of agreement here, there’s not all that much for me to rebut, defend, or clarify! I suppose there are a few things. 1. Just as a clarifying remark, in a few places where it looks from the formatting like Virgil is responding to something I said (for example, “The conceptual structure is unified—it cannot be decomposed into independent components” and “Clearly, a theory of consciousness must be able to provide an adequate account for such seemingly disparate but largely uncontroversial facts”), he’s actually responding to something Giulio said (and that I, at most, quoted). 2. Virgil says, correctly, that Giulio would respond to my central objection against IIT by challenging my “intuition for things being unconscious.”  (Indeed, because Giulio did respond, there’s no need to speculate about how he would respond!)  However, Virgil then goes on to explicate Giulio’s response using the analogy of temperature (interestingly, the same analogy I used for a different purpose).  He points out how counterintuitive it would be for Kelvin’s contemporaries to accept that “even the coldest thing you’ve touched actually has substantial heat in it,” and remarks: “I find this ‘Kelvin scale for C’ analogy makes the panpsychism much more palatable.”  The trouble is that I never objected to IIT’s panpsychism per se: I only objected to its seemingly arbitrary and selective panpsychism.  It’s one thing for a theory to ascribe some amount of consciousness to a 2D grid or an expander graph.  It’s quite another for a theory to ascribe vastly more consciousness to those things than it ascribes to a human brain—even while denying consciousness to things that are intuitively similar but organized a little differently (say, a 1D grid).  
A better analogy here would be if Kelvin’s theory of temperature had predicted, not merely that all ordinary things had some heat in them, but that an ice cube was hotter than the Sun, even though a popsicle was, of course, colder than the Sun.  (The ice cube, you see, “integrates heat” in a way that the popsicle doesn’t…) 3. Virgil imagines two ways that an IIT proponent could respond to my argument involving the cerebellum—the argument that accuses IIT proponents of changing the rules of the game according to convenience (a 2D grid has a large Φ?  suck it up and accept it; your intuitions about a grid’s lack of consciousness are irrelevant.  the human cerebellum has a small Φ?  ah, that’s a victory for IIT, since the cerebellum is intuitively unconscious).  The trouble is that both of Virgil’s imagined responses are by reference to the IIT axioms.  But I wasn’t talking about the axioms themselves, but about whether we’re allowed to validate the axioms, by checking their consequences against earlier, pre-theoretic intuitions.  And I was pointing out that Giulio seemed happy to do so when the results “went in IIT’s favor” (in the cerebellum example), even though he lectured me against doing so in the cases of the expander and the 2D grid (cases where IIT does less well, to put it mildly, at capturing our intuitions). 4. Virgil chastises me for ridiculing Giulio’s phenomenological argument for the consciousness of a 2D grid by way of nursery rhymes: “Just because it feels like something to see a wall, doesn’t mean it feels like something to be a wall.  You can smell a rose, and the rose can smell good, but that doesn’t mean the rose can smell you.”  Virgil amusingly comments: “Even when both are inebriated, I’ve never heard [Giulio] nor [Christof] separately or collectively imply anything like this.  Moreover, they’re each far too clueful to fall for something so trivial.”  For my part, I agree that neither Giulio nor Christof would ever advocate something as transparently silly as, “if you have a rich inner experience when thinking about X, then that’s evidence X itself is conscious.”  And I apologize if I seemed to suggest they would.  To clarify, my point was not that Giulio was making such an absurd statement, but rather that, assuming he wasn’t, I didn’t know what he was trying to say in the passages of his that I’d just quoted at length.  The silly thing seemed like the “obvious” reading of his words, and my hermeneutic powers were unequal to the task of figuring out the non-silly, non-obvious reading that he surely intended. Anyway, there’s much more to Virgil’s letters than the above—including answers to some of my subsidiary questions about the details of IIT (e.g., how to handle unbalanced partitions, and the mathematical meanings of terms like “mechanism” and “system of mechanisms”).  Also, in parts of the letters, Virgil’s main concern is neither to agree with me nor to agree with Giulio, but rather to offer his own ideas, developed in the course of his PhD work, for how to move forward and fix some of the problems with IIT.  All in all, these are recommended reads for anyone who’s been following this debate. Giulio Tononi and Me: A Phi-nal Exchange Friday, May 30th, 2014 You might recall that last week I wrote a post criticizing Integrated Information Theory (IIT), and its apparent implication that a simple Reed-Solomon decoding circuit would, if scaled to a large enough size, bring into being a consciousness vastly exceeding our own.  
On Wednesday Giulio Tononi, the creator of IIT, was kind enough to send me a fascinating 14-page rebuttal, and to give me permission to share it here: If you’re interested in this subject at all, then I strongly recommend reading Giulio’s response before continuing further.   But for those who want the tl;dr: Giulio, not one to battle strawmen, first restates my own argument against IIT with crystal clarity.  And while he has some minor quibbles (e.g., apparently my calculations of Φ didn’t use the most recent, “3.0” version of IIT), he wisely sets those aside in order to focus on the core question: according to IIT, are all sorts of simple expander graphs conscious? There, he doesn’t “bite the bullet” so much as devour a bullet hoagie with mustard.  He affirms that, yes, according to IIT, a large network of XOR gates arranged in a simple expander graph is conscious.  Indeed, he goes further, and says that the “expander” part is superfluous: even a network of XOR gates arranged in a 2D square grid is conscious.  In my language, Giulio is simply pointing out here that a √n×√n square grid has decent expansion: good enough to produce a Φ-value of about √n, if not the information-theoretic maximum of n (or n/2, etc.) that an expander graph could achieve.  And apparently, by Giulio’s lights, Φ=√n is sufficient for consciousness! While Giulio never mentions this, it’s interesting to observe that logic gates arranged in a 1-dimensional line would produce a tiny Φ-value (Φ=O(1)).  So even by IIT standards, such a linear array would not be conscious.  Yet the jump from a line to a two-dimensional grid is enough to light the spark of Mind. Yet even as we admire Giulio’s honesty and consistency, his stance might also prompt us, gently, to take another look at this peanut-butter-moon theory, and at what grounds we had for believing it in the first place.  In his response essay, Giulio offers four arguments (by my count) for accepting IIT despite, or even because of, its conscious-grid prediction: one “negative” argument and three “positive” ones.  Alas, while your Φ-lage may vary, I didn’t find any of the four arguments persuasive.  In the rest of this post, I’ll go through them one by one and explain why. I. The Copernicus-of-Consciousness Argument Like many commenters on my last post, Giulio heavily criticizes my appeal to “common sense” in rejecting IIT.  Sure, he says, I might find it “obvious” that a huge Vandermonde matrix, or its physical instantiation, isn’t conscious.  But didn’t people also find it “obvious” for millennia that the Sun orbits the Earth?  Isn’t the entire point of science to challenge common sense?  Clearly, then, the test of a theory of consciousness is not how well it upholds “common sense,” but how well it fits the facts. The above position sounds pretty convincing: who could dispute that observable facts trump personal intuitions?  The trouble is, what are the observable facts when it comes to consciousness?  The anti-common-sense view gets all its force by pretending that we’re in a relatively late stage of research—namely, the stage of taking an agreed-upon scientific definition of consciousness, and applying it to test our intuitions—rather than in an extremely early stage, of agreeing on what the word “consciousness” is even supposed to mean. Since I think this point is extremely important—and of general interest, beyond just IIT—I’ll expand on it with some analogies. 
Suppose I told you that, in my opinion, the ε-δ definition of continuous functions—the one you learn in calculus class—failed to capture the true meaning of continuity. Suppose I told you that I had a new, better definition of continuity—and amazingly, when I tried out my definition on some examples, it turned out that ⌊x⌋ (the floor function) was continuous, whereas x² had discontinuities, though only at 17.5 and 42.

You would probably ask what I was smoking, and whether you could have some. But why? Why shouldn't the study of continuity produce counterintuitive results? After all, even the standard definition of continuity leads to some famously weird results, like that x sin(1/x) is a continuous function, even though sin(1/x) is discontinuous. And it's not as if the standard definition is God-given: people had been using words like "continuous" for centuries before Bolzano, Weierstrass, et al. formalized the ε-δ definition, a definition that millions of calculus students still find far from intuitive. So why shouldn't there be a different, better definition of "continuous," and why shouldn't it reveal that a step function is continuous while a parabola is not?

In my view, the way out of this conceptual jungle is to realize that, before any formal definitions, any ε's and δ's, we start with an intuition for what we're trying to capture by the word "continuous." And if we press hard enough on what that intuition involves, we'll find that it largely consists of various "paradigm-cases." A continuous function, we'd say, is a function like 3x, or x², or sin(x), while a discontinuity is the kind of thing that the function 1/x has at x=0, or that ⌊x⌋ has at every integer point. Crucially, we use the paradigm-cases to guide our choice of a formal definition—not vice versa! It's true that, once we have a formal definition, we can then apply it to "exotic" cases like x sin(1/x), and we might be surprised by the results. But the paradigm-cases are different. If, for example, our definition told us that x² was discontinuous, that wouldn't be a "surprise"; it would just be evidence that we'd picked a bad definition. The definition failed at the only task for which it could have succeeded: namely, that of capturing what we meant.

Some people might say that this is all well and good in pure math, but empirical science has no need for squishy intuitions and paradigm-cases. Nothing could be further from the truth. Suppose, again, that I told you that physicists since Kelvin had gotten the definition of temperature all wrong, and that I had a new, better definition. And, when I built a Scott-thermometer that measures true temperatures, it delivered the shocking result that boiling water is actually colder than ice. You'd probably tell me where to shove my Scott-thermometer. But wait: how do you know that I'm not the Copernicus of heat, and that future generations won't celebrate my breakthrough while scoffing at your small-mindedness?

I'd say there's an excellent answer: because what we mean by heat is "whatever it is that boiling water has more of than ice" (along with dozens of other paradigm-cases). And because, if you use a thermometer to check whether boiling water is hotter than ice, then the term for what you're doing is calibrating your thermometer. When the clock strikes 13, it's time to fix the clock, and when the thermometer says boiling water's colder than ice, it's time to replace the thermometer—or if needed, even the entire theory on which the thermometer is based.
Ah, you say, but doesn’t modern physics define heat in a completely different, non-intuitive way, in terms of molecular motion?  Yes, and that turned out to be a superb definition—not only because it was precise, explanatory, and applicable to cases far beyond our everyday experience, but crucially, because it matched common sense on the paradigm-cases.  If it hadn’t given sensible results for boiling water and ice, then the only possible conclusion would be that, whatever new quantity physicists had defined, they shouldn’t call it “temperature,” or claim that their quantity measured the amount of “heat.”  They should call their new thing something else. The implications for the consciousness debate are obvious.  When we consider whether to accept IIT’s equation of integrated information with consciousness, we don’t start with any agreed-upon, independent notion of consciousness against which the new notion can be compared.  The main things we start with, in my view, are certain paradigm-cases that gesture toward what we mean: • You are conscious (though not when anesthetized). • (Most) other people appear to be conscious, judging from their behavior. • Many animals appear to be conscious, though probably to a lesser degree than humans (and the degree of consciousness in each particular species is far from obvious). • A rock is not conscious.  A wall is not conscious.  A Reed-Solomon code is not conscious.  Microsoft Word is not conscious (though a Word macro that passed the Turing test conceivably would be). Fetuses, coma patients, fish, and hypothetical AIs are the x sin(1/x)’s of consciousness: they’re the tougher cases, the ones where we might actually need a formal definition to adjudicate the truth. Now, given a proposed formal definition for an intuitive concept, how can we check whether the definition is talking about same thing we were trying to get at before?  Well, we can check whether the definition at least agrees that parabolas are continuous while step functions are not, that boiling water is hot while ice is cold, and that we’re conscious while Reed-Solomon decoders are not.  If so, then the definition might be picking out the same thing that we meant, or were trying to mean, pre-theoretically (though we still can’t be certain).  If not, then the definition is certainly talking about something else. What else can we do? II. The Axiom Argument According to Giulio, there is something else we can do, besides relying on paradigm-cases.  That something else, in his words, is to lay down “postulates about how the physical world should be organized to support the essential properties of experience,” then use those postulates to derive a consciousness-measuring quantity. OK, so what are IIT’s postulates?  Here’s how Giulio states the five postulates leading to Φ in his response essay (he “derives” these from earlier “phenomenological axioms,” which you can find in the essay): 1. A system of mechanisms exists intrinsically if it can make a difference to itself, by affecting the probability of its past and future states, i.e. it has causal power (existence). 2. It is composed of submechanisms each with their own causal power (composition). 3. It generates a conceptual structure that is the specific way it is, as specified by each mechanism’s concept — this is how each mechanism affects the probability of the system’s past and future states (information). 4. The conceptual structure is unified — it cannot be decomposed into independent components (integration). 5. 
5. The conceptual structure is singular — there can be no superposition of multiple conceptual structures over the same mechanisms and intervals of time.

From my standpoint, these postulates have three problems. First, I don't really understand them. Second, insofar as I do understand them, I don't necessarily accept their truth. And third, insofar as I do accept their truth, I don't see how they lead to Φ.

To elaborate a bit:

I don't really understand the postulates. I realize that the postulates are explicated further in the many papers on IIT. Unfortunately, while it's possible that I missed something, in all of the papers that I read, the definitions never seemed to "bottom out" in mathematical notions that I understood, like functions mapping finite sets to other finite sets. What, for example, is a "mechanism"? What's a "system of mechanisms"? What's "causal power"? What's a "conceptual structure," and what does it mean for it to be "unified"? Alas, it doesn't help to define these notions in terms of other notions that I also don't understand. And yes, I agree that all these notions can be given fully rigorous definitions, but there could be many different ways to do so, and the devil could lie in the details. In any case, because (as I said) it's entirely possible that the failure is mine, I place much less weight on this point than I do on the two points to follow.

I don't necessarily accept the postulates' truth. Is consciousness a "unified conceptual structure"? Is it "singular"? Maybe. I don't know. It sounds plausible. But at any rate, I'm far less confident about any of these postulates—whatever one means by them!—than I am about my own "postulate," which is that you and I are conscious while my toaster is not. Note that my postulate, though not phenomenological, does have the merit of constraining candidate theories of consciousness in an unambiguous way.

I don't see how the postulates lead to Φ. Even if one accepts the postulates, how does one deduce that the "amount of consciousness" should be measured by Φ, rather than by some other quantity? None of the papers I read—including the ones Giulio linked to in his response essay—contained anything that looked to me like a derivation of Φ. Instead, there was general discussion of the postulates, and then Φ just sort of appeared at some point. Furthermore, given the many idiosyncrasies of Φ—the minimization over all bipartite (why just bipartite? why not tripartite?) decompositions of the system, the need for normalization (or something else in version 3.0) to deal with highly-unbalanced partitions—it would be quite a surprise were it possible to derive its specific form from postulates of such generality.

I was going to argue for that conclusion in more detail, when I realized that Giulio had kindly done the work for me already. Recall that Giulio chided me for not using the "latest, 2014, version 3.0" edition of Φ in my previous post. Well, if the postulates uniquely determined the form of Φ, then what's with all these upgrades? Or has Φ's definition been changing from year to year because the postulates themselves have been changing? If the latter, then maybe one should wait for the situation to stabilize before trying to form an opinion of the postulates' meaningfulness, truth, and completeness?

III. The Ironic Empirical Argument

Or maybe not.
Despite all the problems noted above with the IIT postulates, Giulio argues in his essay that there's a good reason to accept them: namely, they explain various empirical facts from neuroscience, and lead to confirmed predictions. In his words:

[A] theory's postulates must be able to explain, in a principled and parsimonious way, at least those many facts about consciousness and the brain that are reasonably established and non-controversial. For example, we know that our own consciousness depends on certain brain structures (the cortex) and not others (the cerebellum), that it vanishes during certain periods of sleep (dreamless sleep) and reappears during others (dreams), that it vanishes during certain epileptic seizures, and so on. Clearly, a theory of consciousness must be able to provide an adequate account for such seemingly disparate but largely uncontroversial facts. Such empirical facts, and not intuitions, should be its primary test…

[I]n some cases we already have some suggestive evidence [of the truth of the IIT postulates' predictions]. One example is the cerebellum, which has 69 billion neurons or so — more than four times the 16 billion neurons of the cerebral cortex — and is as complicated a piece of biological machinery as any. Though we do not understand exactly how it works (perhaps even less than we understand the cerebral cortex), its connectivity definitely suggests that the cerebellum is ill suited to information integration, since it lacks lateral connections among its basic modules. And indeed, though the cerebellum is heavily connected to the cerebral cortex, removing it hardly affects our consciousness, whereas removing the cortex eliminates it.

I hope I'm not alone in noticing the irony of this move. But just in case, let me spell it out: Giulio has stated, as "largely uncontroversial facts," that certain brain regions (the cerebellum) and certain states (dreamless sleep) are not associated with our consciousness. He then views it as a victory for IIT, if those regions and states turn out to have lower information integration than the regions and states that he does take to be associated with our consciousness.

But how does Giulio know that the cerebellum isn't conscious? Even if it doesn't produce "our" consciousness, maybe the cerebellum has its own consciousness, just as rich as the cortex's but separate from it. Maybe removing the cerebellum destroys that other consciousness, unbeknownst to "us." Likewise, maybe "dreamless" sleep brings about its own form of consciousness, one that (unlike dreams) we never, ever remember in the morning.

Giulio might take the implausibility of those ideas as obvious, or at least as "largely uncontroversial" among neuroscientists. But here's the problem with that: he just told us that a 2D square grid is conscious! He told us that we must not rely on "commonsense intuition," or on any popular consensus, to say that if a square mesh of wires is just sitting there XORing some input bits, doing nothing at all that we'd want to call intelligent, then it's probably safe to conclude that the mesh isn't conscious. So then why shouldn't he say the same for the cerebellum, or for the brain in dreamless sleep? By Giulio's own rules (the ones he used for the mesh), we have no a-priori clue whether those systems are conscious or not—so even if IIT predicts that they're not conscious, that can't be counted as any sort of success for IIT.
For me, the point is even stronger: I, personally, would be a million times more inclined to ascribe consciousness to the human cerebellum, or to dreamless sleep, than I would to the mesh of XOR gates. For it's not hard to imagine neuroscientists of the future discovering "hidden forms of intelligence" in the cerebellum, and all but impossible to imagine them doing the same for the mesh. But even if you put those examples on the same footing, still the take-home message seems clear: you can't count it as a "success" for IIT if it predicts that the cerebellum is unconscious, while at the same time denying that it's a "failure" for IIT if it predicts that a square mesh of XOR gates is conscious. If the unconsciousness of the cerebellum can be considered an "empirical fact," safe enough for theories of consciousness to be judged against it, then surely the unconsciousness of the mesh can also be considered such a fact.

IV. The Phenomenology Argument

I now come to, for me, the strangest and most surprising part of Giulio's response. Despite his earlier claim that IIT need not dovetail with "commonsense intuition" about which systems are conscious—that it can defy intuition—at some point, Giulio valiantly tries to reprogram our intuition, to make us feel why a 2D grid could be conscious. As best I can understand, the argument seems to be that, when we stare at a blank 2D screen, we form a rich experience in our heads, and that richness must be mirrored by a corresponding "intrinsic" richness in 2D space itself:

[I]f one thinks a bit about it, the experience of empty 2D visual space is not at all empty, but contains a remarkable amount of structure. In fact, when we stare at the blank screen, quite a lot is immediately available to us without any effort whatsoever. Thus, we are aware of all the possible locations in space ("points"): the various locations are right "there", in front of us. We are aware of their relative positions: a point may be left or right of another, above or below, and so on, for every position, without us having to order them. And we are aware of the relative distances among points: quite clearly, two points may be close or far, and this is the case for every position. Because we are aware of all of this immediately, without any need to calculate anything, and quite regularly, since 2D space pervades most of our experiences, we tend to take for granted the vast set of relationship[s] that make up 2D space.

And yet, says IIT, given that our experience of the blank screen definitely exists, and it is precisely the way it is — it is 2D visual space, with all its relational properties — there must be physical mechanisms that specify such phenomenological relationships through their causal power … One may also see that the causal relationships that make up 2D space obtain whether the elements are on or off. And finally, one may see that such a 2D grid is necessary not so much to represent space from the extrinsic perspective of an observer, but to create it, from its own intrinsic perspective.

Now, it would be child's-play to criticize the above line of argument for conflating our consciousness of the screen with the alleged consciousness of the screen itself. To wit: Just because it feels like something to see a wall, doesn't mean it feels like something to be a wall. You can smell a rose, and the rose can smell good, but that doesn't mean the rose can smell you.
However, I actually prefer a different tack in criticizing Giulio's "wall argument." Suppose I accepted that my mental image of the relationships between certain entities was relevant to assessing whether those entities had their own mental life, independent of me or any other observer. For example, suppose I believed that, if my experience of 2D space is rich and structured, then that's evidence that 2D space is rich and structured enough to be conscious.

Then my question is this: why shouldn't the same be true of 1D space? After all, my experience of staring at a rope is also rich and structured, no less than my experience of staring at a wall. I perceive some points on the rope as being toward the left, others as being toward the right, and some points as being between two other points. In fact, the rope even has a structure—namely, a natural total ordering on its points—that the wall lacks. So why does IIT cruelly deny subjective experience to a row of logic gates strung along a rope, reserving it only for a mesh of logic gates pasted to a wall?

And yes, I know the answer: because the logic gates on the rope aren't "integrated" enough. But who's to say that the gates in the 2D mesh are integrated enough? As I mentioned before, their Φ-value grows only as the square root of the number of gates, so that the ratio of integrated information to total information tends to 0 as the number of gates increases. And besides, aren't what Giulio calls "the facts of phenomenology" the real arbiters here, and isn't my perception of the rope's structure a phenomenological fact? When you cut a rope, does it not split? When you prick it, does it not fray?

At this point, I fear we're at a philosophical impasse. Having learned that, according to IIT,
1. a square grid of XOR gates is conscious, and your experience of staring at a blank wall provides evidence for that,
2. by contrast, a linear array of XOR gates is not conscious, your experience of staring at a rope notwithstanding,
3. the human cerebellum is also not conscious (even though a grid of XOR gates is), and
4. unlike with the XOR gates, we don't need a theory to tell us the cerebellum is unconscious, but can simply accept it as "reasonably established" and "largely uncontroversial,"
I personally feel completely safe in saying that this is not the theory of consciousness for me. But I've also learned that other people, even after understanding the above, still don't reject IIT. And you know what? Bully for them. On reflection, I firmly believe that a two-state solution is possible, in which we simply adopt different words for the different things that we mean by "consciousness"—like, say, consciousness_Real for my kind and consciousness_WTF for the IIT kind. OK, OK, just kidding! How about "paradigm-case consciousness" for the one and "IIT consciousness" for the other.

Completely unrelated announcement: Some of you might enjoy this Nature News piece by Amanda Gefter, about black holes and computational complexity.

Wednesday, May 21st, 2014
Happy birthday to me!

Retiring falsifiability? A storm in Russell's teacup
Friday, January 17th, 2014
Wednesday, February 29, 2012
Progress in number theoretic vision about TGD

During the last few weeks I have been writing a new chapter, Quantum Adeles. The key idea is the generalization of p-adic number fields to their quantum counterparts, and the key problem is what quantum p-adics and quantum adeles mean. The second key question is how these notions relate to various key ideas of quantum TGD proper. The new chapter gives the details: here I just list the basic ideas and results.

What are quantum p-adics and quantum adeles really?

What are quantum p-adics? The first guess is that one obtains quantum p-adics from p-adic integers by first decomposing them to products of primes l and then expressing the primes l in all possible manners as power series of p, allowing the coefficients to be also larger than p but containing only prime factors p_1<p. In the decomposition of the coefficients to primes p_1<p these primes are replaced with quantum primes assignable to p. One could pose the additional condition that the coefficients are smaller than p^N and decompose into products of primes l<p^N mapped to quantum primes assigned with q = exp(i2π/p^N). The interpretation would be in terms of a pinary cutoff. For N=1 one would obtain the counterpart of p-adic numbers. For N>1 this correspondence assigns to an ordinary p-adic integer a larger number of quantum p-adic integers, and one can define a natural projection to the ordinary p-adic integer and its direct quantum counterpart with coefficients a_k<p in the pinary expansion, so that a covering space of p-adics results. One also expects that it is possible to assign what one could call a quantum Galois group to this covering, and the crazy guess is that it is isomorphic with the Absolute Galois Group defined as the Galois group of algebraic numbers as an extension of rationals. One must admit that the details are not fully clear yet. For instance, one can consider quantum p-adics defined as power series of p^N with coefficients a_n<p^N and expressed as products of quantum primes l<p^N. Even in the case that only the N=1 option works, this work has led to a surprisingly detailed understanding of the relationship between different pieces of TGD. This step is however not enough for quantum p-adics.

1. The first additional idea is that one replaces p-adic integers with wave functions in the covering spaces associated with the prime factors l of integers n. This delocalization would give a genuine content to the attribute "quantum", as it does in the case of the electron in the hydrogen atom. The natural physical interpretation for these wave functions would be as cognitive representations of the quantum states in the matter sector, so that momentum, spin and various internal quantum numbers would find cognitive representation in quantum Galois degrees of freedom. One could talk about self-reference: the unseen internal degrees of freedom associated with p-adic integers would make it possible to represent physical information. Also the ratios of infinite primes reducing to unity give rise to a similar but infinite-dimensional number theoretical anatomy of real numbers and lead to what I call the Brahman=Atman identity.

2. The second additional idea is to replace numbers with sequences of arithmetic operations, that is the quantum sum +_q and the quantum product ×_q represented as fundamental 3-vertices, and to formulate the basic laws of arithmetic as symmetries of these vertices, which give rise to additional selection rules from natural symmetry conditions.
These sequences of arithmetic operations, with sets of integers as inputs and outputs, are analogous to Feynman diagrams, and the factorization of integers to primes has the decomposition of a braid to braid strands as a direct correlate. One can also group incoming integers to sub-groups, and the hierarchy of infinite primes describes this grouping. A beautiful physical interpretation for the number theoretic Feynman diagrams emerges.

1. The decomposition of the integers m and n of a quantum rational m/n to products of primes l corresponds to the decomposition of two braids to braid strands labeled by the primes l. TGD predicts both time-like and space-like braids having their ends at partonic 2-surfaces. These two kinds of braids would naturally correspond to the two co-prime integers defining the quantum rational m/n.

2. The two basic vertices +_q and ×_q correspond to the fusion vertex of stringy diagrams and the 3-vertex of Feynman diagrams: both vertices have TGD counterparts and correspond at the Hilbert space level to direct sum and tensor product. Note that the TGD inspired interpretation of +_q (direct sum) is different from the string model interpretation (tensor product). The incoming and outgoing integers in the Feynman diagram correspond to Hilbert space dimensions, and the decomposition to prime factors corresponds to the decomposition of the Hilbert space to prime Hilbert spaces as tensor factors.

3. Ordinary arithmetic operations have an interpretation as tensor product and direct sum, and one can formulate associativity, commutativity, and distributivity as well as product and sum as conditions on the Feynman diagrams. These conditions imply that all loops can be transformed away by basic moves, so that the diagram reduces to a diagram obtained by fusing only sums and products to the initial state to produce a single line, which then decays to the outgoing states by co-sum and co-product. Also the incoming lines attaching to the same line can be permuted, and the permutation can only induce a phase factor. The conjecture that these rules hold true also for the generalized Feynman diagrams is obviously extremely powerful and consistent with the picture provided by zero energy ontology. Also a connection with the twistor approach is suggestive.

4. Quantum adeles for ordinary rationals can be defined as Cartesian products of quantum p-adics and of reals or rationals. For algebraic extensions of rationals a similar definition applies, but allowing only those p-adic primes which do not split into a product of primes of the extension. Number theoretic evolution means an increasing dimension for the algebraic extension of rationals, and this means that an increasing number of p-adic primes drop out of the adele. This means a selective pressure under which only the fittest p-adic primes survive. The basic question is why Mersenne primes and some primes near powers of two are the survivors.

The connection with infinite primes

A beautiful connection with the hierarchy of infinite primes emerges.

1. The simplest infinite primes at the lowest level of the hierarchy define two integers having no common prime divisors and thus defining a rational number, with an interpretation in terms of time-like and space-like braids characterized by co-prime integers.

2. Infinite primes at the lowest level code for algebraic extensions of rationals, so that the infinite primes which are survivors in the evolution dictate which p-adic primes manage to avoid splitting.
Infinite primes coding for algebraic extensions have an interpretation as bound states, and the most stable bound states and the p-adic primes able to resist the corresponding splitting pressures survive. Infinite primes at the n:th level of the hierarchy correspond to monic polynomials of n variables constructed from prime polynomials of n-1 variables constructed from.... The polynomials of a single variable are in 1-1 correspondence with ordered collections of n rationals. This collection corresponds to n pairs of time-like and space-like braids. Thus infinite primes code for collections of lower level infinite primes coding for... and eventually everything boils down to collections of rational coefficients for the monic polynomials coding for infinite primes at the lowest level of the hierarchy. In generalized Feynman diagrams this would correspond to groups of groups of .... of groups of integers of incoming and outgoing lines.

3. The physical interpretation is in terms of pairs of time-like and space-like braids having their ends at partonic 2-surfaces, with strands labelled by primes and defining as their product an integer: the rational is the ratio of these integers. From these basic braids one can form collections of braid pairs labelled by infinite primes at the second level of the hierarchy, and so on, and a beautiful connection with the earlier vision about infinite primes as coders of an infinite hierarchy of braids of braids of... emerges. Space-like and time-like braids play a key role in generalized Feynman diagrams and represent rationals, supporting the interpretation of generalized Feynman diagrams as arithmetic Feynman diagrams. The connection with many-sheeted space-time, in which sheets containing smaller sheets define higher level particles, emerges too.

4. Number theoretic dynamics for ×_q conserves the total numbers of prime factors, so that one can talk either about an infinite number of conserved number theoretic momenta coming as multiples of log(p), p prime, or about particle numbers assignable to the primes p: p^n corresponds to an n-boson state, and the finite parts of infinite primes correspond to states with fermion number one for each prime and arbitrary boson number. The infinite parts of infinite primes correspond to fermion number zero in each mode. The two braids could also correspond to braid strands with fermion numbers 0 and 1. The bosonic and fermionic excitations would naturally correspond to the generators of super-conformal algebras assignable to light-like and space-like 3-surfaces.

The interpretation of integers representing particles as Hilbert space dimensions

In number theoretic dynamics particles are labeled by integers decomposing to primes interpreted as labels for braid strands. Both time-like and space-like braids appear. The interpretation of sum and product in terms of direct sum and tensor product implies that these integers must correspond to Hilbert space dimensions. Hilbert spaces indeed decompose to tensor products of prime-dimensional Hilbert spaces stable against further decomposition. A second natural decomposition appearing in representation theory is into direct sums. This decomposition would take place for prime-dimensional Hilbert spaces with dimension l, the summands having dimensions a_n p^n given by the p-adic expansion of l. The replacement of a_n with a quantum integer would mean a decomposition of the summand to a tensor product of quantum Hilbert spaces with dimensions which are quantum primes and of a p^n-dimensional ordinary Hilbert space. This should relate to the finite measurement resolution.
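As a purely numerical aside, the dimension bookkeeping described above can be illustrated with nothing more than ordinary linear algebra and integer arithmetic. The sketch below is only meant to make the two decompositions concrete (prime factorization as tensor factors, pinary expansion as direct summands); the particular numbers are arbitrary examples rather than anything TGD-specific.

```python
import numpy as np

def prime_factors(n):
    """Prime factorization of n, with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def pinary_digits(n, p):
    """Digits a_0, a_1, ... of the base-p (pinary) expansion n = sum_k a_k p^k."""
    digits = []
    while n:
        digits.append(n % p)
        n //= p
    return digits

# Tensor-product bookkeeping: an n-dimensional Hilbert space factors into
# prime-dimensional tensor factors, so dimensions multiply and the
# "momenta" log(p) add up to log(n).
n = 360
primes = prime_factors(n)            # [2, 2, 2, 3, 3, 5]
space = np.eye(1)
for l in primes:
    space = np.kron(space, np.eye(l))
assert space.shape[0] == n
assert abs(sum(np.log(primes)) - np.log(n)) < 1e-12

# Direct-sum bookkeeping: a prime dimension l, expanded in base p, splits the
# l-dimensional space into blocks of dimension a_k * p^k that add up to l.
l, p = 13, 3
digits = pinary_digits(l, p)         # 13 = 1 + 1*3 + 1*9
assert sum(a * p**k for k, a in enumerate(digits)) == l
```

The only point being illustrated is the elementary one used in the text: under tensor products dimensions multiply (so the log(p)-momenta are conserved), while under direct sums dimensions add.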
The ×_q vertex would correspond to tensor product and +_q to direct sum with this interpretation. Tensor product automatically conserves the number theoretic multiplicative momentum defined by n, in the sense that the outgoing Hilbert space is the tensor product of the incoming Hilbert spaces. For +_q this conservation law is broken.

Connection with the hierarchy of Planck constants, dark matter hierarchy, and living matter

The obvious question concerns the interpretation of the Hilbert spaces assignable to braid strands. The hierarchy of Planck constants, interpreted in terms of a hierarchy of phases behaving like dark matter, suggests the answer here.

1. The enormous vacuum degeneracy of Kähler action implies that the normal derivatives of imbedding space coordinates, both at space-like 3-surfaces at the boundaries of CD and at light-like wormhole throats, are many-valued functions of the canonical momentum densities. Two directions are necessary by the strong form of holography implying effective 2-dimensionality, so that only partonic 2-surfaces and their tangent space data are needed instead of 3-surfaces. This implies that space-time surfaces can be regarded as surfaces in local singular coverings of the imbedding space. At partonic 2-surfaces the sheets of the coverings coincide.

2. By the strong form of holography there are two integers characterizing the covering, and the obvious interpretation is in terms of the two integers characterizing infinite primes and the time-like and space-like braids decomposing into braids labelled by primes. The braid labelled by a prime would naturally correspond to a braid strand and its copies in the l points of the covering. The state space defined by amplitudes in the n-fold covering would be n-dimensional and decompose into a tensor product of state spaces with prime dimension. These prime-dimensional state spaces would correspond to wave functions in prime-dimensional sub-coverings.

3. Quantum primes are obtained as different sum decompositions of primes l and correspond to direct sum decompositions of the l-dimensional state space associated with the braid defined by an l-fold sub-covering. What suggests itself strongly is a symmetry breaking. This breaking would mean the geometric decomposition of the l strands to subsets with numbers of elements proportional to powers p^n of p. Could a_n p^n in the expression of l as ∑ a_k p^k correspond to a tensor product of an a_n-dimensional space with the finite field G(p,n)? Does this decomposition to state functions localized to sub-braids relate to symmetries and symmetry breaking somehow? Why would an a_n-dimensional Hilbert space be replaced with a tensor product of quantum-p_1-dimensional Hilbert spaces? A proper understanding of this issue is needed in order to have a more rigorous formulation of quantum p-adics.

4. Number theoretical dynamics would therefore relate directly to the hierarchy of Planck constants. This would also dictate what happens to the Planck constants in the two vertices. There are two options.

1. For the ×_q vertex the outgoing particle would have a Planck constant which is the product of the incoming Planck constants, using the ordinary Planck constant as a unit. For the +_q vertex the Planck constant would be the sum. This stringy vertex would lead to the generation of particles with Planck constant larger than its minimum value. For ×_q two incoming particles with ordinary Planck constant would give rise to a particle with ordinary Planck constant, just as one would expect for ordinary Feynman diagrams.
2. Another possible scenario is the one in which the Planck constant is given by hbar/hbar_0 = n-1. In this case particles with ordinary Planck constant fuse to particles with ordinary Planck constant in both vertices.

For both options the feed of particles with a non-standard value of Planck constant to the system can lead to a fusion cascade leading to the generation of dark matter particles with a very large value of Planck constant. A large Planck constant means macroscopic quantum phases assumed to be crucial in TGD inspired biology. The obvious proposal is that inanimate matter transforms to living and thus also to dark matter by this kind of phase transition in the presence of a feed of particles - say photons - with a non-standard value of Planck constant.

The work with quantum p-adics and quantum adeles and the generalization of the number field concept to a quantum number field in the framework of zero energy ontology has led to amazingly deep connections between p-adic physics as physics of cognition, infinite primes, the hierarchy of Planck constants, the vacuum degeneracy of Kähler action, generalized Feynman diagrams, and braids. The physics of life would rely crucially on the p-adic physics of cognition. The optimist inside me even insists that the basic mathematical structures of TGD are now rather well understood. This fellow even uses the word "breakthrough" without blushing. I have of course continually admonished him for his reckless exaggerations, but in vain. The skeptic inside me continues to ask how this construction could fail. A possible Achilles heel relates to the detailed definition of the notion of quantum p-adics. For N=1 it reduces essentially to the ordinary p-adic number field mapped to reals by a quantum variant of canonical identification. Therefore most of the general picture survives even for N=1. What would be lost are the wave functions in the space of quantum variants of a given prime and also the crazy conjecture that the quantum Galois group is isomorphic to the Absolute Galois Group.

For details see the new chapter Quantum Adeles of "Physics as Generalized Number Theory".

Thursday, February 23, 2012
Error in OPERA experiment?

The rumor mill in particle physics has gone rather wild. Lubos rumored just some time ago that CDF would provide additional sigmas for the Higgs at 125 GeV. It turned out that CDF had found no evidence for the signal. Lubos also rumored about support for the stop: not even indications had actually been found. The latest rumor is that the OPERA collaboration has detected two errors in the experiment suggesting super-luminality of neutrinos: the first error would be technical and the second one related to the analysis. I could not however make head or tail of the published popular pieces of text. The reader is encouraged to see whether he/she can make any sense of the following cryptic piece of text (which is "popular" and should therefore be easy to understand!). Those bloggers who are unable to imagine modifications of Einstein's theory (see for instance this) do not hesitate to take the rumor as the final truth, and have no difficulties in forgetting that also other experiments have seen indications of super-luminality. It is sad to see that so many science bloggers behave like third-rate politicians. This tends to give a totally wrong view of the people working in the field. My day was saved by Cosmic Variance, where the rumor was taken as a rumor and nothing else.

In the TGD framework one has sub-manifold gravity, and the operationally defined maximal signal velocity varies and can depend on particle species.
I am however unable to quantitatively fix the magnitude of the variation from the maximal signal velocity for photons, so that a possible neutrino sub-luminality cannot kill TGD, whereas super-luminality can only support the notion of sub-manifold gravity. One must just wait and see.

Addition: There is now a New Scientist article about the possible measurement error. Two errors with opposite effects have been identified. The first error relates to a malfunctioning signal cable communicating the time measured by the CERN clock to the Gran Sasso clock, and a 60 ns lapse in the signal transfer would mean that neutrinos seem to arrive 60 ns earlier than they should. For me this is not a question about whether Einstein was, or TGD is, wrong or right, and it is interesting to see what the final answer will be. No need for ranting;-)!

Addition: New Scientist contains another popular article with the title Light's speed limit is safe for now. Is someone threatening someone? Why is a possible anomaly, which could have extremely far-reaching consequences and allow one to generalize Einstein's theory rather than destroy it, seen as a threat? How can people with this attitude make objective decisions? How many scientific decision makers and researchers have this defensive attitude?

Addition: Matt Strassler has an excellent blog posting about the situation in the OPERA experiment.

Monday, February 20, 2012
Progress in understanding of quantum p-adics

Quantum arithmetics is a notion which emerged as a possible resolution of the long-lived challenge of finding a mathematical justification for the canonical identification mapping p-adics to reals, which plays a key role in p-adic mass calculations. The model for the Shnoll effect was the bridge leading to the discovery of quantum arithmetics. I have been gradually developing the notion of quantum p-adics and during the weekend made quite a step of progress in understanding the concept, and I dare say that the notion now rests on a sound basis.

1. What quantum arithmetics suggests is a modification of p-adic numbers obtained by replacing p-adic pinary expansions with their quantum counterparts, allowing the coefficients of prime powers to be integers not divisible by p. A further important constraint is that the factors of the coefficients are primes smaller than p. If the coefficients are smaller than p, one obtains something reducing effectively to the ordinary p-adic number field.

2. A further constraint is that quantum integers respect the decomposition of an integer to powers of a prime. Quantum p-adic integers are to p-adic integers what the integers in an extension of a number field are for the number field, and one can indeed identify a Galois group G_p for each prime p and form the adelic counterpart of this group as the Cartesian product of all the G_p:s.

3. After various trials it turned out (this is what motivated this posting!) that quantum p-adics are indeed quantal in the sense that one can assign to a given quantum p-adic integer n a wave function at the orbit of the corresponding Galois group, which decomposes to the Galois groups of the prime factors of n.

1. The basic conditions are that ×_q and +_q satisfy the basic associativity and distributivity laws. These conditions are extremely powerful and can be formulated in terms of number theoretic Feynman diagrams assignable to sequences of arithmetical operations and their co-algebra counterparts. This brings in physical insight.
2. One can interpret ×_q and +_q and their co-algebra operations as 3-vertices for number theoretical Feynman diagrams describing algebraic identities X=Y, which have a natural interpretation in zero energy ontology. The two vertices have direct counterparts as the two kinds of basic topological vertices in quantum TGD, namely stringy vertices in which a 3-surface splits, and vertices analogous to those of Feynman diagrams in which lines join along their 3-D ends. Only the latter vertices correspond to particle decays and fusions, whereas stringy vertices correspond to the splitting of a particle path into two paths and simultaneous propagation along both paths: this is by the way one of the fundamental differences between quantum TGD and string models. This, plus the assumption that the Galois groups associated with primes define symmetries of the vertices, allows one to deduce very precise information about the symmetries of the two kinds of vertices needed to satisfy associativity and distributivity, actually fixes them highly uniquely, and therefore determines the corresponding zero energy states having collections of integers as counterparts of the incoming positive energy (or negative energy) particles.

3. Zero energy ontology leads naturally to zero energy states for which time reversal symmetry is broken in the sense that either the positive or the negative energy part corresponds to a single collection of integers as incoming lines. What is fascinating is that the prime decomposition of an integer corresponds to a decomposition of a braid to strands. C and P have an interpretation as the formation of the multiplicative and additive inverses of quantum integers, and CP=T changes the positive and negative energy parts of the number theoretic zero energy states.

4. This gives strong support for the old conjecture that generalized Feynman diagrams have a number theoretic interpretation and allow moves transforming them to tree diagrams - also this generalization of old-fashioned string duality is an old romantic idea of quantum TGD, which I however gave up as too "romantic". I noticed the analogy of Feynman diagrams with algebraic expressions but failed to realize how extremely concrete the connection could be. What was left of the idea were some brief comments in Appendix A: Quantum Groups and Related Structures of one of the chapters of "Towards M-matrix". The moves for generalized Feynman diagrams would code for the associativity and distributivity of quantum arithmetics, and we actually learned them in elementary school as the process of simplifying algebraic expressions! Also braidings with strands labeled by the primes dividing the integer emerge naturally, so that the connection with quantum TGD proper becomes very strong and consistent with the earlier conjecture, inspired by the construction of infinite primes, stating that transition amplitudes have a purely number theoretic meaning in ZEO.

4. Canonical identification finds a fundamental role in the definition of the norm for both quantum p-adics and quantum adeles. The construction is also consistent with the notion of number theoretic entropy, which can also have negative values (this is what makes living systems living!).

5. There are arguments suggesting that quantum p-adics form a field - one might say a "quantum field" - so that also differential calculus and even integral calculus would make sense, since quantum p-adics inherit an almost well-ordering from the reals via canonical identification.

6. One can also generalize the construction to algebraic extensions of rationals.
In this case the coefficients of quantum adeles are replaced by rationals in the extension and only those p-adic number fields for which the p-adic prime does not split into a product of primes of algebraic extension are kept in the quantum adele associated with rationals. This construction gives first argument in favor of the crazy conjecture that the Absolute Galois group (AGG) is isomorphic with the Galois group of quantum adeles. To sum up, the vision abut "Physics as generalized number theory" can be also transformed to "Number theory as quantum physics"! For detais see the new chapter Quantum Adeles of "Physics as Generalized Number Theory". Saturday, February 18, 2012 The anatomy of state function reduction In a comment to previous posting Ulla gave a link to an interesting article by George Svetlichny describing an attempt to understand free will in terms of quantum measurement. After reading of the article I found myself explaining once again to myself what state function reduction in TGD framework really means. The proposal of Svetlichny The basic objection against assigning free will to state function reduction in the sense of wave mechanics is that state function reduction from the point of view of outsider is like playing dice. One can of course argue that for an outsider any form of free will looks like throwing a dice since causally effective experience of free will is accompanied by non-determinism. We simply do cannot know what is the experience possibly associated with the state function reduction. The lesson is that we must carefully distinguish between two levels: the single particle level and ensemble level - subjective and objective. When we can say that something is random, we are talking about ensembles, not about single member of ensemble. The author takes the objection seriously and notices that quantum measurement means a division of system to three parts: measured system, measuring system and external world and argues that in some cases this division might not be unique. The choice of this division would have interpretation as an act of free will. I leave it to the reader can decide whether this proposal is plausible or not. TGD view about state function reduction What can one say about the situation in TGD framework? There are several differences as compared to the standard measured "theory", which is just certain ad hoc rules combined with Born rule, which applies naturally also in TGD framework and which I do not regard as adhoc in infinite-D context. I have considered the general anatomy of quantum jump in zero energy ontology (ZEO) from a slightly different point of view here. In the sequel I will discuss the possible anatomy of the state function reduction part of the quantum jump. 1. TGD ontology differs from the standard one. Space-time surfaces and quantum states as such are zombies in TGD Universe: consciousness is in the quantum jump. Conscious experience is in the change of the state of the brain, brain state as such is not conscious. Self means integration of quantum jumps to higher level quantum jumps and the hierarchy of quantum jumps and hierarchy of selves can be identified in ZEO . It has the hierarchy of CDs and space-time sheets as geometrical correlates. In TGD Universe brain and body are not conscious: rather, conscious experience is about brain and body and this leads to the illusion caused by the assimilation with the target of sensory input: I am what I perceive. 2. 
In the TGD framework one does not assume the division of the system into a product of measured system, measuring system, and external world before the measurement. Rather, this kind of division is an outcome of state function reduction, which is part of the quantum jump involving also the unitary process. Note that standard measurement theory is not able to say anything about the dynamics giving rise to this kind of divisions. 3. State function reduction cascade as a part of quantum jump - this holistic view is one new element - proceeds in zero energy ontology (ZEO) from long to short length scales CD→sub-CDs→..., and stops when Negentropy Maximization Principle (NMP, defining the variational principle of consciousness, is also something new) does not allow reducing the entanglement entropy for any subsystem pair of a subsystem un-entangled with the external world. This is the case if the sub-system in question is such that all divisions into two parts are negentropically entangled or form an entangled bound state. For a given subsystem occurring in the cascade the splitting into an unentangled pair of measured and measuring system can take place if the entanglement between these subsystems is entropic. The splitting takes place for the pair with the largest entanglement entropy and defines the measuring and measured system. Who measures whom? This seems to be a matter of taste and one should not talk about the measuring system as a conscious entity in the TGD Universe, where consciousness is in the quantum jump. 4. The factorization of an integer into primes is a rather precise number theoretical analogy for what happens, and the analogy might actually have a deeper mathematical meaning since Hilbert spaces with prime dimension cannot be decomposed into tensor products. Any factorization of an integer into a product of primes corresponds to a cascade of state function reductions. At the first step division takes place into two integers and several alternative divisions are possible. The pair for which the reduction of entanglement entropy is largest is preferred. The resulting two integers can be further factorized into two integers, and the process continues and eventually stops when all factors are primes and no further factorization is possible. One could even assign to any decomposition n= rs the analogs of entanglement probabilities as p1= log(r)/log(n) and p2= log(s)/log(n). NMP would favor divisions into factors r and s which are as near to each other as possible (that is, both as near as possible to n^(1/2)). A toy numerical sketch of this cascade is given at the end of this posting. A negentropically entangled system is like a prime. Note however that these systems can still make an analog of state function reduction which does not split them but increases the negentropy for all splittings of the system into two parts. This would be possible only in the intersection of real and p-adic worlds, that is for living matter. My cautious proposal is that just this kind of systems - living systems - can experience free will: either in the analog of the state function reduction process increasing their negentropy or in the state function reduction process reducing their entanglement with the environment. 5. In standard measurement theory the observer chooses the measured observables and the theory says nothing about this process. In TGD the measured observable is the density matrix: the division of the subsystem into an entangled pair for which the negentropy gain is maximal in the quantum measurement defines the pair of measurer and measured. Therefore both the measurement axis and the pair representing the target of measurement and the measurer are selected in the quantum jump. 6.
Quantum measurement theory assumes that measurement correlates classical long range degrees of freedom with quantal degrees of freedom. One could say that the direction of the pointer of the measurement apparatus correlates faithfully with the value of the measured microscopic observable. This requires that the entanglement is reduced between microscopic and macroscopic systems . I have identified the "classical" degrees of freedom in TGD framework as zero modes which by definition do not contribute to the line-element of WCW although the WCW metric depends on zero modes as external parameters. The induced Kähler field represents an infinite number of zero modes whereas the Hamiltonians of the boundaries of CD define quantum fluctuating degrees of freedom. The reduction of the entanglement between zero modes and quantum fluctuating degrees of freedom is an essential part of quantum measurement process. Also state function reductions between microscopic degrees of freedom are predicted to occur and this kind of reductions lead to decoherence so that one can apply quantum statistical description and derive Boltzmann equations. Also state function reductions between different values of zero modes are possible are possible and one could perhaps assign "telepathic" effects with them. The differences with respect to the standard quantum measurement theory are that several kinds of state function reductions are possible and that the division to classical and quantum fluctuating degrees of freedom has a purely geometric meaning in TGD framework. 7. One can even imagine quantum parallel state function reduction cascades. This would make possible quantum parallel dissipation, which would be something new. My original proposal was that in hadronic physics this could make possible a state function reduction cascade proceeding in quark scales while hadronic scales would remain entangled so that one could apply statistical description to quarks as parts of a system, which is quantum coherent in hadronic length scale. This looks nice but...! It is a pity that eventually an objection pops up against every idea irrespective how cute it looks like. The p-adic primes associated with light quarks are larger than that associated with hadron so that quarks - or rather, their magnetic bodies are larger than that hadron's magnetic body. This looks strange at first but actually conforms with Uncertainty Principle and the observation that the charge radius of proton is slightly smaller than predicted (see this), gives support for this picture. Geometrically the situation might change if quarks are highly relativistic and color magnetic fields of quarks are dipoled fields compressed to cigar like shape: Lorentz contraction could reduce the size scale of their magnetic bodies in the direction of their motion. [Note that p-adic length scale hypothesis applies in the rest system of the particle so that Lorentz contraction is in conflict with it]. Situation remains unsettled. Further questions There are many other interesting issues about which my understanding could be much better. 1. In ZEO the choice of the quantization axes and would fix the moduli of the causal diamond CD: the preferred time direction defined by the line connecting the tips of CD, the spin quantization axis, etc.. This choice certainly occurs. Does it reduce to the measurement of a density matrix for some decomposition of some subsystem to a pair? 
Or should one simply assume state function reductions also at this level, meaning localization to a sector of WCW corresponding to a given CD? This would involve localization in the moduli space of CDs selecting some boost of a CD with a fixed quantized proper time distance between its tips, fixed spin directions for the positive and negative energy parts of zero energy states defined by light-like geodesics at its light-like boundary, preferred complex coordinates for CP2, etc... 2. Zero energy states are characterized by an arrow of geometric time in the sense that either the positive or the negative energy parts of states have well defined particle numbers and single particle quantum numbers, but not both. State function reduction is possible only for the positive or the negative energy part of the state but not both. This should relate very closely to the fact that our sensory percepts defined by state function reductions are mostly about the upper or lower boundary of CD. I have discussed this in a previous posting. 3. In ZEO quantum jumps can also lead to the generation of new sub-Universes, sub-CDs carrying zero energy states. Quantum jumps can also involve phase transitions changing p-adic space-time sheets to real ones and these could serve as quantum correlates for intentional actions. Also the reverse process changing matter to thoughts is possible. These possibilities are totally unimaginable in the quantum measurement theory for systems describable by wave mechanics. 4. There is also the notion of finite measurement resolution described in terms of inclusions of hyperfinite factors at the quantum level and in terms of braids at the space-time level. To summarize, a lot of theory building is needed in order to fuse all new elements into a coherent framework. In this framework standard quantum measurement theory is only a collection of ad hoc rules and can catch only a small part of what really happens. Certainly, standard quantum measurement theory is far from being enough for the purposes of a consciousness theorist.
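The factorization analogy of item 4 above is concrete enough to simulate. The following toy sketch (plain Python; the balanced-split rule simply stands in for the NMP selection, and the function names are purely illustrative) splits an integer recursively by always choosing the division with maximal entropy for the probabilities p1 = log(r)/log(n), p2 = log(s)/log(n), and stops when only primes remain.

from math import log, isqrt

def divisor_splits(n):
    # all splits n = r*s with 1 < r <= s
    return [(r, n // r) for r in range(2, isqrt(n) + 1) if n % r == 0]

def split_entropy(r, s):
    n = r * s
    p1, p2 = log(r) / log(n), log(s) / log(n)
    return -(p1 * log(p1) + p2 * log(p2))

def cascade(n, depth=0):
    # recursive "state function reduction cascade" for the integer n
    splits = divisor_splits(n)
    if not splits:
        print("  " * depth + str(n) + " (prime, cascade stops)")
        return
    r, s = max(splits, key=lambda rs: split_entropy(*rs))  # most balanced split wins
    print("  " * depth + "{} -> {} x {}".format(n, r, s))
    cascade(r, depth + 1)
    cascade(s, depth + 1)

cascade(360)  # e.g. 360 -> 18 x 20 -> ... down to the prime factors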
Friday, February 17, 2012 Views about free will Now and then comes the day when you feel that you have said all that might possibly interest anyone somewhere in this vast Universe and even an attempt to think about some problem creates a feeling of deep disgust. I try to escape this depressive mood by meandering around the web in the hope that some colleague or blogger might have written something original. Usually the outcome is a disappointment. Original and not obviously wrong thoughts are as rare as genuine anomalies. This kind of cheerless wandering around the web led me to read some postings and articles about free will. Even some physicists have now accepted "free will" into their vocabulary. The fQXI conference about the nature of time held on some boat sailing from Norway towards Copenhagen last summer had inspired several blog postings. Also I wrote comments about the excellent lecture of David Eagleman about perceived time. This kind of sailing trip costs, and it is good if it induces interaction between people with different backgrounds: now at least physicists, neuro-scientists, computer scientists and philosophers were solving both the problem of time and the problems caused by sea sickness at one blow. I did not find it surprising that I did not find anything surprising in these postings. The common feature of all these articles is that quite too much is assumed. Sean Carroll as a decent reductionist makes especially strong assumptions. All writers have managed to remain unaware of the dramatic distinctions between subjective time and the geometric time of the physicist. They also make the same error: in the process of trying to understand free will scientifically their first step is to carefully eliminate conscious mind from the picture. The outcome is free will as something effective and emergent or free will as resulting from a deterministic but non-predictable/non-computable process. My humble question is: Why on earth would something very complex or non-computable generate the sensation "I decide to do this!"?! A non-deterministic behavior serves as a correlate of free will but non-predictable (but possibly deterministic) behavior does not imply experience of free will. Every writer grasps something essential but fails to see the essence of the problem and connect it with many related problems like the puzzle of time and the basic paradox of quantum measurement theory. One should not be however too critical since the position of the writers is unrewarding. Being names in the blogosphere they have been invited to solve the problem of time with minimal background: this is like solving some deep problem in topology with the background given by a couple of hastily read Wikipedia articles. I was a little bit disappointed but understood that I must also realize that the understanding of free will is as difficult as the understanding of the nature of time. It requires a lot of time and a flash of genius: a sea trip from Norway to Copenhagen with National Geographic Explorer - even in good company - need not be enough to spark this kind of flash. I have been trying for more than 15 years to communicate my own flash of genius relating to free will and the relationship between experienced time and the geometric time of the physicist but it seems that this has been a waste of time. They must discover it themselves! Let us hope for better luck during the next cruise! In the following some more detailed comments about the articles of the people who participated in the trip. Sabine Hossenfelder: Free Will function Sabine Hossenfelder has a posting titled "Free Will function". I agree with Sabine that the idea about emergent free will is self-deception. Free will does not emerge from a deterministic microscopic dynamics. The believers in emergence say that free will is an effective concept. Not real but useful. If the system in question is complex enough and behaves non-predictably as seen by an outsider one can say that it has effective free will. But why would the impossibility to predict a deterministic dynamics in practice generate the experience "I will do this!"? There is absolutely no justification for this belief. A good objection against this identification comes from neuroscience and is described in the article The Brain on Trial by David Eagleman. People suffering from Tourette's syndrome, split brain patients, persons with side personalities, and patients with choreic motions behave from the point of view of an outsider as if they had free will. Using biblical language: they act as if being possessed. They do not experience free will. Who wills? Who uses the biological body of the patient? The same questions can be asked in the situation when people who have committed mass murder become conscious and begin to wonder what these bloody hands have done. Who used these hands? Are we merely our brains and bodies? Who uses my biological body? What is this "me"? Is this particular biological body used only by a single intentional agent, by a single "me" only?
I could continue by telling about the notion of magnetic body but will not do it here. Acidic out-of-topic side remark: Effective theories have become the basic curse of theoretical physics today. No-one can seriously claim that string models say anything about the world of experimental physicists. But there is a loop hole. By postulating effective field theory approach one can build entire landscape of effective theories. This is non-sense but it works. The only honest reaction would be to admit that string models are nice theories but not theories about the world we live in. Sabine Hossenfelder suggests as a solution something that she calls free will function. Sabine considers a machine spitting out digits of π. This process is fully deterministic but outsider has no means of predicting what the next digit will be and what number the digit sequence represents unless he manages to get the program code. The proposal is that our brain has this kind of free will function. The strange assumption is that the inability to predict would in some mysterious manner generate experience of free will. But Sabine a physicist has learned that one must forget all subjective elements when doing science. In this mental framework the only conceivable goal of a theory of consciousness is to eliminate it. The fruitless searches of "consciousness modules" assumed to reside somewhere in the brain are fruits of similar "consciousness as function" thinking. Sean Carroll: Free will as real as baseball Also Sean Carroll has written about free will in his posting "Free will as real as baseball". Sean belongs to the effective theory camp and sees free will as a convenient tool of description just like baseball is seen by a reductionist as a convenient abstraction to describe the dynamics of a condensed matter system. Sean makes two very strange claims. 1. The first strange claim is that free will is inconsistent with the laws of physics. This is the case only if the experienced time and geometric time of physicists are identified. The are not one and the same thing as even child realizes. Experienced time is irreversible and there is no subjective future. Geometric time is reversible and future and past are in the same position. In general relativity 4-D space-time region becomes the basic entity instead of time=constant snapshot which is the basic entity according to Newtonian thinking. Amusingly, all writers except Scott Aaronson seem to belong to the species of Newtonians as far as their views about time are considered. The first years of scientific education involves really heavy social conditioning and it is really hard to de-learn even the obviously wrong beliefs. 2. The second strange claim of Sean Carroll is that the physics is completely understood in everyday realm! Really! Do we really understand the physics underlying living matter?! I cannot do help it: this kind of loose text book statements irritate me that suddenly the dull depressive mood has gone and I am full of adrenaline! Interestingly, Sean Carroll notices analogy of poorly understood notion of free will with the poorly understood notion of time. The arrow of time is in conflict with microscopic reversibility but - according to Sean Carroll - physicists do not see this as a problem so that it is not a problem. Continuing in the same spirit: if billions of Chinese believe in communism then marxism is the only correct view about society and is indeed law of Nature! The effective theory solution is simple: also the arrow of time somehow emerges. 
Exactly how? This we do not understand, but it does not matter. This is self-deception. One should admit this and really try to understand the second law. If one does this, the first observation is that Boltzmann's equations are deduced by assuming the occurrence of state function reductions in a time scale much shorter than the time scale of observations. State function reduction is what makes quantum physics non-deterministic at the level of a single quantum system - and also internally inconsistent: the determinism of the Schrödinger equation is in blatant conflict with state function reduction if one identifies experienced time with the geometric time of the physicist. One should be able to resolve this logical flaw and this requires that the two times are different - something which of course even a child knows! If we have two times we have also two independent causalities: the causality of field equations and that of "free will". This would be the first step towards the solution. Sean Carroll also presents what he calls the consequence argument. The argument begins with an innocent looking statement that our past is fixed. Therefore free will obeying field equations is impossible since it would change both future and past. Wrong again: the assumption about a fixed past in the geometric sense need not be true. About the subjective past it is. Already Wheeler was led to propose that in state function reduction the geometric past changes: see Wheeler's delayed choice experiment. Maybe Wheeler's general relativistic background helped him to make this conceptual leap, which leads very near to the TGD view about quantum jump. In the TGD framework quantum states are superpositions of classical histories and quantum jumps replace them with new ones and the average geometric past also changes. The finding of Libet that in a volitional act neural activity begins a fraction of a second before the conscious decision supports the idea that we are replacing our geometric past with a new one all the (subjective) time. Sean Carroll notices also the ethical aspect of the problem. If we really believe that free will is an illusion, we have no justification for moral rules. The criminal has been doomed to perform his crime at the moment of the Big Bang and we cannot therefore accuse him. Of course, there could be something badly wrong in the brain of the mass murderer and it has indeed become clear that our behavior correlates strongly with biology. This does not however mean that free choices are not possible. Brain disorder only changes the probabilities of different outcomes of the choices. We have the experience of free will, as every reader can testify. This we must accept and try to understand the physical correlates of this experience irrespective of whether the free will is real or not. In fact, neuroscience has led to quite concrete progress in the understanding of the correlations between biology and behavior. This has also practical consequences. Many mass murderers have been victims of child abuse or have suffered from a brain tumor. This does not mean that we should allow mass murderers to continue with their rare hobby. We can however do our best to prevent child abuse. Also the degeneration of some regions of the frontal lobes can lead to highly asocial behaviors when stimuli usually inhibited are not inhibited anymore. One could say that there are competing free wills using the same biological body and the one wanting to perform the mass murder wins. These issues were discussed already in the times of Dostojevski and Turgeniev.
The fashionable thinking was that we are nothing but physiology and that we can indeed forget the rules of morality. The people propagating this view and trying to live according to this philosophy were known as nihilists: they were mad but fully logical in their madness. Many people calling themselves skeptics today are surprisingly near to these views. Thank God, most of us are not too strict in our logic and follow our conscience rather than materialistic dogmas. Scott Aaronson's view G. Musser has summarized computer scientist Scott Aaronson's talk about free will. Also Scott Aaronson studies the idea of reducing free will to behavior observed from outside. Aaronson's thought experiment considers a Turing-like test allowing one to decide whether you have free will. A computer model of you is built using all available data about the initial state of your brain: this of course assumes determinism or at least quantum statistical determinism. If the computer is able to mimic your behavior faithfully, one can say that you have no free will. The proponent of effective free will might say that the longer the needed computer code is, the more effective free will you have. This kind of free-will-meter is of course not possible in practice except with some accuracy so that the whole thing reduces to mere mimicry, a kind of parameter fit. Aaronson presents the no-cloning theorem of quantum theory as a first-principle objection against the Turing test of the free-will-meter. Even in principle it is not possible to construct a complete copy of the brain state to make a complete simulation possible. This kind of machine would be successful in what Aaronson calls the Toddler test but this would be a fake success. Any toddler says completely predictably "No" to any question. We however know that the toddler expresses by behaving irrationally that he/she has discovered his/her free will (but can this kind of free will be effective?)! Aaronson brings in special relativity and notices that free will means also backward causation if it is to be consistent with the causality of field equations. From this it would be only a short step to the realization that the causality of free will could act in the space of quantum states defined as superpositions of solutions of classical field equations consistent with holography in the sense that a 3-D section determines the entire space-time - at least below a certain scale! The problem would have been solved! Scott makes a near miss! To sum up, Aaronson dimly realizes that in general relativity - and in any 4-D Universe obeying general coordinate invariance - we live in a kind of block world consisting of 4-D blocks, but the other writers continue in the good old Newtonian style. In TGD zero energy ontology would realize the blocks as causal diamonds and would extend free will from a mere choice between given alternatives to the creation of new worlds. Sabine Hossenfelder realizes that emergence is self-deception: I cannot but agree. Sean Carroll grasps the full meaning of the absence of free will at the level of moral issues. Eagleman describes real life situations, which should be highly valuable for anyone proposing in earnest a theory of consciousness. Also the lecture of Eagleman about perceived time was excellent. To me it seems that physicists and (quantum) computer scientists should be able to forget for a moment their formulas and the rhetoric that makes it possible to get rid of the problems they cannot solve, and open their minds so that the problem can get settled.
Wednesday, February 15, 2012 Quantum Adeles as a Golden Road to Number Theoretical Universality? 2. A further constraint is that quantum integers respect the decomposition of integer to powers of prime. Quantum p-adic integers are to p-adic integers what the integers in the extension of number field are for the number field and one can indeed identify Galois group Gp for each prime p and form adelic counterpart of this group as Cartesian product of all Gp:s. After various trials it turned out that quantum p-adics are indeed quantal in the sense that one can assign to given quantum p-adic integer n a wave function at the orbit of corresponding Galois group decomposing to Galois groups of its prime factors of n. The basic conditions are that ×q and +q satisfy the basic associativity and distributivity laws. One can interpret ×q and +q and their co-algebra operations as 3-vertices for number theoretical Feynman diagrams describing algebraic identities X=Y having natural interpretation in zero energy ontology. The two vertices have direct counterparts as two kinds of basic topological vertices in quantum TGD (stringy vertices and vertices of Feynman diagrams). This allows to deduce very precise information about the symmetries of the vertices needed to satisfy the associativity and distributivity and actually fix them highly uniquely, and therefore determined corresponding zero energy states having collections of integers as counterparts of incoming positive energy (or negative energy) particles. This gives strong support for the old conjectures that generalized Feynman diagrams have number theoretic interpretation and allow moves transforming them to tree diagrams - also this generalization of old-fashioned string duality is old romantic idea of quantum TGD. The moves for generalized Feynman diagrams would code for associativity and distributivity of quantum arithmetics. Also braidings with strands labelled by the primes dividing the integer emerge naturally so that the connection with quantum TGD proper becomes very strong. 4. There are arguments suggesting that quantum p-adics form a field so that also differential calculus and even integral calculus would make sense since quantum p-adics inherit well-ordering from reals via canonical identification. The ring of adeles is essentially Cartesian product of different p-adic number fields and reals. 1. The proposal is that adeles can be replaced with quantum adeles. Gp has natural action on quantum adeles allowing to construct representations of Gp. This norm for quantum adeles is the ordinary Hilbert space norm obtained by first mapping quantum p-adic numbers in each factor of quantum adele by canonical identification to reals. 2. Also quantum adeles could form form a field rather than only ring so that also differential calculus and even integral calculus could make sense. This would allow to replace reals by quantum adeles and in this manner to achieve number theoretical universality. The natural applications would be to quantum TGD, in particular to construction of generalized Feynman graphs as amplitudes which have values in quantum adele valued function spaces associated with quantum adelic objects. Quantum p-adics and quantum adeles suggest also solutions to a number of nasty little inconsistencies, which have plagued to p-adicization program. 3. One must of course admit that quantum arithmetics is far from a polished mathematical notion. 
It would require a lot of work to see whether the dream about an associative and distributive function field like structure allowing one to construct differential and integral calculus is realized in terms of quantum p-adics and even in terms of quantum adeles. This would provide a realization of number theoretical universality. Ordinary adeles serve as a fundamental technical tool in the Langlands correspondence. The goal of the classical Langlands program is to understand the Galois group of algebraic numbers as an algebraic extension of rationals - the Absolute Galois Group (AGG) - through its representations. Invertible adeles define Gl1 which can be shown to be isomorphic with the Galois group of the maximal Abelian extension of rationals (MAGG), and the Langlands conjecture is that the representations for algebraic groups with matrix elements replaced with adeles provide information about AGG and algebraic geometry. The crazy question is whether quantum adeles could be isomorphic with algebraic numbers and whether the Galois group of quantum adeles could be isomorphic with AGG or with its commutator group. If so, AGG would naturally act as symmetries of quantum TGD. The connection with infinite primes leads to a proposal for what quantum p-adics and quantum adeles associated with algebraic extensions of rationals could be and provides support for the conjecture. The Galois group of the quantum p-adic prime p would be isomorphic with the ordinary Galois group permuting the factors in the representation of this prime as a product of primes of the algebraic extension in which the prime splits. Objects known as dessins d'enfant provide a geometric representation for AGG in terms of action on algebraic Riemann surfaces allowing interpretation also as algebraic surfaces in finite fields. This representation would make sense for algebraic partonic 2-surfaces, and could be important in the intersection of real and p-adic worlds assigned with living matter in TGD inspired quantum biology, and would allow one to regard the quantum states of living matter as representations of AGG. Quantum Adeles would make these representations very concrete by bringing in cognition represented in terms of quantum p-adics. Quantum Adeles could allow one to realize number theoretical universality in the TGD framework and would be essential in the construction of generalized Feynman diagrams as amplitudes in the tensor product of state spaces assignable to real and p-adic number fields. Canonical identification would allow one to map the amplitudes to reals and complex numbers. Quantum Adeles also provide a fresh view of the conjectured M8-M4×CP2 duality, and the two suggested realizations for the decomposition of space-time surfaces to associative/quaternionic and co-associative/co-quaternionic regions.
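Canonical identification, invoked above as the map from (quantum) p-adic numbers to reals, is simple enough to state explicitly. The sketch below (plain Python; the function names are my own and purely illustrative) maps an integer with pinary expansion x = sum of x_n*p^n to the real number sum of x_n*p^(-n); this is the map used in p-adic mass calculations, and the kind of map assumed above when the Hilbert space norm of a quantum adele is defined by first mapping each p-adic factor to the reals.

def pinary_digits(x, p):
    # lowest pinary digits x_n of a non-negative integer x = sum of x_n * p^n
    digits = []
    while x > 0:
        digits.append(x % p)
        x //= p
    return digits

def canonical_identification(x, p):
    # map x = sum of x_n * p^n to the real number sum of x_n * p^(-n)
    return sum(d * p ** (-n) for n, d in enumerate(pinary_digits(x, p)))

# example with p = 5: 42 = 2 + 3*5 + 1*25 maps to 2 + 3/5 + 1/25 = 2.64
print(canonical_identification(42, 5))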
Tuesday, February 14, 2012 No stop but maybe cold fusion The rumors about the detection of the stop particle at LHC have been circulating for some time. Here stop is understood in the sense of standard SUSY predicting R-parity conservation so that sparticles are produced only in pairs and stop is the lightest squark. Missing energy corresponding to the lightest - and thus stable - neutral sparticle is the basic decay signature of stop in this sense. For those who took these rumors as more than wishful thinking, the ATLAS collaboration produced a disappointment: the analysis of an integrated luminosity of 2.05/fb shows no significant excess. The new limits tell that the gluino mass must be above 650 GeV and the stop mass above 450 GeV. Also in the TGD framework both Higgs and SUSY are creators of tension. It would be nice to have a computer program listing the predictions of the theory but the situation is not so simple. Developing and interpreting the theory is a complex process requiring continual interaction with experiment and making educated guesses. Even in the case of Higgs the situation in TGD is still not closed. Higgs is not needed in TGD and the no-Higgs option is the most elegant one: but does Nature think in the same manner as I happen to do just now? SUSY in the TGD sense means that a sfermion is obtained by adding a right-handed neutrino to a wormhole throat carrying the quantum numbers of a fermion. R-parity as well as B and L are conserved and spartners are created in pairs. The simplest option is that the right-handed neutrino corresponds to a covariantly constant spinor in CP2 degrees of freedom. A more complex option possibly allowed by super-conformal symmetry is that right-handed neutrinos appear as color octets. LHC tells us that sfermions and gluinos are very heavy (TeV mass scale) if they obey standard SUSY. The conclusion comes from the absence of the expected missing energy. This conclusion might be circumvented in TGD. 1. Squarks are colored and interact strongly. This allows them to fuse together to form shadrons: say smesons formed from a squark pair. This could be the dominating decay channel leading eventually to ordinary hadrons. 2. For a covariantly constant right-handed neutrino this however leaves the decays of squarks to quarks and electroweak gauginos proceeding with a rate fixed by electro-weak gauge symmetry. The situation seems to be like that in standard SUSY. Gauginos would eventually decay and produce missing energy seen as right-handed neutrinos which mix with left-handed components. It might well be that LHC already kills this option unless one assumes short enough p-adic length scales for squarks, which is of course possible. 3. If the right-handed neutrino is in a color octet partial wave, the situation changes. Shadrons are the only final states by color confinement and quarks and squarks could even have the same p-adic mass scale for both ordinary and M89 hadron physics. Fuel for the speculations with this option comes from the so called X and Y bosons, which are charmonium like states which should not be there: are they scharmoniums? There are also two other anomalies discussed in a previous posting suggesting that mesons have what I call IR Regge trajectories with a mass scale of 38 MeV. They are very natural in the TGD framework in which hadrons are accompanied by color magnetic flux tubes behaving like string-like objects and thus contributing to the hadron mass a stringy contribution with a small string tension. Is TGD SUSY needed to explain the X and Y bosons or could the IR Regge trajectories do the same (probably not): this is the first thing to check. Quite often I feel that this endless questioning is rather frustrating. Life would be so easy if I could just believe. Blind believing makes things simple but eventually it leads to painful conflicts with facts. Lubos has been an especially strong believer in the stop rumours and it is a pity that he is wrong again with so much authority (big names such as Gell-Mann) behind his arguments;-). This is a hard time for Lubos also otherwise;-): Lubos has used all the tools of bad rhetoric to attack cold fusion but demonstrations continue to generate support for the effect.
The progress of physics understood as a reductionistic (and highly imperialistic;-)) enterprise proceeding to shorter and shorter length scales has perhaps not been quite as successful as we have been taught. There are a lot of bridges of belief on the road of reductionism and this particular bridge - the belief that there is no interaction between atomic and nuclear length scales - might be collapsing under the merciless pressure of cold fusion researchers, whom Lubos does not want to count as scientists at all. It might be that we do not understand nuclear physics properly, and this misunderstanding - if it continues - can have a profound impact on the future of our civilization. Even worse, there will be a cold fusion colloquium - believe it or not - at CERN! On Thursday, March 22nd. I have written some postings earlier debunking the cold fusion debunkings of Lubos (see for instance this). I admit that I have to make a conscious effort to keep a fully serious face;-). Here is the rant of Lubos inspired by the cold fusion colloquium at CERN. Lubos is learning - or at least he should finally learn - that authority means absolutely nothing to Nature. More evidence for IR Regge trajectories The TGD based view about non-perturbative aspects of hadron physics (see this) relies on the notion of color magnetic flux tubes. These flux tubes are string like objects and it would not be surprising if the outcome would be satellite states of hadrons with a string tension below the pion mass scale. One would have a kind of infrared Regge trajectories satisfying in a reasonable approximation a mass formula analogous to the string mass formula. What is amazing is that this phenomenon could allow a new interpretation for the claims of a signal interpreted as Higgs at several masses (115 GeV by ATLAS, 125 GeV by ATLAS and CMS, and 145 GeV by CDF). Consider first the mass formula for the hadrons at IR Regge trajectories. 1. There are two options depending on whether the mass squared or the mass for the hadron and for the flux tubes is assumed to be additive. p-Adic physics would suggest that if the p-adic primes characterizing the flux tubes associated with the hadron and the hadron proper are different then mass is additive. If the p-adic prime is the same, the mass squared is additive. 2. The simplest guess is that the IR stringy spectrum is universal in the sense that m0 does not depend on the hadron at all. This is the case if the flux tubes in question correspond to hadronic space-time sheets characterized by the p-adic prime M107 in the case of ordinary hadron physics. This would give for the IR contribution to the mass the expression m = (m0^2 + n*m1^2)^(1/2). 3. The net mass of the hadron results from the contribution of the "core" hadron and the stringy contribution. If mass squared is additive, one obtains m(Hn) = [m(H0)^2 + m0^2 + n*m1^2]^(1/2), where H0 denotes the hadron ground state and Hn its excitation assignable to the magnetic flux tube. For heavy hadrons this would give the approximate spectrum m(Hn) ≈ m(H0) + [m0^2 + n*m1^2]/(2*m(H0)). The mass unit for the excitations decreases with the mass of the hadron. 4. If mass is additive, as one indeed expects since the p-adic primes characterizing heavy quarks are smaller than the hadronic p-adic prime, one obtains m(Hn) = m(H0) + (m0^2 + n*m1^2)^(1/2). For m0^2 >> m1^2 one has m(Hn) ≈ m(H0) + m0 + n*m1^2/(2*m0). If the flux tubes correspond to a fixed p-adic prime, this would give a linear spectrum which is the same for all hadrons. There is evidence for this kind of states. 1.
Tatischeff and Tomasi-Gustafsson claim the existence of states analogous to the ordinary pion with masses 60, 80, 100, 140, ... MeV. Also nucleons have this kind of satellite states. 2. A second piece of evidence comes from two articles by Eef van Beveren and George Rupp. The first article is titled First indications of the existence of a 38 MeV light scalar boson. The second article has the title Material evidence of a 38 MeV boson. The basic observations are the following. The rate for the annihilation e+e- → u ubar, assignable to the reaction e+e- → π+π-, has a small periodic oscillation with a period of 78 ± 2 MeV and an amplitude of about 5 per cent. The rate for the annihilation e+e- → b bbar, assignable to the reaction e+e- → Υ π+π-, has similar oscillatory behavior with a period of 73 ± 3 MeV and an amplitude of about 12.5 per cent. The rate for the annihilation p pbar → c cbar, assignable to the reaction p pbar → J/Ψ π+π-, has similar oscillatory behavior with a period of 79 ± 5 MeV and an amplitude of 0.75 per cent. In these examples a universal Regge slope is consistent with the experimental findings and supports the additive mass formula and the assignment of IR Regge trajectories to hadronic flux tubes with a fixed p-adic length scale. What does one obtain if one scales up the IR Regge trajectories to the M89 hadron physics which replaces Higgs in the TGD framework? 1. In the case of the M89 pion the mass differences 20 MeV and 40 MeV appearing in the IR Regge trajectories of the pion would scale up to 10 GeV and 20 GeV respectively. This would suggest a spectrum of pion-like states with masses 115, 125, 145, 165 GeV. What makes this interesting is that ATLAS reported during the last year evidence for a signal at 115 GeV taken as evidence for Higgs, and CDF reported before this a signal taken as evidence for Higgs around 145 GeV! 125 GeV is the mass of the most recent Higgs candidate. Could it be that all these reported signals have been genuine signals - not for Higgs - but for the M89 pion and the corresponding spion consisting of a squark pair, together with their IR satellites? 2. In the case of M89 hadron physics the naive scaling of the parameters m0 and m1 by a factor 512 would scale 38 MeV to 19.5 GeV.
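The additive mass formula above and the naive M107 → M89 scaling are easy to put into numbers. The sketch below (plain Python; the parameter values are illustrative placeholders only, not fits to data) evaluates the satellite masses m(Hn) = m(H0) + (m0^2 + n*m1^2)^(1/2) for a toy hadron and then applies the scaling factor 2^((107-89)/2) = 512 to the MeV scales quoted above.

def ir_satellites(m_H0, m0, m1, n_max=4):
    # satellite masses m(Hn) for n = 0..n_max, in the same units as the inputs
    return [m_H0 + (m0**2 + n * m1**2) ** 0.5 for n in range(n_max + 1)]

print(ir_satellites(m_H0=938.0, m0=20.0, m1=40.0))  # toy nucleon-like example, MeV

scale = 2 ** ((107 - 89) / 2)                        # = 512, M107 -> M89 mass ratio
for dm in (20.0, 40.0, 38.0):                        # MeV scales mentioned above
    print(dm, "MeV ->", round(dm * scale / 1000, 2), "GeV")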
Tuesday, February 07, 2012 Indeed! Is it really Higgs? Jester comments on the latest release of results from LHC relating to the signal interpreted by all fashionable and well-informed physics bloggers as Higgs. Additional support for a resonance at 125 GeV is emerging. What is new are two events which are interpreted as the fusion of two W bosons to Higgs. This is very nice. The only problem is that the predicted rate for these events is so small for the standard model Higgs that they would not have been observed. A second, not so pleasant surprise is that Higgs candidates are indeed produced but at a rate twice the predicted one. Hitherto these signals, which are too strong to allow an interpretation as the standard model Higgs, have been interpreted by saying that both CMS and ATLAS have been "lucky". I warned already in the previous Higgs posting that if this good luck continues, it turns into a serious problem. And as Jester mentions, already now people are beginning to suspect that this Higgs is not quite the standard model Higgs. The next step will come sooner or later and will be a cautious proposal spoiling the euphoric mood of co-bloggers: perhaps it is not Higgs at all! But things go slowly. Colleagues are rather conformistic and remarkably slow as thinkers. There are even those who are still making bets for standard SUSY;-)! I can however hope that after this step colleagues would finally be psychologically mature enough to consider the TGD prediction of M89 hadron physics as an alternative to Higgs. Accepting this hypothesis as something worth testing would mean enormous progress on both the theoretical and the experimental side. Thursday, February 02, 2012 One more good reason for p-adic cognition One can present several justifications for why p-adic numbers are natural correlates of cognition and why p-adic topology is tailor-made for computation. One possible justification derives from the ultrametricity of the p-adic norm, stating that the p-adic norm of a sum is never larger than the maximum of the norms of the summands. If one forms functions of real arguments, a cutoff in the decimal or more general expansion of the arguments introduces a cumulating error, and in principle one must perform the calculation assuming that the number of digits for the arguments of the function is higher than the number of digits required by the cutoff, and drop the surplus digits at the end of the calculation. In the p-adic case the situation is different. The sum of the errors resulting from cutoffs is never p-adically larger than the largest individual error so that there is no cumulation of errors, and therefore no need for surplus pinary digits for the arguments of the function. In practical computations this need not have great significance unless they involve very many steps, but in cognitive processing the situation might be different.
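The no-accumulation property of cutoff errors follows directly from the strong triangle inequality |x+y|_p ≤ max(|x|_p, |y|_p), and it is easy to check numerically. The sketch below (plain Python; padic_norm is an illustrative helper written for this check, not a library function) verifies the inequality for the 2-adic norm over a small range of integers.

from fractions import Fraction

def padic_norm(x, p):
    # p-adic norm of a non-zero integer: p**(-k) where p**k exactly divides x
    if x == 0:
        return Fraction(0)
    k = 0
    while x % p == 0:
        x //= p
        k += 1
    return Fraction(1, p**k)

p = 2
for x in range(1, 30):
    for y in range(1, 30):
        assert padic_norm(x + y, p) <= max(padic_norm(x, p), padic_norm(y, p))
print("strong triangle inequality holds for all tested pairs")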
Wednesday, February 01, 2012 Bullying as a national disease We have a presidential election in Finland. The two main candidates are Sauli Niinistö and Pekka Haavisto. Niinistö can be said to represent the old world order in which economic values dictate everything. Haavisto is a representative of the new world order in which humanity, freedom, equality, and the environment represent the most important values. For me the choice between these options is easy although I have nothing against Niinistö personally. Haavisto crystallized something very essential about Finland as a nation when he said that bullying is the national disease of Finland. Teasing begins already in elementary schools, continues in various educational establishments and eventually continues at working places. The web has also become an arena of bullying, providing completely new opportunities. Now and then someone has had enough. The two mass murders that took place in educational establishments a few years ago are just two sad examples of what "enough is enough" really means. Personally I belong to the victims of academic bullying. The terror began 34 years ago and has continued since then. I have lost my academic human rights and have been unemployed most of the time after I began to write my thesis in 1977. I will remain so until I reach the age of 63 (only two years of this humiliation anymore!) and start to receive a minimal pension. I have done impressive life work: 15 books amounting to about 12 thousand pages and a lot of articles. This does not mean anything since so called "evaluation by equals" (a direct translation of "vertaisarviointi"), a scientific equivalent of the inquisition, can be used to label my work as crackpottery. There are many people abroad and also in Finland who appreciate my work and they have made attempts to inform about my work in Wikipedia but (very probably Finnish) censors have reacted immediately and vandalized the attempts. I have tried to understand what drives people to this kind of sadistic behavior in which a human life is literally destroyed. As far as I know, these people are quite decent human beings as individuals. But as members of a collective they become sadistic beasts. Or some of them. The others remain completely passive and this is probably the core problem. We do not have the courage to say no when some sociopath starts the cruel game. I am of course just one of the many victims of this national hobby and I sincerely hope that Finland as a nation could heal from it. Haavisto is certainly experienced as a symbol of this healing and my sincere hope is that he wins.
Wednesday, April 29, 2015 What could be the origin of p-adic length scale hypothesis? The argument would explain the existence of preferred p-adic primes. It does not yet explain the p-adic length scale hypothesis stating that p-adic primes near powers of 2 are favored. A possible generalization of this hypothesis is that primes near powers of a prime are favored. There indeed exists evidence for the realization of 3-adic time scale hierarchies in living matter (see this) and in music both 2-adicity and 3-adicity could be present; this is discussed in the TGD inspired theory of music harmony and genetic code (see this). The weak form of NMP might come to the rescue here. 1. Entanglement negentropy for a negentropic entanglement characterized by an n-dimensional projection operator is log(Np(n)) for some p whose power divides n. The maximum negentropy is obtained if the power of p is the largest power of a prime divisor of n, and this can be taken as the definition of number theoretic entanglement negentropy. If the largest such divisor is p^k, one has N = k*log(p). The entanglement negentropy per entangled state is N/n = k*log(p)/n and is maximal for n = p^k (a toy numerical check of this is given at the end of this posting). Hence powers of a prime are favored, which means that p-adic length scale hierarchies with scales coming as powers of p are negentropically favored and should be generated by NMP. Note that n = p^k would define a hierarchy of heff/h = p^k. During the first years of the heff hypothesis I believed that the preferred values obey heff = r^k, r an integer not far from r = 2^11. It seems that this belief was not totally wrong. 2. If one accepts this argument, the remaining challenge is to explain why primes near powers of two (or more generally of p) are favored. n = 2^k gives a large entanglement negentropy for the final state. Why would primes p = 2^k - r near n = 2^k be favored? The reason could be the following. n = 2^k corresponds to p = 2, which corresponds to the lowest level in p-adic evolution since it is the simplest p-adic topology and farthest from the real topology and therefore gives the poorest cognitive representation of the real preferred extremal as a p-adic preferred extremal (note that p = 1 makes formal sense but for it the topology is discrete). 3. The weak form of NMP suggests a more convincing explanation. The density matrix of the state to be reduced is a direct sum over contributions proportional to projection operators. Suppose that the projection operator with the largest dimension has dimension n. The strong form of NMP would say that the final state is characterized by an n-dimensional projection operator. The weak form of NMP allows free will so that all dimensions n-k, k = 0, 1, ..., n-1, for the final state projection operator are possible. The 1-dimensional case corresponds to vanishing entanglement negentropy and ordinary state function reduction isolating the measured system from the external world. 4. The negentropy of the final state per state depends on the value of k. It is maximal if n-k is a power of a prime. For n = 2^k = M_k + 1, where M_k = 2^k - 1 is a Mersenne prime, the dimension n-1 gives the maximum negentropy and also the maximal p-adic prime available, so that this reduction is favored by NMP. Mersenne primes would indeed be special. Also the primes p = 2^k - r near 2^k produce a large entanglement negentropy and would be favored by NMP. 5. This argument suggests a generalization of the p-adic length scale hypothesis so that p = 2 can be replaced by any prime. This argument together with the hypothesis that the preferred prime is ramified would correlate the character of the irreducible extension with the character of super-conformal symmetry breaking.
The integer n characterizing the super-symplectic conformal sub-algebra acting as gauge algebra would depend on the irreducible algebraic extension of rationals involved so that the hierarchy of quantum criticalities would have a number theoretical characterization. Ramified primes could appear as divisors of n and n would be essentially a characteristic of the ramification known as the discriminant. An interesting question is whether only the ramified primes allow the continuation of string world sheets and partonic 2-surfaces to a 4-D space-time surface. If this is the case, the assumptions behind p-adic mass calculations would have a full first principle justification. For details see the article The Origin of Preferred p-Adic Primes?.
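The claim above that the negentropy per entangled state N/n = k*log(p)/n is maximal when n itself is a power of a prime is easy to check numerically. The following toy check (plain Python; the helper functions are purely illustrative and not part of the argument) evaluates N/n for n up to 32 using the definition above, and prime powers - in particular n = 2^k - indeed stand out from their neighbors.

from math import log

def prime_factorization(n):
    # return {p: k} with n equal to the product of p**k
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def negentropy_per_state(n):
    # k*log(p)/n for the prime p whose power p**k dividing n is largest
    p, k = max(prime_factorization(n).items(), key=lambda pk: pk[0] ** pk[1])
    return k * log(p) / n

for n in range(2, 33):
    tag = "  <- prime power" if len(prime_factorization(n)) == 1 else ""
    print("n = {:2d}   N/n = {:.3f}{}".format(n, negentropy_per_state(n), tag))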
Tuesday, April 28, 2015 How preferred p-adic primes could be determined? p-Adic mass calculations allow one to conclude that elementary particles correspond to one or possibly several preferred primes assigning a p-adic effective topology to the real space-time sheets in a discretization in some length scale range. TGD inspired theory of consciousness leads to the identification of p-adic physics as the physics of cognition. The recent progress leads to the proposal that quantum TGD is adelic: all p-adic number fields are involved and each gives one particular view about physics. The adelic approach plus the view about evolution as the emergence of increasingly complex extensions of rationals leads to a possible answer to the question of the title. The algebraic extensions of rationals are characterized by preferred rational primes, namely those which are ramified when expressed in terms of the primes of the extensions. These primes would be natural candidates for preferred p-adic primes. 1. Earlier attempts How do the preferred primes emerge in this framework? I have made several attempts to answer this question. 1. Classical non-determinism at the space-time level for real space-time sheets could in some length scale range - involving a rational discretization for the space-time surface itself or for the parameters characterizing it as a preferred extremal - correspond to the non-determinism of p-adic differential equations due to the presence of pseudo constants which have vanishing p-adic derivative. Pseudo constants are functions depending on a finite number of pinary digits of their arguments. 2. The quantum criticality of TGD is suggested to be realized in terms of infinite hierarchies of super-symplectic symmetry breakings in the sense that only a sub-algebra with conformal weights which are n-multiples of those for the entire algebra acts as conformal gauge symmetries. This might be true for all conformal algebras involved. One has a fractal hierarchy since the sub-algebras in question are isomorphic: only the scale of conformal gauge symmetry increases in the phase transition increasing n. The hierarchies correspond to sequences of integers n(i) such that n(i) divides n(i+1). These hierarchies would very naturally correspond to hierarchies of inclusions of hyper-finite factors and m(i) = n(i+1)/n(i) could correspond to the integer n characterizing the index of inclusion, which has value n ≥ 3. A possible problem is that m(i) = 2 would not correspond to a Jones inclusion. Why would the scaling by a power of two be different? The natural question is whether the primes dividing n(i) or m(i) could define the preferred primes. 3. Negentropic entanglement corresponds to entanglement for which the density matrix is a projector. For an n-dimensional projector any prime p dividing n gives rise to negentropic entanglement in the sense that the number theoretic entanglement entropy, defined by the Shannon formula with the probability p_i in log(p_i) = log(1/n) replaced by its p-adic norm Np(1/n), is negative if p divides n, and it is maximal for the prime for which the dividing power of the prime is the largest power-of-prime factor of n. The identification of p-adic primes as factors of n is a highly attractive idea. The obvious question is whether n corresponds to the integer characterizing a level in the hierarchy of conformal symmetry breakings. 4. The adelic picture about TGD led to the question whether the notion of unitarity could be generalized. The S-matrix would be unitary in the adelic sense in that Pm = (SS†)mm = 1 would generalize to the adelic context so that one would have the product of the real norm and the p-adic norms of Pm. In the intersection of the realities and p-adicities Pm for reals would be rational, and if the real and p-adic Pm correspond to the same rational, the condition would be satisfied. The condition that Pm ≤ 1 seems however natural and forces separate unitarity in each sector so that this option seems too tricky. These are the basic ideas that I have discussed hitherto. 2. Could preferred primes characterize algebraic extensions of rationals? The intuitive feeling is that the notion of preferred prime is something extremely deep and the deepest thing I know is number theory. Does one end up with preferred primes in number theory? This question brought to my mind the notion of ramification of primes (see this) (more precisely, of prime ideals of a number field in its extension), which happens only for special primes in a given extension of a number field, say rationals. Could this be the mechanism assigning preferred prime(s) to a given elementary system, such as an elementary particle? I have not considered their role earlier although their hierarchy is highly relevant in the number theoretical vision about TGD. 1. Stating it very roughly (I hope that mathematicians tolerate this language): As one goes from a number field K, say the rationals Q, to its algebraic extension L, the original prime ideals in the so called integral closure (see this) over the integers of K decompose to products of prime ideals of L (prime ideal is the more rigorous manner to express primeness). 2. There are two further basic notions related to ramification and characterizing it. The relative discriminant is the ideal of K divisible by all ramified prime ideals of K and the relative different is the ideal of L divisible by all the ramified Pi:s. Note that a general ideal is the analog of an integer and these ideals represent the analogs of products of the preferred primes P of K and of the primes Pi of L dividing them. 3. A physical analogy is provided by the decomposition of hadrons into valence quarks. Elementary particles become composites of more elementary particles in the extension. The decomposition into these more elementary primes is of the form P = ∏ Pi^e(i), where e(i) is the ramification index - the physical analog would be the number of elementary particles of type i in the state (see this). Could the ramified rational primes define the physically preferred primes for a given elementary system? 1. In the p-adic context a proper definition of the counterparts of angle variables as phases allowing the definition of the analogs of trigonometric functions requires the introduction of an algebraic extension giving rise to some roots of unity. Their number depends on the angular resolution.
These roots allow one to define the counterparts of ordinary trigonometric functions - the naive generalization based on Taylor series is not periodic - and also allow one to define the counterpart of the definite integral in these degrees of freedom as discrete Fourier analysis. The simplest algebraic extensions, defined by x^n - 1 and having an abelian Galois group, are unramified, so that something else is needed. One has the decomposition P = ∏ Pi^e(i), e(i) = 1, analogous to an n-fermion state, so that the simplest cyclic extensions do not give rise to ramification and there are no preferred primes.

2. What kind of polynomials could define preferred algebraic extensions of rationals? Irreducible polynomials are certainly an attractive candidate since any polynomial reduces to a product of them. One can say that they define the elementary particles of number theory. Irreducible polynomials have integer coefficients with the property that the polynomial does not decompose to a product of polynomials with rational coefficients. It would be wrong to say that only these algebraic extensions can appear, but there is a temptation to say that one can reduce the study of extensions to their study. One can even consider the possibility that string world sheets associated with products of irreducible polynomials are unstable against decay to those characterized by irreducible polynomials.

3. What can one say about irreducible polynomials? The Eisenstein criterion states the following. If Q(x) = ∑_{k=0,...,n} a_k x^k is an n:th order polynomial with integer coefficients such that there exists at least one prime p dividing all coefficients a_i except a_n, and p^2 does not divide a_0, then Q is irreducible. Thus one can assign one or more preferred primes to the algebraic extension defined by an irreducible polynomial Q - in fact to any polynomial allowing ramification. There are also other kinds of irreducible polynomials, since Eisenstein's condition is only sufficient but not necessary.

4. Furthermore, in the algebraic extension defined by Q, the primes P having the above mentioned characteristic property decompose to an n:th power of a single prime Pi: P = Pi^n. These primes are maximally/completely ramified. The physical analog P = P0^n is a Bose-Einstein condensate of n bosons. There is a strong temptation to identify the preferred primes of irreducible polynomials as preferred p-adic primes. A good illustration is provided by the equation x^2 + 1 = 0 allowing the roots x± = ±i, and the equation x^2 + 2px + p = 0 allowing the roots x± = -p ± p^(1/2)(p-1)^(1/2). In the first case the ideals associated with ±i are different. In the second case these ideals are one and the same since x+ = -x- - 2p: hence one indeed has ramification. Note that the first example is also an example of an irreducible polynomial which does not satisfy the Eisenstein criterion. In the more general case the n conditions defined by the symmetric functions of the roots imply that the ideals are one and the same when the Eisenstein conditions are satisfied.

5. What does this mean in the p-adic context? The identity of the ideals can be stated by saying P = P0^n for the ideals defined by the primes satisfying the Eisenstein condition. Very loosely one can say that the algebraic extension defined by the root involves an n:th root of the p-adic prime p. This does not work! The extension would have a number whose n:th power is zero modulo p. On the other hand, the p-adic numbers of the extension modulo p should form a finite field, but this would not be a field anymore since there would exist a number whose n:th power vanishes.
The algebraic extension simply does not exist for the preferred primes. The physical meaning of this will be considered later.

6. What is so nice is that one could readily construct polynomials giving rise to given preferred primes. The complex roots of these polynomials could correspond to the points of partonic 2-surfaces carrying fermions and defining the ends of the boundaries of string world sheets. It must be however emphasized that the form of the polynomial depends on the choice of the complex coordinate. For instance, the shift x → x+1 transforms (x^n - 1)/(x - 1) to a polynomial satisfying the Eisenstein criterion. One should be able to fix the allowed coordinate changes in such a manner that the extension remains the same for all allowed coordinate changes. Already an integer shift of the complex coordinate affects the situation. It would seem that the action of the allowed coordinate changes must reduce to the action of the Galois group permuting the roots of the polynomial. A natural assumption is that the complex coordinate corresponds to a complex coordinate transforming linearly under a subgroup of the isometries of the imbedding space.

In the general situation one has P = ∏ Pi^e(i), e(i) ≥ 1, so that also now there are preferred primes: the appearance of preferred primes is a completely general phenomenon.

3. A connection with Langlands program?

In the Langlands program (see this) the great vision is that the n-dimensional representations of the Galois groups G characterizing algebraic extensions of rationals or of more general number fields define n-dimensional adelic representations of adelic Lie groups, in particular of the adelic linear group Gl(n,A). This would mean that it is possible to reduce these representations to number theory for adeles. This would be highly relevant for the vision about TGD as a generalized number theory. I have speculated with this possibility earlier (see this), but the mathematics is so horribly abstract that it takes a decade before one can have even a hope of building a rough vision.

One can wonder whether the irreducible polynomials could define the preferred extensions K of rationals such that the maximal abelian extensions of the fields K would in turn define the adeles utilized in the Langlands program. At least one might hope that everything reduces to the maximally ramified extensions. At the level of TGD, string world sheets with parameters in an extension defined by an irreducible polynomial would define an adele containing the various p-adic number fields defined by the primes of the extension. This would define a hierarchy in which the prime ideals of the previous level would decompose to those of the next level. Each irreducible extension of rationals would correspond to some physically preferred p-adic primes.

It should be possible to tell what the preferred character means in terms of the adelic representations. What happens to these representations of the Galois group in this case? This is known.

1. For Galois extensions the ramification indices are constant: e(i) = e, and the Galois group acts transitively on the ideals Pi dividing P. One obtains an n-dimensional representation of the Galois group. The same applies to the factor group G/I, where I, the subgroup of G leaving Pi invariant, is called the inertia group. For the maximally ramified case G maps the ideal P0 in P = P0^n to itself, so that G = I and the action of the Galois group is trivial, taking P0 to itself, and one obtains singlet representations.
2. The trivial action of the Galois group looks like a technical problem for the Langlands program and also for TGD, unless the singletness of Pi under G has some physical interpretation. One possibility is that the Galois group acts like a gauge group, and here the hierarchy of sub-algebras of the super-symplectic algebra labelled by integers n is highly suggestive.

This raises obvious questions. Could the integer n, characterizing the sub-algebra of the super-symplectic algebra acting as conformal gauge transformations, be identified as the product of the ramified primes? P0^n brings in mind the n conformal equivalence classes which remain invariant under the conformal transformations acting as gauge transformations. Recalling that the relative discriminant is an ideal of K divisible by the ramified prime ideals of K, this would mean that n corresponds to the relative discriminant for K = Q. Are the preferred primes those which are "physical" in the sense that one can assign to them states satisfying the conformal gauge conditions?

4. A connection with infinite primes?

Infinite primes are one of the mathematical outcomes of TGD. There are two kinds of infinite primes. There are the analogs of free many-particle states consisting of fermions and bosons labelled by primes of the previous level in the hierarchy. They correspond to states of a supersymmetric arithmetic quantum field theory, or actually a hierarchy of them obtained by a repeated second quantization of this theory. There are also the analogs of bound states, and a connection between the infinite primes representing bound states and irreducible polynomials is highly suggestive.

1. The infinite prime representing a free many-particle state decomposes to a sum of an infinite part and a finite part having no common finite prime divisors, so that a prime is obtained. The infinite part is obtained from the "fermionic vacuum" X = ∏_k p_k by dividing away some fermionic primes p_i and adding their product, so that one has X → X/m + m, where m is a square-free integer. Also m = 1 is allowed and is analogous to the fermionic vacuum interpreted as a Dirac sea without holes. X/m + m is an infinite prime and physically a pure many-fermion state. One can add bosons by multiplying X with any integer having no common divisors with m; its prime decomposition defines the bosonic contents of the state. One can also multiply m by any integer whose prime factors are prime factors of m.

2. There are also infinite primes which are analogs of bound states, and at the lowest level of the hierarchy they correspond to irreducible polynomials P(x) with integer coefficients. At the second level the bound states would naturally correspond to irreducible polynomials Pn(x) with coefficients Qk(y), which are infinite integers at the previous level of the hierarchy.

3. What is remarkable is that bound state infinite primes at a given level of the hierarchy would define maximally ramified algebraic extensions at the previous level. One indeed has an infinite hierarchy of infinite primes, since the infinite primes at a given level are infinite primes in the sense that they are not divisible by the primes of the previous level. The formal construction works as such. Infinite primes correspond to polynomials of a single variable at the first level, polynomials of two variables at the second level, and so on. Could the Langlands program be generalized from the extensions of rationals to polynomials of a complex argument, so that one would obtain an infinite hierarchy?

4. Infinite integers in turn could correspond to products of irreducible polynomials defining more general extensions.
This raises the conjecture that infinite primes for an extension K of rationals could quite generally code for the algebraic extensions of K. If infinite primes correspond to real quantum states, they would thus correspond to the extensions of rationals to which the parameters appearing in the functions defining partonic 2-surfaces and string world sheets belong. This would support the view that partonic 2-surfaces associated with algebraic extensions defined by infinite integers - and thus not irreducible - are unstable against decay to partonic 2-surfaces corresponding to the extensions assignable to infinite primes. An infinite composite integer defining an intermediate unstable state would decay to its composites. Basic particle physics phenomenology would have a number theoretic analog, and even more.

5. According to Wikipedia, Eisenstein's criterion (see this) allows a generalization, and what comes to mind is that it applies in exactly the same form also at the higher levels of the hierarchy. Primes would only be replaced with prime polynomials, and there would be at least one prime polynomial Q(y) dividing all coefficients of Pn(x) except the highest one, such that its square does not divide the lowest coefficient P0.

Infinite primes would give rise to an infinite hierarchy of functions of many complex variables. At the first level the zeros of the function would give discrete points at the partonic 2-surface. At the second level one would obtain 2-D surfaces: partonic 2-surfaces or string world sheets. At the next level one would obtain 4-D surfaces. What about higher levels? Does one obtain higher dimensional objects or something else? The union of n 2-surfaces can also be interpreted as a 2n-dimensional surface, and one could think that the hierarchy describes a hierarchy of unions of correlated partonic 2-surfaces. The correlation would be due to the preferred extremal property of Kähler action.

One can ask whether this hierarchy could allow one to generalize the number theoretical Langlands program to the case of function fields using the notion of a prime function assignable to an infinite prime. What this hierarchy of polynomials of arbitrarily many complex arguments means physically is unclear. Do these polynomials describe many-particle states consisting of partonic 2-surfaces such that there is a correlation between them as sub-manifolds of the same space-time sheet representing a preferred extremal of Kähler action?

This would suggest strongly the generalization of the notion of p-adicity so that it applies to infinite primes.

1. This looks sensible and maybe even practical! Infinite primes can be mapped to prime polynomials, so that the generalized p-adic numbers would be power series in a prime polynomial - a Taylor expansion in the coordinate variable defined by the infinite prime. Note that infinite primes (irreducible polynomials) would give rise to a hierarchy of preferred coordinate variables. In terms of infinite primes this expansion would require that the coefficients are smaller than the infinite prime P used. Are the coefficients lower level primes? Or also infinite integers at the same level smaller than the infinite prime in question? This criterion makes sense since one can calculate the ratios of infinite primes as real numbers.

2. I would guess that the definition of infinite-P p-adicity is not a problem, since mathematicians have generalized the number theoretical notions to a level of abstraction much above that of a layman like me.
The basic question is how to define the p-adic norm for infinite primes (infinite only in the real sense; p-adically they have unit norm for all lower level primes) so that it is finite.

3. There exists an extremely general definition of generalized p-adic number fields (see this). One considers a Dedekind domain D, which is a generalization of the integers of an ordinary number field having the property that ideals factorize uniquely into prime ideals. Now D would contain infinite integers. One introduces the field E of fractions consisting of infinite rationals. Consider an element e of E and the general fractional ideal eD as the counterpart of an ordinary rational, and decompose it to a ratio of products of powers of ideals defined by prime ideals, now those defined by infinite primes. The general expression for the P-adic norm of x is c^(-ord_P(x)), where ord_P(x) is the net power of the prime ideal P appearing in the factorization of the fractional ideal xD: this number can also be negative, as it can be for rationals. When the residue field is finite (the finite field G(p,1) for p-adic numbers), one can take c to be the number of its elements (c = p for p-adic numbers). Now it seems that this number is not finite, since the number of ordinary primes smaller than P is infinite! But this is not a problem, since the topology of the completion does not depend on the value of c.

The simple infinite primes at the first level (free many-particle states) can be mapped to ordinary rationals, and a q-adic norm suggests itself: could it be that infinite-P p-adicity corresponds to the q-adicity discussed by Khrennikov in connection with p-adic analysis? Note however that q-adic numbers do not form a field.

Finally, a loosely related question. Could the transition from the infinite primes of K to those of L take place just by replacing the finite primes appearing in the infinite prime with their decompositions? The resulting entity is an infinite prime if the finite and infinite parts contain no common prime divisors in L. This is not the case generally if one can have primes P1 and P2 of K having common divisors as primes of L: in this case one can include P1 to the infinite part of the infinite prime and P2 to the finite part.

Monday, April 27, 2015

Could adelic approach allow to understand the origin of preferred p-adic primes?

The comment of Crow to the posting Intentions, cognitions, and time stimulated rather interesting ideas about the adelization of quantum TGD. First two questions.

1. What is Adelic quantum TGD? The basic vision is that scattering amplitudes are obtained by algebraic continuation to various number fields from the intersection of realities and p-adicities (briefly, the intersection in what follows), represented at the space-time level by string world sheets and partonic 2-surfaces for which the defining parameters (WCW coordinates) are rational or in some algebraic extension of rationals. This principle is a combination of strong form of holography and algebraic continuation as a manner to achieve number theoretic universality.

2. Why Adelic quantum TGD? The adelic approach is free of the earlier assumptions, which require mathematics that need not exist: the transformation of p-adic space-time surfaces to real ones as a realization of intentional actions was the questionable assumption, which is unnecessary if cognition and matter are two different aspects of existence, as already the success of p-adic mass calculations strongly suggests. It always takes years to develop the ability to see things from a bigger perspective and to distill discoveries from clever inventions.
Now adelicity is totally obvious. Being a conservative radical - not a radical radical or a radical conservative - is the correct strategy, which I have been gradually learning. This particular lesson was excellent!

Some years ago Crow sent me the book of Lapidus about adelic strings. Witten wrote a long time ago an article in which he demonstrated that the product of the real stringy vacuum amplitude and its p-adic variants equals 1. This is a generalization of the adelic identity for a rational number, stating that the product of the norm of a rational number with its p-adic norms equals one. The real amplitude in the intersection of realities and p-adicities is, for all values of the parameters, a rational number or a number in an appropriate algebraic extension of rationals. If a given p-adic amplitude is just the p-adic norm of the real amplitude, one would have the adelic identity. This would however require that the p-adic variant of the amplitude is real-valued: I want p-adic valued amplitudes. A further restriction is that Witten's adelic identity holds for the vacuum amplitude. I live in Zero Energy Ontology (ZEO) and want it for the entire S-matrix, M-matrix, and/or U-matrix and for all states of the basis in some sense.

Consider first the vacuum amplitude. A weaker form of the identity would be that the p-adic norm of a given p-adic valued amplitude is the same as the p-adic norm of the rational-valued real amplitude (this generalizes to algebraic extensions, I dare to guess) in the intersection. This would make sense and give a non-trivial constraint: algebraic continuation would guarantee this constraint. In particular, the p-adic norm of the real amplitude would be the inverse of the product of the p-adic norms of the p-adic amplitudes. Most of these amplitudes should have p-adic norm equal to one; in other words, the real amplitude is a product of a finite number of powers of primes. This is because the p-adic norms must approach unity rapidly as the p-adic prime increases, and for large p-adic primes this means that the norm is exactly unity. Hence the p-adic norm of the p-adic amplitude equals 1 for most primes.

In ZEO one must consider S-, M-, or U-matrix elements. U and S are unitary. M is the product of a hermitian square root of a density matrix times a unitary S-matrix. Consider next the S-matrix.

1. For S-matrix elements one should have pm = (SS†)mm = 1. This states the unitarity of the S-matrix: probability is conserved. Could it make sense to generalize this condition and demand that it holds true only adelically, that is, only for the product of the real and p-adic norms of pm: NR(pm(R)) ∏p Np(pm(p)) = 1. This could actually hold identically in the intersection if the algebraic continuation principle holds true. Despite the triviality of the adelicity condition, one need not anymore have unitarity separately for reals and p-adic number fields. Notice that the numbers pm would be arbitrary rationals in the most general case.

2. Could one even replace Np with canonical identification, or some form of it with cutoffs reflecting the length scale cutoffs? Canonical identification behaves for powers of p like the p-adic norm and means only a more precise map of p-adics to reals.

3. For a given diagonal element of the unit matrix characterizing a particular state m one would have a product of the real norm and the p-adic norms. The number of norms differing from unity would be finite. This condition would give a finite number of exceptional p-adic primes, that is, it would assign to a given quantum state m a finite number of preferred p-adic primes!
I have been searching for a long time for the underlying deep reason for this assignment forced by the p-adic mass calculations, and here it might be.

4. Unitarity might thus fail in the real sector and in a finite number of p-adic sectors (otherwise the product of the p-adic norms would be infinite or zero). In some sense the failures would compensate each other in the adelic picture. The failure of course brings in mind p-adic thermodynamics, which indeed means that the adelic SS†, or should it be called MM†, is not unitary but defines the density matrix defining the p-adic thermal state! Recall that the M-matrix is defined as the product of a hermitian square root of a density matrix and a unitary S-matrix.

5. The weakness of these arguments is that the states are assumed to be labelled by discrete indices. Finite measurement resolution implies discretization and could justify this.

The p-adic norms of pm, or the images of pm under canonical identification in a given number field, would define analogs of probabilities. Could one indeed have ∑m pm = 1, so that SS† would define a density matrix?

1. For the ordinary S-matrix this cannot be the case, since the sum of the probabilities pm equals the dimension N of the state space: ∑m pm = N. In this case one could accept pm > 1 both in the real and p-adic sectors. For this option adelic unitarity would make sense and would be a highly non-trivial condition, allowing perhaps to understand how preferred p-adic primes emerge at the fundamental level.

2. If the S-matrix is multiplied by a hermitian square root of a density matrix to get the M-matrix, the situation changes, and one indeed obtains ∑m pm = 1. MM† = 1 does not make sense anymore and must be replaced with MM† = ρ, in the special case a projector to an N-dimensional subspace, proportional to 1/N. In this case the numbers p(m) would have p-adic norm larger than one for the prime divisors of N, and these would define preferred p-adic primes. For these primes the sum of Np(p(m)) would not be equal to 1 but to N Np(1/N).

3. The situation is different for hyper-finite factors of type II1, for which the trace of the unit matrix equals one by definition, so that MM† = 1 and ∑m pm = 1, with the sum defined appropriately, could make sense. MM† could also be a projector to an infinite-D subspace. Could the M-matrix using the ordinary definition of the dimension of Hilbert space be equivalent with the S-matrix for the state space using the definition of dimension assignable to HFFs? Could these notions be duals of each other? Could the adelic S-matrix define the counterpart of the M-matrix for HFFs?

This looks like a nice idea, but usually good looking ideas do not live long in the crossfire of counter arguments. The following objections are my own. The reader is encouraged to invent his or her own objections.

1. The most obvious objection against the very attractive direct algebraic continuation from the real to the p-adic sector is that if the real norm of the real amplitude is small, then the p-adic norm of its p-adic counterpart is large, so that the p-adic variants of pm(p) can become larger than 1 and the probability interpretation fails. As noticed, there is actually no need to pose the probability interpretation. The only way to overcome the "problem" is to assume that unitarity holds separately in each sector so that one would have p(m) = 1 in all number fields, but this would lead to the loss of preferred primes.

2. Should the p-adic variants of the real amplitude be defined by canonical identification or its variant with cutoffs? This is mildly suggested by p-adic thermodynamics. In this case it might be possible to satisfy the condition pm(R) ∏p Np(pm(p)) = 1.
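In the intersection, where a given pm is the same rational number in the real and all p-adic senses, the above condition reduces to the classical product formula for rationals, which also makes obvious why only finitely many primes can be exceptional. A minimal numerical sketch in Python; the rational 360/7 is just a hypothetical stand-in for pm:

```python
from fractions import Fraction
from sympy import factorint

def padic_norm(x: Fraction, p: int) -> Fraction:
    """|x|_p = p**(-k), where p**k is the net power of p in x (k < 0 for denominators)."""
    if x == 0:
        return Fraction(0)
    k = factorint(abs(x.numerator)).get(p, 0) - factorint(x.denominator).get(p, 0)
    return Fraction(1, p)**k

x = Fraction(360, 7)                       # hypothetical stand-in for a rational p_m
primes = sorted(set(factorint(abs(x.numerator))) | set(factorint(x.denominator)))

print({p: padic_norm(x, p) for p in primes + [11, 13]})
# only the primes dividing 360/7 give a norm different from 1

product = abs(x)                           # real norm
for p in primes:
    product *= padic_norm(x, p)
print(product)                             # 1: real norm times the p-adic norms equals one
```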
One can however argue that in this case the adelic condition is an ad hoc condition.

To sum up, if the above idea survives all the objections, it could give rise to considerable progress: a first principle understanding of how preferred p-adic primes are assigned to quantum states, and thus a first principle justification for p-adic thermodynamics. For the ordinary definition of the S-matrix this picture makes sense, and also for the M-matrix. One would still need a justification for the canonical identification map playing a key role in p-adic thermodynamics and allowing to map the p-adic mass squared to its real counterpart.

Sunday, April 26, 2015

Hierarchies of conformal symmetry breakings, quantum criticalities, Planck constants, and hyper-finite factors

TGD is characterized by various hierarchies. There are fractal hierarchies of quantum criticalities, Planck constants and hyper-finite factors, and these hierarchies relate to hierarchies of space-time sheets and selves. These hierarchies are closely related, and this article describes these connections. In this article the recent view about the connections between the various hierarchies associated with quantum TGD is described.

For details see the article Hierarchies of conformal symmetry breakings, quantum criticalities, Planck constants, and hyper-finite factors.

Updated View about Kähler geometry of WCW

Quantum TGD reduces to a construction of Kähler geometry for what I call the "World of Classical Worlds". It has been clear from the beginning that the gigantic super-conformal symmetries generalizing ordinary super-conformal symmetries are crucial for the existence of the WCW Kähler metric. The detailed identification of the Kähler function and the WCW Kähler metric has however turned out to be a difficult problem. It is now clear that WCW geometry can be understood in terms of the analog of AdS/CFT duality between fermionic and space-time degrees of freedom (or between Minkowskian and Euclidian space-time regions) allowing to express the Kähler metric either in terms of the Kähler function or in terms of the anti-commutators of WCW gamma matrices identifiable as super-conformal Noether super-charges for the symplectic algebra assignable to δM4± × CP2. The string model type description of gravitation emerges, and also the TGD based view about dark matter becomes more precise. The string tension is however dynamical rather than pregiven, and the hierarchy of Planck constants is necessary in order to understand the formation of gravitationally bound states. Also the proposal that sparticles correspond to dark matter becomes much stronger: sparticles actually are dark variants of particles.

A crucial element of the construction is the assumption that the super-symplectic and other super-conformal symmetries having the same structure as 2-D super-conformal groups can be seen as broken gauge symmetries, such that a sub-algebra with conformal weights coming as n-multiples of those for the full algebra acts as gauge symmetries. In particular, the Noether charges of this algebra vanish for preferred extremals - this would realize the strong form of holography implied by the strong form of General Coordinate Invariance. This gives rise to an infinite number of hierarchies of conformal gauge symmetry breakings with levels labelled by integers n(i) such that n(i) divides n(i+1), interpreted as hierarchies of dark matter with levels labelled by the value of Planck constant heff = n × h.
These hierarchies define also hierarchies of quantum criticalities and are proposed to give rise to inclusion hierarchies of hyperfinite factors of type II1, having an interpretation in terms of finite cognitive resolution. These hierarchies would be fundamental for the understanding of living matter.

For details see the article Updated view about Kähler geometry of WCW.

Intentions, Cognition, and Time

Intentions involve time in an essential manner, and this led to the idea that p-adic-to-real quantum jumps could correspond to a realization of intentions as actions. It however seems that this hypothesis, posing strong additional mathematical challenges, is not needed if one accepts the adelic approach in which real space-time and its p-adic variants are all present and quantum physics is adelic. I have already earlier developed the first formulation of p-adic space-time surfaces as cognitive charts of real space-time surfaces, and also the ideas related to the adelic vision. The recent view involving strong form of holography would provide a dramatically simplified view about how these representations are formed as continuations of representations of string world sheets and partonic 2-surfaces in the intersection of the real and p-adic variants of WCW ("World of Classical Worlds"), in the sense that the parameters characterizing these representations are algebraic numbers in the algebraic extension of p-adic numbers involved.

For details see the article Intentions, Cognition, and Time.

Saturday, April 25, 2015

Good and Evil, Life and Death

In principle the proposed conceptual framework allows already now a consideration of the basic questions relating to concepts like Good and Evil and Life and Death. Of course, too many uncertainties are involved to allow any definite conclusions, and one could also regard the speculations as outputs of the babbling period necessarily accompanying the development of the linguistic and conceptual apparatus making it ultimately possible to discuss these questions more seriously.

Even the most hard boiled materialistic sceptic mentions ethics and morals when suffering personal injustice. Is there actual justification for moral laws? Are they only social conventions or is there some hard core involved? Is there some basic ethical principle telling what deeds are good and what deeds are bad?

A second group of questions relates to life and biological death. How should one define life? What happens in biological death? Is self preserved in biological death in some form? Is there something deserving to be called soul? Are reincarnations possible? Are we perhaps responsible for our deeds even after our biological death? Could the law of Karma be consistent with physics? Is liberation from the cycle of Karma possible?

In the sequel these questions are discussed from the point of view of TGD inspired theory of consciousness. It must be emphasized that the discussion represents various points of view rather than being a final summary. Also mutually conflicting points of view are considered. The cosmology of consciousness, the concept of self having space-time sheet and causal diamond as its correlates, the vision about the fundamental role of negentropic entanglement, and the hierarchy of Planck constants identified as a hierarchy of dark matters and of quantum critical systems, provide the building blocks needed to make guesses about what biological death could mean from the subjective point of view.

Friday, April 24, 2015

Variation of Newton's constant and of length of day
J. D. Anderson et al have published an article discussing observations suggesting a periodic variation of the measured value of Newton's constant and a variation of the length of day (LOD) (see also this). This article represents a TGD based explanation of the observations in terms of a variation of the Earth radius. The variation would be due to pulsations of the Earth coupling via gravitational interaction to a dark matter shell with mass about 1.3 × 10^-4 ME, introduced to explain the Flyby anomaly: the model would predict ΔG/G = 2ΔR/R and ΔLOD/LOD = 2ΔRE/RE, with the variations of G and of the length of day in opposite phases. The experimental finding ΔRE/RE = MD/ME is natural in this framework but should be deduced from first principles.

The gravitational coupling would be in the radial scaling degree of freedom and in rigid body rotational degrees of freedom. In rotational degrees of freedom the model is in the lowest order approximation mathematically equivalent with the Kepler model. The model for the formation of planets around the Sun suggests that the dark matter shell has radius equal to that of the Moon's orbit. This leads to a prediction for the oscillation period of the Earth radius: the prediction is consistent with the observed 5.9 year period. The dark matter shell would correspond to the n=1 Bohr orbit in the earlier model for quantum gravitational bound states based on a large value of Planck constant. Also n>1 orbits are suggestive, and their existence would provide additional support for the TGD view about quantum gravitation.

For details see the chapter Cosmology and Astrophysics in Many-Sheeted Space-Time or the article Variation of Newton's constant and of length of day.

Tuesday, April 21, 2015

Connection between Boolean cognition and emotions

Weak form of NMP allows the state function reduction to occur in 2^n - 1 manners corresponding to the subspaces of the sub-space defined by an n-dimensional projector, if the density matrix is an n-dimensional projector (the outcome corresponding to the 0-dimensional subspace is excluded). If the probability for the outcome of the state function reduction is the same for all values of the dimension 1 ≤ m ≤ n, the probability distribution for the outcome is given by the binomial distribution B(n,p) for p = 1/2 (head and tail are equally probable):

p(m) = b(n,m) × 2^-n = (n!/(m!(n-m)!)) × 2^-n .

This gives for the average dimension E(m) = n/2, so that the negentropy would increase on the average. The world would become gradually better.

One cannot avoid the idea that these different degrees of negentropic entanglement could actually give a realization of Boolean algebra in terms of conscious experiences.

1. Could one speak about a hierarchy of codes of cognition based on the assignment of different degrees of "feeling good" to the Boolean statements? If one assumes that the n:th bit is always 1, all independent statements except one correspond to at least two non-vanishing bits and correspond to negentropic entanglement. Only one statement (only the last bit equal to 1) would correspond to a single bit and to a state function reduction reducing the entanglement completely (this brings in mind the fruit in the tree of Good and Bad Knowledge!).

2. A given hierarchy of breakings of super-symplectic symmetry corresponds to a hierarchy of integers n(i+1) = ∏_{k ≤ i} m_k. The codons of the first code would consist of sequences of m_1 bits. The codons of the second code would consist of m_2 codons of the first code, and so on. One would have a hierarchy in which the codons of the previous level become the letters of the code words at the next level of the hierarchy.
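The two formulas used above - the binomial distribution of the dimension m of the outcome and the number theoretic entanglement entropy for an n-dimensional projector, which is negative exactly when p divides n - are easy to check numerically. A minimal sketch in plain Python (nothing here is TGD-specific):

```python
from fractions import Fraction
from math import comb, log
from sympy import factorint

def outcome_distribution(n: int):
    """p(m) = C(n, m) * 2**(-n) for m = 1..n; the excluded m = 0 outcome has weight 2**(-n)."""
    return {m: Fraction(comb(n, m), 2**n) for m in range(1, n + 1)}

def average_dimension(n: int) -> Fraction:
    return sum(m * p for m, p in outcome_distribution(n).items())

def number_theoretic_entropy(n: int, p: int) -> float:
    """Shannon formula for the n probabilities 1/n, with log(1/n) replaced by the logarithm
    of the p-adic norm |1/n|_p = p**k, where p**k is the largest power of p dividing n."""
    k = factorint(n).get(p, 0)
    return -sum((1.0 / n) * log(float(p)**k) for _ in range(n))   # equals -k*log(p) <= 0

print(average_dimension(10))            # 5, i.e. n/2: negentropy grows on the average
print(number_theoretic_entropy(12, 2))  # -2*log(2): negentropic, since 2**2 divides 12
print(number_theoretic_entropy(12, 5))  # vanishes: 5 does not divide 12
```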
In fact, I ended up with an almost Boolean algebra decades ago when considering the hierarchy of genetic codes suggested by the hierarchy of Mersenne primes M(n+1) = M_M(n), M_n = 2^n - 1.

1. The hierarchy starting from M_2 = 3 contains the Mersenne primes 3, 7, 127, 2^127 - 1, and Hilbert conjectured that all the integers in this hierarchy are primes. These numbers are almost-dimensions of Boolean algebras with n = 2, 3, 7, 127 bits. The maximal Boolean sub-algebras have m = n - 1 = 1, 2, 6, 126 bits.

2. The observation that m = 6 gives 64 elements led to the proposal that it corresponds to a Boolean algebra assignable to the genetic code and that the sub-algebra represents the maximal number of independent statements defining analogs of axioms. The remaining elements would correspond to the negations of these statements. I also proposed that the Boolean algebra with m = 126 = 6 × 21 bits (21 pieces consisting of 6 bits) corresponds to what I called the memetic code, obviously realizable as sequences of 21 DNA codons with stop codons included. Emotions and information are closely related, and peptides are regarded as both information molecules and molecules of emotion.

3. This hierarchy of codes would have the additional property that the Boolean algebra at the (n+1):th level can be regarded as the set of statements about the statements of the previous level. One would have a hierarchy representing thoughts about thoughts about... It should be emphasized that there is no need to assume that Hilbert's conjecture is true. One can obtain this kind of hierarchies as hierarchies with dimensions m, 2^m, 2^(2^m), ..., that is n(i+1) = 2^n(i). The condition that n(i) divides n(i+1) is non-trivial only at the lowest step and implies that m is a power of 2, so that the hierarchies start from m = 2^k. This is natural since Boolean algebras are involved. If n corresponds to the size scale of the CD, it would come as a power of 2. p-Adic length scale hypothesis has also led to this conjecture. A related conjecture is that the sizes of CDs correspond to secondary p-adic length scales, which indeed come as powers of two by the p-adic length scale hypothesis. In the case of the electron this predicts that the minimal size of the CD associated with the electron corresponds to the time scale T = 0.1 seconds, the fundamental time scale in living matter (10 Hz is the fundamental bio-rhythm). It seems that the basic hypotheses of TGD, inspired partly by the study of the elementary particle mass spectrum and basic bio-scales (there are 4 p-adic length scales defined by Gaussian Mersenne primes in the range between the cell membrane thickness 10 nm and the size 2.5 μm of the cell nucleus!), would follow from the proposed connection between emotions and Boolean cognition.

4. NMP would be in the role of God. Strong NMP as God would always force the optimal choice maximizing negentropy gain and increasing the negentropy resources of the Universe. Weak NMP as God allows free choice, so that the negentropy gain need not be maximal and sinners populate the world. Why would the omnipotent God allow this? The reason is now obvious. The weak form of NMP makes possible the realization of Boolean algebras in terms of degrees of "feels good"! Without the God allowing the possibility to sin there would be no emotional intelligence!

Hilbert's conjecture relates in an interesting manner to the space-time dimension. Suppose that Hilbert's conjecture fails and only the four lowest Mersenne integers in the hierarchy are Mersenne primes, that is 3, 7, 127, 2^127 - 1.
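The first four levels of this hierarchy are easy to check by machine (the next candidate, 2^(2^127 - 1) - 1, is far beyond any known primality test). A minimal sketch using sympy:

```python
from sympy import isprime

def mersenne_hierarchy(levels: int):
    """M(n+1) = 2**M(n) - 1 starting from 2: gives 3, 7, 127, 2**127 - 1, ..."""
    m, out = 2, []
    for _ in range(levels):
        m = 2**m - 1
        out.append(m)
    return out

for m in mersenne_hierarchy(4):
    # bit_length gives the number n of bits of the associated almost-Boolean algebra: 2, 3, 7, 127
    print(m.bit_length(), isprime(m))   # all four members are prime, in accordance with the conjecture
```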
In TGD one has a hierarchy of dimensions associated with the space-time surface coming as 0, 1, 2, 4, plus the imbedding space dimension 8. The abstraction hierarchy associated with the space-time dimensions would correspond to the discretization of partonic 2-surfaces as point sets, the discretization of 3-surfaces as sets of strings connecting partonic 2-surfaces characterized by discrete parameters, the discretization of space-time surfaces as collections of string world sheets with discretized parameters, and maybe - the discretization of the imbedding space by a collection of space-time surfaces. Discretization means that the parameters in question are algebraic numbers in an extension of rationals associated with p-adic numbers.

In the TGD framework it is clear why the imbedding space cannot be higher-dimensional and why the hierarchy does not continue. Could there be a deeper connection between these two hierarchies? For instance, could it be that higher dimensional manifolds of dimension 2 × n can be represented physically only as unions of, say, n 2-D partonic 2-surfaces (just like a 3 × N dimensional space can be represented as the configuration space of N point-like particles)? Also infinite primes define a hierarchy of abstractions. Could it be that one has also now a similar restriction, so that the hierarchy would have only a finite number of levels, say four? Note that the notions of n-group and n-algebra involve an analogous abstraction hierarchy.

Monday, April 20, 2015

Can one identify quantum physical correlates of ethics and moral?

Quantum ethics very briefly

This idea can be criticized. What could the quantum correlates of moral be?

Intentions, cognitions, time, and p-adic physics

Intentions involve time in an essential manner, and this led to the idea that p-adic-to-real quantum jumps could correspond to a realization of intentions as actions. It however seems that this hypothesis, posing strong additional mathematical challenges, is not needed if one accepts the adelic approach in which real space-time and its p-adic variants are all present and quantum physics is adelic. I have developed the first formulation of p-adic space-time surfaces and the ideas related to the adelic vision (see this, this, and this).

1. What intentions are?

One of the earlier ideas about the flow of subjective time was that it corresponds to a phase transition front representing a transformation of intentions to actions and propagating towards the geometric future quantum jump by quantum jump. The assumption about this front is unnecessary in the recent view inspired by ZEO. Intentions should relate to the active aspects of conscious experience. The question is what the quantum physical correlates of intentions are and what happens in the transformation of an intention to an action.

1. The old proposal was that a p-adic-to-real transition could correspond to the realization of an intention as an action. One can even consider the possibility that the sequence of state function reductions decomposes to pairs of real-to-p-adic and p-adic-to-real transitions. This picture does not explain why and how an intention gradually evolves stronger and stronger and is finally realized. The identification of p-adic space-time sheets as correlates of cognition is however natural.

2. The newer proposal, which might be called adelic, is that real and p-adic space-time sheets form a larger sensory-cognitive structure: cognitive and sensory aspects would be simultaneously present. Real and p-adic space-time surfaces would form a single coherent whole which could be called adelic space-time.
All p-adic manifolds could be present and define a kind of chart maps of the real preferred extremals, so that they would not be independent entities as they are for the first option. The first objection is that the assignment of fermions separately to every factor of the adelic space-time does not make sense. This objection is circumvented if fermions belong to the intersection of realities and p-adicities. This makes sense if the string world sheets carrying the induced spinor fields define the seats of cognitive representations in the intersection of reality and p-adicities. Cognition would still be associated with the p-adic space-time sheets and sensory experience with the real ones. What can be sensed and cognized would reside in the intersection.

Intention would be however something different for the adelic option. The intention to perform a quantum jump at the opposite boundary would develop during the sequence of state function reductions at the fixed boundary, and eventually NMP would force the transformation of the intention to an action as the first state function reduction at the opposite boundary. NMP would guarantee that the urge to do something develops so strong that eventually something is done.

Intention involves two aspects: the plan for achieving something, which corresponds to cognition, and the will to achieve something, which corresponds to an emotional state. These aspects could correspond to the p-adic and real aspects of intentionality.

2. p-Adic physics as physics of only cognition?

There are two views about the p-adic-real correspondence corresponding to two views about p-adic physics. According to the first view p-adic physics defines correlates for both cognition and intentionality, whereas the second view states that it provides correlates for cognition only.

1. Option A: The older view is that p-adic-to-real transitions realize intentions as actions and the opposite transitions generate cognitive representations. A quantum state would be either real or p-adic. This option raises hard mathematical challenges since scattering amplitudes between different number fields are needed, and the needed mathematics might not exist at all.

2. Option B: The second view is that cognitive and sensory aspects of experience are simultaneously present at all levels and means that real space-time surfaces and their p-adic counterparts form a larger structure in the spirit of what might be called Adelic TGD. p-Adic space-time charts could be present for all primes. It is of course necessary to understand why it is possible to assign a definite prime to a given elementary particle. This option could be developed by generalizing the existing mathematics of adeles by replacing numbers in a given number field with space-time surfaces in the imbedding space corresponding to that number field. Therefore this option looks more promising. For this option also the development of an intention can be understood. The condition that the scattering amplitudes are in the intersection of reality and p-adicities is a very powerful condition on the scattering amplitudes and would reduce the realization of number theoretical universality and p-adicization to that for string world sheets and partonic 2-surfaces. For instance, the difficult problem of defining p-adic analogs of topological invariants would trivialize since these invariants (say genus) have an algebraic representation for 2-D geometries. The 2-dimensionality of cognitive representations would be perhaps basically due to the close correspondence between algebra and topology in dimension D=2.
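The remark that invariants such as genus have an algebraic representation in two dimensions can be made concrete in a simple special case: for a smooth plane curve the genus is fixed by the degree of the defining polynomial alone, so the same formula can be read in any number field. A minimal sketch (the genus-degree formula; not anything TGD-specific):

```python
def plane_curve_genus(degree: int) -> int:
    """Genus of a smooth plane curve of degree d: g = (d-1)(d-2)/2 - pure arithmetic,
    so it makes sense independently of whether the coefficients are read as real or p-adic."""
    d = degree
    return (d - 1) * (d - 2) // 2

for d in range(1, 6):
    print(d, plane_curve_genus(d))   # lines and conics have genus 0, cubics genus 1, quartics 3, quintics 6
```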
Most of the following considerations apply in both cases.

3. Some questions to ponder

The following questions are part of the list of questions that one must ponder.

a) Do cognitive representations reside in the intersection of reality and p-adicities?

The idea that cognitive representations reside in the intersection of reality and various p-adicities is one of the key ideas of TGD inspired theory of consciousness.

1. All quantum states have vanishing total quantum numbers in ZEO, which now forms the basis of quantum TGD (see this). In principle conservation laws do not pose any constraints on possibly occurring real-to-p-adic transitions (Option A) if they occur between zero energy states. On the other hand, there are good hopes about the definition of p-adic variants of conserved quantities by algebraic continuation, since the stringy quantal Noether charges make sense in all number fields if the string world sheets are in the real--p-adic intersection. This continuation is indeed needed if quantum states have adelic structure (Option B). In accordance with this, quantum classical correspondence (QCC) demands that the classical conserved quantities in the Cartan algebra of symmetries are equal to the eigenvalues of the quantal charges.

2. The starting point is the interpretation of fermions as correlates for Boolean cognition and of p-adic space-time sheets as space-time correlates for cognition (see this). Induced spinor fields are localized at string world sheets, which suggests that string world sheets and partonic 2-surfaces define cognitive representations in the intersection of realities and p-adicities. The space-time adele would have a book-like structure with the back of the book defined by the string world sheets.

3. At the level of partonic 2-surfaces common rational points (or more generally common points in an algebraic extension of rationals) correspond to the real--p-adic intersection. It is natural to identify the set of these points as the intersection of string world sheets and partonic 2-surfaces at the boundaries of CDs. These points would also correspond to the ends of strings connecting partonic 2-surfaces and to the ends of fermion lines at the orbits of partonic 2-surfaces (at these surfaces the signature of the induced 4-metric changes). This would give a direct connection with fermions and Boolean cognition.

1. For option A the interpretation is simple. The larger the number of points is, the higher the probability for the transitions to occur. This is because the transition amplitude must involve the sum of amplitudes determined by the data from the common points.

2. For option B the number of common points measures the goodness of the particular cognitive representation but does not tell anything about the probability of any quantum transition. It however allows one to discriminate between different p-adic primes using the precision of the cognitive representation as a criterion. For instance, the non-determinism of Kähler action could resemble p-adic non-determinism for some algebraic extension of some p-adic number field for some value of p. Also the entanglement assignable to a density matrix which is an n-dimensional projector would be negentropic only if the p-adic prime defining the number theoretic entropy is a divisor of n. Therefore also an entangled quantum state would give a strong suggestion about the optimal p-adic cognitive representation as that associated with the largest power of p appearing in n.

b) Could cognitive resolution fix the measurement resolution?
For p-adic numbers the algebraic extension used (roots of unity) fixes the resolution in the angle degrees of freedom, and pinary cutoffs fix the resolution in the "radial" variables, which are naturally positive. Could the character of the quantum state, or perhaps of the quantum transition, fix the measurement resolution uniquely?

1. If transitions (state function reductions) can occur only between different number fields (Option A), discretization is unavoidable and unique if maximal. For real-real transitions the discretization would be motivated only by finite measurement resolution and would be neither necessary nor unique. Discretization is required and unique also if one requires an adelic structure for the state space (Option B). Therefore both options A and B are allowed by this criterion.

2. For both options cognition and intention (if p-adic) would be one half of existence, and sensory perception and motor actions would be the second half of existence at the fundamental level. The latter half would correspond to sensory experience and motor action as time reversals of each other. This would be true even at the level of elementary particles, which would explain the amazing success of p-adic mass calculations.

3. For option A the state function reduction sequence would correspond to a formation of p-adic maps about real maps and real maps about p-adic maps: real → p-adic → real → ... For option B it would correspond to the sequence adelic → adelic → adelic → ...

4. For both options p-adic and real physics would be unified to a single coherent whole at the fundamental level, but the adelic option would be much simpler. This kind of unification is highly suggestive - consider only the success of p-adic mass calculations - but I have not really seriously considered what it could mean.

c) What selects the preferred p-adic prime?

What determines the p-adic prime or preferred p-adic prime assignable to the system considered? Is it unique? Can it change?

1. An attractive hypothesis is that the most favorable p-adic prime is a factor of the integer n defining the dimension of the n × n density matrix associated with the flux tubes/fermionic strings connecting partonic 2-surfaces: the presence of fermionic strings already implies at least two partonic 2-surfaces. During the sequence of reductions at the same boundary of the CD, n receives additional factors, so that p cannot change. If wormhole contacts behave as magnetic monopoles, there must be at least two of them connected by monopole flux tubes. This would give a connection with negentropic entanglement and - for heff/h = n - to quantum criticality, dark matter, and the hierarchy of inclusions of HFFs.

2. A second possibility is that the classical non-determinism, making itself visible via super-symplectic invariance acting as broken conformal gauge invariance, has the same character as p-adic non-determinism for some value of the p-adic prime. This would mean that p-adic space-time surfaces would be especially good representations of real space-time sheets. At the lowest level of the hierarchy this would mean a large number of common points. At the higher levels it would mean a large number of common parameter values in the algebraic extension of rationals in question.

d) How does finite measurement resolution relate to hyper-finite factors?

The connection with hyper-finite factors suggests itself.

1. Negentropic entanglement can be said to be stabilized by finite cognitive resolution if hyper-finite factors are associated with the hierarchy of Planck constants and cognitive resolutions.
For HFFs the projection to a single ray of the state space in state function reduction is replaced with a projection to an infinite-dimensional sub-space whose von Neumann dimension is not larger than one.

2. This raises an interesting question. Could infinite integers constructible from infinite primes correspond to these infinite dimensions, so that a prime p would appear as a factor of this kind of infinite integer? One can say that for inclusions of hyperfinite factors the ratio of the dimensions of the including and included factors is a quantum dimension, which is an algebraic number expressible in terms of the quantum phase q = exp(i2π/n). Could n correspond to the integer ratio n = nf/ni for the integers characterizing the sub-algebras of the super-symplectic algebra acting as gauge transformations?

4. Generalizing the notion of p-adic space-time surface

The notion of p-adic manifold (see this) is an attempt to formulate p-adic space-time surfaces, identified as preferred extremals of the p-adic variants of the field equations, as cognitive charts of real space-time sheets. Here the essential point is that the p-adic variants of the field equations make sense: this is due to the fact that the induced metric and induced gauge fields make sense (differential geometry exists p-adically, whereas global geometry involving notions of length, area, etc. does not: in particular, the notions of angle and conformal invariance make sense). The second key element is finite resolution, so that the p-adic chart map is not unique. The same applies to the real counterpart of a p-adic extremal having a representation as a space-time correlate for an intention realized as an action.

The discretization of the entire space-time surface proposed in the formulation of the p-adic manifold concept (see this) looks like too naive an approach. It is plausible that one has an abstraction hierarchy for discretizations at various abstraction levels.

1. The simplest discretization would occur at the space-time level only at partonic 2-surfaces, in terms of string ends identified as algebraic points in the extension of p-adics used. For the boundaries of string world sheets at the orbits of partonic 2-surfaces one would have a discretization for the parameters defining the boundary curve. By field equations this curve is actually a segment of a light-like geodesic line, characterized by an initial light-like 8-velocity, which should therefore be a number in an algebraic extension of rationals. The string world sheets should have a similar parameterization in terms of algebraic numbers. By conformal invariance the finite-dimensional conformal moduli spaces and topological invariants would characterize string world sheets and partonic 2-surfaces. The p-adic variant of Teichmueller parameters was indeed introduced in p-adic mass calculations and corresponds to the dominating contribution to the particle mass (see this and this).

2. What might be called the co-dimension 2 rule for discretization suggests itself. A partonic 2-surface would be replaced with the ends of fermion lines at it, or equivalently with the ends of the space-like strings connecting partonic 2-surfaces at it. A 3-D partonic orbit would be replaced with the fermion lines at it. A 4-D space-time surface would be replaced with 2-D string world sheets. Number theoretically this would mean that one always has a commutative tangent space. Physically the condition that em charge is well-defined for the spinor modes would demand the co-dimension 2 rule.
3. This rule would reduce the real--p-adic correspondence at the space-time level - the construction of real and p-adic space-time surfaces as pairs - to that for string world sheets and partonic 2-surfaces, which determine algebraically the corresponding space-time surfaces as preferred extremals of Kähler action. Strong form of holography indeed leads to the vision that these geometric objects can be extended to 4-D space-time surfaces representing preferred extremals.

4. In accordance with the generalization of the AdS/CFT correspondence to the TGD framework, cognitive representations for physics would involve only partonic 2-surfaces and string world sheets. This would tell more about cognition than about the Universe. The 2-D objects in question would be in the intersection of reality and p-adicities and would define cognitive representations of 4-D physics. Both classical and quantum physics would be adelic.

5. Space-time surfaces would not be unique but would possess a degeneracy corresponding to a sub-algebra of the super-symplectic algebra isomorphic to it and acting as conformal gauge symmetries, giving rise to n conformal gauge invariance classes. The conformal weights for the sub-algebra would be n-multiples of those for the entire algebra, and n would correspond to the effective Planck constant heff/h = n. The hierarchy of quantum criticalities labelled by n would correspond to a hierarchy of cognitive resolutions defining measurement resolutions.

Clearly, very many big ideas behind TGD and TGD inspired theory of consciousness would have this picture as a Boolean intersection.

5. Number theoretic universality for cognitive representations

1. By number theoretic universality p-adic zero energy states should be formally similar to their real counterparts for option B. For option A the states between which real--p-adic transitions are highly probable would be similar. The states would have as basic building bricks the elements of the Yangian of the super-symplectic algebra associated with these strings, which one can hope to be algebraically universal.

2. Finite measurement resolution demands that all scattering amplitudes representing zero energy states involve discretization. In a purely p-adic context this is unavoidable because the notion of integral is highly problematic. The residue integral is p-adically well-defined if one can deal with π. A p-adic integral can be defined as the algebraic continuation of a real integral made possible by the notion of p-adic manifold, and this works at least in the real--p-adic intersection. String world sheets would belong to the intersection if they are cognitive representations, as the interpretation of fermions as correlates of Boolean cognition suggests. In this case there are excellent hopes that all real integrals can be continued to the various p-adic sectors (which can involve algebraic extensions of p-adic number fields). Quantum TGD would be adelic. There are of course potential problems with transcendentals like powers of π.

3. Discrete Fourier analysis allows one to define integration in the angle degrees of freedom represented in terms of an algebraic extension involving roots of unity. In a purely p-adic context the notion of angle does not make sense, but trigonometric functions do: the reason is that only the local aspects of geometry, characterized by the metric, generalize. The global aspects such as line length, involving integration, do not.
One can however introduce algebraic extensions of p-adic numbers containing roots of unity, and this gives rise to a realistic notion of trigonometric function. One can also define the counterpart of integration as discrete Fourier analysis in the discretized angle degrees of freedom.

4. Maybe the 2-dimensionality of cognition has something to do with the fact that quaternions and octonions do not have a p-adic counterpart (the p-adic norm squared of a quaternion/octonion can vanish). I have earlier proposed that life and cognitive representations reside in the real-p-adic intersection. The stringy description of TGD could be seen as a number theoretically universal cognitive representation of 4-D physics: the best that the limitations of cognition allow one to obtain. This hypothesis would also guarantee that various conserved quantal charges make sense in both the real and the p-adic sense, as p-adic mass calculations demand.

Monday, April 13, 2015

Manifest unitarity and information loss in gravitational collapse

There was a guest posting in the blog of Lubos by Prof. Dejan Stojkovic from Buffalo University. The title of the post was Manifest unitarity and information loss in gravitational collapse. It explained the contents of the article Radiation from a collapsing object is manifestly unitary by Stojkovic and Saini.

The posting

The posting describes calculations carried out for a collapsing spherical mass shell, whose radius approaches its own Schwarzschild radius. The metric outside the shell, with radius larger than r_S, is assumed to be the Schwarzschild metric. In the interior of the shell the metric would be the Minkowski metric. The system considered is a second-quantized massless scalar field. One can calculate the Hamiltonian of the radiation field in terms of the eigenmodes of the kinetic and potential parts, and by canonical quantization the Schrödinger equation for the eigenmodes reduces to that for a harmonic oscillator with time dependent frequency. Solutions can be developed in terms of the solutions of the time-independent harmonic oscillator. The average value of the photon number turns out to approach that associated with a thermal distribution, irrespective of the initial values, in the limit when the radius of the shell approaches its Schwarzschild radius. The temperature is the Hawking temperature. This is of course a highly interesting result and should reflect the fact that the Minkowski vacuum looks, from the point of view of an accelerated system, to be in thermal equilibrium. Manifest unitarity is just what one expects.

The authors assign a density matrix to the state in the harmonic oscillator basis. Since the state is pure, the density matrix is just a projector to the quantum state, since the components of the density matrix are products of the coefficients characterizing the state in the oscillator basis (there are a couple of typos in the formulas, which the reader certainly notices). In Hawking's original argument the non-diagonal cross terms are neglected and one obtains a non-pure density matrix. The approach of the authors is of course correct, since they consider only the situation before the formation of the horizon. Hawking considers the situation after the formation of the horizon and assumes some unspecified process taking the non-diagonal components of the density matrix to zero. This decoherence hypothesis is one of the strange figments of insane theoretical imagination which plagues present-day theoretical physics. The authors mention as a criterion for the purity of the state the condition that the square of the density matrix has trace equal to one.
This states that the density matrix is an N-dimensional projector. The criterion alone does not however guarantee the purity of the state for N > 1. This is clear from the fact that the entropy is in this case non-vanishing and equal to log(N). I notice this because negentropic entanglement in the TGD framework corresponds to the situation in which the entanglement matrix is proportional to a unit matrix (that is, a projector). For this kind of state the number theoretic counterpart of Shannon entropy makes sense and gives a negative entropy, meaning that the entanglement carries information. Note that unitary 2-body entanglement gives rise to negentropic entanglement.

The authors note that Hawking used Bogoliubov transformations between the initial Minkowski vacuum and the final Schwarzschild vacuum at the end of the collapse, which looks like a thermal distribution with Hawking temperature from the Minkowski space point of view. I think that here comes an essential physical point. The question is about the relationship between two observers - one might call them the observer falling into the blackhole and the observer far away approximating space-time with Minkowski space. If the latter observer traces out the degrees of freedom associated with the region below the horizon, the outcome is a genuine density matrix and information loss. This point is not discussed in the article, and the authors state that their next project is to look at the situation after the spherical shell has reached the Schwarzschild radius and the horizon is born. One might say that all that is done concerns the system before the formation of the blackhole (if it is formed at all!).

Several poorly defined notions arise when one tries to interpret the results of the calculation.

1. What do we mean by observer? What do we mean by information? For instance, the authors define information as the difference between maximum entropy and real entropy. Is this definition just an ad hoc manner to get some well-defined number christened as information? Can we really reduce the notion of information to thermodynamics? Shouldn't we be very careful in distinguishing between thermodynamical entropy and entanglement entropy? A sub-system possessing entanglement entropy with its complement can be purified by seeing it as a part of the entire system. This entropy relates to a pair of systems. Thermal entropy can be naturally assigned to an average representative of an ensemble and is a single-particle observable.

2. A second list of questions relates to quantum gravitation. Is the blackhole really a relevant notion, or just a singular outcome of a theory exceeding its limits? Does something deserving to be called blackhole collapse really occur? Is quantum theory in its present form enough to describe what happens in this process or its analog? Do we really understand the quantal description of gravitational binding?

What can TGD say about blackholes?

The usual objection of the string theory hegemony is that there are no competing scenarios, so that superstrings are the only "known" interesting approach to quantum gravitation (knowing in the academic sense is not at all the same thing as knowing in the naive layman sense and involves a lot of sociological factors transforming actual knowing to sociological unknowing: in some situations these sociological factors can make a scientist practically blind, deaf, and, as it looks, brainless!). I dare however claim that TGD represents an approach which leads to a new vision challenging a long list of cherished notions associated with blackholes.
To my view, blackhole science crystallizes a huge amount of conceptual sloppiness. People can calculate but are not so good at conceptualizing. Therefore one must start the conceptual cleaning from fundamental notions such as information, the notions of time (experienced and geometric), observer, etc. In the attempt to develop TGD from a bundle of ideas to a real theory I have been forced to carry out this kind of distillation, and the following tries to summarize the outcome.

1. TGD provides a fundamental description for the notions of observer and information. The observer is replaced with "self", identified in ZEO as a sequence of quantum jumps occurring at the same boundary of the CD and leaving it and the part of the zero energy state at it fixed, whereas the second boundary of the CD is delocalized into a superposition for which the average distance between the tips of the CDs increases: this gives rise to the experienced flow of time and its correlation with the flow of geometric time. The average size of the CDs simply increases, and this means that the experienced geometric time increases. The self "dies" as the first state function reduction to the opposite boundary takes place and a new self assignable to it is born.

2. The Negentropy Maximization Principle favors the generation of entanglement negentropy. For states with a projection operator as density matrix, number theoretic negentropy is possible for primes dividing the dimension N of the projection and is maximal for the largest prime power factor of N. The second law is replaced with its opposite, but for negentropy, which is a two-particle observable rather than a single-particle observable like thermodynamical entropy. The second law follows at the ensemble level from the non-determinism of the state function reduction alone.

The notions related to blackholes are also in need of profound reconsideration.

1. The blackhole disappears in the TGD framework as a fundamental object and is replaced by a space-time region having Euclidian signature of the induced metric, identifiable as a wormhole contact and defining a line of a generalized Feynman diagram (here "Feynman" could be replaced with "twistor" or "Yangian", something even more appropriate). The blackhole horizon is replaced by the 3-D light-like region defining the orbit of the wormhole throat, having a degenerate metric in the 4-D sense with signature (0,-1,-1,-1). The orbits of wormhole throats are carriers of various quantum numbers, and the sizes of their M4 projections are of the order of the CP2 size in elementary particle scales. This is why I refer to these regions also as light-like parton orbits. The wormhole contacts involved connect two space-time sheets with Minkowskian signature, and stability requires that the wormhole contacts carry monopole magnetic flux. This demands at least two wormhole contacts to get closed flux lines. Elementary particles are pairs of this kind, but also multiples are possible, and the valence quarks in baryons could be one example.

2. The connection with the GRT picture could emerge as follows. The radial component of the Reissner-Nordström metric associated with electric charge can be deformed slightly at the horizon to transform the horizon into a light-like surface. In the deep interior, CP2 would provide a gravitational instanton solution to the Maxwell-Einstein system with a cosmological constant, thus having a Euclidian metric.
This is the nearest to the TGD description that one can get within the GRT framework, which is obtained from TGD in asymptotic regions by replacing the many-sheeted space-time with a slightly deformed region of Minkowski space and summing the gravitational fields of the sheets to get the gravitational field of the M4 region. All physical systems have space-time sheets with Euclidian signature analogous to blackholes. The analog of the blackhole horizon provides a very general definition of "elementary particle".

3. The strong form of general coordinate invariance is a central piece of TGD and implies the strong form of holography stating that partonic 2-surfaces and their 4-D tangent space data should be enough to code for quantum physics. The magnetic flux tubes and fermionic strings assignable to them are however essential. The localization of the induced spinor fields to string world sheets follows from the well-definedness of em charge and also from number theoretical arguments, as well as from the generalization of twistorialization from D=4 to D=8. One also ends up with the analog of AdS/CFT duality applying to the generalization of conformal invariance in the TGD framework. This duality states that one can describe the physics in terms of Kähler action and the related bosonic data, or in terms of Kähler-Dirac action and the related data. In particular, Kähler action is expressible as a string world sheet area in the effective metric defined by the Kähler-Dirac gamma matrices. Furthermore, gravitational binding is describable by strings connecting partonic 2-surfaces. The hierarchy of Planck constants is absolutely essential for the description of gravitationally bound states in terms of gravitational quantum coherence in macroscopic scales. The proportionality of the string area in the effective metric to 1/heff^2, heff = n×h = hgr = GMm/v0, is absolutely essential for achieving this. If the stringy action were the ordinary area of the string world sheet, as in string models, only gravitational bound states with a size of the order of the Planck length would be possible. Hence TGD forces one to say that superstring models are on a completely wrong track concerning the quantum description of gravitation. Even the standard quantum theory lacks something fundamental required by this goal. This something fundamental relates directly to the mathematics of extended super-conformal invariance: these algebras allow an infinite number of fractal inclusion hierarchies in which the algebras are isomorphic with each other. This allows one to realize infinite hierarchies of quantum criticalities. As heff increases, some degrees of freedom are changed from critical gauge degrees of freedom to genuine dynamical degrees of freedom, but the system is still critical, albeit in a longer scale.

4. A naive model for the TGD analog of a blackhole is a macroscopic wormhole contact surrounded by particle wormhole contacts, whose throats are connected to the throats of the large wormhole contact by flux tubes and strings. The macroscopic wormhole contact would carry a magnetic charge equal to the sum of those associated with the elementary particle wormhole throats.

5. What about blackhole collapse and blackhole evaporation if blackholes are replaced with wormhole contacts with Euclidian signature of the metric? Do they have any counterparts in TGD? Maybe! Any phase transition increasing heff=hgr would occur spontaneously as a transition to a lower criticality and could be interpreted as the analog of blackhole evaporation. The gravitationally bound object would just increase in size.
I have proposed that this phase transition has happened for Earth (the Cambrian explosion) and increased its radius by a factor of 2. This would explain the strange finding that the continents seem to fit nicely together if the radius of the Earth is one half of its present value. These phase transitions would be the quantum counterpart of smooth classical cosmic expansion. The phase transition reducing heff would not occur spontaneously, and in living systems metabolic energy would be needed to drive it. Indeed, from the condition heff = hgr = GMm/v0, as M and v0 change so that hgr increases, also the gravitational Compton length Lgr = hgr/m = GM/v0, defining the size scale of the gravitational object, increases, so that the spontaneous increase of hgr means an increase of size.

Does TGD predict any process resembling blackhole collapse? In Zero Energy Ontology (ZEO) state function reductions occurring at the same boundary of the causal diamond (CD) define the notion of self possessing an arrow of time. The first quantum state function reduction at the opposite boundary is eventually forced by the Negentropy Maximization Principle (NMP) and induces a reversal of geometric time. The expansion of an object with a reversed arrow of geometric time looks, with respect to the observer, like a collapse. This is indeed what the geometry of the causal diamond suggests.

6. The role of strings (and the magnetic flux tubes with which they are associated) in the description of gravitational binding (and possibly also other kinds of binding) is crucial in the TGD framework. They are present in arbitrarily long length scales, since the value of the gravitational Planck constant heff = hgr = GMm/v0, where v0 (v0/c < 1) has dimensions of velocity, can be huge as compared with the ordinary Planck constant. This implies macroscopic quantum gravitational coherence, and the fountain effect of superfluidity could be seen as an example of this. That the presence of flux tubes and strings serves as a correlate for quantum entanglement present in all scales is highly suggestive. This entanglement could be negentropic and, by NMP, could be transferred but not destroyed. The information would be coded into the relationship between two gravitationally bound systems, and instead of entropy one would have enormous negentropy resources. Whether this information can be made conscious is a fascinating problem. Could one generalize interaction-free quantum measurement so that it would give information about this entanglement? Or could the transfer of this information make it conscious? Also the superstring camp has become aware of the possibility of geometric and topological correlates of entanglement. The GRT-based proposal relies on wormhole connections. A much older TGD-based proposal, applied systematically in quantum biology and the TGD-inspired theory of consciousness, identifies magnetic flux tubes and the associated fermionic string world sheets as correlates of negentropic entanglement.
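The number theoretic Shannon entropy invoked above for projector-like density matrices can be made concrete with a short sketch. The sign convention S_p = -Σ_n P_n log|P_n|_p, with the p-adic norm replacing the probability inside the logarithm, is my reading of the definition and should be treated as an assumption rather than a quotation; the dimension N = 12 is an arbitrary example.

```python
from math import log

def p_adic_norm(n, p):
    """|n|_p for a positive integer n: p**(-k), where p**k exactly divides n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return p**(-k)

def number_theoretic_entropy(N, p):
    """S_p = -sum_n P_n * log(|P_n|_p) for the N equal probabilities P_n = 1/N
    of a projector density matrix (sign convention assumed, see lead-in)."""
    # All N probabilities are equal to 1/N, and |1/N|_p = 1/|N|_p, so the sum collapses:
    return -log(1.0 / p_adic_norm(N, p))

N = 12   # dimension of the projector; prime factorization 2**2 * 3
print("ordinary Shannon entropy:", log(N))
for p in (2, 3, 5):
    S = number_theoretic_entropy(N, p)
    print(f"p = {p}: S_p = {S:+.3f}  (negentropy = {-S:.3f})")
```

For N = 12 the most negative S_p (largest negentropy) is obtained for p = 2, since 2^2 = 4 is the largest prime power dividing 12, in line with the statement above that the negentropy is maximal for the largest prime power factor of the dimension.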
Paper A64 Vortices in ultrashort laser pulses Peter Hansinger, Alexander Dreischuh, and Gerhard G. Paulus The propagation of optical vortices nested in broadband femtosecond laser beams was studied both numerically and experimentally. Based on the nonlinear Schrödinger equation, the dynamics of different multiple vortex configurations with varying topological charge were modeled in self-focusing and self-defocusing Kerr media. We find a similar behavior in both cases regarding the vortex–vortex interaction. However, the collapsing background beam alters the propagation for a positive nonlinearity. Regimes of regular and possibly stable multiple filamentation were identified this way. Experiments include measurements on pairs of filaments generated in a vortex beam on an astigmatic Gaussian background with argon gas as the nonlinear medium. Spectral broadening of these filaments leads to a supercontinuum which spans from the visible range into the infrared. Recompression yields <19 fs pulses. Further optimization may lead to much better recompression.
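The abstract does not spell out the numerical scheme, but a standard way to integrate the (2+1)-dimensional nonlinear Schrödinger equation with a Kerr term is the split-step Fourier method. The sketch below is only an illustration of that generic approach, not the authors' code: it propagates a Gaussian background beam carrying a single on-axis optical vortex of topological charge m, written in dimensionless units; the grid size, step sizes, and the sign of the nonlinearity s are arbitrary stand-ins.

```python
import numpy as np

# Transverse grid and spatial frequencies (arbitrary illustrative sizes)
N, L = 256, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
KX, KY = np.meshgrid(k, k)
K2 = KX**2 + KY**2

def vortex_beam(w=3.0, m=1):
    """Gaussian background of width w carrying an on-axis vortex of charge m."""
    r, theta = np.hypot(X, Y), np.arctan2(Y, X)
    return (r/w)**abs(m) * np.exp(-r**2/w**2) * np.exp(1j*m*theta)

def propagate(u, z=2.0, dz=0.005, s=-1.0):
    """Split-step Fourier integration of  i u_z + 0.5*Lap(u) + s*|u|^2 u = 0.
    s = +1 models a self-focusing Kerr medium, s = -1 a defocusing one."""
    half_linear = np.exp(-0.25j * K2 * dz)              # half of the diffraction step
    for _ in range(int(round(z/dz))):
        u = np.fft.ifft2(half_linear * np.fft.fft2(u))  # diffraction, first half
        u = u * np.exp(1j * s * np.abs(u)**2 * dz)      # Kerr nonlinearity, full step
        u = np.fft.ifft2(half_linear * np.fft.fft2(u))  # diffraction, second half
    return u

u_out = propagate(vortex_beam(m=2), s=-1.0)
print("peak intensity after propagation:", np.abs(u_out).max()**2)
```

In the self-focusing case (s = +1) the same loop applies, but, as the abstract notes, the collapsing background eventually dominates the vortex-vortex dynamics, so the step size and propagation distance have to be chosen with more care.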
If we have a one dimensional system where the potential $$V~=~\begin{cases}\infty & |x|\geq d, \\ a\delta(x) &|x|<d, \end{cases}$$ where $a,d >0$ are positive constants, what then is the corresponding classical case -- the approximate classical case when the quantum number is large/energy is high?

What is $V$ when $x \in (-d,0) \cup (0,d)$? –  Siyuan Ren Apr 27 '12 at 9:09
Did you mean "$\infty$ when $|x| > d$"? Also did you mean "$a$ when $x = 0$", i.e. $a\delta(x)$? Finally, is $a$ of the order of classical energies or much less? If the latter, the system just looks like a square well with no barrier at classical energies. –  John Rennie Apr 27 '12 at 9:41
Dear @Sys, it's a virtue and necessity, not a bug, that the delta-function is infinite at $x=0$. If it were finite at a single point (i.e. an interval of length zero), like in your example, it would have no impact on the particle because zero times finite is zero. So your potential as you wrote it is physically identical to $V=\infty$ for $|x|<d$ and $0$ otherwise, which is just a well with the standing wave energy eigenstates. The finite modification of $V$ at one point, by $a$, plays no role at all. A potential with $a\delta(x)$ in it would be another problem. –  Luboš Motl Apr 27 '12 at 10:44
@LubošMotl: Thanks, actually the delta function version instead of V=a at x=0 is the right one. What is the classical limit of that? –  Sys Apr 27 '12 at 11:15
@JohnRennie: I think your comment suggestion was right, that there is a delta function at x=0. –  Sys Apr 27 '12 at 11:17

Here we derive the bound state spectrum from scratch. Not surprisingly, the conclusion is that the Dirac delta potential doesn't matter in the semi-classical continuum limit, in accordance with Spot's answer. The time-independent Schrödinger equation reads for positive $E>0$, $$ -\frac{\hbar^2}{2m}\psi^{\prime\prime}(x) ~=~ (E-V(x))\psi(x), \qquad V(x)~:=~V_0\delta(x)+\infty \theta(|x|-d), \qquad V_0~>~0, $$ with the convention that $0\cdot \infty=0$. Define $$v(x) ~:=~ \frac{2mV(x)}{\hbar^2}, \qquad e~:=~\frac{2mE}{\hbar^2}~>~0, \qquad k~:=~\sqrt{e}~>~0, \qquad v_0 ~:=~ \frac{2mV_0}{\hbar^2}. $$ $$ \psi^{\prime\prime}(x) ~=~ (v(x)-e)\psi(x). $$ We know that the wave function $\psi$ is continuous with boundary conditions $$\psi(x)~=~0 \qquad {\rm for}\qquad |x|\geq d.$$ Also the derivative $\psi^{\prime}$ is continuous for $0<|x|<d$, and possibly has a kink at $x=0$, $${\lim}_{\epsilon\to 0^+}[\psi^{\prime}(x)]^{x=\epsilon}_{x=-\epsilon} ~=~v_0\psi(x=0). $$ We get $$\psi_{\pm}(x)~=~A_{\pm}\sin(k(x\mp d))\qquad {\rm for } \qquad 0 \leq \pm x \leq d.$$

1. $\underline{\text{Case} ~\psi(x=0)=0}$. Then $$n~:=~\frac{kd}{\pi}~\in~ \mathbb{N}.$$ We get an odd wave function $$\psi_n(x)~\propto~\sin(kx).$$ In particular, the odd wave functions do not feel the presence of the Dirac delta potential.

2. $\underline{\text{Case} ~\psi(x=0)\neq 0}$. Then continuity at $x=0$ implies that the wave function is even, $A_{+}+A_{-}=0$. Phrased equivalently, $$\psi(x)~=~A\sin(k(|x|-d)).$$ The kink condition at $x=0$ becomes $$ v_0A\sin(-kd)~=~2kA \cos(kd), $$ or equivalently, $$ v_0\tan(kd)~=~-2k.$$ In the semiclassical continuum limit $$k \gg \frac{1}{d}, \qquad k \gg v_0,$$ this becomes $$\frac{kd}{\pi}+\frac{1}{2}~\in ~\mathbb{Z}, $$ i.e., in the semiclassical continuum limit the even wave functions do not feel the presence of the Dirac delta potential as well.
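The even-parity condition v_0 tan(kd) = -2k derived above is easy to check numerically (this sketch assumes SciPy is available; the values of d and v_0 are arbitrary). The roots drift towards kd/π + 1/2 ∈ ℤ as k grows, i.e. towards the unperturbed box levels, exactly as claimed.

```python
import numpy as np
from scipy.optimize import brentq

d, v0 = 1.0, 5.0    # half-width of the well and delta strength (arbitrary units)

def f(k):
    # Even-state quantization condition: v0*tan(k*d) = -2k  <=>  f(k) = 0
    return v0*np.tan(k*d) + 2.0*k

# Exactly one even root lies between consecutive poles of tan(k*d),
# i.e. for k*d in ((n - 1/2)*pi, (n + 1/2)*pi).
for n in range(1, 16):
    a = (n - 0.5)*np.pi/d + 1e-9
    b = (n + 0.5)*np.pi/d - 1e-9
    k = brentq(f, a, b)
    # For k >> v0 the root approaches k*d = (n - 1/2)*pi, so the offset below shrinks.
    print(f"n = {n:2d}:  k*d/pi = {k*d/np.pi:.4f}   offset from (n - 1/2): {k*d/np.pi - (n - 0.5):.4f}")
```

The printed offsets start out sizable for the lowest states (where the delta potential matters) and decay towards zero, which is the statement that the even wave functions stop feeling the delta potential in the semiclassical limit.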
Firstly, it's easy to start off with just the Dirac delta potential and see what that does. Wiki has a nice solution for the Delta function potential, and I am lifting parts of it here. Consider a potential $V(x) = a\delta (x)$ and consider a scattering-like configuration, where a plane wave $e^{ikx}$ is incident from the left. $$ \psi(x)=\begin{cases}e^{ikx}+re^{-ikx} & x<0 \\ te^{ikx} & x> 0\end{cases} $$ By matching the boundary conditions, like on the wiki page, you get $$ t = 1+r\\ (1-\alpha)t = 1-r $$ where $$ \alpha = \frac{ 2ma}{ik\hbar^2} $$ characterizes the effect of the delta potential. Solving for $r$ and $t$, $$ t = \frac{1}{1-\alpha/2}\\ r=-\frac{\alpha/2}{1-\alpha/2} $$ Now, it is easy to see that for high incident $k$, the only effect of the Dirac delta potential is to write a phase discontinuity on the wavefunction. This is because, as $k$ increases, the transmission $|t|^2=1/(1+|\alpha|^2/4)$ approaches 1, but the transmitted wavefunction gets an extra phase given by $$ \text{Arg}(t) = -\tan^{-1}(|\alpha|/2) $$ Getting back to the problem at hand, for a particle in a box (without the delta function), the allowed $k$ vectors are given by forcing the wavefunction to be zero at the walls at $x=-d$ and $x=d$, which gives us the condition $$ k_n=\frac{\pi n}{2d} $$ If we now add a delta potential, then for high values of $n$ (or $k$), all the delta function will do is introduce a phase discontinuity at the origin, and consequently what you should expect is that the boundary condition is matched not for $k_n$, but something slightly off $k_n+\delta k_n$, where $\delta k_n$ is a small correction due to the delta function potential. For high values of $n$, this correction would drop, as the phase discontinuity decreases, and for classical-like states (very large $n$) you expect to recover 1D box states, as mentioned by John Rennie.

Thank you, Spot! –  Sys Apr 27 '12 at 17:59
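The point that a fast particle only picks up a phase from the delta barrier can be checked directly from the formulas in this answer. A minimal sketch in natural units ħ = m = 1, with an arbitrary barrier strength a:

```python
import numpy as np

hbar = m = 1.0          # natural units (assumption)
a = 5.0                 # delta-barrier strength, arbitrary

def transmission(k):
    alpha = 2*m*a/(1j*hbar**2*k)      # alpha as defined in the answer above
    t = 1.0/(1.0 - alpha/2)
    return np.abs(t)**2, np.angle(t)

for k in (1, 5, 25, 125):
    T, phase = transmission(k)
    print(f"k = {k:4d}:  |t|^2 = {T:.4f},  Arg(t) = {phase:+.4f} rad")
# As k grows, |t|^2 -> 1 while Arg(t) = -arctan(|alpha|/2) -> 0: the barrier's only
# surviving effect is a small phase discontinuity, which is why the box levels are
# merely shifted by a small delta k_n at large quantum numbers.
```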
Exponential function
From New World Encyclopedia

Graph of y = e^x. The exponential function is nearly flat (climbing slowly) for negative values of x, climbs quickly for positive values of x, and equals 1 when x is equal to 0. Its y value always equals the slope at that point.

The exponential function is one of the most important functions in mathematics. For a variable x, this function is written as exp(x) or e^x, where e is a mathematical constant, the base of the natural logarithm, which equals approximately 2.718281828, and is also known as Euler's number. Here, e is called the base and x is called the exponent. In a more general form, an exponential function can be written as a^x, where a is a constant and x is a variable.

The graph of y = e^x is shown on the right. The graph is always positive (above the x axis) and increases from left to right. It never touches the x axis, although it gets extremely close to it. In other words, the x axis is a horizontal asymptote to the graph. Its inverse function, the logarithm, \log_e(x) = y, is defined for all positive x.

Sometimes, especially in the sciences, the term exponential function is more generally used for functions of the form ka^x, where a is any positive real number not equal to one. In general, the variable x can be any real or complex number, or even an entirely different kind of mathematical object.

Some applications of the exponential function include modeling growth in populations, economic changes, fatigue of materials, and radioactive decay. Most simply, exponential functions multiply at a constant rate. For example, the population of a bacterial culture that doubles every 20 minutes can be expressed (approximately, as this is not really a continuous problem) as an exponential, as can the value of a car that decreases by 10 percent per year.

Using the natural logarithm, one can define more general exponential functions. The function a^x = (e^{\ln a})^x = e^{x \ln a}, defined for all a > 0 and all real numbers x, is called the exponential function with base a. Note that this definition of a^x rests on the previously established existence of the function e^x, defined for all real numbers.

Exponential functions "translate between addition and multiplication," as is expressed in the first three and the fifth of the following exponential laws:
a^0 = 1
a^1 = a
a^{x + y} = a^x a^y
a^{x y} = \left( a^x \right)^y
{1 \over a^x} = \left({1 \over a}\right)^x = a^{-x}
a^x b^x = (a b)^x
These are valid for all positive real numbers a and b and all real numbers x and y. Expressions involving fractions and roots can often be simplified using exponential notation: {1 \over a} = a^{-1} and, for any a > 0, real number b, and integer n > 1: \sqrt[n]{a^b} = \left(\sqrt[n]{a}\right)^b = a^{b/n}.

Formal definition

The exponential function (blue curve), and the sum of the first n+1 terms of the power series on the left (red curve).

The exponential function e^x can be defined in a variety of equivalent ways, as an infinite series. In particular, it may be defined by a power series: e^x = \sum_{n = 0}^{\infty} {x^n \over n!} = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} + \cdots or as the limit of a sequence: e^x = \lim_{n \to \infty} \left( 1 + {x \over n} \right)^n. In these definitions, n!
stands for the factorial of n, and x can be any real number, complex number, element of a Banach algebra (for example, a square matrix), or member of the field of p-adic numbers.

Derivatives and differential equations

The importance of exponential functions in mathematics and the sciences stems mainly from properties of their derivatives. In particular, {d \over dx} e^x = e^x. That is, e^x is its own derivative. Functions of the form Ke^x for constant K are the only functions with that property. (This follows from the Picard-Lindelöf theorem, with y(t) = e^t, y(0) = K and f(t,y(t)) = y(t).) Other ways of saying the same thing include:
• The slope of the graph at any point is the height of the function at that point.
• The rate of increase of the function at x is equal to the value of the function at x.
• The function solves the differential equation y' = y.
• exp is a fixed point of derivative as a functional.
In fact, many differential equations give rise to exponential functions, including the Schrödinger equation and Laplace's equation as well as the equations for simple harmonic motion.

For exponential functions with other bases: {d \over dx} a^x = (\ln a) a^x. Thus any exponential function is a constant multiple of its own derivative. If a variable's growth or decay rate is proportional to its size—as is the case in unlimited population growth (see Malthusian catastrophe), continuously compounded interest, or radioactive decay—then the variable can be written as a constant times an exponential function of time. Furthermore, for any differentiable function f(x), we find, by the chain rule: {d \over dx} e^{f(x)} = f'(x)e^{f(x)}.

Double exponential function

The term double exponential function can have two meanings:
• a function with two exponential terms, with different exponents
• a function f(x) = a^{a^x}; this grows even faster than an exponential function; for example, if a = 10: f(−1) ≈ 1.26, f(0) = 10, f(1) = 10^10, f(2) = 10^100 = googol, ..., f(100) = googolplex.
Factorials grow faster than exponential functions, but slower than double-exponential functions. Fermat numbers, generated by F(m) = 2^{2^m} + 1, and double Mersenne numbers, generated by MM(p) = 2^{(2^p-1)}-1, are examples of double exponential functions.
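Both definitions given in the formal-definition section, the power series and the limit of a sequence, are easy to check numerically. A short sketch comparing a truncated series and a large-but-finite n against math.exp (the function names and truncation parameters are arbitrary):

```python
import math

def exp_series(x, terms=30):
    """Truncated power series  sum_{n=0}^{terms-1} x**n / n!"""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x/(n + 1)     # build x**n/n! incrementally
    return total

def exp_limit(x, n=10**6):
    """The sequence (1 + x/n)**n for a large but finite n."""
    return (1.0 + x/n)**n

for x in (-2.0, 0.0, 1.0, 3.5):
    print(f"x = {x:+.1f}:  series = {exp_series(x):.6f}, "
          f"limit = {exp_limit(x):.6f}, math.exp = {math.exp(x):.6f}")
```

The series converges very quickly (30 terms is already overkill for these arguments), while the sequence (1 + x/n)^n approaches e^x only slowly in n, which is why the power series is the form normally used in practice.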
Monday, May 29, 2006

Non-Relativistic QCD

This is another installment in our series about fermions on the lattice. In the previous posts in this series we had looked at various lattice discretisations of the continuum Dirac action, and how they dealt with the problem of doublers posed by the Nielsen-Ninomiya theorem. As it turned out, one of the main difficulties in this was maintaining chiral symmetry, which is important in the limit of vanishing quark mass. But what about the opposite limit -- the limit of infinite quark mass? As it turns out, that limit is also difficult to handle, but for entirely different reasons: The correlation functions, from which the properties of bound states are extracted, show an exponential decay of the form $$C(T,0)\sim e^{-maT}$$, where $$T$$ is the number of timesteps, and $$ma$$ is the product of the state's mass and the lattice spacing. Now for a heavy quark, e.g. a bottom, and the lattice spacings that are feasible with the biggest and fastest computers in existence today, $$ma\approx 2$$, which means that the correlation functions for an $$\Upsilon$$ will decay like $$e^{-4T}$$, which is way too fast to extract a meaningful signal. (Making the lattice spacing smaller is so hard because in order to fill the same physical volume you need to increase the number of lattice points accordingly, which requires a large increase in computing power.)

Fortunately, in the case of heavy quark systems the kinetic energies of the heavy quarks are small compared to their rest masses, as evidenced by the relatively small splittings between the ground and excited states of heavy $$Q\bar{Q}$$ mesons. This means that the heavy quarks are moving at non-relativistic velocities $$v<<c$$ and can hence be well described by a Schrödinger equation instead of the full Dirac equation after integrating out the modes with energies of the order of $$E\gtrsim M$$. The corresponding effective field theory is known as Non-Relativistic QCD (NRQCD) and can be schematically written using the Lagrangian $$\mathcal{L} = \psi^\dag \left(\Delta_4 - H\right)\psi$$ where $\psi$ is a non-relativistic two-component Pauli spinor and the Hamiltonian is $$H = - \frac{\bm{\Delta}^2}{2M} + \textrm{(relativistic and other corrections)}$$ In actual practice, this is not a useful way to write things, since it is numerically unstable for $$Ma<3$$; instead one uses an action that looks like $$\mathcal{L} = \psi^\dag\psi - \psi^\dag\left( 1 - \frac{a\delta H}{2} \right) \left( 1 - \frac{aH_0}{2n} \right)^n U_4^\dagger\left( 1 - \frac{aH_0}{2n} \right)^n \left( 1 - \frac{a \delta H}{2} \right)\psi$$ where $$H_0 = - \frac{\bm{\Delta}^2}{2M}$$ whereas $$\delta H$$ incorporates the relativistic and other corrections, and $$n\ge 1$$ is a numerical stability parameter that makes the system stable for $$Ma>3/(2n)$$. This complicated form makes NRQCD rather formidable to work with, but it can be and has been successfully used in the description of the $$\Upsilon$$ system and in other contexts. In fact, some of the most precise predictions from lattice QCD rely on NRQCD for the description of heavy quarks. It should be noted that the covariant derivatives in NRQCD are nearest-neighbour differences -- the reasons for having to take symmetric derivatives don't apply in the non-relativistic case; hence there are no doublers in NRQCD.

Wednesday, May 24, 2006

Calling lattice bloggers!

As pointed out in Matthew's last post, so far this is the world's first (and only, hence best) group blog devoted to Lattice QCD.
Unfortunately, Matthew's new job does not leave him too much time for blogging; therefore I'm running this blog all alone at the moment, which leads to the relatively low activity seen in recent weeks. So I was wondering if there are any other lattice people out there who would like to join this blog and post here about their research work, the most recent papers on the arXiv or in the journals, interesting developments in the field, science in the news, and any other matters appropriate for a physics blog (as opposed to a mere physicists' blog). It would be great if this blog saw a little more activity! Tuesday, May 23, 2006 Physics fun with sunglasses Yesterday was a public holiday (Victoria Day) in Canada, and (as opposed to last year, when I went to my windowless office at the university in blissful ignorance of the Canadian holiday schedule, wondered why it was so empty and the lights on the corridors were off, and only figured it out when I was unable to obtain any lunch in the food court) I got to enjoy the sunshine on a lovely day. I had completely forgotten how much fun sunglasses could be: I have these sunglass things (I don't really know what the technical term for them is) that can be clipped to my glasses to turn them into sunglasses; what makes them so much fun is that they really are nothing but polarisation filters! Of course polarisation filters make great sunglasses because the sunlight is unpolarised, and because the polarisation filter does not introduce a colour bias like an old-fashioned green or brown filter would. But as everybody remembers from their undergraduate optics course, light reflected from surfaces is partially polarised, and the same is true for scattered light. Therefore, when wearing your polarisation filters/sunglasses, the brightness of the road surface and of the blue sky will vary as you tilt your head towards the right or the left, which is quite fascinating. Unfortunately, other people will probably consider you to be crazy if they see you tilting your head from side to side while stepping forward and backward trying to determine the Brewster angle, or turning around your own axis trying to precisely locate the spot of maximal polarisation in the sky (which is how bees detect the direction towards the sun even if the sun itself is behind a cloud) -- but a real physicist shouldn't mind, right? So I got to feel like an experimentalist for a little while, while also taking a pleasant walk in the park, sipping lime juice on a terrace above Wascana Lake and generally enjoying myself, all thanks to great weather and Her Majesty's official birthday in Canada: God save the Queen! Oh, and of course those sunglasses are real fun to use with LCDs, too... Thursday, May 18, 2006 Analytical (3+1)d Yang-Mills and ontology A little while ago, there were two papers by Leigh, Minic and Yelnikov, in which they expanded on the previous work done by Karabali, Kim and Nair towards an analytical solution for (2+1)-dimensional pure Yang-Mills theory. By re-expressing the theory in terms of appropriate variables, they were able to find an ansatz for the vacuum wavefunctional in the Schrödinger picture which they could solve analytically, enabling them to find the spectrum of glueball masses. But can the same be done for the physical case of (3+1) dimensions? In this paper, Freidel, Leigh and Minic seem to say "probably". 
Their generalisation to (3+1) dimensions is based on the idea of "corner variables", which are essentially untraced Wilson loops lying within the coordinate planes which go through the point at infinity. If the theory is expressed in terms of these, there are a lot of formal algebraic analogies with the (2+1)-dimensional case, which renders them hopeful that it may be possible to treat the (3+1)-dimensional theory in an analogous fashion. In this case the only problem left to solve would be to determine the kernel appearing in the ansatz for the wavefunctional. There seems, however, to be a very important difference between the (2+1)d and (3+1)d cases, which they also mention but appear to consider as a relatively minor inconvenience that will be worked out: in (2+1) dimensions, the gauge coupling has a positive mass dimension: [g32]=[Mass], so the generation of a mass gap is expected on dimensional grounds just from looking at the Lagrangian, and it is even possible to compute the mass gap semi-perturbatively using self-consistent approximations. In (3+1) dimensions, there is no dimensionfull parameter in the Yang-Mills Lagrangian, so the existence of a mass gap is really an unexpected surprise. Of course an arbitrary mass scale will be introduced by regularisation, but even if this mass scale cancels from all mass ratios (as Freidel appear to assert it will), its arbitrariness still means that the overall mass scale of the theory will remain completely undetermined by the kind of analysis they propose. I am not sure if this can be a consistent situation. The corner variables they use reminded me of a talk by the philosopher Holger Lyre given at a physics conference in Berlin in 2005. He discussed the Aharonov-Bohm effect and exhibited three possible ways of interpreting electrodynamics ontologically, which he called the A-, B- and C-interpretations. In the A-interpretation, the gauge potential A is assumed to be a real physical field: that is probably what most working physicists would reply when asked for the first time, and it has the advantage of making the locality of the interaction explicit; on the other hand, how can a quantity that depends on an arbitrary gauge choice be physically real? In the B-interpretation, the field strength B (and E) is considered to be physically real; this means physical reality is gauge-invariant, as it should be, but the interaction with matter becomes maximally nonlocal, which is very bad. In the C-interpretation, finally, the holonomies (C is for curves) of the gauge connection are taken to be the only physically real part of the theory: this leads to gauge-invariance and a form of locality (not a point interaction, but a Nahewirkungsprinzip). Ultimately, the C-interpretation would therefore appear to be the most palatable ontology of gauge theories. Finding a quantum formulation of gauge theories in the continuum that contains only Wilson loops as variables would be very desirable from this philosophical point of view alone, even if it does not lead to an analytical solution. Friday, May 05, 2006 Language-dependent spectra I just noticed that the sequence of colours in the visible electromagnetic spectrum seems to be named differently in English and German. 
In English, it generally appears to be red/orange/yellow/green/blue/indigo/violet (as in the mnemonic "Richard of York gave battle in vain."), whereas in German, it appears to be red/orange/yellow/light-green/dark-green/blue/violet (at least that is how I remember learning it in elementary school). Now I wonder what the basic colours of the visible spectrum are called in other languages. In particular, I suppose they are rather different in languages that divide parts of the colour space differently anyway (such as, I believe, Gaelic and Russian, and probably lots of non-Indoeuropean languages). Does anybody have examples of how the spectrum is "different" in other languages? Update: As a clarification: German-speakers don't call blue "dark-green" -- it's just that the conventional rendition of the colour spectrum in German splits the green part into two colour bands, whereas the English one does the same to the blue part. And a Franco-Canadian told me that in (Canadian) French it just is red/orange/yellow/green/blue/violet (six colours only).
Writer's Help:Music & String Theory

1. Dec 4, 2011 #1
Hi, All: I am a fiction writer and am interested in writing a short story that ties actual music with physicists' string theory. I have (very) rudimentary knowledge based on layman readings and shows like Brian Greene's TV series. If I have it correct, at the very core of everything are tiny vibrating strings. The strings vibrate in different ways to make up you, me, the chair on which I am sitting, the Universe. I know that there is no actual connection b/t music and string theory, and music is just used as a metaphor in explanations, but I would like to explore a hypothetical relationship for science fiction purposes. Does anyone have ideas for how to make the connection sensical?

3. Dec 4, 2011 #2
Music is controlled vibrations. Controlled vibrations of, in our case, air. It could be vibrations of any media.

4. Dec 4, 2011 #3
I know zero about string theory but ordinary quantum mechanics has many close parallels to musical instruments. I can tell you more if you like.

5. Dec 4, 2011 #4
Please do tell me more. Aside from a thimbleful of knowledge about waves & harmonics, I am in the dark. Thanks in advance for your response.

6. Dec 5, 2011 #5
Welcome to PF! You're right, there is no direct connection between science and music, and I must add that string theory is still under development; I'm not sure it can make a coherent picture of the chair on which I am sitting (yet). :wink: Besides this, it's of course very interesting. Einstein played the violin, and said that "life without playing music is inconceivable to me" [...] "I am happy because I want nothing from anyone. I do not care for money. Decorations, titles, or distinctions mean nothing to me. I do not crave praise. The only thing that gives me pleasure, apart from my work, my violin, and my sailboat, is the appreciation of my fellow workers". http://upload.wikimedia.org/wikipedia/commons/thumb/1/13/Albert_Einstein_violin.jpg/300px-Albert_Einstein_violin.jpg Max Planck is another among many physicists that have been fascinated by music, and found relaxation and fulfillment in playing. Max Planck was a highly gifted pianist, cellist, composer and singer. You don't need string theory to make working parallels to music. A vibrating guitar string could be described using classical Newtonian mechanics: You can also find mathematical similarities in QM and the fundamental Schrödinger equation for the wavefunction. Here are graphical representations for the first eight bound eigenstates: And in 'action', you could imagine that this is not that far from a vibrating 'guitar string': Two stationary states and a superposition state at the bottom. And in fact, Hilbert spaces (utilized in QM) can be used to study the harmonics of vibrating strings. And spherical harmonics are important in many theoretical and practical applications, particularly in the computation of atomic orbital electron configurations in QM. Dmitri Tymoczko at Princeton University has taken these relations one step further and applied musical harmonies to a 2D configuration space and developed software for this analysis. Chord geometries for Chopin: Here is more info on this research: The Geometry of Music - TIME Magazine Science What Music and String Theory Have in Common The Geometry of Musical Chords - AAAS Science Dmitri Tymoczko - Princeton University Good luck! :smile:

7.
Dec 5, 2011 #6 Quantum mechanics is about standing waves, just like in a musical instrument. Blow into a flute and you get a note that is a standing wave, blow harder and the octave. It depends on the length of the flute. In QM the length is the distance between the matter that releases the energy and the matter that absorbs it. If you have more energy then the frequency doubles, just like the flute. The difference in energy is the quantum that everyone talks about. Or guitar strings. Pluck it and you get a standing wave. It is a combination of the root frequency and the harmonics, which are double, triple, quadruple, etc. of the root frequency. One string gets all those tones mixed together. In QM you usually get that kind of mixture. Get a guitar and play a note on an open string, then touch that string lightly in the middle(12th fret). The frequency will double. Do it again and touch the string at the 7th fret. The frequency will triple. 1,2,3,4,etc. So in QM if you have an oven then you can think of any pair of atoms in the walls of the oven as being connected by an imaginary string like that, in which there is a standing wave of energy. Electrons and other massive particles behave this way too: the electron is more likely to appear where there imaginary wave is moving the most. 8. Dec 5, 2011 #7 Thank you Patrick and DA! This is a treasure trove for me. Now I've got to ruminate on what I've learned (and read/watch several more times), so that I can ask more specific questions. Although it's fiction it's important to me that what I write is based on truth (or, as the case may be, current theory), so that when I stretch and bend it to fit the plot it at least is grounded in something other than my imagination. And now, to make things interesting, is another element I wish to include: brain waves. I find it interesting that when musicians play together, their brain waves synchronize (at least for this study). Kind Regards, Last edited: Dec 5, 2011 9. Dec 5, 2011 #8 User Avatar Gold Member You are welcome! :wink: Roger Penrose have argued that consciousness is the result of quantum gravity effects in microtubules, but this idea was refuted by Max Tegmark, so if I were you, I go easy on the "brain waves", i.e. if you want to be based on mainstream science... 10. Dec 6, 2011 #9 Brain waves are mainstream science. Attach electrodes to the skull and observe. 11. Dec 6, 2011 #10 User Avatar Gold Member Of course you are right, it was just to make clear to NM that we are not talking "non-local QM waves". It’s an interesting study, but to me, it’s absolutely clear that what put the musicians in sync are primarily sound waves. Put two people in sound isolated boxes, and tell them to keep a steady beat for 10 min. Only most gifted pros could keep this beat precisely (to a metronome or in sync with the other person). 12. Dec 6, 2011 #11 DA- I'm familiar with what you are talking about (i.e., the "Quantum Mind"), but Patrick is correct in that my interest lies in mainstream brain research (brain waves; mirror neurons). I'm sure this story doesn't sound particularly exciting... brain waves, music and physics. (Sounds more like the kind of college torture lecture given at 8AM on a Monday). If you are at all interested, the inspiration for the short story is Mark Oliver Everett, Hugh's son, who is a musician. Speaking of Hugh... I hope someone can help me grasp the difference between Many Worlds and the Multiverse or Multiple Universes theories. 
It seems that in popular culture they are often used interchangeably, but from what I've read they are not the same. MW, as I understand it, suggests that all possible outcomes exist simultaneously (the proverbial cat is alive AND dead), and that it refutes the Copenhagen Interpretation that the wave function collapses when the observer peeks into the box, and that what s/he sees is the one reality (a dead or alive cat). Am I following? The MW also allows for worlds to interfere with each other; that is, they can merge again at some point in the future, though the memories must be retained from only one of the worlds (how it's determined which memory rules, I have no clue!). The Multiverse theories are more about bubble universes; how there are many out there that may or may not contain copies of us, or have the same physics. But they are completely separate -- there is no interaction at all between them, no merging, etc. I also don't understand what constitutes a "quantum event" that would cause the world to split in Everett's model. And, if the world branches when I, say, take a right rather than a left in the road, it must also split based on what else is going on, right? So if someone I am with falls asleep while I am turning left, there is another world where that person stays awake when I turn left but falls asleep when I turn right. That's a heck of a lot of branching out! It makes my brain explode. I wish I could speak to myself in the world where I understand this stuff; it would make things a lot easier! PS BTW, Patrick's response almost made me spit out my coffee this morning -- VERY funny -- though I am sure DA is well-aware of skulls and wires.
Interpreting the Quantum World I: Measurement & Non-Locality In previous posts Aron introduced us to the strange, yet compelling world of quantum mechanics and its radical departures from our everyday experience. We saw that the classical world we grew up with, where matter is composed of solid particles governed by strictly deterministic equations of state and motion, is in fact somewhat “fuzzy.” The atoms, molecules, and subatomic particles in the brightly colored illustrations and stick models of our childhood chemistry sets and schoolbooks are actually probabilistic fields that somehow acquire the properties we find in them when they’re observed. Even a particle’s location is not well-defined until we see it here, and not there. Furthermore, because they are ultimately fields, they behave in ways the little hard “marbles” of classical systems cannot, leading to all sort of paradoxes. Physicists, philosophers, and theologians alike have spent nearly a century trying to understand these paradoxes. In this series of posts, we’re going to explore what they tell us about the universe, and our place in it. To quickly recap earlier posts, in quantum mechanics (QM) the fundamental building block of matter is a complex-valued wave function \Psi whose squared amplitude is a real-valued number that gives the probability density of observing a particle/s in any given state. \Psi is most commonly given as a function of the locations of its constituent particles, \Psi\left ( \vec{r_{1}}, \vec{r_{2}}... \vec{r_{n}} \right ), or their momentums, \Psi\left ( \vec{p_{1}}, \vec{p_{2}}... \vec{p_{n}} \right ) (but not both, which as we will see, is important), but will also include any of the system’s other variables we wish to characterize (e.g. spin states). The range of possible configurations these variables span is known as the system’s Hilbert space. As the system evolves, its wave function wanders through this space exploring its myriad probabilistic possibilities. The time evolution of its journey is derived from its total energy in a manner directly analogous to the Hamiltonian formalism of classical mechanics, resulting in the well-known time-dependent Schrödinger equation. Because \left | \Psi \right |^{2} is a probability density, its integral over all of the system’s degrees of freedom must equal 1. This irreducibly probabilistic aspect of the wave function is known as the Born Rule (after Max Born who first proposed it), and the mathematical framework that preserves it in QM is known as unitarity. [Fun fact: Pop-singer Olivia Newton John is Born’s granddaughter!] Notice that \Psi is a single complex-valued wave function of the collective states of all its constituent particles. This makes for some radical departures from classical physics. Unlike a system of little hard marbles, it can interfere with itself—not unlike the way the countless harmonics in sound waves give us melodies, harmonies, and the rich tonalities of Miles Davis’ muted trumpet or Jimi Hendrix’s Stratocaster. The history of the universe is a grand symphony—the music of the spheres! Its harmonies also lead to entangled states, in which one part may not be uniquely distinguishable from another. So, it will not generally be true that the wave function of the particle sum is the sum of the individual particle wave functions, \Psi\left ( \vec{r_{1}}, \vec{r_{2}}... \vec{r_{n}} \right ) \neq \Psi\left ( \vec{r_{1}} \right )\Psi\left ( \vec{r_{2}} \right )... 
\Psi\left ( \vec{r_{n}} \right ) until the symphony progresses to a point where individual particle histories decohere enough to be distinguished from each other—melodies instead of harmonies. Another consequence of this wave-like behavior is that position and momentum can be converted into each other with a mathematical operation known as a Fourier transform. As a result, the Hilbert space may be specified in terms of position or momentum, but not both, which leads to the famous Heisenberg Uncertainty principle, \Delta x\Delta p \geqslant \hbar/2 where \hbar is the reduced Planck constant. It’s important to note that this uncertainty is not epistemic—it’s an unavoidable consequence of wave-like behavior. When I was first taught the Uncertainty Principle in my undergraduate Chemistry series, it was derived by modeling particles as tiny pool ball “wave packets” whose locations couldn’t be observed by bouncing a tiny cue-ball photon off them without batting them into left field with a momentum we couldn’t simultaneously see. As it happens, this approach does work, and is perhaps easier for novice physics and chemistry students to wrap their heads around. But unfortunately, it paints a completely wrong-heading picture of the underlying reality. We can pin down the exact location of a particle, but in so doing we aren’t simply batting it away—we’re destroying whatever information about momentum it originally had, rendering it completely ambiguous, and vice versa (in the quantum realm paired variables that are related to each other like this are said to be canonical). The symphony is, to some extent, irreducibly fuzzy! So… the unfolding story of the universe is a grand symphony of probability amplitudes exploring their Hilbert space worlds along deterministic paths, often in entangled states where some of their parts aren’t entirely distinct from each other, and acquiring whatever properties we find them to have only when they’re measured, many of which cannot simultaneously have exact values even in principle. Strange stuff to say the least! But the story doesn’t end there. Before we can decipher what it all means (or, I should say, get as close as doing so as we ever will) there are two more subtleties to this bizarre quantum world we still need to unpack… measurement and non-locality. The first thing we need to wrap our heads around is observation, or in quantum parlance, measurement. In classical systems matter inherently possesses the properties that it does, and we discover what those properties are when we observe them. My sparkling water objectively exists in a red glass located about one foot to the right of my keyboard, and I learned this by looking at it (and roughly measuring the distance with my thumb and fingers). In the quantum realm things are messier. My glass of water is really a bundle of probabilistic particle states that in some sense acquired its redness, location, and other properties by the very act of my looking at it and touching it. That’s not to say that it doesn’t exist when I’m not doing that, only that its existence and nature aren’t entirely independent of me. How does this work? In quantum formalism, the act of observing a system is described by mathematical objects known as operators. 
You can think of an operator as a tool that changes one function into another one in a specific way—like say, “take the derivative and multiply by ten.” The act of measuring some property A (like, say, the weight or color of my water glass) will apply an associated operator \hat A to its initial wave function state \Psi_{i} and change it to some final state \Psi_{f}, \hat A \Psi_{i} = \Psi_{f} For every such operator, there will be one or more states \Psi_{i} could be in at the time of this measurement for which \hat A would end up changing its magnitude but not its direction, \begin{bmatrix} \hat A \Psi_{1} = a_{1}\Psi_{1}\\ \hat A \Psi_{2} = a_{2}\Psi_{2}\\ ...\\ \hat A \Psi_{n} = a_{n}\Psi_{n} \end{bmatrix} These states are called eigenvectors, and the constants a_{n} associated with them are the values of A we would measure if \Psi is in any of these states when we observe it. Together, they define a coordinate system associated with A in the Hilbert space that \Psi can be specified in at any given moment in its history. If \Psi_{i} is not in one of these states when we measure A, doing so will force it into one of them. That is, \hat A \Psi_{i} \rightarrow \Psi_{n} and a_{n} will be the value we end up with. The projection of \Psi_{i} on any of the n axes gives the probability amplitude that measuring A will put the system into that state with the associated eigenvalue being what we measure, P(a_{n}) = \left | \Psi_{i} \cdot \Psi_{n} \right |^{2} So… per the Schrödinger equation, our wave function skips along its merry, deterministic way through a Hilbert space of unitary probabilistic states. Following a convention used by Penrose (2016), let’s designate this part of the universe’s evolution as \hat U. All progresses nicely, until we decide to measure something—location, momentum, spin state, etc. When we do, our wave function abruptly (some would even be tempted to say magically) jumps to a different track and spits out whatever value we observe, after which \hat U starts over again in the new track. This event—let’s call it \hat M—has nothing whatsoever to do with the wave function itself. The tracks it jumps to are determined by whatever properties we observe, and the outcome of these jumps are irreducibly indeterminate. We cannot say ahead of time which track we’ll end up on even in principle. The best we can do is state that some property A has such and such probability of knocking \Psi into this or that state and returning its associated value. When this happens, the wave function is said to have “collapsed.” [Collapsed is in quotes here for a reason… as we shall see, not all interpretations of quantum mechanics accept that this is what actually happens!] It’s often said that quantum mechanics only applies to the subatomic world, but on the macroscopic scale of our experience classical behavior reigns. For the most part this is true. But… as we’ve seen, \Psi is a wave function, and waves are spread out in space. Subatomic particles are only tiny when we observe them to be located somewhere. So, if \hat M involves a discrete collapse, it happens everywhere at once, even over distances that according to special relativity cannot communicate with each other—what some have referred to as “spooky action at a distance.” This isn’t mere speculation, nor a problem with our methods—it can be observed. Consider two electrons in a paired state with zero total spin. 
Such states (which are known as singlets) may be bound or unbound, but once formed they will conserve whatever spin state they originated with. In this case, since the electron cannot have zero spin, the paired electrons would have to preserve antiparallel spins that cancel each other. If one were observed to have a spin of, say, +1/2 about a given axis, the other would necessarily have a spin of -1/2. Suppose we prepared such a state unbound, and sent the two electrons off in opposite directions. As we’ve seen, until the spin state of one of them is observed, neither will individually be in any particular spin state. The wave function will be an entangled state of two possible outcomes, +/- and -/+ about any axis. Once we observe one of them and find it in, say, a “spin-up” state (+1/2 about a vertical axis), the wave function will have collapsed to a state in which the other must be “spin-down” (-1/2), and that will be what we find if it’s observed a split second later. But what would happen if the two measurements were made over a distance too large for a light signal to travel from the first observation point to the second one during the time delay between the two measurements? Special relativity tells us that no signal can communicate faster than the speed of light, so how would the second electron know that it was supposed to be in a spin-down state? Light travels 11.8 inches in one nanosecond, so it’s well within existing microcircuit technology to test this, and it has been done on many occasions. The result…? The second electron is always found in a spin state opposite that of the first. Somehow, our second electron knows what happened to its partner… instantaneously! If so, this raises some issues. Traditional QM asserts that the wave function gives us a complete description of a system’s physical reality, and the properties we observe it to have are instantiated when we see them. At this point we might ask ourselves two questions: 1)  How do we really know that prior to our observing it, the wave function truly is in an entangled state of two as-yet unrealized outcomes? What if it’s just probabilistic scaffolding we use to cover our lack of understanding of some deeper determinism not captured by our current QM formalism? 2)  What if the unobserved electron actually had a spin-up property that we simply hadn’t learned about yet, and would’ve had it whether it was ever observed or not (a stance known as counterfactual definiteness)? How do we know that one or more “hidden” variables of some sort hadn’t been involved in our singlet’s creation, and sent the two electrons off with spin state box lunches ready for us to open without violating special relativity (a stance known as local realism)? Together, these comprise what’s known as local realism, or what physicist John Bell referred to as the “Bertlmann’s socks” view (after Reinhold Bertlmann, a colleague of his at CERN). Bertlmann was known for never wearing matching pairs of socks to work, so it was all but guaranteed that if one could observe one of his socks, the other would be found to be differently colored. But unlike our collapsed electron singlet state, this was because Bertlmann had set that state up ahead of time when he got dressed… a “hidden variable” one wouldn’t be privy to unless they shared a flat with him. His socks would already have been mismatched when we discovered them to be, so no “spooky action at a distance” would be needed to create that difference when we first saw them.
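For readers who like to see the arithmetic, here is a minimal numerical sketch of these singlet correlations. It is only an illustration (standard Pauli-matrix algebra in Python with numpy; the measurement axes are arbitrary choices), not anything from the original post: it confirms that measurements along the same axis always come out opposite, and that the correlation at a relative angle θ is -cos θ.

import numpy as np

# Pauli matrices and the spin operator along a unit vector in the x-z plane
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
def spin_along(theta):           # measurement axis tilted by theta from the z axis
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|up,down> - |down,up>)/sqrt(2) in the basis {uu, ud, du, dd}
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def correlation(theta_a, theta_b):
    """Quantum expectation value of (sigma_a x sigma_b) in the singlet state."""
    op = np.kron(spin_along(theta_a), spin_along(theta_b))
    return np.real(singlet.conj() @ op @ singlet)

print(correlation(0.0, 0.0))          # -1: same axis, always opposite outcomes
print(correlation(0.0, np.pi / 4))    # -cos(45 deg), about -0.707
print(correlation(0.0, np.pi / 2))    # 0: orthogonal axes, uncorrelated

The -cos θ dependence produced here is exactly what Bell-type experiments compare against the weaker correlations allowed by pre-assigned "socks".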
In 1964 Bell proposed a way to test this against the entangled states of QM. Spin state can only be observed about one axis at a time. Our experiment can look for +/- states about any axis, but not about two axes together. If an observer “Alice” finds one of the electrons in a spin-up state, the second electron will be in a spin-down state. What would happen if another observer “Bob” then measured its spin state about an axis at, say, a 45-deg. angle to vertical? The projection of the spin-down wave function on the eigenvector coordinate system of Bob’s measurement will translate into probabilities of observing + or – states in that plane. Bell produced a set of inequalities bearing his name which showed that if the electrons in our singlet state had in fact been dressed in different colored socks from the start, experiments like this would yield outcomes that differ statistically from those predicted by traditional QM. This too has been tested many times, and the results have consistently favored the predictions of QM, leaving us with three options: a)  Local realism is not valid in QM. Particles do not inherently possess properties prior to our observing them, and indeterminacy and/or some degree of “spooky action at a distance” cannot be fully exorcised from \hat M. b)  Our understanding of QM is incomplete. Particles do possess properties (e.g. spin, location, or momentum) whether we observe them or not (i.e., counterfactuals about measurement outcomes exist), but our understanding of \hat U and \hat M doesn’t fully reflect the local realism that determines them. c)  QM is complete, and the universe is both deterministic and locally real without the need for hidden variables, but counterfactual definiteness is an ill-formed concept (as in the "Many Worlds Interpretation" for instance). Nature seems to be telling us that we can’t have our classical cake and eat it. There’s only room on the bus for one of these alternatives. Several possible loopholes have been suggested to exist in Bell’s inequalities through which an underlying locally real mechanics might slip through. These have led to ever more sophisticated experiments to close them, which continue to this day. So far, the traditional QM framework has survived every attempt to up the ante, painting Bertlmann’s socks into an ever-shrinking corner. In 1966, Bell, and independently in 1967, Simon Kochen and Ernst Specker, proved what has since come to be known as the Kochen-Specker Theorem, which tightens the noose around hidden variables even further. What they showed was that regardless of non-locality, hidden variables cannot account for indeterminacy in QM unless they’re contextual. Essentially, this all but dooms counterfactual definiteness in \hat M. There are ways around this (as there always are if one is willing to go far enough to make a point about something). The possibility of “modal” interpretations of QM has been floated, as has the notion of a “subquantum” realm where all of this is worked out. But these are becoming increasingly convoluted, and poised for Occam’s ever-present razor. As of this writing, hidden variables theories aren’t quite dead yet, but they are in a medically induced coma. In case things aren’t weird enough for you yet, note that a wave function collapse over spacelike distances raises the specter of the relativity of simultaneity. Per special relativity, over such distances the Lorentz boost blurs the distinction between past and future.
In situations like these it’s unclear whether the wave function was collapsed by the first observation or the second one, because which one is in the future of the other is a matter of which inertial reference frame one is viewing the experiment from. Considering that you and I are many-body wave functions, anything that affects us now, like say, stubbing a toe, collapses our wave function everywhere at once. As such, strange as it may sound, in a very real sense it can be said that a short while ago your head experienced a change because you stubbed your toe now, not back then. And… It will experience a change shortly because you did as well. Which of these statements is correct depends only on the frame of reference from which the toe-stubbing event is viewed. It’s important to note that this has nothing to do with the propagation of information along our nerves—it’s a consequence of the fact that as “living wave functions”, our bodies are non-locally spread out across space-time to an extent that slightly blurs the meaning of “now”.  Of course, the elapsed times associated with the size of our bodies are too small to be detected, but the basic principle remains. Putting it all together Whew… that was a lot of unpacking! And the world makes even less sense now than it did when we started. Einstein once said that he wanted to know God’s thoughts, the rest were just details. Well it seems the mind of God is more inscrutable than we ever imagined! But now we have the tools we need to begin exploring some of the way His thoughts have been written into the fabric of creation. Our mission, should we choose to accept it, is to address the following; 1)  What is this thing we call a wave function? Is it ontologically real, or just mathematical scaffolding we use to make sense of things we don’t yet understand? 2)  What really happens when a deterministic, well-behaved \hat U symphony runs headlong into a seemingly abrupt, non-deterministic \hat M event? How do we get them to share their toys and play nicely with each other? 3)  If counterfactual definiteness is an ill-formed concept and every part of the wave function is equally real, why do our observations always leave us with only one experienced outcome? Why don’t we experience entangled realities, or even multiple realities? In the next installment in this series we’ll delve into a few of the answers that have been proposed so far. The best is yet to come, so stay tuned! Penrose, R. (2016). Fashion, faith, and fantasy in the new physics of the universe. Princeton University Press, Sept. 13, 2016. ISBN: 0691178534; ASIN: B01AMPQTRU. Available online at Accessed June 11, 2017. Posted in Metaphysics, Physics | 9 Comments Scott Church guest blogging In order to cover for my recent wrist injury (which is getting better, but slowly, thanks for asking), I've invited St. Scott Church to fill in a bit. Scott is a frequent commenter; going out of his way to be helpful when explaining things to non-experts, but with little taste for nonsense from those who ought to know better.  He got an M.S. in Applied Physics from the University of Washington in 1988, and now works as a photographer in Seattle.  Here is his personal website. Scott will be writing a series of at least 2 posts on the Interpretation of Quantum Mechanics (and maybe later on, some other topics). Also, to all those who have left thoughtful questions, sorry that I can't attend to them all right now.  But I still might respond to a few of them.  
(I will of course continue to read and moderate the comments.) Posted in Blog | 1 Comment Spam filter problems For the past couple months my spam filter (Akismet) has falsely identified a rather large number of legitimate comments as spam. (For those of you who arrived on the internet yesterday, "spam" is off-topic comments trying to get people to click on links to buy things.  Mostly it is left by "bots" that automatically scan the internet.   When I installed a second layer of protection called "Cookies for Comments" a few months ago, Akismet was processing over a million spam comments a month, causing a slowdown on the server!  The vast majority of these were caught and removed by the filter, but sometimes it gets it wrong and lets spam through (a "false negative") or rejects legit comments (a "false positive"). I'm periodically checking the spam filter to rescue these false positives (just did 2 today), but you can help me out by doing the following: • Send me an email if you try to leave a legitimate comment and it does not appear on the website within a few comments.  You can find a working email for me on my personal website, which is linked to in the bar at the top of the page. • If convenient, go ahead and include a copy of your comment in the email.  (Generally it's a good idea to save long comments on your home computer before submitting, but if you didn't do this, you can often reclaim it by pressing the `back' button on your browser.) My spam filter keeps a copy of all comments flagged as spam for 15 days, so I probably don't actually need this, but rarely there are other technical problems that cause comments to disappear. • Please don't take it personally if your comment doesn't appear.  The spam filtering is done automatically by a hidden algorithm, and I don't have anything to do with it! If you are an insecure person, please don't waste time worrying that maybe you stepped over an invisible line and accidentally insulted me, and therefore I blocked your comments without telling you.   If you are a flesh-and-blood human being, your comment was probably legitimate. While I do occasionally remove "by hand" comments that violate the rules, I generally try to notify the person by email, or in that comments section, except for the worst offenders.  So unless you went on a blasphemous tirade or are an obviously-insulting troll, that's probably not you!  (And even if that is you, you are certainly entitled to respectfully ask by email—once, anyway—for an explanation of why your comment was deleted.) • All this assumes you left me a real email address.  Of course, if you violated the rules by leaving a fake email address, then you might not receive my explanation.  In that case, you deserve what you get, and I may also delete your comment!  (But sometimes, in the case of commenters otherwise engaging in good faith, I have looked the other way on this issue, in order to show mercy to the weak.) Obviously, I promise not to give your email address to the spammers, or otherwise share this information without your permission! • It is also necessary for your web browser to accept "cookies" in order for you to successfully leave a comment.  If this happens to you, you will be redirected to a page with an explanation & instructions.  If you are wrongly redirected to this page, please send me an email saying so.  Also, if for some reason you don't want to accept cookies from other websites, you can add an exception for Undivided Looking. 
Christ is risen from the dead, Trampling down death by death, And upon those in the tombs Bestowing life! -Paschal troparion Posted in Blog | 2 Comments Christ is Risen! Alleluia!  Christ is Risen! The Resurrection of Christ by St. Raphael.  Tapestry version in the Vatican museum, actually. I don't have a lot more than that to say right now.  But it seemed relevant, so I thought I'd post it.  If you want to read more about the significance of this event, click here. Posted in Theology | 14 Comments Remember you are dust... I haven't been able to write much recently due to a recurrence of tendonitis in my wrist, a repetitive stress injury from writing and typing too much.  I expect this to be temporary; in the sense that it has always gone away before, with enough rest and ice.  (Which is like the smoker who said to an ex-smoker, you've only quit once; I've quit hundreds of times!) On a brighter note, I'm also being distracted by job interviews for tenure-track faculty positions at two excellent UC schools (Berkeley & Santa Barbara).  I will not call these permanent positions since such permanence is to be found only in Heaven: life is short.  But it would still be nice to settle down for a few decades... Anyway, these are very exciting places for physics research, and I would be extremely pleased to get an offer from either of them! I've still got plenty of the content you love planned, once I'm past these issues!  There's a series on "Comparing Religions" that's almost done, so I might be able to get that out without too much more typing. I apologize to anyone whose questions I haven't answered; I'll email you if I ever get around to it.  But it was nice to see the conversation continue for a while without my needing to continually stoke it.  I will continue to read and moderate the discussion, so you are free to continue talking amongst yourselves... I...I...I...I...I...  Is death really necessary, just for us to come to an end of our continual self-absorption? Posted in Blog | 8 Comments
I'm not sure if this is the right place to ask a question about the Schrödinger equation, but I'll take my chances anyway. Basically, I would like to know how one can set up a potential function that represents a double-slit barrier and then solve the Schrödinger equation for this potential. Of course, according to classical optics, we will obtain an interference pattern, but it would be nice to see a solution entirely within the quantum-mechanical framework. I see this as a problem in mathematical physics, so hopefully someone could kindly provide me with some references. (migrated from math.stackexchange.com Oct 15 '12 at 18:33) Note that in most languages that use diacritics (including German), the letters with and without diacritics have only a historical connection and are pronounced quite differently nowadays. Thus leaving off the diacritics is about as bad a misspelling as changing the vowels completely; you might as well have written "Schrudinger". :-) If you can't produce letters with diacritics, you can always copy them from the Web, e.g. from the corresponding Wikipedia articles (which you can find by Googling the names without diacritics :-). –  joriki Oct 15 '12 at 16:38 2 Answers I will only sketch out how one would arrive at the actual equation, as I am rather lazy at the moment. First, you will have to decide how many spatial dimensions you want to include: Obviously, a double-slit experiment won’t work in one dimension, so we need at least two (which can then be generalised to three rather easily). You then basically have a two-dimensional scattering problem, for which one would need a certain potential: Assume waves to arrive from $x = -\infty$ parallel to the $x$ axis, to be scattered at a potential $U(x,y)$. $U$ should be $0$ nearly everywhere except for a certain barrier, possibly located around the $y$ axis. Something like this should do: $$U_1(x,y) = \theta(y-y_1) \cdot \theta(x) \cdot \theta(1-x) $$ $$U_2(x,y) = \theta(y_0-y) \cdot \theta(y+y_0) \cdot \theta(x) \cdot \theta(1-x) $$ $$U_3(x,y) = \theta(-y-y_1) \cdot \theta(x) \cdot \theta(1-x) $$ $$U(x,y) = u_0 \cdot \left( U_1(x,y) + U_2(x,y) + U_3(x,y) \right), \quad u_0 \textrm{ large or }\infty, \quad y_{0,1} > 0 $$ $U_1$ is meant to describe a potential of height $u_0$ in the area where $0 < x < 1$ and $y > y_1$ (that is, the upper part of the barrier). $U_2$ describes a similar potential for $0 < x < 1$ and $- y_0 < y < y_0$ (the strip between the two slits), and $U_3$ is similar to $U_1$ in the lower half-plane, for $y < -y_1$. As you can see, this potential lacks basically any symmetry one could remotely hope for. I would try a plane wave ansatz, but it is really not nice. Maybe someone else has a better idea? :) I suggest that you consult the following reference about the quantum potential, in which De Broglie–Bohm theory treats that famous experiment from its quasi-classical point of view, in which the path of the particle is followed in a meaningful way: its behaviour is explained by the fact that the derived potential $Q(\vec{x},t)$ bears directly on the quantum behaviour (of the centre of mass of the particle), independently of the usual $U(\vec{x},t)$. In other words, Bohm observes that the wave function can be written as $\Psi(\vec{x},t)=R(\vec{x},t)\cdot\exp{[\frac{i}{\hbar}\cdot S(\vec{x},t)]}$, where $R\doteq\sqrt{\rho}$ is the square root of the probability density and $S$ is the familiar action.
After introducing $\Psi$ into the Schrödinger equation, one obtains $Q(\vec{x},t)\propto\dfrac{\nabla^2 R}{R}$. In the references of the first link you can find many articles explaining how, when quantum mechanics is viewed in this way, the trajectory of the particle can be followed almost deterministically.
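Neither answer includes an explicit solution, but the Heaviside-step barrier sketched in the first answer is straightforward to put on a grid and propagate numerically. Below is a minimal split-step Fourier integration of the two-dimensional Schrödinger equation for a Gaussian packet hitting such a two-slit barrier; the grid, slit geometry, barrier height, and packet parameters are illustrative choices of mine, not values from the answers, and numpy is assumed.

import numpy as np

# Grid, in units where hbar = m = 1; all sizes are illustrative
N, L = 256, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
y = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")

# Double-slit barrier built from Heaviside steps, as in the first answer:
# a wall of width 1 around x = 0, with openings where y0 < |y| < y1
u0, y0, y1 = 50.0, 1.0, 3.0
wall = np.heaviside(X, 1) * np.heaviside(1 - X, 1)
blocked = 1 - np.heaviside(np.abs(Y) - y0, 0) * np.heaviside(y1 - np.abs(Y), 0)
U = u0 * wall * blocked

# Incoming Gaussian packet moving in +x with mean momentum k0
k0, sigma = 3.0, 2.0
psi = np.exp(-((X + 10)**2 + Y**2) / (4 * sigma**2)) * np.exp(1j * k0 * X)
psi /= np.sqrt(np.sum(np.abs(psi)**2))

# Split-step Fourier propagation: exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2)
kx = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
ky = 2 * np.pi * np.fft.fftfreq(N, d=y[1] - y[0])
KX, KY = np.meshgrid(kx, ky, indexing="ij")
T = 0.5 * (KX**2 + KY**2)
dt, steps = 0.01, 600
half_V = np.exp(-0.5j * U * dt)
kin = np.exp(-1j * T * dt)
for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft2(kin * np.fft.fft2(psi))
    psi = half_V * psi

# Probability that leaked through the slits, and the intensity profile on a
# "screen" a little past the barrier (its y-dependence shows two-slit fringes)
transmitted = np.sum(np.abs(psi[x > 1.5, :])**2)
screen = np.abs(psi[np.searchsorted(x, 8.0), :])**2
print("transmitted fraction:", transmitted)

Plotting the screen array against y should reveal the expected interference fringes behind the barrier; this is only a numerical sketch, not a substitute for the analytic scattering treatment the question asks about.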
Hydrogen atoms under the magnifying glass To describe the microscopic properties of matter and its interaction with the external world, quantum mechanics uses wave functions, whose structure and time dependence are governed by the Schrödinger equation. In atoms, electronic wave functions describe - among other things - charge distributions existing on length-scales that are many orders of magnitude removed from our daily experience. In physics laboratories, experimental observations of charge distributions are usually precluded by the fact that the process of taking a measurement changes a wave function and selects one of its many possible realizations. For this reason, physicists usually know the shape of charge distributions through calculations that are shown in textbooks. That is to say, until now. An international team coordinated by researchers from the Max Born Institute has succeeded in building a microscope that allows magnifying the wave function of excited electronic states of the hydrogen atom by a factor of more than twenty-thousand, leading to a situation where the nodal structure of these electronic states can be visualized on a two-dimensional detector. The results were published in Physical Review Letters and provide the realization of an idea proposed approximately three decades ago. The development of quantum mechanics in the early part of the last century has had a profound influence on the way that scientists understand the world. Quantum mechanics extended the existing worldview based on classical, Newtonian mechanics by providing an alternative description of the micro-scale world, containing numerous elements that cannot be classically intuited, such as wave-particle duality and the importance of interference and entanglement. Central to quantum mechanics is the concept of a wave function that satisfies the time-dependent Schrödinger equation. According to the Copenhagen interpretation, the wave function describes the probability of observing the outcome of measurements that are performed on a quantum mechanical system, such as measurements of the energy of the system or the position or momenta of its constituents. This allows reconciling the occurrence of non-classical phenomena on the micro-scale with manifestations and observations made on the macro-scale, which correspond to viewing one or more of countless realizations allowed for by the wave function. Despite the overwhelming impact on modern electronics and photonics, grasping quantum mechanics and the many possibilities that it describes continues to be intellectually challenging, and has over the years motivated numerous experiments illustrating the intriguing predictions contained in the theory. For example, the 2012 Nobel Prize in Physics was awarded to Haroche and Wineland for their work on the measurement and control of individual quantum systems in quantum non-demolition experiments, paving the way to more accurate optical clocks and, potentially, the future realization of quantum computers. Using short laser pulses, experiments have been performed illustrating how coherent superpositions of quantum mechanical stationary states describe electrons that move on periodic orbits around nuclei. The wave function of each of these electronic stationary states is a standing wave, with a nodal pattern that reflects the quantum numbers of the state.
The observation of such nodal patterns has included the use of scanning tunneling methods on surfaces and recent laser ionization experiments, where electrons were pulled out of and driven back towards their parent atoms and molecules by using an intense laser field, leading to the production of light in the extreme ultra-violet wavelength region that encoded the initial wave function of the atom or molecule at rest. About thirty years ago, Russian theoreticians proposed an alternative experimental method for measuring properties of wave functions. They suggested that experiments ought to be performed studying laser ionization of atomic hydrogen in a static electric field. They predicted that projecting the electrons onto a two-dimensional detector placed perpendicularly to the static electric field would allow the experimental measurement of interference patterns directly reflecting the nodal structure of the wave function. The fact that this is so is due to the special status of hydrogen as nature's only single-electron atom. Due to this circumstance, the hydrogen wave functions can be written as the product of two wave functions that describe how the wave function changes as a function of two so-called "parabolic coordinates", which are linear combinations of the distance of the electron from the H+ nucleus "r", and the displacement of the electron along the electric field axis "z". Importantly, the shape of the two parabolic wave functions is independent of the strength of the static electric field, and therefore stays the same as the electron travels (over a distance of about half a meter, in our experimental realization!!) from the place where the ionization takes place to the two-dimensional detector. To turn this appealing idea into experimental reality was by no means simple. Since hydrogen atoms do not exist as a chemically stable species, they first had to be produced by laser dissociation of a suitable precursor molecule (hydrogen di-sulfide). Next, the hydrogen atoms had to be optically excited to the electronic states of interest, requiring another two precisely tunable laser sources. Finally, once this optical excitation had launched the electrons, a delicate electrostatic lens was needed to magnify the physical dimensions of the wave function to millimeter-scale dimensions where they could be observed with the naked eye on a two-dimensional image intensifier and recorded with a camera system. The main result is shown in the figure below. This figure shows raw camera data for four measurements, where the hydrogen atoms were excited to states with 0, 1, 2 and 3 nodes in the wave function for the ξ = r+z parabolic coordinate. As the experimentally measured projections on the two-dimensional detector show, the nodes can be easily recognized in the measurement. At this point, the experimental arrangement served as a microscope, allowing us to look deep inside the hydrogen atom, with a magnification of approximately a factor of twenty-thousand. Besides validating an idea that was theoretically proposed more than 30 years ago, our experiment provides a beautiful demonstration of the intricacies of quantum mechanics, as well as a fruitful playground for further research, where fundamental implications of quantum mechanics can be further explored, including for example situations where the hydrogen atoms are exposed at the same time to both electric and magnetic fields. The simplest atom in nature still has a lot of exciting physics to offer!
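As a rough numerical companion to the node counting described above, the short sketch below evaluates the field-free parabolic factor of the hydrogen wave function, which (in atomic units, a standard textbook result) is proportional to exp(-ξ/2n) ξ^(|m|/2) L_{n1}^{|m|}(ξ/n), and counts its sign changes for n1 = 0 to 3, reproducing the 0, 1, 2 and 3 nodes seen on the detector. This is a simplified stand-in of mine for the Stark states actually imaged (no electric field is included, and n2 = 0 is chosen for convenience); scipy is assumed.

import numpy as np
from scipy.special import genlaguerre

def parabolic_factor(xi, n1, m_abs, n):
    """Field-free hydrogen factor in the parabolic coordinate xi = r + z
    (atomic units, unnormalized): exp(-xi/2n) * xi^(|m|/2) * L_{n1}^{|m|}(xi/n)."""
    return np.exp(-xi / (2 * n)) * xi**(m_abs / 2) * genlaguerre(n1, m_abs)(xi / n)

xi = np.linspace(1e-6, 60.0, 4000)
m_abs = 0
for n1 in range(4):                       # the four imaged states: 0 to 3 nodes
    n = n1 + 0 + m_abs + 1                # principal quantum number with n2 = 0
    f = parabolic_factor(xi, n1, m_abs, n)
    nodes = np.count_nonzero(np.sign(f[:-1]) != np.sign(f[1:]))
    print(f"n1 = {n1}: {nodes} node(s) in the xi direction")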
More information: physics.aps.org/articles/v6/58 Journal information: Physical Review Letters Provided by Max Born Institute Citation: Hydrogen atoms under the magnifying glass (2013, May 22), retrieved 21 February 2020 from https://phys.org/news/2013-05-hydrogen-atoms-magnifying-glass.html
3:02 AM Yo anyone around here good at classical Hamiltonian physics? You can kinda express Hamilton's equations of motion like this: $$ \frac{d}{dt} \left( \begin{array}{c} x \\ p \end{array} \right) = \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right) \nabla H(x,p) \, .$$ Is there a decent way to understand coordinate transformations in this representation? (By the way, the incredible similarity between that equation and Schrodinger's equation is pretty cool. That matrix there behaves a lot like the complex unit $i$ in that it is a rotation by 90 degrees and has eigenvalues $\pm i$.) 3:41 AM schroedingers eqn has strong parallel(s) to the wave eqn of classical physics (part of its inception) but it seems this connection is rarely pointed out/ emphasized/ seriously investigated anywhere... 2 hours later… 5:54 AM Q: Connection between Schrödinger equation and heat equation Kevin Kwok If we do the wick rotation such that τ = it, then Schrödinger equation, say of a free particle, does have the same form of heat equation. However, it is clear that it admits the wave solution so it is sensible to call it a wave equation. Whether we should treat it as a wave equation or a heat e...
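As a quick numerical companion to the Wick-rotation remark in that last question: for a free particle (hbar = m = 1) the momentum-space propagator exp(-i k^2 t / 2) becomes the diffusion kernel exp(-k^2 τ / 2) when τ = it, so imaginary-time Schrödinger evolution spreads a Gaussian exactly like the heat equation with D = 1/2. A minimal check, with illustrative parameters and numpy assumed:

import numpy as np

N, L = 2048, 100.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

sigma0, tau = 1.5, 4.0                      # initial width and "imaginary time"
u0 = np.exp(-x**2 / (2 * sigma0**2))

# Wick-rotated free-particle evolution: multiply by exp(-k^2 tau / 2) in k-space
u_tau = np.real(np.fft.ifft(np.exp(-k**2 * tau / 2) * np.fft.fft(u0)))

# Heat-equation prediction: a Gaussian whose variance grows to sigma0^2 + tau
var_numeric = np.sum(x**2 * u_tau) / np.sum(u_tau)
print(var_numeric, "vs", sigma0**2 + tau)   # both about 6.25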
this question is from tensor algebra Hi, @Leyla. I hope you don't think my previous reply is rude, but it's much better if you write equations in MathJax. My old eyes can barely read the equations in that photo, especially on my phone. And MathJax is a lot easier to search than equations in images. @PM2Ring Ohh sure, if it's the case then I can type them in mathjax There are bookmarklets & extensions that can be used to render MathJax in chatrooms. Stack Exchange won't build it into chatrooms because they don't want to impose the overhead on chat users... @LeylaAlkan in a word, yes your formulation is incorrect unless there's crucial bits of context that you've omitted, the tensor cannot be assumed to be symmetric indeed if $t_{ijk]$ were indeed totally symmetric, then $t_{(ijk)}$ would be identically zero and there would be no need to consider it you're correct as far as $$t_{[321]} + t_{(321)} = \frac{2}{3!} \left[ t_{321} + t_{213} + t_{132} \right] $$ goes, but that's as far as you can take the calculation this is enough to ensure that $t_{321} \neq t_{[321]} + t_{(321)} $ for an arbitrary rank-three tensor particularly because it is perfectly possible for there to exist a rank-three tensor $t$ and a reference frame $R$ such that the components of $t$ on $R$ are such that $t_{321}=1$ and the rest of its components vanish. 2:08 PM > Write out $t_{(321)}$ and $t_{[321]}$ . Show that $t_{321}\neq t_{(321)}+t_{[321]}$ My solution: $t_{(321)}=\frac 1 {3!}(t_{321}+t_{312}+t_{231}+t_{213}+t_{132}+t_{123})$ $t_{[321]}=\frac 1 {3!}(t_{321}-t_{312}-t_{231}+t_{213}+t_{132}-t_{123})$ $t_{(321)}+t_{[321]}=\frac 1 {3!}2(t_{321}+t_{213}+t_{132})$ Since $(3,0)$ tensor $t_{ijk}$ is totally symmetric, so it's independent of ordering of indices. So,$t_{(321)}+t_{[321]}=\frac 1 {3!}2(t_{321}+t_{321}+t_{321})=t_{321}$ This how I done it first. For @PM2Ring Oh okay great @EmilioPisanty 2:32 PM @LeylaAlkan tensors are just vectors in a vector space. It's extremely important that you understand how these linear-independence and linearity arguments work, and that you get comfortable in producing them when they're needed. i.e. the core take-home message you should be extracting from this is how the counter-example was generated and why it works. 1 hour later… 3:47 PM @JohnRennie what do you mean by superposition? 4:08 PM @Akash.B it's like position only better 1 hour later… 5:33 PM @DanielSank What exactly do you want to understand? Any canonical transformation is just going to leave that equation unchanged, right? 6:19 PM Why are superstrings so hard @EmilioPisanty Excellent @ACuriousMind I suppose, but I'm trying to see it algebraically. In some sense I'm asking how to represent a canonical transformation in the notation used in my comment. 6:54 PM @DanielSank in general? it'll just be an arbitrary function your notation won't be helped much the cases where it gets interesting is if you want a linear transformation in which case it's required to be symplectic does that keyword get you closer to the core of your question? Q: When is separating the total wavefunction into a space part and a spin part possible? mithusengupta123The total wavefunction of an electron $\psi(\vec{r},s)$ can always be written as $$\psi(\vec{r},s)=\phi(\vec{r})\zeta_{s,m_s}$$ where $\phi(\vec{r})$ is the space part and $\zeta_{s,m_s}$ is the spin part of the total wavefunction $\psi(\vec{r},s)$. In my notation, $s=1/2, m_s=\pm 1/2$. Questio... in other news, this random thing has been on HNQ for most of the day colour me perplexed. 
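A quick numerical check of the counterexample just described (a sketch of mine, not part of the chat): take a rank-three array whose only non-vanishing component is t_{321} = 1, build its totally symmetric and totally antisymmetric parts, and compare. numpy plus a small permutation-parity helper is all that is assumed.

import numpy as np
from itertools import permutations

def parity(perm):
    """Sign of a permutation given as a tuple of indices (cycle decomposition)."""
    perm = list(perm)
    sign, visited = 1, [False] * len(perm)
    for i in range(len(perm)):
        if not visited[i]:
            j, cycle = i, 0
            while not visited[j]:
                visited[j] = True
                j = perm[j]
                cycle += 1
            if cycle % 2 == 0:
                sign = -sign
    return sign

def sym_part(t):
    return sum(np.transpose(t, p) for p in permutations(range(3))) / 6

def antisym_part(t):
    return sum(parity(p) * np.transpose(t, p) for p in permutations(range(3))) / 6

t = np.zeros((3, 3, 3))
t[2, 1, 0] = 1.0             # the only non-zero component: t_{321} = 1 (0-based indices)

lhs = t[2, 1, 0]
rhs = sym_part(t)[2, 1, 0] + antisym_part(t)[2, 1, 0]
print(lhs, rhs)              # 1.0 vs 0.333..., so t_321 != t_(321) + t_[321]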
I mean, not that I don't appreciate the rep-cap hit but still I do look forward to the gibbous-moon question getting flooded with outsiders, though =P is the weak isospin (quantum number) the so-called flavor (quantum number)? 7:28 PM or does flavor (quantum number) also involve weak hypercharge (quantum number)? 7:56 PM I don't know if there is such a rule that only particles with nonzero flavor would undergo weak interaction. I read from [Wikipedia-Weak isospin](https://en.wikipedia.org/wiki/Weak_isospin) that "Fermions with positive chirality ("right-handed" fermions) and anti-fermions with negative chirality ("left-handed" anti-fermions) have $T = T_3 = 0$ and form singlets that do not undergo weak interactions." and "... all the electroweak bosons have weak hypercharge $Y_ w = 0$ , so unlike gluons and the color force, the electroweak bosons are unaffected by the force they mediate." but $W^+$ has weak isospin 1 and $W^-$ has weak isospin -1, not zero, so they should participate weak interaction. so I am confused as to what quantum number determines whether a particle participates weak interaction. @EmilioPisanty I guess I'm asking how to transform the gradient. Suppose I pick new variables that are related to the previous ones through a linear transformation. I know what to do on the left hand side, but on the right I have to do something to the gradient. 8:16 PM When two harmonic waves going opposite directions collide do the completely cancel each other out? 1 hour later… 9:26 PM can we really assume spinors are more fundamental than vectors? if a manifold by chance doesn't admit spin structures, can we still assume spinors are more fundamental than vectors? but if a manifold doesn't admit spin structures, how do you discuss fermions? 1 hour later… 10:47 PM @DanielSank that's what the chain rule is for, right? 11:09 PM @EmilioPisanty Yeah yeah fine I get the point. "Do the damned calculation yourself." 11:44 PM @CaptainBohemian If a manifold does not admit spinors, you don't discuss spinors.
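Returning to the canonical-transformation question from the top of this chat log: a linear change of phase-space variables z -> M z preserves the form dz/dt = J grad H exactly when M is symplectic, i.e. M^T J M = J. A small check of that criterion (illustrative matrices of my own choosing, numpy assumed):

import numpy as np

J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # the matrix appearing in dz/dt = J grad H

def is_symplectic(M, tol=1e-12):
    """A linear change of variables z -> M z is canonical iff M^T J M = J."""
    return np.allclose(M.T @ J @ M, J, atol=tol)

scale = np.diag([2.0, 0.5])          # (x, p) -> (2x, p/2): area-preserving
rotate = np.array([[np.cos(0.3), np.sin(0.3)],
                   [-np.sin(0.3), np.cos(0.3)]])
stretch = np.diag([2.0, 2.0])        # (x, p) -> (2x, 2p): not canonical

for name, M in [("scale", scale), ("rotate", rotate), ("stretch", stretch)]:
    print(name, is_symplectic(M))    # True, True, False

# Consistency of Hamilton's equations: with z' = M z and H'(z') = H(z),
#   dz'/dt = M J grad_z H = (M J M^T) grad_{z'} H', which equals J grad_{z'} H'
#   exactly when M J M^T = J (equivalent, for invertible M, to M^T J M = J).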
The Quantum Revolution: A Historical Perspective
Kent A. Peacock
Greenwood Press

Titles in Greenwood Guides to Great Ideas in Science
Brian Baigrie, Series Editor
Electricity and Magnetism: A Historical Perspective, Brian Baigrie
Evolution: A Historical Perspective, Bryson Brown
The Chemical Element: A Historical Perspective, Andrew Ede
The Gene: A Historical Perspective, Ted Everson
The Cosmos: A Historical Perspective, Craig G. Fraser
Planetary Motions: A Historical Perspective, Norriss S. Hetherington
Heat and Thermodynamics: A Historical Perspective, Christopher J. T. Lewis
The Quantum Revolution: A Historical Perspective, Kent A. Peacock
Forces in Physics: A Historical Perspective, Steven Shore

The Quantum Revolution: A Historical Perspective
Kent A. Peacock
Greenwood Guides to Great Ideas in Science
Brian Baigrie, Series Editor
Greenwood Press, Westport, Connecticut • London

Library of Congress Cataloging-in-Publication Data
Peacock, Kent A., 1952–
The quantum revolution : a historical perspective / Kent A. Peacock.
p. cm. — (Greenwood guides to great ideas in science, ISSN 1559–5374)
Includes bibliographical references and index.
ISBN-13: 978–0–313–33448–1 (alk. paper). 1. Quantum theory—History—Popular works. I. Title.
QC173.98.P43 2008 530.1209—dc22 2007039786
British Library Cataloguing in Publication Data is available.
Copyright © 2008 by Kent A. Peacock
All rights reserved. No portion of this book may be reproduced, by any process or technique, without the express written consent of the publisher.
Library of Congress Catalog Card Number: 2007039786
ISSN: 1559–5374
First published in 2008
Greenwood Press, 88 Post Road West, Westport, CT 06881
An imprint of Greenwood Publishing Group, Inc.
Printed in the United States of America
The paper used in this book complies with the Permanent Paper Standard issued by the National Information Standards Organization (Z39.48–1984).

List of Illustrations
Series Foreword
Introduction: Why Learn the History of Quantum Mechanics?
The Twilight of Certainty
Einstein and Light
The Bohr Atom and Old Quantum Theory
Uncertain Synthesis
Elements of Physical Reality
Creation and Annihilation
Quantum Mechanics Goes to Work
Symmetries and Resonances
“The Most Profound Discovery of Science”
Bits, Qubits, and the Ultimate Computer
Unfinished Business
Further Reading

List of Illustrations
Max Planck. Light Waves. The Electromagnetic Spectrum. Planck’s Law. Fluctuations and Brownian Motion. Spacetime According to Minkowski. Spectral Lines. Niels Bohr. Energy Levels in the Bohr Atom. Werner Heisenberg. Erwin Schrödinger. Typical Electron Orbitals. Heisenberg’s Microscope. Paul Dirac. The Dirac Sea. The Double Slit Experiment. Niels Bohr and Albert Einstein. Schrödinger’s Cat. The EPR Apparatus. Feynman Diagrams. There Is Only One Electron in the Universe! Richard P. Feynman. Barrier Penetration. Lise Meitner. The Laser. Typical Bubble Chamber Tracks. Table of “Elementary” Particles in the Standard Model. David Bohm. John S. Bell. The Aspect Experiment. Bob Phones Alice on the Bell Telephone. Classical Turing Machine. Quantum Turing Machine. Quantum Teleportation. The Hawking Effect. The Unruh Effect. Stephen Hawking.

Series Foreword
The volumes in this series are devoted to concepts that are fundamental to different branches of the natural sciences—the gene, the quantum, geological cycles, planetary motion, evolution, the cosmos, and forces in nature, to name just a few.
Although these volumes focus on the historical development of scientific ideas, the underlying hope of this series is that the reader will gain a deeper understanding of the process and spirit of scientific practice. In particular, in an age in which students and the public have been caught up in debates about controversial scientific ideas, it is hoped that readers of these volumes will better appreciate the provisional character of scientific truths by discovering the manner in which these truths were established. The history of science as a distinctive field of inquiry can be traced to the early seventeenth century when scientists began to compose histories of their own fields. As early as 1601, the astronomer and mathematician Johannes Kepler composed a rich account of the use of hypotheses in astronomy. During the ensuing three centuries, these histories were increasingly integrated into elementary textbooks, the chief purpose of which was to pinpoint the dates of discoveries as a way of stamping out all too frequent propriety disputes, and to highlight the errors of predecessors and contemporaries. Indeed, histori­ cal introductions in scientific textbooks continued to be common well into the twentieth century. Scientists also increasingly wrote histories of their disciplines—separate from those that appeared in textbooks—to explain to a broad popular audience the basic concepts of their science. The history of science remained under the auspices of scientists until the establishment of the field as a distinct professional activity in the middle of the twentieth century. As academic historians assumed control of history of science writing, they expended enormous energies in the attempt to forge a distinct and autonomous discipline. The result of this struggle to position the history of science as an intellectual endeavor that was valuable in its own right, Series Foreword and not merely in consequence of its ties to science, was that historical studies of the natural sciences were no longer composed with an eye toward educating a wide audience that included nonscientists, but instead were composed with the aim of being consumed by other professional historians of science. And as historical breadth was sacrificed for technical detail, the literature became increasingly daunting in its technical detail. While this scholarly work increased our understanding of the nature of science, the technical demands imposed on the reader had the unfortunate consequence of leaving behind the general reader. As Series Editor, my ambition for these volumes is that they will combine the best of these two types of writing about the history of science. In step with the general introductions that we associate with historical writing by scientists, the purpose of these volumes is educational—they have been authored with the aim of making these concepts accessible to students—high school, college, and university—and to the general public. However, the scholars who have written these volumes are not only able to impart genuine enthusiasm for the science discussed in the volumes of this series, they can use the research and analytic skills that are the staples of any professional historian and philosopher of science to trace the development of these fundamental concepts. My hope is that a reader of these volumes will share some of the excitement of these scholars—for both science, and its history. Brian Baigrie University of Toronto Series Editor This book is a short version of the story of quantum mechanics. 
It is meant for anyone who wants to know more about this strange and fascinating theory that continues to transform our view of the physical world. To set forth quantum physics in all its glorious detail takes a lot of mathematics, some of it quite complicated and abstract, but it is possible to get a pretty accurate feeling for the subject from a story well told in words and pictures. There are almost no mathematical formulas in this book, and what few there are can be skimmed without seriously taking away from the storyline. If you would like to learn more about quantum mechanics, the books and Web pages I describe in “Further Reading” can lead you as far into the depths of the subject as you wish to go. One thing this book does not do is to present a systematic account of all of the interpretations that have been offered of quantum mechanics. That would take another book at least as long. However, certain influential interpretations of quantum theory (such as the Copenhagen Interpretation, the causal interpretation, and the many-world theory) are sketched because of their historical Quantum mechanics is often said to be the most successful physical theory of all time, and there is much justification for this claim. But, as we shall see, it remains beset with deep mysteries and apparent contradictions. Despite its tremendous success, it remains a piece of unfinished business. It is the young people of today who will have to solve the profound puzzles that still remain, and this little work is dedicated to them and their spirit of inquiry. My own research in foundations of quantum mechanics has been supported by the Social Sciences and Humanities Research Council of Canada, the University of Lethbridge and the University of Western Ontario. For valuable discussions, suggestions, guidance, and support in various ways I thank Brian Baigrie, Bryson Brown, James Robert Brown, Jed Buchwald, Kevin deLaplante, Kevin Downing, Brian Hepburn, Jordan Maclay, Ralph Pollock, and (especially) Sharon Simmers. Introduction: Why Learn the History of Quantum Mechanics? This book tells the story of quantum mechanics. But what is quantum mechanics? There are very precise and technical answers to this question, but they are not very helpful to the beginner. Worse, even the experts disagree about exactly what the essence of quantum theory really is. Roughly speaking, quantum mechanics is the branch of physical science that deals with the very small—the atoms and elementary particles that make up our physical world. But even that description is not quite right, since there is increasing evidence that quantum mechanical effects can occur at any size scale. There is even good reason to think that we cannot understand the origins of the universe itself without quantum theory. It is more accurate, although still not quite right, to say that quantum mechanics is something that started as a theory of the smallest bits of matter and energy. However, the message of this book is that the growth of quantum mechanics is not finished, and therefore in a very important sense we still do not know what it really is. Quantum mechanics is revolutionary because it overturned scientific concepts that seemed to be so obvious and so well confirmed by experience that they were beyond reasonable question, but it is an incomplete revolution because we still do not know precisely where quantum mechanics will lead us—nor even why it must be true! 
The history of a major branch of science like quantum physics can be viewed in several ways. The most basic approach to see the history of quantum mechanics is as the story of the discovery of a body of interrelated facts (whatever a “fact” is), but we can also view our story as a history of the concepts of the theory, a history of beautiful though sometimes strange mathematical equations, a history of scientific papers, a history of crucial experiments and measurements, and a history of physical models. But science is also a profoundly human enterprise; its development is conditioned by the trends and accidents of history, and by the abilities, upbringing, and quirks of its creators. The history of science is not just a smooth progression of problems being solved one after the other by highly competent technicians, who all agree with each other about how their work should be done. It is by no means clear that it is inevitable that we would have arrived where we are now if the history of science could be rerun. Politics, prejudice, and the accidents of history play their part (as we shall see, for instance, in the dramatic story of David Bohm). Thus, the history of quantum mechanics is also the story of the people who made it, and along the way I will sketch brief portraits of some of these brilliant and complex individuals. Quantum mechanics is one of the high points in humanity’s ongoing attempt to understand and cope with the vast and mysterious universe in which we find ourselves, and the history of modern physics—with its failures and triumphant insights—is one of the great stories of human accomplishment of our time. Why Would Anyone Be Interested in History of Science? Learning a little history of science is one of the most interesting and painless ways of learning a little of the science itself, and knowing something about the people who created a branch of science helps to put a human face on the succession of abstract scientific concepts. Furthermore, knowing at least the broad outlines of the history of science is simply part of general cultural literacy, since we live in a world that is influenced deeply by science. Everyone needs to know something about what science is and how it developed. But the history of modern physics, especially quantum physics, presents an especially interesting puzzle to the historian. In the brief period from 1900 to 1935 there occurred one of the most astonishing outbursts of scientific creativity in all of history. Of course, much has been done in science since then, but with the perspective of hindsight it seems that no other historical era has crammed so much scientific creativity, so many discoveries of new ideas and techniques, into so few years. Although a few outstanding individuals dominate—Albert Einstein (of course!), Niels Bohr, Werner Heisenberg, Wolfgang Pauli, Paul Dirac, and Erwin Schrödinger stand out in particular—they were assisted in their work by an army of highly talented scientists and technicians. This constellation of talented people arose precisely at a time when their societies were ready to provide them with the resources they needed to do their work, and also ready to accept the advances in knowledge that they delivered. The scientists who created quantum theory were (mostly) not embattled heretics like Galileo, because they did not have to be—their work usually was supported, encouraged, and welcomed by their societies (even if their societies were at times a bit puzzled as to what that work meant). 
The period in which quantum mechanics was created is thus comparable to a handful of other brilliant episodes in history—such as ancient Athens in her glory, or the England of Elizabeth I—when a multitude of historical factors somehow combined to allow the most talented people to do the best work of which they were capable. Exactly why do these amazing outbursts of creativity occur? And what could we do to make them happen more regularly? These questions certainly can’t be answered in this modest book, but the history of quantum mechanics is an outstanding case study for this large and very important problem. Why Should Scientists Learn History of Science? For the general public, history of science is an important part of culture; for the scientist, history of science is itself a sometimes neglected research tool (Feyerabend 1978). It may seem odd to suggest that knowing the history of a science can aid research in that science. But the history of science has particular value as a research tool precisely because it allows us to see that some of the assumptions on which present-day science is based might have been otherwise—and perhaps, in some cases, should have been. Sometimes, when science is presented in elementary textbooks and taught in high school or college, one is given the impression that every step along the way was inevitable and logical. In fact, science often has advanced by fits and starts, with numerous wrong turns, dead ends, missed opportunities, and arbitrary assumptions. Retracing the development of science might allow us to come at presently insoluble problems from a different angle. We might realize that somewhere along the line we got off track, and if we were to go back to that point and start over we might avoid the problems we have now. Science is no different than any other sort of problem-solving activity in that, if one is stuck, there often can be no more effective way of getting around the logjam than going back and rethinking the whole problem from the beginning. The history of science also helps to teach modern-day scientists a certain degree of humility. It is sobering to learn that scientific claims that are now treated as near-dogma (for instance, the theory of continental drift or the fact that meteors are actual rocks falling from the sky) were once laughed at by conventional science, while theories such as Newtonian mechanics that were once regarded as unquestionable are now understood to be merely approximately correct, if not completely wrong for some applications. Many of the new ideas of quantum mechanics were found to be literally unbelievable, even by their creators, and in the end they were accepted not because we understood them or were comfortable with them, but because nature told us that they were The history of quantum theory can also teach us much about the process of scientific discovery. How did Planck, Schrödinger, Heisenberg, or Dirac arrive at their beautiful equations? It may seem surprising to someone not familiar with theoretical physics to realize that there is no way of deducing the key equations of new theories from facts about the phenomena or from previously accepted theories. Rather, many of the most important developments in modern physics started with what physicists call an Ansatz, a German word that literally means “a start,” but which in physics can also be taken as an inspired insight or lucky guess. 
The new formulas are accepted because they allow a unified deduction of facts that had previously been considered to be unrelated and because they lead to new predictions that get confirmed by experiment. So we often end up with a scientific law expressed in mathematical form that works very well in the sense that we can learn how to use it to predict what will happen in concrete physical situations, but we do not understand why it can make those predictions. It just works, so we keep using it and hope that some day we will understand it better. We now have a branch of physics, quantum mechanics, which is the most powerful and effective theory of physics ever developed in the sense that it gives unprecedented powers of prediction and intervention in nature. Yet it remains mysterious, for despite the great success of quantum mechanics, we must admit in all humility that we don’t know why it must be true, and many of its predictions seem to defy what most people think of as “common sense.” Quantum mechanics was, as this history will show, a surprise sprung on us by nature. To the story of how this monumental surprise unfolded we now turn.

The Twilight of Certainty

Max Chooses a Career

The time had come for Max Planck to make a career choice. He was fascinated by physics, but a well-meaning professor at the University of Munich told him that he should turn to music as a profession because there were no more important discoveries to be made in physics. The year was 1875. Young Max was an exceptionally talented pianist, and the advice that he should become a musician seemed reasonable. But he stubbornly chose physics anyway. Max was motivated not so much by a yearning to make great discoveries, as an aspiring young scientist might be today, but rather by an almost religious desire to understand the laws of nature more deeply. Perhaps this motivation had something to do with his upbringing, for his ancestors included pastors and jurists, and his father was a professor of law at the University of [...]. As a student he was especially impressed by the recently discovered First Law of Thermodynamics, which states that the energy books must always balance—the total amount of energy in a physical system never changes even though that energy can appear in many different forms. To Planck, the First Law seemed to express the ideal of science in its purest form, for it was a law that did not seem (to him!) to be a mere descriptive convenience for humans, but rather something that held true exactly, universally, and without qualification. It is ironic that the deeply conservative Planck would become the one to trigger quantum mechanics, the most revolutionary of all scientific developments. As we shall see, however, Planck was also possessed of unusual intellectual integrity, and the great discovery he was eventually to make had much to do with the fact that he was among those relatively rare people who can change their minds when the evidence demands it.

Figure 1.1: Max Planck. AIP Emilio Segre Visual Archives.

An Age of Certainty Nears Its End

Before we describe Planck’s discovery of the quantum, we should try to understand why his advisor was as satisfied as he was with the way things were in 1875. The complacency at the end of the nineteenth century was both scientific and political. After the final defeat of Napoleon in 1815, Western Europe had enjoyed a long run of relative peace and prosperity, marred only by the Franco-Prussian war of 1870–1871. From this conflict Germany had
emerged triumphant and unified, proud France humiliated. The British Empire continued to grow in strength throughout the last decades of the century, although it was challenged by rival colonial powers like Germany, France, and Belgium. The brash new nation of the United States was healing from a terrible civil war, flexing its muscles and gaining in confidence, but it seemed unimaginable that the great empires of Europe could ever lose their power. Meanwhile, things were not so nice for many people who were not European. The prosperity of Europe was bought at the expense of subjugated peoples in Africa, India, and the Americas, who had almost no defense in the face of modern weapons such as machine guns, rapid fire rifles, artillery, the steamship, and the telegraph wire. Eventually Europeans would turn these weapons on each other, but the horrors of World War I lay 40 years in the future when young Max Planck began to study physics. Science and technology in the nineteenth century had enjoyed unprecedented growth and success. The world was being changed by innumerable innovations such as the steam engine, the telegraph, and later the telephone. Medicine made huge advances (so that by the end of the nineteenth century one could have a reasonable hope of actually surviving a surgical operation), and there was a tremendous expansion of what we now call “infrastructure” such as highways, railways, canals, shipping, and sewers. The technology of the nineteenth century was underpinned by a great increase in the explanatory and predictive power of scientific theory. Mathe- The Twilight of Certainty matics, chemistry, astronomy, and geology leaped ahead, and all of biology appeared in a new light with Darwin’s theory of evolution by natural selection. To many scientists of the time it seemed that there were just a few loose ends to be tied up. As we shall see, tugging on those loose ends unraveled the whole overconfident fabric of nineteenth century physics. Physics in the Nineteenth Century The Foundation Physics investigates the most general principles that govern nature, and expresses those laws in mathematical form. Theoretical physics at the end of the nineteenth century rested on the massive foundation of the mechanics of Sir Isaac Newton (1644–1727), an Englishman who had published his great book The Mathematical Principles of Natural Philosophy in 1687. Newton showed how his system of mechanics (which included a theory of gravitation) could be applied to the solution of many long-standing problems in astronomy, physics, and engineering. Newton also was coinventor (with the German Gottfried Leibniz, 1646–1716) of the calculus, the powerful mathematical tool which, more than any other advance in mathematics, made modern physics possible. (Newton, who was somewhat paranoid, accused Leibniz of having poached the calculus from him, and the two geniuses engaged in a long and pointless dispute over priority.) Newtonian mechanics was deepened and generalized by several brilliant mathematical physicists throughout the eighteenth and nineteenth centuries, notably Leonard Euler (1707–1783), Joseph Louis Lagrange (1736–1813), Pierre Simon de Laplace (1749–1827), and Sir William Rowan Hamilton (1805–1865). By the late nineteenth century it not only allowed for accurate predictions of astronomical motions, but it had evolved into an apparently universal system of mechanics which described the behavior of matter under the influence of any possible forces. 
Most physicists in late 1800s (including the young Max Planck) took it for granted that any future physical theories would have to be set within the framework of Newtonian mechanics. It is hard for us now to picture that up until almost the middle of the nineteenth century, electricity and magnetism were considered to be entirely distinct phenomena. Electrodynamics is the science that resulted when a number of scientists in the early to mid-nineteenth century, notably Hans Christian Oersted (1777–1851), Michael Faraday (1791–1867), and André Marie Ampère (1775–1836), discovered that electricity and magnetism are different aspects of the same underlying entity, the electromagnetic field. Faraday was a skilled and ingenious experimenter who explained his results in terms of an intuitive model in which electrified and magnetized bodies were connected by graceful lines of force, invisible to the eye but traceable by their effects on compass needles and iron filings. The Quantum Revolution Faraday may have been the last great discoverer in physics who did not express his insights in mathematical form. The Scottish mathe­matical physicist James Clerk Maxwell (1831–1879) unified the known laws of electricity and magnetism into an elegant and powerful mathematical picture of the electromagnetic field that Faraday had visualized intuiFigure 1.2: Light Waves. Maxwell and Hertz showed that light tively. Maxwell published the first and other forms of electromagnetic radiation consist of al- version of his field equations in ternating electric and magnetic fields. Illustration by Kevin 1861. He achieved one of the most outstanding examples in physics of a successful unification, in which phenomena that had been thought to be of quite different natures were suddenly seen to be merely different aspects of a single entity. Maxwell’s field equations are still used today, and they remain the most accurate and complete description of the electromagnetic field when quantum and gravitational effects can be ignored. One of the most important predictions of electromagnetic theory is the existence of electromagnetic waves, alternating patterns of electric and magnetic fields vibrating through space at the speed of light. In 1888 the German physicist Heinrich Hertz (1857–1894) detected electromagnetic waves with a series of delicate and ingenious experiments in which he created what were, in effect, the first radio transmitters and receivers. It was soon realized that light itself is simply a flood of electromagnetic waves that happen to be visible to the human eye. Different types of electromagnetic waves may be distinguished by their frequencies or their wavelengths. (Wavelength is inverse to frequency, meaning that as the frequency goes up the wavelength goes down.) The frequency expresses how fast the wave is vibrating and is usually given in cycles per second. The wavelength is the length of the wave from crest to crest. Electromagnetic waves are transverse, meaning that they vibrate in a direction perpendicular to their direction of motion, while sound waves and other pressure waves are longitudinal, meaning that they vibrate more or less in the direction of motion. The polarization of electromagnetic waves is a measure of the direction in which they vibrate. Electromagnetic waves can vary from radio waves many meters long, to the deadly high energy gamma rays produced by nuclear reactions which have wavelengths less than 1/5000 that of visible light. 
Visible light itself has wavelengths from about 400 billionths of a meter (violet) to about 700 billionths of a meter (red). The range of observed frequencies of light is called the spectrum. We shall have much to say about spectra, which will play a central role in the history of quantum mechanics. Maxwell’s theory was highly abstract, and it took several years before its importance was generally apparent to the scientific community. But by the end The Twilight of Certainty of the nineteenth century the bestinformed physicists (including Planck) regarded Maxwellian electrodynamics as one of the pillars on which theoretical physics must rest, on a par with the mechanics of Newton. In fact there were deep inconsistencies between the electromagnetic theory of Maxwell and Newtonian mechanics, but few thinkers grasped this fact, apart from an obscure patent clerk in Switzerland whom we shall meet in the next chapter. Figure 1.3: The Electromagnetic Spectrum. Electromagnetic waves exist in a spectrum running from low-energy, long-wavelength radio waves to very high-energy, shortwavelength gamma rays. For all such waves the energy is related to the frequency by E = hν, where ν (Greek letter nu) is the frequency, and h is Planck’s constant of action. Illustration by Kevin deLaplante. More than any other branch of physics, thermodynamics, the science of heat, had its origins in practical engineering. In 1824, a brilliant young French engineer, Sadi Carnot (1796–1832), published a groundbreaking analysis of the limitations of the efficiency of heat engines, which are devices such as the steam engine that convert heat released by the combustion of fuel to useful mechanical energy. Following Carnot, several pioneering investigators in the mid-nineteenth century developed the central concepts of what we now call classical thermodynamics. These include temperature, energy, the equivalence of heat and mechanical energy, the concept of an absolute zero (a lowest possible temperature), the First Law of Thermodynamics (which states to another), and the basic relationships between temperature, pressure, and volume in so-called ideal gasses. The mysterious concept of entropy made its first explicit appearance in the work of the German Rudolph Clausius (1822–1888). Clausius defined entropy as the ratio of the change in heat energy to the temperature and coined the term “entropy” from the Greek root tropé, transformation. He showed that entropy must always increase for irreversible processes. A reversible process is a cycle in which a physical system returns to its precise initial conditions, whereas in an irreversible process order gets lost along the way and the system cannot return to its initial state without some external source of energy. It is precisely the increase in entropy that distinguishes reversible from irreversible cycles. Clausius postulated that the entropy of the universe must tend to a maximum value. This was one of the first clear statements of the Second Law of Thermodynamics, which can also be taken to say that it is impossible to transfer heat from a colder to a hotter body without expending at least as much energy as is transferred. We are still learning how to interpret and use the Second Law. The concept of irreversibility is familiar from daily life: it is all too easy to accidentally smash a glass of wine on the floor, and exceedingly difficult to put The Quantum Revolution it together again. 
And yet the laws of Newtonian dynamics say that all physical processes are reversible, meaning that any solution of Newton’s laws of dynamics is still valid if we reverse the sign of time in the equations. It ought to be possible for the scattered shards of glass and drops of wine to be tweaked by the molecules of the floor and air in just the right way for them to fly together and reconstitute the glass of wine. Why doesn’t this happen? If we believe in Newtonian mechanics, the only possible answer is that it could happen, but that it has never been seen to happen because it is so enormously improbable. And this suggests that the increase in entropy has something to do with probability, a view that seems obvious now but that was not at all obvious in the mid-nineteenth century. Clausius himself had (in the 1860s) suggested that entropy might be a measure of the degree to which the particles of a system were disordered or disorganized, but (like most other physicists of the era) he was reluctant to take such speculation seriously. In the classical thermodynamics of Clausius, entropy and other quantities such as temperature, pressure, and heat are state functions, which means that they are treated mathematically as continuous quantities obeying exact, exception-free laws. Unlike electrodynamics, which seemed to have been perfected by Maxwell, thermodynamics therefore remained in an incomplete condition, and its troubles centered on the mysteries of entropy, irreversibility, and the Second Law of Thermodynamics. Planck himself tried for many years to find a way of explaining the apparently exception-free, universal increase of entropy as a consequence of the reversible laws of Newtonian and Maxwellian theory. But the brilliant Austrian physicist Ludwig Boltzmann (1844–1906) showed that there is an entirely different way to think about entropy. Other people (notably James Clerk Maxwell) had explored the notion that heat is the kinetic energy of the myriad particles of matter, but Boltzmann rewrote all of classical thermodynamics as a theory of the large-scale statistics of atoms and molecules, thereby creating the subject now known as statistical mechanics. In statistical mechanics we distinguish macroscopic matter, which is at the scale that humans can perceive, from microscopic matter at the atomic or particulate level. On this view, entropy becomes a measure of disorder at the microscopic level. Macroscopic order masks microscopic disorder. If a physical system is left to itself, its entropy will increase to a maximum value, at which point the system is said to be in equilibrium. At equilibrium, the system undergoes no further macroscopically apparent changes; if it is a gas, for instance, its temperature and pressure are equalized throughout. The apparent inevitability of many thermodynamic processes (such as the way a gas will spread uniformly throughout a container) is due merely to the huge numbers of individual molecules involved. It is not mathematically inevitable, but merely overwhelmingly probable, that gas molecules released in a container will rapidly spread around until all pressure differences disappear. Could there be exceptions to the Second Law? According to the statistical interpretation, it is not strictly impossible to pipe usable energy from a lower temperature to a higher—it is merely, in general, highly improbable. A pot of The Twilight of Certainty water could boil if placed on a block of ice—but we’re going to have to wait a very (very!) long time to see it happen. 
Boltzmann’s statistical mechanics ran into strong opposition from a number of scientists. Some, the “energeticists,” headed by Wilhelm Ostwald (1853– 1932), maintained that all matter consists of continuous fields of energy, so that no fundamental theory should be based on such things as discrete atoms or molecules. Severe criticism also came from the positivist Ernst Mach (1838–1916), who insisted that atoms were not to be taken seriously because they had never been directly observed. (Positivism is a view according to which concepts are not meaningful unless they can be expressed in terms of possible observations.) Mach’s influence on physics was both good and bad; while he impeded the acceptance of statistical mechanics, his penetrating criticism of classical concepts of space and time stimulated the young Einstein. Mach also did important work in gas dynamics, and the concept of Mach number (the ratio of the speed of an aircraft to the speed of sound) is named after him. A major barrier to the acceptance of the statistical interpretation of thermodynamics was the fact that thermodynamic quantities such as pressure, temperature, heat energy, and entropy were first studied as properties of ordinary matter. On the scale of human perception, matter appears to be continuously divisible. We are now accustomed to thinking of the heat content of a volume of a gas as the total kinetic energy (energy of motion) of the molecules of which the gas is composed, but heat was first studied as an apparently continuously distributed property of smooth matter. In fact, up until about the mid-1800s, heat was still thought of as a sort of fluid, called caloric. It therefore seemed reasonable that thermodynamic quantities such as heat or temperature should obey mathematical laws that were as exact as Newton’s Laws or Maxwell’s field equations, and it was very difficult for most physicists to accept the notion that the laws of thermodynamics were merely descriptions of average Most important, there was still no irrefutable theoretical argument or direct experimental evidence for the existence of atoms. The concept of the atom goes back to the ancient Greek thinker Democritus (ca. 450 b.c.), and the term “atom” itself comes from a Greek word meaning “indivisible.” By the nineteenth century the atomic theory was a mainstay of physics and chemistry, but it was regarded by many theoretical physicists as nothing more than a useful calculational device that allowed chemists to work out the correct amounts of substances to be mixed in order to achieve various reactions. There seemed to be no phenomenon that had no reasonable explanation except in terms of atoms. Demonstrations that there are such phenomena would be provided in the years 1900–1910 by a number of people, including Einstein himself. Boltzmann suffered from severe depression, possibly aggravated by the endless debates he was forced to engage in to defend the statistical view, and he committed suicide in 1906. On his gravestone (in Vienna) is engraved the equation that bears his name: S = klnW (the entropy of a state is proportional to the natural logarithm of the probability of that state). Had Boltzmann lived a few more years, he would have witnessed the complete vindication of his ideas. The Quantum Revolution What Is Classical about Classical Physics? Newtonian mechanics and Maxwellian electrodynamics together made up what we now call classical physics. 
The three defining characteristics of classical physics are determinism, continuity, and locality; all are challenged by quantum mechanics. In order to understand what determinism means, we need to know a little about the sort of mathematics used in classical physics. It was taken for granted that physics is nothing other than the mathematical description of processes occurring in space and time. (Later on even this assumption would be challenged by quantum physics.) From the time of Newton onward, most laws of physics are expressed in the form of differential equations, one of the most useful offshoots of the calculus invented by Newton and Leibniz. Such equations describe the ways in which physical quantities (such as electrical field strength) vary with respect to other quantities (such as position or time). Newton’s First Law of dynamics, for instance, states that the rate of change of momentum of a physical object equals the force impressed upon it. Physical laws expressed in differential equations are applicable to indefinitely many situations. All possible electromagnetic fields obey Maxwell’s field equations, for instance. To apply the equations we solve them for specific situations; this means that we use initial and boundary conditions (which describe a given physical scenario) to calculate a mathematical curve or surface that will represent the behavior of the system in that scenario. For example, if we know the initial position and velocity of a moving particle, and the forces acting on it, we can use Newton’s First Law to calculate its trajectory over time. The sorts of differential equations used in classical physics are such that (in most cases) fully specified initial and boundary conditions imply unique solutions. In other words, in classical physics the future is uniquely and exactly determined by the past, and this is just what we mean by determinism. The belief in continuity was often expressed in the phrase “Nature makes no jumps.” It was assumed by almost all physicists from the time of Newton onward that matter moves along smooth, unbroken paths through space and time. This view was only reinforced by the success of the Faraday-Maxwell theory of the electromagnetic field, which explained electrical and magnetic forces as a result of a force field continuously connecting all electrically charged bodies. On the field view, the appearance of disconnection between particles of matter is merely that—an appearance. Mathematically, the assumption that all physical processes are continuous required that physics be written in the language of differential equations whose solutions are continuous, differentiable (“smooth”) functions. By the late nineteenth century, many physicists (led by Maxwell and Boltzmann) were using statistical methods, which are indeterministic in the sense that a full specification of the macroscopic state of a system at a given time is consistent with innumerable possible microscopic futures. However, the classical physicists of the nineteenth century believed that probabilistic methods were needed only for practical reasons. If one were analyzing the be- The Twilight of Certainty havior of a gas, for instance, one can only talk about the average behavior of the molecules. It would be totally impossible to know the exact initial positions and velocities of all the gas molecules, and even if one could have these numbers the resulting calculation to predict their exact motions would be impossibly complex. 
Nevertheless, each individual molecule surely had a position and velocity, and a future that was in principle predictable, even if it was practically impossible to know these things. Planck and his contemporaries in the 1890s would have found it incredible that by the late 1920s it would be reasonable to question the apparently obvious belief that the parts of matter had a precise position and momentum before an experimenter interacts with them. Later in our story we shall have much to say about locality, and the circumstances under which quantum physics forces it to break down. For now, we will just take locality, as it would have been understood in Planck’s younger years, to mean that all physical influences propagate at a finite speed (and in a continuous manner) from one point to another. According to the doctrine of locality there is no such thing as action at a distance—a direct influence of one body on another distant body with no time delay and no physical vehicle by means of which the force was propagated. Most physicists from Newton onward felt that the very notion of action at a distance was irrational; recently, quantum mechanics has forced us to rethink the very meaning of rationality. Too Many Loose Ends Physics in the late nineteenth century was an apparently tightly knit tapestry consisting of Newtonian mechanics supplemented with Maxwell’s theory of the electromagnetic field and the new science of thermodynamics. Up to roughly 1905 most physicists were convinced that any electromagnetic and thermal phenomena that could not yet be explained could be shoehorned into the Newtonian framework with just a little more technical cleverness. However, in the period from about 1880 to 1905 there were awkward gaps and inconsistencies in theory (mostly connected with the nature of light), and a few odd phenomena such as radioactivity that could not be explained at all. In 1896, Henri Becquerel (1852–1909) noticed that uranium salts would expose photographic film even if the film was shielded from ordinary light. This was absolutely incomprehensible from the viewpoint of nineteenth-century physics, since it involves energy, a lot of it, coming out of an apparently inert lump of ore. The intellectual complacency of the late nineteenth century, like the confident empires that sheltered it, did not have long to last. Blackbody Radiation and the Thermodynamics of Light Before Planck There is an old joke (probably first told by a biologist) that to a physicist a chicken is a uniform sphere of mass M. The joke has a grain of truth in it, for physics is often written in terms of idealized models such as perfectly smooth The Quantum Revolution planes or point particles that seem to have little to do with roughhewn reality. Such models are surprisingly useful, however, for a well-chosen idealization behaves enough like things in the real world to allow us to predict the behavior of real things from the theoretical behavior of the models they resemble. One of the most useful idealized models is the blackbody, which was defined by Gustav Kirchhoff (1824–1887) in 1859 simply as any object that absorbs all of the electromagnetic radiation that falls upon it. (It doesn’t matter what material the object is made of, so long as it fulfills this defining condition.) Physicists from the 1850s onward began to get very interested in the properties of blackbodies, since many objects in the real world come close to exhibiting near-perfect blackbody behavior. 
While perfect blackbodies reflect no radiation at all, Kirchhoff proved that they must emit radiation with a spectral distribution that is a function mainly (or in the case of an ideal blackbody, only) of their temperatures. (When we use the term “radiation” here, we are talking about electromagnetic radiation—light and heat—not nuclear radiation. And by spectral distribution, we mean a curve that gives the amount of energy emitted as a function of frequency or wavelength.) Steelmakers can estimate the temperature of molten steel very accurately just by its color. A near-blackbody at room temperature will have a spectral peak in the infrared (so-called heat radiation). The peak will shift to higher frequencies in step with increasing temperature; this is known as Wien’s Displacement Law, after Wilhelm Wien (1864–1928). Around 550°C objects begin to glow dull red, while an electric arc around 10,000°C is dazzling blue-white. Blackbody radiation is also known as cavity radiation. To understand this term, consider an object (which could be made of any material that reflects radiation) with a cavity hollowed out inside it. Suppose that the only way into the cavity is through a very small hole. Almost all light that falls on the hole will enter the cavity without being reflected back out, because it will bounce around inside until it is absorbed by the walls of the cavity. The small hole will therefore behave like the surface of a blackbody. Now suppose that the object containing the cavity is heated to a uniform temperature. The walls of the cavity will emit radiation, which Kirchhoff showed would have a spectrum that depended only on the temperature and which would be independent of the material of the walls, once the radiation in the cavity had come to a condition of equilibrium with the walls. (Equilibrium in this case means a condition of balance in which the amount of radiant energy being absorbed by the walls equals the amount being emitted.) The radiation coming out of the little hole will therefore be very nearly pure blackbody radiation. Because the spectrum of blackbody radiation depends only on the temperature it is also often called the normal spectrum. In the last 40 years of the nineteenth century the pieces of the blackbody puzzle were assembled one by one. As noted, Kirchhoff was able to show by general thermodynamic considerations that the function that gave the blackbody spectrum depended only on temperature, but he had no way to determine the shape of the curve. In 1879 Josef Stefan (1835–1893) estimated on the The Twilight of Certainty basis of experimental data that the total amount of energy radiated by an object is proportional to its temperature raised to the fourth power (a very rapidly increasing function). Stefan’s law (now known as the Stefan-Boltzmann Law) gives the total area under the spectral distribution curve. Boltzmann, in 1884, found a theoretical derivation of the law and showed that it is only obeyed exactly by ideal blackbodies. Research on blackbodies was spurred in the 1890s not only by the great theoretical importance of the problem, but by the possibility that a better understanding of how matter radiates light would be of value to the rapidly growing electrical lighting industry. In 1893, Wien showed that the spectral distribution function had to depend on the ratio of frequency to temperature, and in 1896 he conjectured an exponential distribution law (Wien’s Law) that at first seemed to work fairly well. 
In the same period, experimenters were devising increasingly accurate methods of measuring blackbody radiation, using radiation cavities and a new device called the bolometer, which can measure the intensity of incoming radiation. (The bolometer was invented by the American Samuel P. Langley (1834–1906) around 1879, and it had its first applications in astronomy.) Deviations from Wien’s Law were soon found at lower frequencies (in the infrared), where it gave too low an intensity. It is at this point that Planck enters the story. Planck’s Inspired Interpolation Planck had been Professor of Physics at the University of Berlin since 1892 and had done a great deal of distinguished work on the applications of the Second Law of Thermodynamics and the concept of entropy to various problems in physics and chemistry. Throughout this work, his aim was to reconcile the Second Law of Thermodynamics with Newtonian mechanics. The sticking point was irreversibility. Boltzmann’s statistical account of irreversibility as a result of ever-increasing disorder was natural and immediate, but, as we have noted, it implied that the Second Law was not exact and exceptionfree. Although Planck had great personal respect for Boltzmann, he questioned Boltzmann’s statistical approach in two ways. First, the increase of entropy with time seemed inevitable and therefore, Planck believed, should be governed by rigorous laws. He did not want a result that was merely probabilistic since it was virtually an article of faith for Planck that the most general physical laws had to be exact and deterministic. Second, Planck wanted to rely as little as possible on models based on the possible properties of discrete particles, because their existence remained largely speculative. At first, Planck explored the possibility that the route to lawlike irreversibility could be found in electromagnetic theory. He tried to show that the scattering of light, which had to obey Maxwell’s Equations, was irreversible. However, in 1898 Boltzmann proved that Maxwell’s electromagnetic field theory, like Newtonian mechanics, was time-reversal invariant. This fact had not The Quantum Revolution been explicitly demonstrated before, and it torpedoed Planck’s attempt to find the roots of irreversibility in electromagnetic theory. Planck became interested in the blackbody problem in the mid-1890s for a number of reasons. First, many of his colleagues were working on it at the time. More important, he was very impressed by the fact that the blackbody spectrum is a function only of temperature; it was, therefore, something that had a universal character, and this was just the sort of problem that interested Planck the most. Also, he thought it very likely that understanding how radiation and matter come into equilibrium with each other would lead to a clearer picture of irreversibility. Planck devised a model of the blackbody cavity in which the walls were made of myriad tiny resonators, vibrating objects that exchanged radiant energy with the electromagnetic fields within the cavity. He established Wien’s formula in a way that probably came as close as anyone could ever come to deriving an irreversible approach to equilibrium from pure electromagnetic theory, but without quite succeeding. Around the same time as Planck and other German scientists were struggling to understand the blackbody spectrum, Lord Rayleigh (1842–1919), probably the most senior British physicist of his generation, took an entirely different approach to the problem. 
He derived an expression for the number of possible modes of vibration of the electromagnetic waves within the cavity, and then applied a rule known as the equipartition theorem, a democratic-sounding principle that claimed that energy should be distributed evenly among all possible states of motion of any physical system. This led to a spectral formula that seemed to be roughly accurate at lower frequencies (in the infrared). However, there is no limit to the number of times a classical wave can be subdivided into higher and higher frequencies, and assuming that each of the infinitely many possible higher harmonics had the same energy led to the ultraviolet catastrophe—the divergence (“blow-up”) of the total energy of the cavity to infinity. Rayleigh multiplied his formula by an exponential “fudge factor” which damped out the divergence, but which still did not lead to a very accurate result. As soon as it was found that Wien’s Law failed experimentally at lower frequencies, Planck threw himself into the task of finding a more accurate formula for Kirchhoff’s elusive spectral function. He had most of the pieces of the puzzle at hand, but he had to find how the entropy of his resonators was related to their energy of vibration. He had an expression for this function derived from Wien’s Law and that was therefore roughly valid for high frequencies, and he had a somewhat different expression for this function derived from Rayleigh’s Law and that was therefore roughly valid for low frequencies. By sheer mathematical skill Planck found a way to combine these two formulas into one new radiation formula that approximated the Rayleigh Law and Wien’s Law at either end of the range of frequencies, but that also filled in the gap in the middle. Planck presented his new radiation law to the German Physical Society on October 19, 1900. By then, it had already been confirmed within the limits of experimental error, and no deviations have been found from it since. The Twilight of Certainty Planck had achieved his goal of finding a statement of natural law that was about as close to absolute as any person can probably hope to achieve, but his formula was largely the product of inspired mathematical guesswork, and he still did not know why it was true. An explanation of his new law had to be The Quantum Is Born Boltzmann had argued that the entropy of any state is a function (the logarithm) of the probability of that state, but Planck had long resisted this statistical interpretation of entropy. He finally came to grasp that he had to make a huge concession and, in desperation, try Boltzmann’s methods. This meant that he had to determine the probability of a given distribution of energy among the cavity resonators. In order to calculate a probability, the possible energy distributions had to be countable, and so Planck divided the energies of the resonators into discrete chunks that he called quanta, Latin for “amount” or “quantity.” He then worked out a combinatorial expression that stated how many ways there are for these quanta to be distributed among all the resonators. (It was later shown by Einstein and others that Planck’s combinatorial formula was itself not much better than a wild guess—but it was a guess that gave the right answer!) The logarithm of this number gave him the expression for entropy he needed. There was one further twist: in order to arrive at the formula that experiment told him was correct, the size of these quanta had to be proportional to the frequencies of the resonators. 
These last pieces of the puzzle allowed him to arrive, finally, at a derivation of the formula for the distribution of energy among frequencies as a function of temperature. He announced his derivation on December 14, 1900, a date that is often taken to be the birthday of quantum mechanics. Planck was inspired by a calculation that Boltzmann had carried out in gas theory, in which Boltzmann also had taken energy to be broken up into small, discrete chunks. Boltzmann had taken the quantization of energy to be merely a calculational device that allowed him to apply probabilistic formulas, and the size of his energy units dropped out of his final result. This worked for classical gasses, where quantum phenomena do not make an explicit appearance. But Planck found that if he tried to allow his quanta to become indefinitely small, his beautiful and highly accurate formula fell apart. If he wanted the right answer, he had to keep the quanta. The constant of proportionality between energy and frequency was calculated by Planck from experimental data, and he arrived at a value barely 1% off the modern accepted value, which is 6.626 × 10–27 erg.seconds. (The erg is a unit of energy.) This new constant of nature is now called Planck’s constant of action and is usually symbolized with the letter h. Action, energy multiplied by time, is a puzzling physical quantity; it is not something like mass or distance that we can sense or picture, and yet it plays a crucial role in theoretical physics. Why action must be quantized, and why the quantum of The Quantum Revolution action should have the particular value that it has been measured to have, remain complete mysteries to this day. Historians of physics still debate the exact extent to which Planck in 1900 accepted the reality of energy quanta, and to what extent he still thought of them as a mathematical trick. In later years he tried without success to explain light quanta in classical terms. However, the indisputable fact Figure 1.4: Planck’s Law. The Rayleigh-Jeans Law fits the ex- remains that in 1900 he reluctantly perimental curve at long wavelengths, Wien’s Law fits the adopted Boltzmann’s statistical curve well at short wavelengths, and Planck’s formula fits the methods, despite the philosophicurve at all wavelengths. Illustration by Kevin deLaplante. cal and scientific objections he had had towards them for many years, when he at last grasped that they were the only way of getting the result that he knew had to be correct. Planck’s outstanding intellectual honesty rewarded him with what he described to his young son as a “discovery as important as that of Newton” (Cropper 1970, p. 7). Einstein and Light The Patent Clerk from Bern Physics today stands on two great pillars, quantum theory and the theory of relativity. Quantum theory was the work of many scientists, but the foundations of the special and general theories of relativity were almost entirely the work of one person, Albert Einstein. Einstein also played a large role in the creation of quantum mechanics, especially in its early stages; furthermore, he was among the first to grasp the extent to which quantum mechanics contradicts the classical view of the world. Later in his life he sought to replace quantum mechanics with what he thought would be a more rational picture of how nature works, but he did not succeed. He once said that he wanted above all else to understand the nature of light. Einstein was born of a Jewish family in Ulm, Germany, in 1879. 
His performance as a student was uneven, but he independently taught himself enough mathematics and physics that he was able to do advanced research by the time he was in his early 20s. He graduated from the Polytechnical Institute of Zurich, Switzerland, in 1900 with the equivalent of a bachelor’s degree, although he had a bad habit of skipping classes and got through the final exams only with the help of notes borrowed from a fellow student, Marcel Grossman (1878–1936). Einstein was unable to find an academic or research position and eked out a living with odd jobs, mostly tutoring and part-time teaching. Grossman again came to the rescue and through connections got Einstein an interview with the Swiss Patent Office in Bern. Einstein seems to have impressed the director of the office with his remarkable knowledge of electromagnetic theory, and in 1902 he was hired as a patent examiner-in-training, Technical Expert Third Class. The Patent Office suited Einstein perfectly. Here he found a quiet haven (and a modest but steady paycheck) that allowed him to think in peace. The 16The Quantum Revolution technical work of reviewing inventions to see if they were patentable was stimulating. In the years 1901–1904 he published five papers in Annalen der Physik (Annals of Physics), one of the most prestigious German research journals. All had to do with the theory and applications of statistical mechanics. One of his major aims in these papers was to find arguments that established without a doubt the actual existence of molecules, but he was also assembling the tools that would allow him to attack the deepest problems in physics. In three of his papers in this period, Einstein single-handedly reconstructed the statistical interpretation of thermodynamics, even though he had at that time no more than a sketchy acquaintance with the work of Boltzmann or the American J. W. Gibbs (1839–1903, the other great pioneer of statistical mechanics). Einstein’s version of statistical mechanics added little to what Boltzmann and Gibbs had already done, but it was an extraordinary accomplishment for a young unknown who was working almost entirely alone. Furthermore, it gave him a mastery of statistical methods that he was to employ very effectively in his later work on quantum theory. The Year of Miracles The year 1905 is often referred to as Einstein’s annus mirabilis (year of miracles). He published five papers: one was his belated doctoral thesis— an influential but not earth-shattering piece of work on the “determination of molecular dimensions”—while the other four changed physics forever. One showed that the jiggling motion of small particles suspended in liquids could be used to prove the existence of molecules; one laid the foundations of the theory of relativity; one paper took Planck’s infant quantum theory to its adolescence in a single leap; and one short paper (apparently an afterthought) established the equivalence of mass and energy. These papers, written and published within a few months of each other, represent one of the most astonishing outbursts of individual creativity in the history of Brownian Motion In one of his great papers of 1905 Einstein studied the “motion of small particles suspended in liquids” from the viewpoint of the “molecular-kinetic theory of heat” (i.e., statistical mechanics). 
This paper does not directly involve quantum mechanical questions, but it is important to the quantum story in that it was one of several lines of investigation in the period 1900–1910 (carried out by a number of scientists) that established beyond a reasonable doubt that atoms—minute particles capable of moving independently of each other—really do exist. Suppose that there are particles of matter (such as pollen grains), suspended in a liquid, invisible or almost invisible to the naked eye but still a lot larger than the typical dimensions of the molecules of the liquid. The molecules of the liquid bounce around at random and collide with the suspended Einstein and Light particles. Einstein realized that the statistics of such collisions could be analyzed using the same methods that are used to analyze gasses. A single collision cannot cause the particle to move detectably, but every so often (and Einstein showed how to calculate how often) a fluctuation will occur in which a cluster of molecules will just happen to hit the particle more or less in unison, and they will transfer enough momentum to make the particle jump. Over time the particles will zigzag about in a Figure 2.1: Fluctuations and Brownian Motion. Random flucrandom “drunkard’s walk.” This tuations in the jittering motion of the water molecules can is known as Brownian motion, cause the much heavier pollen grain to change direction. after the English botanist Robert This amounts to a localized violation of the Second Law of Brown (1773–1858), who in 1827 Thermodynamics. Illustration by Kevin deLaplante. observed pollen grains and dust particles mysteriously jittering about when suspended in water. Einstein derived a formula for the mean-square (average) distance the particles jump. Amazingly, his formula allows one to calculate a key quantity known as Avogadro’s Number, the number of molecules in a mole of a chemical substance. The Polish physicist Marian Smoluchowsky (1872–1917) had independently obtained almost the same results as Einstein, and their formulas soon passed experimental tests carried out by the French physicist Jean Perrin (1870–1942). This confirmation of the Einstein-Smoluchowsky picture provided one of the most convincing proofs that had yet been obtained of the reality of discrete atoms and molecules. It was also a demonstration that Boltzmann had been right in saying that the Second Law of Thermodynamics was only a statistical statement that holds on average, for a Brownian fluctuation amounts to a small, localized event in which entropy by chance has momentarily decreased. Einstein’s work on Brownian motion demonstrated his remarkable knack for finding an elegant, powerful, and novel result based on the consistent application of general physical principles. Special Relativity Relativity theory is not the main topic of this book, but it is impossible to understand quantum theory, and especially the challenges it still faces today, without knowing some basics of the theory that is most commonly linked to Einstein’s name. The problem that Einstein set himself in his great paper of 1905, “On the Electrodynamics of Moving Bodies,” was to unify mechanics with the 18The Quantum Revolution electrodynamics of Maxwell. Theorists of the late nineteenth century argued that one could make no sense of electromagnetic waves unless there was some fluid-like stuff which waved, and they imagined that the vacuum of space was filled with an invisible substance called the ether. 
If it existed, the ether had to have very odd properties, since it had to be extremely rigid (given the very high velocity of light) but at the same time infinitely slippery (since the vacuum offers no resistance to motion). The failure (in the 1880s and later) of all attempts to directly detect the motion of the Earth with respect to the ether is often cited as one of the factors that led to Einstein’s theory of relativity. Although Einstein was aware of these observations, they do not seem to have played a major role in his thinking. What really seems to have bothered him was the sheer lack of conceptual elegance that resulted from trying to force electrodynamics to be consistent with Newtonian mechanics. At the beginning of his 1905 paper, he points out that according to the electrodynamics of his time, the motion of a conductor with respect to a magnetic field had a different description than the motion of a magnet with respect to a conductor, despite the fact that only the relative motion of the two makes any difference to the actual phenomena observed. He argued that it should make no difference to the laws of physics whether they are described from the viewpoint of an observer in uniform (steady) motion or at rest. This is called the Principle of Special Relativity. To this he added the assumption, which he admitted might seem at first to contradict the Principle of Relativity, that the speed of light in vacuum must be the same (an invariant) for all observers regardless of their state of motion. In effect, he was insisting that we should take the observable value of the speed of light in vacuum to be a law of nature. (The qualification “in vacuum” is needed because light usually slows down when it passes through various forms of matter.) At the age of 16 Einstein had puzzled over the following question: what would happen if an observer could chase a beam of light? If he caught up with it, would the light disappear, to be replaced by a pattern of static electrical and magnetic fields? Nothing in Maxwell’s theory allows for this, a fact that led Einstein to the postulate that light is always light—for everyone, regardless of their state of motion. All of the startling consequences of the theory of relativity follow from the mathematical requirement that positions and time must transform from one state of motion to another in such a way as to maintain the invariance of the speed of light. Einstein expressed the speed of light in terms of the space and time coordinates of two observers moving with respect to each other at a constant velocity. He then set the resulting expressions for the speed of light for the two observers equal to each other and derived a set of equations that allow us to calculate distances and times for one observer given the distances and times for the other. These formulas are called the Lorentz transformations, after the Dutch physicist H. A. Lorentz (1853–1928). (Lorentz had been one of those who had tried without success to find a Newtonian framework for electrodynamics.) According to the Lorentz transformations, clocks run more Einstein and Light slowly in moving frames (this is called time dilation), lengths contract, and mass approaches infinity (or diverges) at the speed of light. But these effects are relative, since any observer in a relativistic universe is entitled to take him or herself to be at rest. 
For instance, if two people fly past each other at an appreciable fraction of the speed of light, each will calculate the other’s clock to be running slow. This contradicts the Newtonian picture, in which time is absolute—the same for all observers. The relativistic deviations from Newton’s predictions are very small at low relative velocities (although they can now be detected with sensitive atomic clocks) but become dominant at the near-light speeds of elementary particles. There are certain quantities, called proper quantities, which are not relative, however; hence the term “theory of relativity” is misleading because it does not say that everything is relative. As Einstein himself once noted, his theory could more accurately be termed the theory of invariants, because its main aim is to distinguish those quantities that are relative (such as lengths and times) from those that are not (such as proper times and rest masses). An important example of an invariant is elapsed proper time, which is the accumulated local time recorded by an observer on the watch he carries with him. The elapsed proper time of a moving person or particle determines the rate at which the person or particle ages, but it is path dependent, meaning that its value depends upon the particular trajectory that the person or particle has taken through spacetime. This is the basis of the much-debated twin paradox (not a paradox at all), according to which twins who have different acceleration histories will be found to have aged differently when brought back together again. This prediction has been tested and confirmed to a high degree of accuracy with atomic clocks and elementary particles. In 1908 the mathematician Hermann Minkowski (1864–1909) showed that special relativity could be expressed with great clarity in terms of a mathematical construct he called spacetime (now often called Minkowski Space), a four-dimensional geometrical structure in which space and time disappear as separate entities. (There are three spatial dimensions in spacetime, but it can be represented by a perspective drawing which ignores one of the spatial directions.) This is just a distance-time diagram with units of distance and time chosen so that the speed of light equals 1. Locations in spacetime are called worldpoints or events, and trajectories in spacetime are called worldlines. A worldline is the whole four-dimensional history of a particle or a person. The central feature of Minkowski space is the light cone, which is an invariant structure (that is, the same for all observers). The light cone at a worldpoint O defines the absolute (invariant) past and future for that point. The forward cone is the spacetime path of a flash of light emitted at O, while the past cone is the path of all influences from the past that could possibly influence O, on the assumption that no influences can reach O from points outside the light cone. (Such influences would be superluminal, traveling faster than light.) One of the central assumptions behind modern quantum mechanics and particle physics is that the light cone defines the causal structure of events in space and time; 20The Quantum Revolution that is, it delineates the set of all possible points in spacetime that could either influence or be influenced by a given point such as O. We will see that this assumption is called into question by quantum nonlocality, which we shall introduce in a later chapter. 
Minkowski showed how relativistic physics, including Maxwell’s electromagnetic theory, could be described geometrically in space­ time. An irony is that Minkowski had been one of Einstein’s mathematics professors at the Zurich Figure 2.2: Spacetime According to Minkowski. The curve Polytechnic, but he had not, at OE1 is the path of an ordinary particle with mass, such as an that time, been very impressed electron. OE0 is the path of a light ray emitted from O, and with Einstein’s diligence. it is the same for all frames of reference. OE2 is the path of a One of the most impressive hypothetical tachyon (faster than light particle). Illustration of special relativby Kevin deLaplante. ity is the famous equivalence of mass and energy. Physicists now commonly speak of mass-energy, because mass and energy are really the same thing. Any form of energy (including the energy of light) has a mass found by dividing the energy by the square of the speed of light. This is a very large number, so the mass equivalent of ordinary radiant energy is small. Conversely, matter has energy content given by multiplying its mass m by the square of the speed of light; that is, E = mc2. This means that the hidden energy content of apparently solid matter is very high—a fact demonstrated with horrifying efficiency in August 1945, when the conversion to energy of barely one-tenth of one percent of a few kilograms of uranium and plutonium was sufficient to obliterate two Japanese cities, Hiroshima and Nagasaki. It is commonly held that the theory of relativity proves that no form of matter, energy, or information can be transmitted faster than the speed of light. Einstein himself certainly believed this, and in his paper of 1905 he cited the divergence (blow-up) of mass to infinity at the speed of light as evidence for this view. Whether or not Einstein was right about this is a controversial issue that turns out to be crucial to our understanding of quantum mechanics. Some people hold that we can explain certain odd quantum phenomena (to be described later) only if we assume that information is somehow being passed between particles faster than light; others hotly deny this conclusion. The mathematical derivation of the Lorentz transformations and other central results of special relativity depends only on the assumption that the speed of light is an invariant (the same for all observers in all possible states of motion), not a limit. Some physicists suspect that the existence of hypothetical Einstein and Light tachyons (particles that travel faster than light) is consistent with the basic postulates of relativity (although they have never been detected), and whether or not any form of energy or information could travel faster than light remains an open question that quantum mechanics will not allow us to ignore. The Quantum of Light Einstein’s 1905 paper on the light quantum is entitled “On a Heuristic Viewpoint Concerning the Generation and Transformation of Light.” To understand Einstein’s use of the word “heuristic” we have to say a little about the history of light. Newton carried out pioneering experiments in which he showed that light from the sun can be spread out by a prism into the now-familiar spectrum (thereby explaining rainbows). In his Opticks (1704), Newton speculated that light moves in the form of tiny particles or corpuscles. The English polymath Thomas Young (1773–1829) showed, however, that the phenomena of interference and diffraction make sense only if light takes the form of waves. 
If light is shone through pinholes comparable in size to the wavelength of the light, it spreads out in ripples in much the way that water waves do when they pass through a gap in a seawall. Young’s views were reinforced by Maxwell’s field theory, which showed that light can be mathematically interpreted as alternating waves of electric and magnetic fields. The wave theory became the dominant theory of light in the nineteenth century, and the corpuscular theory was judged a historical curiosity, a rare case where the great Newton had gotten something wrong. Einstein’s brash “heuristic hypothesis” of 1905 was that Newton was correct and light is made of particles after all. To say that a hypothesis is heuristic is to say that it gets useful results but that it cannot be justified by, or may even contradict, other accepted principles. Thus it is something that we would not accept unconditionally, but rather with a grain of salt, hoping that in time we will understand the situation more completely. Einstein taught us something not only about light, but about scientific method: if you find a fruitful hypothesis, do not reject it out of hand because it clashes with what you think you already “know.” Rather, learn as much from it as you can, even if you are not yet sure where it fits into the grand scheme of things. Einstein knew perfectly well that there was abundant evidence for the wave interpretation of light. However, he showed that the hypothesis that light moves in the form of discrete, localized particles, or quanta, could explain some things about light that the wave theory could not explain, and would lead to the prediction of new phenomena that would not otherwise have been predictable. The starting point of Einstein’s argument in his 1905 paper on the light quantum was Wien’s blackbody energy distribution law. Even though Wien’s formula had been superseded by Planck’s Law, Einstein knew that it is still fairly accurate for high frequencies and low radiation density. Planck had worked forward from an expression for entropy to his energy distribution 22The Quantum Revolution law; in contrast, Einstein worked backwards from Wien’s law to an expression for the entropy of high-frequency radiation in equilibrium in a blackbody. Einstein demonstrated the remarkable fact that this quantity has the same mathematical form as the entropy of a gas, a system of independent, small, localized particles bouncing around freely in space. At least when Wien’s law can be used, therefore, light behaves not only like waves, but also very much like a gas of particles whose energy happens to be given by the product of Planck’s mysterious constant of action and the frequency of the light waves. Einstein applied this result to three types of phenomena where light interacts with matter. The most important historically was the photoelectric effect. It had been known for some years that light shone on a metal surface causes the emission of electrons from the metal, but the laws governing the process were not well understood. Suppose that light interacts with the electrons as a continuous wave. If the frequency of the light were held constant, a steady increase in the intensity (brightness) of the light would increase both the number and energy of the electrons knocked out of the metallic surface because the light would transmit its energy continuously to the electrons. 
However, if light interacts with the electrons in the form of discrete particles whose energy is given by the Planck Law, then whether or not the light has enough energy to knock the electrons loose would depend strictly on the frequency of the light. An increase in intensity would only increase the number of electrons emitted, not their energies. A test of this prediction of Einstein's would therefore be a crucial test of the light particle hypothesis.

Einstein's theory of light quanta was regarded as very radical. Even Planck, who tirelessly championed Einstein's work, thought that Einstein had overreached. The American physicist Robert Millikan (1868–1953) set out to disprove Einstein's wild hypothesis, but to his surprise his careful experiments (carried out around 1915) confirmed Einstein's theory of the photoelectric effect. The term photon for the light particle was introduced in 1926 by the American chemist Gilbert Lewis (1875–1946), and it quickly caught on.

In the story of the photon we see the first clear sign of a phenomenon that would plague quantum theory from this point onward: the necessity of living with points of view that seemed to be in outright contradiction to each other. Is light really made of waves (which are continuous), or particles (which are discrete)? It was already apparent to Einstein in 1905 that we must accept the wave-particle duality, which is the fact that light somehow must be both continuous and discrete. In his 1905 paper Einstein sketched (nonmathematically) a possible resolution of the anomaly. The key, he pointed out, is that there is a distinction between what applies to averages and what applies to individual cases. (Think of the old joke that the average American family has 2.4 children.) Einstein argued that while we can only understand the interactions of light with matter if we think of light as particulate, the statistics of very large numbers of light particles average out to the wave-like behavior implicit in Maxwell's Equations. The challenge would then be to find the rules that related the behavior of light as individual quanta to their large-scale wave-like behavior. The only thing that Einstein could be sure of in 1905 was the Planck relation between energy and frequency: light quanta have an energy that is the product of Planck's constant of action and the frequency of the light wave with which the quanta are "associated." However, the nature of that association remained utterly mysterious.

Einstein Arrives

Specific Heats

In the years immediately following Einstein's annus mirabilis, few apart from Planck understood Einstein's theory of relativity, and even Planck did not fully grasp Einstein's insights about the light quantum. However, in 1907 Einstein published a paper in which he applied the quantum hypothesis to another unsolved problem, the puzzle of specific heats. This paper, in effect, founded solid state physics, the quantum mechanical study of solid matter. The specific heat of a substance (solid, liquid, or gas) is the amount of energy required to raise its temperature by one degree. It can also be thought of as the heat capacity of the substance, the amount of heat that can be absorbed by a given amount of the substance at a certain temperature or pressure; equivalently, it is the amount of heat that would be given off if the substance's temperature were lowered by one degree.
The puzzle was that classical mechanics predicted that all solids should have exactly the same specific heat (about six calories per mole per degree). This is called the Dulong-Petit Law, after P. Dulong (1785–1838) and A. Petit (1791–1820), who established experimentally that their law works fairly well at room temperatures or higher. By the late nineteenth century it became technically possible to measure specific heat at lower temperatures, and it was soon clear that below room temperature most solids have much smaller specific heats than the Dulong-Petit Law says they should. The classical Dulong-Petit result follows from the assumption of the equipartition of energy (the idea that the energy of the system is shared equally among all possible vibration modes), the same assumption that tripped up Lord Rayleigh in the blackbody story. Einstein applied a quantum correction factor from blackbody theory to the classical Dulong-Petit result and arrived at a formula for specific heats that predicted the drop-off at low temperatures. Experimental work by Walther Nernst (1864–1941) showed, by 1910, that Einstein's formula came quite close to predicting the actual measured values for many materials. In 1912 the Dutch physicist Peter Debye (1884–1966) filled in some gaps in Einstein's theory and arrived at a formula that predicted the specific heats of most substances with remarkable accuracy. The success of the quantum theory of specific heats convinced most physicists that the quantum—and Einstein—had to be taken seriously.

The Wave-Particle Duality

By 1908 Einstein was recognized by Planck and a few other senior physicists as an up-and-coming star, even if they were not always sure what he was talking about. It was time for Einstein to move on from the Patent Office. Getting an academic position was no simple matter, however, even for someone with a Ph.D. and the amazing publication record that Einstein had already built up by 1908. He had to serve time as a Privatdozent—an unsalaried lecturer—at the University of Bern, before he secured a professorship at the University of Zurich in 1909 and could finally resign from the Patent Office. Einstein's friend Friedrich Adler (1879–1966) had been recommended for the position because his father was a powerful political figure. However, Adler, a person of selfless generosity, insisted that there was no comparison between his abilities as a physicist and Einstein's and argued that Einstein should have the job instead.

We have already noted that our story can be viewed in several ways: as a history of ideas, a history of great papers, or a history of decisive experiments. It can also be viewed as a history of conferences. In 1909 Einstein delivered an influential paper at a conference in Salzburg in which he argued for the reality of light quanta and the necessity of accepting the wave-particle duality. In his work in 1909 Einstein applied his mastery of fluctuation theory to light itself. Light fluctuations are the short-lived deviations from average energy that occur in a radiation field. Einstein showed that the expression for fluctuations must consist of two parts, a contribution from the wave-like behavior of light, and one from its particle-like behavior. (The latter is similar to the fluctuations that cause Brownian motion.) It was another powerful argument for the wave-particle duality, and another demonstration that the quantum was not going to be explained away merely by fine-tuning the classical wave theory of light.
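For readers who want to see what lies behind this argument, Einstein's 1909 fluctuation result can be written in a standard modern form (the notation here is ours, not Einstein's). For blackbody radiation in a small volume V and narrow frequency interval dν, the mean square energy fluctuation is roughly

\[
\langle \varepsilon^{2} \rangle \;=\; \left( h\nu\,\rho \;+\; \frac{c^{3}}{8\pi\nu^{2}}\,\rho^{2} \right) V\, d\nu ,
\]

where ρ is the energy density of the radiation at frequency ν. The first term is exactly what a gas of independent particles, each carrying energy hν, would give; the second is what classical Maxwellian waves alone would give. Neither term can be dropped, which is the mathematical core of Einstein's case for the wave-particle duality.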
Einstein was a central figure in the First Solvay Conference, held in Brussels, Belgium, in 1911. This conference brought many of the leading physicists of Europe together to debate the challenge posed to physics by the quantum. The Solvay Conferences on Physics, founded by the Belgian industrialist Ernest Solvay (1838–1922), are held every three years and continue to provide an important stimulus to the development of physics. Einstein rapidly moved through a succession of increasingly senior academic positions until, in 1913, he was appointed professor at the University of Berlin. He refused to support the German war effort and worked obsessively on his research through the dark years of World War I. In the period from 1917 to 1925 he made further decisive contributions to quantum mechanics, but these will be more easily understood after we describe the work of Niels Bohr and others in the next chapter.

Einstein Maps the Universe

Almost as soon as he had laid the groundwork of the theory of relativity in 1905, Einstein puzzled over ways to describe gravitation in a way that would be consistent with the Principle of Relativity. Newton's Law of Gravitation makes no reference to the time it takes for the gravitational interaction to propagate from one mass to another; in other words, it describes gravity as a force that acts instantaneously at a distance. Newton himself was very uncomfortable with this picture (since he thought it obvious that unmediated action at a distance was absurd), but he could find no better explanation of gravity. Einstein thought that it should be possible to describe gravity as a field, similar to Maxwell's electromagnetic field although perhaps with a more complex structure, but none of the apparently obvious ways of doing this worked. The breakthrough came in 1907, when (sitting at his desk in the Patent Office) Einstein had what he described as the "happiest thought of my life" (Pais 1982, p. 178). It simply struck him that a person falling freely feels no gravitational force. We only feel the "force" of gravity when free fall is opposed by something; for example, when we stand on the surface of the Earth. This insight led Einstein to his Equivalence Principle: suppose an astronaut is in a rocket with no windows, and suppose that she experiences a constant uniform acceleration. There is no way she can tell whether the rocket motor is burning in such a way that the spacecraft accelerates constantly in a given direction, or whether it is sitting on the launch pad but subject to a uniform gravitational field. All motion, including the accelerated motion caused by the gravitational field, is relative. Gravitation is therefore remarkably like an inertial force such as the centrifugal force that tries to throw us off a rotating merry-go-round. The centrifugal force is merely a consequence of the mass of our bodies trying to obey Newton's First Law and keep moving ahead in a straight line against the resistance of the merry-go-round, which is trying to hold us to a circular path. Similarly, Einstein realized that gravitation would be merely the consequence of our bodies trying to move along inertial paths, if inertial paths were curved by the presence of matter. The only sign that there is a gravitational field present would be the deviation from parallelism of the trajectories of freely falling bodies. For instance, two weights released side by side will move toward each other as they fall toward the center of the Earth.
Generalizing relativity theory so that it would describe all possible accelerated motions would therefore give a relativistic description of gravitation, but it would require that space and time be curved. Einstein needed to find a way of describing objects in arbitrarily curved geometric spaces. Again, his friend Marcel Grossman helped, by pointing out that the mathematics he needed, the tensor calculus, had recently been invented. After some false starts and much hard work, Einstein published a monumental paper in 1916 that set forth his general theory of relativity. His relativity theory of 1905 is, by contrast, called the special theory of relativity because it is concerned only with relative motions in flat (uncurved) spacetime. General relativity describes gravitation by means of a set of field equations, which state, very roughly speaking, that the shape of space is determined by the distribution of mass-energy in space. There are many possible solutions of Einstein's field equations; each valid solution is a metric that describes a possible spacetime geometry. It has taken mathematical physicists many years to find, classify, and study the solutions of Einstein's equations, and it is still not clear that all physically interesting solutions have been found. However, Einstein was able to show that the light from a distant star would be bent by a certain amount by the gravitational field of the sun. This was verified in 1919, when a total eclipse of the sun allowed astronomers to measure the deflection of starlight passing very close to the edge of the sun and compare it with the Newtonian prediction. Einstein also showed that his theory could explain a small deviation in the orbit of Mercury that no one had been able to account for. With these successful predictions, Einstein was hailed as the successor to Newton, and he spent the rest of his life coping with the sort of adulation that usually only movie stars are subjected to. General relativity allows us to describe a vast range of possible universes, whose geometries are determined by the way mass-energy is distributed. As Misner, Thorne, and Wheeler (1973) put it, "Space tells matter how to move, and matter tells space how to curve." Some solutions of the field equations have bizarre properties. For instance, in 1935 Einstein and his young coworker Nathan Rosen (1909–1995) found a solution of the field equations that described wormholes (or "bridges") that could apparently link distant points in spacetime. (No one knows whether wormholes exist or whether they are merely a mathematical curiosity.) And in 1948 the mathematician Kurt Gödel showed that Einstein's theory allowed for a rotating universe in which it is possible to travel backwards in time. In 1917, Einstein was startled to discover that his equations apparently predict that the whole universe is unstable—it must either expand or contract, but it cannot remain static. He thought that this had to be a mistake, and inserted a "fudge factor" (the cosmological constant) into his equations in order to stabilize the universe. However, in the 1920s Edwin P. Hubble (1889–1953) and other astronomers, using powerful new telescopes, discovered that the universe does indeed expand. Einstein later declared that his introduction of the cosmological constant had been the greatest scientific mistake of his life.
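For reference, the field equations, including the cosmological constant, are usually written today in a form along these lines (the notation is the modern textbook convention, not Einstein's 1916 notation):

\[
R_{\mu\nu} \;-\; \tfrac{1}{2} R\, g_{\mu\nu} \;+\; \Lambda\, g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\]

where the left-hand side is built from the metric g and its derivatives and describes the curvature of spacetime, the tensor T on the right describes the distribution of mass-energy, and Λ is the cosmological constant, Einstein's "fudge factor." Setting Λ to zero recovers the original 1916 theory.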
However, recent work seems to show that the cosmological constant may be non-zero after all and is such that it tends to cause an acceleration of the expansion of the universe. Astronomers are still trying to measure the actual expansion rate of the universe as precisely as possible. Einstein's general theory of relativity has so far stood up to every experimental test that astronomers have been able to devise. It is clear that general relativity is by far our most accurate description of the large-scale structure of spacetime, even though we still do not know why mass-energy should curve spacetime, or which of the myriad possible universes described by Einstein's field equations is the one we actually inhabit. As we shall see, one of the major theoretical challenges of our time is to make general relativity consistent with quantum mechanics, the other great theory that Einstein did so much to create.

Einstein's Later Years

Einstein was forced to flee Germany in 1933 when Hitler took power. He settled in Princeton, New Jersey, as the star attraction at the new Institute for Advanced Study. Over the next few years, he devoted much effort to helping other refugees from Nazi persecution find positions in the free world. Although Einstein had played a major role in the early years of quantum mechanics, he eventually became the leading critic of quantum mechanics rather than a contributor to it. In 1935, in collaboration with Boris Podolsky and Nathan Rosen, Einstein published an enigmatic article called "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" This paper (usually dubbed the EPR paper, for Einstein, Podolsky, and Rosen) is now recognized as one of Einstein's most influential contributions to physics—although not exactly in the way he had intended! The EPR paper, the debate surrounding it, and its implications will be described in Chapter 6. In 1939, Einstein signed a letter drafted by his friend the Hungarian physicist Leo Szilard (1898–1964), urging President Franklin Delano Roosevelt to develop the atomic bomb before the Nazis did. Einstein forever regretted his role in triggering the development of nuclear weapons, and, in collaboration with other intellectuals such as the British philosopher Bertrand Russell (1872–1970), campaigned for nuclear disarmament. Throughout the last 30 years of his life he made repeated attempts to construct a field theory that would unify gravitation and electromagnetism in much the way that Maxwell had unified magnetism and electricity. His great hope was to show that particles could be seen as knot-like structures within some sort of universal field, thereby reattaching quantum mechanics and particle theory to a solid classical footing. He never succeeded, although he made many interesting suggestions that are still being digested today. He died in Princeton in 1955, the most revered scientist of modern times.

The Bohr Atom and Old Quantum Theory

Things Get Complicated

The quantum mechanics of 1900 to 1925 is often referred to as the Old Quantum Theory, to distinguish it from the modern quantum mechanics that will emerge suddenly in the mid-1920s. The earliest years of the Old Quantum Theory, 1900 to about 1910, were dominated by two figures, Planck and Einstein. After this point the development of quantum mechanics starts to get complicated as more and more physicists get involved and new lines of investigation branch out.
If there could be one person to whom the period from 1913 to 1925 belongs, it would be the Danish physicist Niels Bohr (1885–1962). However, before we can describe what Bohr did we need to understand the two convergent streams of investigation, spectroscopy and nuclear physics, which made his work possible.

Take a chemically pure sample of any element and heat it to incandescence—that is, to the point at which it glows. Pass the light through a narrow slit and then through a prism. The prism will fan the light rays out into a band with frequency increasing uniformly from one side to the other (or equivalently, wavelength decreasing). A series of bright, sharp lines will be observed—the emission spectrum of that particular element. It was discovered in the 1860s that each element has a unique spectrum like a fingerprint, which allows it to be identified regardless of how it is combined chemically with other elements. If the element is immersed in a very hot gaseous medium (such as the atmosphere of a star) its spectrum takes the form of a series of dark lines called an absorption spectrum. These dark lines made it possible to determine what elements were present in distant stars, a feat that some philosophers had thought would be impossible. Spectroscopy (the study of spectra) shows us that any matter in the universe that is hot enough to emit light is composed of the same familiar elements that we find on earth, and it quickly became one of the most powerful tools of the astronomer.

The line spectra of the various elements exhibit regular patterns, with the lines generally being spaced more closely together at higher energies. The puzzle was to understand the rule that governed the patterns of the spectral lines. Physicists in the nineteenth century realized that the spectral fingerprints had to be a clue to the inner structure and dynamics of the atoms that emitted them, but they were unable to find an atomic model that correctly predicted the observed spectral lines. A breakthrough came in 1884, when a Swiss schoolteacher named Johann Balmer (1825–1898) discovered an elegant formula that expressed the wavelengths of the hydrogen lines, the simplest spectrum, in terms of certain combinations of integers. Similar formulas were discovered by several other investigators, and by the late 1890s spectroscopists could identify many known elements by their spectra. Improvements in experimental technique allowed observation of atomic spectra from the infrared to the ultraviolet, and spectral series in these frequency ranges were discovered and described as well. In 1908 W. Ritz (1878–1909), building on earlier work by J. R. Rydberg (1854–1919), showed that complex spectral lines are arithmetical combinations of simpler lines, called terms. (This is the Rydberg-Ritz combination principle.) Their formula depended on a number, which became known as the Rydberg constant, which could be estimated from spectral measurements. Frustratingly, however, there seemed to be no way to derive the value of the Rydberg constant theoretically, and no way of inferring atomic structure from the formulas of Balmer, Rydberg, and others. To go any further, something needed to be known about what went on inside the atom itself.
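Balmer's formula, in the form later given to it by Rydberg, is simple enough to state here (in modern notation):

\[
\frac{1}{\lambda} \;=\; R\left( \frac{1}{m^{2}} - \frac{1}{n^{2}} \right), \qquad n > m,
\]

where λ is the wavelength of a spectral line, m and n are whole numbers, and R is the Rydberg constant (about 1.097 × 10⁷ per meter for hydrogen). Balmer's original series is the case m = 2, with n = 3, 4, 5, . . . giving the visible hydrogen lines. The combination principle is already visible in the structure of the formula: every line is the difference of two "terms" of the form R/n².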
The Electron

Figure 3.1: Spectral Lines. H∞ is the beginning of the continuum, the energy range where the electron is separate from the atom and can move freely. Illustration by Kevin deLaplante.

As we have seen, up until the first decade of the twentieth century many physicists and chemists believed that matter was continuous and discrete atoms did not really exist. The age of elementary particles began in 1897, when the British physicist J. J. Thomson (1856–1940) discovered the electron. Thomson had been studying cathode rays, which are beams of energetic electrified matter emitted by the cathode (the negative electrode) in a vacuum tube. Thomson showed that cathode rays had to be made of something that was negatively charged, and he argued that the simplest explanation for the behavior of cathode rays was that they were made up of tiny negative particles, or corpuscles ("little bodies") of electric charge. In 1891 G. J. Stoney (1826–1911) had coined the term "electron" for hypothetical particles of electricity, and it soon came to be applied to Thomson's corpuscles. Thomson also showed that his corpuscles must be very small, less than one thousandth of the mass of the hydrogen atom. Just as Planck and Einstein were about to learn that electromagnetic energy comes in discrete quanta, Thomson had shown that electricity—at least, negatively charged electricity—also occurs only in chunks of a definite size. The charge of the electron is now taken as the basic unit of electrical charge, and the electron became the first of the hundreds of so-called "elementary" particles that were to be identified.

But we have to be careful when we say that Thomson discovered the electron. Physicists do not discover particles the way that archeologists discover the long-lost tombs of Egyptian pharaohs. Nobody has ever seen an electron or any other elementary particle with the naked eye (although we can come fairly close to doing that with devices such as the cloud chamber, which show the tracks of charged particles). It is more accurate to say that a particle is "discovered" when it is shown that its existence is the only reasonable explanation for certain observed phenomena. Thomson's corpuscular hypothesis was at first met with much skepticism, but within the next few years it became clear that it had to be right. Another source of confirmation for the electron hypothesis was the Zeeman Effect, discovered in 1896 by the Dutch physicist P. Zeeman (1865–1943). The Zeeman Effect is the splitting of spectral lines into small groups of closely separated lines of slightly different energies or frequencies under the influence of a strong magnetic field. H. A. Lorentz soon showed that it could be explained if the light from the hot atoms was emitted by negatively charged particles with a mass around one-thousandth that of a hydrogen atom, and if those particles have a magnetic moment, meaning that they are like tiny magnets that can interact with an external magnetic field. This was not only further evidence for the existence of electrons, but showed that electrons had to be present within the atom itself as well as in the cathode rays studied by Thomson. Lorentz's model was entirely classical, and it was several years before physicists realized that it had worked so well only because in the correct quantum description of the electrons interacting with the magnetic field, the value of Planck's constant neatly cancels out of the equations! By not long after 1900 it was clear to physicists that the spectra of atoms had to be a sign of the periodic motions of the electrons within them. But what laws of motion did the electrons follow?
Before it would be possible to begin to answer this question, more had to be known about the inner structure of the atom.

The Nucleus

Rays and Radioactivity

In the last decade of the nineteenth century and the first decade of the twentieth century several extraordinary new types of radiation were discovered that could not be understood using the concepts of classical physics. In 1895 the German physicist Wilhelm Roentgen (1845–1923) discovered and named X-rays. (The name "X-ray" was meant by Roentgen to suggest an unknown, as in algebra.) Although several other investigators had accidentally produced what we now know were X-rays, Roentgen was the first to realize that they were a novel phenomenon. X-rays are produced by the process of accelerating cathode rays through an electrical field (in the range of a few thousand volts) and allowing them to collide with a metal target. Most materials, if bombarded with sufficiently energetic electrons, will emit X-rays, and it was eventually found that all elements had a characteristic X-ray spectrum. Exactly how the cathode rays force a target to emit X-rays, however, would not be properly understood until the advent of quantum mechanics. Roentgen and others suspected almost immediately that X-rays are a form of electromagnetic radiation much more energetic than ultraviolet light, but it would not be until 1912 that this was confirmed, when Max von Laue (1879–1960) showed that X-rays would diffract, just like light, with a very short wavelength when shone through a crystalline lattice. Roentgen also noticed that X-rays would image the bones of the human hand, and by 1896 X-ray devices were being used to diagnose broken bones and find bullets in wounded soldiers. Roentgen would be the first recipient of the Nobel Prize in Physics, in 1901.

Nuclear physics began in 1896, when the French physicist Henri Becquerel (1852–1908) discovered that salts of uranium would fog a photographic plate even if they were wrapped in heavy black paper. The term "radioactivity" itself was coined around 1898 by Marie Sklodowska Curie (1867–1934), a brilliant young student of physics at the Sorbonne in Paris who married the French physicist Pierre Curie (1859–1906) in 1895. Marie Curie discovered that thorium was also radioactive, and (with Pierre's help) carried out an exhausting chemical analysis of uranium tailings, which led in 1898 to the isolation of two new elements, polonium and radium, that were hundreds of times more radioactive than uranium itself. Meanwhile, in 1902, Ernest Rutherford (1871–1937) and his assistant Frederick Soddy (1877–1956), working at McGill University in Montreal, grasped that radioactive decay consists of the actual transmutation of one element into another, as radioactive elements break down into other elements of lighter atomic weight. Rutherford was also the first to state the law of radioactive decay: unstable elements such as radium break down exponentially, at a rate proportional to the quantity of the element present, with a characteristic half-life. For example, a given quantity of radium will decay to half the original amount in about 1,600 years. Although it was not immediately apparent at the time, the law of radioactive decay hinted at one of the most profound mysteries of the quantum world—its inherently probabilistic nature.
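Rutherford's decay law can be stated compactly. If N₀ atoms of a radioactive element are present at time zero, then (as a sketch in modern notation)

\[
N(t) \;=\; N_{0}\, e^{-\lambda t}, \qquad t_{1/2} \;=\; \frac{\ln 2}{\lambda},
\]

where λ is a decay constant characteristic of the element and t₁/₂ is the half-life. For radium, with its half-life of roughly 1,600 years, about half of any sample remains after 1,600 years, a quarter after 3,200 years, and so on; but the law says nothing at all about which individual atoms will be the ones to decay.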
Why does one radium atom rather than another break down at a given time, even though we have absolutely no way of detecting any difference between them? We still have no clear answer to this question, although we can now say a lot more than could anyone in Rutherford’s time about why we cannot answer this question. The early investigators of radioactivity were astonished at the huge amounts of energy that poured out of radioactive atoms in the form of the mysterious “uranic rays.” The energy of radioactivity can be a million times or more the energy released by the most powerful chemical reactions. It was remotely possible to imagine a classical explanation for the production of X-rays, since Maxwellian electrodynamics predicts that if a charged particle such as an electron is slowed down by collision with matter it must emit electromagnetic radiation. However, there was absolutely no way to account for the prodigious amounts of energy that was emitted by whatever lurked within the heart of the atom. Becquerel and Marie Curie at first explored the notion that radioactive elements might be somehow picking up some sort of previously undetected field of energy spread throughout space, or possibly absorbing energy from the sun; and Curie even toyed with the notion that the law of conservation of energy was being violated. (This would not be the last time that bizarre quantal phenomena would force physicists to speculate that Planck’s sacred first law of thermodynamics was violated at the quantum level.) However, Marie Curie and most other physicists soon realized that the energy was more likely coming from inside the atoms themselves, and she argued (also correctly) that there had to be a slow but measurable loss of mass from radioactive atoms. As early as 1914 the science fiction novelist H. G. Wells (1866–1946) predicted that the energy inside the atom could be used to create weapons of appalling power, and he coined the very term “atomic bomb.” It was soon found that Becquerel’s uranic rays consisted of three components, which were dubbed by Rutherford alpha, beta, and gamma radiation. These are quite different in character. In 1908, after a decade of painstaking investigation, Rutherford demonstrated that the alpha particle is an ion of helium, with a positive charge of two and an atomic weight of roughly four times the weight of the hydrogen atom. It was spectroscopy that clinched the identification of the alpha particle as a form of helium, but no one at this time had any clear idea why it should have the charge and mass it had. Every high school student today knows that the helium nucleus is made up of two protons and two neutrons, and some will even know that there is another isotope of helium with only one neutron. However, in 1905 neither protons nor neutrons had been identified and the concept of an isotope (same atomic number, different numbers of neutrons) was unknown; nor was it yet established (although some suspected) that electrons orbit around a compact nucleus or that the number of electrons in a neutral atom equals the atomic number. It is challenging to describe what physicists in the first decade of the twentieth century knew of the structure of the atom without carelessly lapsing into modern language. One can easily forget how much hard work by men The Quantum Revolution and women of extraordinary talent went into establishing the “obvious” facts that we take for granted today. 
Their work was often physical as much as intellectual; the early students of the atom such as Becquerel, the Curies, and Rutherford often had to build virtually all of their apparatus by hand, and the Curies labored like coal miners to extract radium from uranium ore, dangerously exposing themselves to radiation in the process. Beta rays were shown by several investigators to be none other than Thomson's negative corpuscles—electrons—although possessing extraordinarily high energy. This was demonstrated as early as 1902, but beta radiation had certain mysterious properties that would take decades to fully understand. Gamma radiation was discovered by Paul Villard (1860–1934) in 1900, but it was not until 1914 that Rutherford and E. N. Andrade (1887–1971) conclusively showed that it is a form of electromagnetic radiation, no different from light, radio waves, or X-rays, but with by far the highest energies ever observed.

These three types of atomic radiation, together with X-rays, are sometimes called ionizing radiation, since they have enough energy to knock electrons out of the atoms they hit and thereby produce ions—charged atomic fragments. Nuclear radiation can shatter delicate biomolecules like a rifle bullet hitting the motherboard of a computer. It took many tragic deaths before the medical dangers of radioactivity were recognized. Marie Curie herself very likely died from the aftereffects of radiation exposure. But the powerful rays emitted from radioactive substances also gave scientists, for the first time, a tool with which they could probe the heart of the atom itself.

Search for a Model

Several physicists from about 1900 to 1912 proposed atomic models with a central positive core and electrons either orbiting around it like planets around the sun or like the rings of Saturn. But none of these models could explain either the stability of the atom (the fact that it does not immediately collapse in a flash of light) or the observed rules of the spectral lines. The most widely accepted theory of atomic structure up to about 1913 was J. J. Thomson's "plum pudding" model. Thomson imagined that the electrons were embedded like plums throughout a pudding of uniform positive charge, with the number of electrons equal to the atomic number of the atom, so that the overall charge would add up to zero. At this point there was no clear evidence that positive charge comes in discrete corpuscles. Thomson imagined the electrons to be swarming around in shells or rings in such a way that their electrostatic repulsions balanced, and he was encouraged by the fact that with some mathematical fudging he could get the structure of the shells to suggest the layout of the periodic table of the elements. The model was quite unable to predict the frequencies of observed spectral lines, however, and some physicists were beginning to fear that this would forever be an impossible task. But Thomson's model did at least give some account of the stability of the atom, and in the period of about 1904 to 1909, with still no direct evidence for the existence of a central positive core, the Thomson model seemed like the best bet—although likely few physicists (including Thomson himself) really believed in it.

Rutherford Probes the Atom

Ernest Rutherford, who in 1895 earned his passage from his homeland, New Zealand, to J. J.
Thomson’s laboratory at Cambridge on the strength of scholarships and hard work, was an energetic and ingenious experimentalist who could learn enough mathematics to do theory when he had to. Rutherford was the epitome of scientific leadership; many of the key discoveries that founded nuclear physics emerged from his students and coworkers, and he had a powerful physical intuition that could quickly grasp the meaning of an unexpected experimental result. At the University of Manchester, England, in 1909, Rutherford, Hans Geiger (1882–1945), and Ernest Marsden (1889–1970) carried out a series of experiments that established the existence of the atomic nucleus. They did this by firing alpha particles from radium through a thin layer of gold foil. Gold was chosen because it is dense (thus more likely to interact with the alpha particles) and malleable. The way that particles scatter off a target can give a lot of information about its structure, and scattering experiments, of which Rutherford’s was the prototype, would become one of the most important tools of modern physics. Calculations showed that if Thomson’s plum pudding model was correct, alpha particles would be scattered by only a few degrees as they plowed through the gold atoms. Instead, Rutherford was amazed to discover that some alpha particles were scattered away at angles of 90 degrees or greater, with a few even rebounding almost straight backwards. Rutherford later said that this was as surprising as if an artillery shell had bounced off a piece of tissue paper. Rutherford published a pivotal article in 1911 in which he showed that this wide-angle scattering could be explained mathematically if all the positive charge in the atom was concentrated in a tiny volume in or near its center, with the negative electrons orbiting this central core. The force that caused the alpha particles to scatter was simply the electrostatic or Coulomb repulsion between the positively charged alphas and the positive atomic core. This picture implied that most of the atom had to be empty space, a conclusion that is in sharp contradiction to our commonsense beliefs about the solidity of matter. In 1912 Rutherford coined the term nucleus for the dense, positively charged nugget of matter at the heart of the atom. Rutherford’s nuclear model of the atom was met with skepticism by Thomson, whose refusal to give up his plum pudding model may seem like obstinacy in the face of Rutherford’s scattering experiments. But Rutherford’s nuclear model had a huge liability. Maxwell’s theory states that an accelerating electrical charge must radiate electromagnetic waves continuously. If negative electrons really do orbit around a positive nucleus, they would quickly radiate away all of their energy and spiral into the nucleus. All of the matter in the The Quantum Revolution universe should have long ago collapsed in a great flash of light. Furthermore, according to classical electromagnetic theory, moving electrons should give off radiation in a continuous range of frequencies, not in the sharp bursts that produce the observed line spectra. If Rutherford’s model was correct, then classical electromagnetic theory was in contradiction with both the existence of line spectra and the simple fact that ordinary matter is stable. Thomson’s plum pudding model seems implausible now, but it was in fact a sensible attempt to account for the stability of matter using the tools of classical physics. 
And yet, the nuclear model was the only possible explanation of the wide-angle scattering of Rutherford's alpha particles. As Niels Bohr was soon to show, the missing ingredient was the quantum.

The Danish physicist Niels Bohr (1885–1962) is second only to Einstein in his influence on modern physics. Bohr finished his Ph.D. in Copenhagen in 1911, and then traveled to England to work with J. J. Thomson at the Cavendish Laboratory in Cambridge. Thomson did not approve of the fact that Rutherford's radical views about atomic structure had caught the imagination of the young Dane, and in 1912 Bohr moved to Rutherford's lab in Manchester, where he received a more sympathetic reception. (The vigorous Rutherford was also impressed with the fact that Bohr played soccer.) Bohr was soon to solve the problem of the spectra, at least for the hydrogen atom. However, his resolution of the spectral puzzle was an unexpected bonus, because around 1912 to 1913 he was mainly concerned with understanding the puzzle of the mechanical stability of the atom—why the electrons do not spiral helplessly into the nucleus. Before any progress could be made on this very large problem, it was necessary to get straight on the number of electrons in the hydrogen atom. This is, again, one of those things that seems "obvious" now, but that required a lot of hard work to settle.

Figure 3.2: Niels Bohr. AIP Emilio Segre Visual Archives, Margrethe Bohr Collection.

Building on research by C. G. Darwin (1887–1962) (grandson of the great biologist Charles Darwin), Bohr in 1913 argued that the hydrogen atom almost certainly contained only one electron. This implied that the nucleus (whatever it might be made of) could have a charge only of +1. The hydrogen atom, therefore, probably had the simplest structure of any of the elements, and so the first test of any new atomic theory had to be whether or not it could explain the properties of hydrogen. In 1913 Bohr published three brilliant papers in which he applied the quantum theory of Planck and Einstein to Rutherford's planetary atomic model. The gist of Bohr's approach was to take seriously the message delivered by Planck and Einstein, which was that matter can emit and absorb electromagnetic energy only in discrete amounts E = hν, where ν (the Greek letter nu) is a frequency. Since the emission of electromagnetic energy from the atom must correspond to a change of mechanical energy of the electrons in the atom, and since electromagnetic energy is quantized, Bohr thought it only reasonable to suppose that the mechanical energies of the electrons were quantized as well. He therefore proposed that electrons can orbit around the nucleus only in certain stationary states, each possessing a sharp, well-defined energy. As the electron orbits in a stationary state, its inward electrostatic attraction toward the positive nucleus is balanced by the outward centrifugal force due to its orbital motion, just as the gravitational attraction of planets toward the sun is balanced by the centrifugal force due to their orbital motions. Each possible stationary state can be identified by an integer, which became known as its principal quantum number. As George Gamow (1966) put it, Bohr in effect proposed that the atom is like the gearbox of a car that can run in first, second, or third gear and so on, but nothing in between. The idea of stationary states was the key that allowed Bohr to decipher the spectrum of hydrogen.
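In modern notation, Bohr's stationary-state hypothesis for the hydrogen atom can be summarized compactly (this is a restatement in today's symbols, not Bohr's own formulation):

\[
L \;=\; n\,\frac{h}{2\pi}, \qquad E_{n} \;=\; -\,\frac{hcR}{n^{2}}, \qquad n = 1, 2, 3, \ldots
\]

Here L is the orbital angular momentum of the electron, n is the principal quantum number, and R is the Rydberg constant; E_n is the energy of the nth stationary state (negative because the electron is bound), which works out to roughly −13.6 electron volts divided by n². How these discrete levels explain the spectral lines is the subject of the next paragraphs.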
Before Bohr, physicists had assumed that spectral lines were due to vibration modes of the electrons within the atom. The classical theory of vibrations says that the spectral frequencies ought to be multiples, or harmonics, of a basic frequency, like the vibrations of a guitar string. It ought to have been possible to use a powerful mathematical tool called Fourier analysis, developed in the nineteenth century by Joseph Fourier (1768–1830), to analyze these harmonics. (Fourier analysis is based on the mathematical fact that complicated waveforms can be represented as sums, or superpositions, of simpler sinusoidal vibrations.) As we have seen, however, the spectral lines of hydrogen and all other elements in fact obey the Combination Principle, which states that every spectral frequency can be expressed as a function of the difference between two whole numbers. This is completely incomprehensible in the classical view. Bohr showed that the combination principle finds a natural explanation if we assume that the electrons do not radiate a quantum of energy when they are in a stationary state, but only when they jump from one stationary state to another. The energy of a spectral line is then due not to the energy of any one stationary state, but is the difference in energies between stationary states (or “waiting places,” as Bohr sometimes called them) and is therefore a function of the quantum numbers of both states. When the electron The Quantum Revolution emits a quantum of light it jumps down to an orbit (a stationary state) with a lower energy, and the energy of the quantum emitted will be the difference in energies between the energies of the initial and final stationary states. When the electron absorbs a quantum of radiation (as in stellar absorption lines) it jumps up to an orbit of higher energy. So according to Bohr, Maxwell’s theory is just wrong when it predicts that electrons orbiting in the atom will radiate continuously. This brash conclusion was in the spirit of Einstein’s youthful suggestion that classical electrodynamics is not an exact theory, but rather something that holds only on average for very large numbers of light quanta. Bohr’s model thus gave an explanation of the stability of the hydrogen atom that was consistent with the principle of the quantization of energy, and that made perfect sense so long as one was willing to admit that classical electrodynamics was not quite right. But Bohr was able to do more: by assuming that the centrifugal force of the electron had to be balanced by the inward electrostatic attraction, and that the angular momentum of the electron in the atom was, like energy, also quantized, Bohr was able to derive a formula for the Rydberg constant of spectroscopy. The value he arrived at agreed closely with the observed value, and so he had thus produced a theoretical derivation of the hydrogen Balmer series. Furthermore, he correctly predicted the spectrum of hydrogen in the ultraviolet and infrared. Bohr had thus solved the spectrum of the hydrogen atom, in the sense that he had constructed a model that predicted the precise energies of the spectral lines of the hydrogen atom. This coup immediately established Bohr’s reputation as one of the leading physicists of Europe and gave a huge boost to the new quantum mechanics. Near the end of his life, Einstein described Bohr’s atomic theory as “a miracle” (Pais 1982, p. 
416) and what he meant by this was that Bohr, by a sort of intellectual sleight of hand, had guessed the right answer to the atomic puzzle using contradictory and incomplete information.

Figure 3.3: Energy Levels in the Bohr Atom. Bohr showed that the energy of an emission line is determined by the difference in energies between electron orbits. Illustration by Kevin deLaplante.

From the information available to Bohr in 1913 there was no way to logically derive his quantum theory of the hydrogen atom. Rather, it sprang from his intuitive ability to know not only what was important (for instance, the combination principle) but what to ignore for the time being (such as the puzzle of what the electrons actually did between emissions of quanta). In fact, many scientific discoveries have been made because a creative thinker was able to guess correctly what discrepancies could be ignored in the development of a new theory. It was now impossible to doubt that there had to be something right about the quantum, if only one could figure out just what that could be.

The New Theory Spreads Roots

Bohr's methods were quickly adopted by several physicists, notably Arnold Sommerfeld (1868–1961) of Munich. Sommerfeld was one of the most versatile mathematical physicists of the early part of the twentieth century, and it is odd that he did not win a Nobel Prize for his many contributions. His ready acceptance of the Bohr theory played a crucial role in its development. Sommerfeld quickly extended the Bohr theory to explain the phenomenon of fine structure. If spectral lines were examined closely using new and more accurate instruments that were becoming available, it could be seen that some spectral lines were split, meaning that they were actually combinations of lines very closely spaced in energy. This splitting of spectral lines is called fine structure. Bohr's theory of 1913 applied only to circular orbits in the hydrogen atom; Sommerfeld extended Bohr's methods to elliptical orbits. When an object moves on an elliptical orbit it has to move faster when it is closer to the focus. Sommerfeld showed that in many cases the electrons would move fast enough for relativistic effects to be important. The relativistic deviations of spectral lines from Bohr's predictions were a function of a new number, the fine structure constant, approximately equal to 1/137. This mysterious number would prove central to later developments in quantum electrodynamics. Sommerfeld was also an important teacher (among his doctoral students were the future Nobel winners Werner Heisenberg, Wolfgang Pauli, Hans Bethe, and Peter Debye) and author of textbooks. His book Atomic Structure and Spectral Lines appeared in 1919 and went through several editions over the next few years, expanding rapidly each time, and was up until the mid-1920s the "bible" from which most physicists learned their quantum mechanics.

Other Developments after Bohr

Moseley and Atomic Numbers

Another crucial piece of the atomic puzzle was filled in by the young British physicist Henry G. Moseley (1887–1915). He joined Rutherford, who quickly recognized his talent, at Manchester in 1910. Every element has a characteristic X-ray spectrum, just as it has a spectrum in visible light. Moseley showed that the energy of certain X-ray spectral lines varies with respect to the atomic number of the elements according to a simple formula.
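Moseley's relation is usually written today in roughly this form (modern notation, not Moseley's own):

\[
\sqrt{\nu} \;=\; k\,(Z - \sigma),
\]

where ν is the frequency of a given X-ray line, Z is the atomic number, and k and σ are constants for that line (σ is a small screening correction). Plotting the square root of the frequency against Z gives a straight line, which is why gaps in the sequence of known elements showed up so clearly.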
Before Moseley the meaning of atomic number was not well understood; it was simply an integer that roughly but not exactly corresponded to the atomic weight. Moseley argued (correctly) that the atomic number could be understood simply as the positive charge on the nucleus. (The proton would not be isolated and named by Rutherford until 1918.) Moseley thus confirmed the guess made by Antonius van den Broek (1870–1926) in 1911. (Van den Broek, a lawyer and amateur physicist with a gift for numbers, made numerous helpful suggestions about atomic structure.) Moseley also noticed that, assuming his formula held for all elements, there were four missing series when he studied the spectra of known elements. From this he inferred that there had to be four undiscovered chemical elements (in the rare earth group); all were eventually isolated and named. Moseley was killed at the battle of Gallipoli in 1915 at the age of 27.

The Franck-Hertz Experiment

A vivid illustration of the quantization of energy was provided by a pivotal experiment performed by James Franck (1882–1964) and Gustav Hertz (1887–1975, a nephew of Heinrich) in 1914. Their method was simple enough that it is now carried out regularly by undergraduates. They accelerated electrons through a low-pressure gas of mercury. When the electrons' energy reached the energy of a stationary state of mercury they gave up a quantum of energy to the mercury, resulting in a stepwise shape to the curve of current through the apparatus. This demonstrated that atoms could absorb energy only in discrete amounts.

Einstein Sheds Further Light on Light, 1916–1917

As if having announced the general theory of relativity in 1916 was not enough, Einstein in 1916 and 1917 published three further radical papers on the quantum theory of radiation. These papers were by far the most general treatments of the statistics of light quanta to that point, and they again demonstrated Einstein's uncanny ability to arrive at powerful conclusions using a few well-chosen assumptions. Like Planck, Einstein set out to describe the conditions that would have to apply for there to be equilibrium between matter and a radiation field. However, by 1916 Einstein had the discoveries of Rutherford and Bohr to work with. Einstein introduced two new concepts: spontaneous and stimulated emission of radiation. In spontaneous emission, an atom that is in an excited state (which means that its electrons have been energized to orbits of higher energy) will emit light quanta at random intervals and decay to its ground state (lowest energy orbit, having the smallest principal quantum number) according to a probabilistic law that looks exactly like Rutherford's law of radioactive decay. In stimulated emission, a passing photon triggers the emission of a photon from an atom. Miraculously, the emitted photon is a clone of the triggering photon, having exactly the same energy, phase, polarization, and direction of motion as the triggering photon. However, the trigger photon acts like a catalyst in that it is not destroyed or absorbed in the process. Stimulated emission thus allows the amplification of a pulse of electromagnetic radiation, and this is the theoretical basis of the laser ("light amplification by stimulated emission of radiation"). Einstein found that he had to take both types of emission into account in order to correctly describe the balance between absorption and emission when radiation and matter interact.
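In the textbook notation that grew out of these papers, Einstein's balance condition for a simple two-level atom can be sketched as

\[
N_{2}\left( A_{21} + B_{21}\,\rho(\nu) \right) \;=\; N_{1}\, B_{12}\,\rho(\nu),
\]

where N₁ and N₂ are the numbers of atoms in the lower and upper states, ρ(ν) is the energy density of the radiation at the transition frequency, the coefficient A₂₁ governs spontaneous emission, and B₂₁ and B₁₂ govern stimulated emission and absorption. Requiring this balance to hold in thermal equilibrium forces the radiation density to take exactly Planck's form, with A₂₁/B₂₁ = 8πhν³/c³.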
His statistical approach allowed him to derive Planck's radiation law and Bohr's quantum rules all in one stroke. Einstein's radiation theory of 1916 had another important feature. In 1905 Einstein had treated the energy of light as quantized; in 1916–1917 he showed that light quanta had to have a definite momentum, which had to be quantized as well. He imagined a small mirror suspended in a radiation bath and derived a formula for the fluctuations (small random variations) in the momentum of the mirror as a result of fluctuations in the radiation hitting it. He could not get the right answer without assuming that the light quanta hit the mirror with some definite momentum, just as if they were ordinary particles. Since momentum is something that has direction, Einstein's light quanta became known as needle rays. This picture gave further support to Einstein's radical suggestion of 1905, that light is not only emitted and absorbed in discrete quanta, but moves precisely as if it were made up of particles with definite momenta and energy. Einstein's conclusion would spark the first of several debates that were to occur between him and Bohr about the meaning of quantum theory.

The Correspondence Principle

Bohr returned to Copenhagen in 1916, and in 1920 became director of his own Institute for Theoretical Physics. Bohr's Institute (funded by a major Danish brewery) was to become a nerve center for the exciting developments in quantum physics that were to burst on the world in the 1920s. Bohr published the first statement of what he called the Correspondence Principle in 1920, although it had been used implicitly by him and other quantum physicists for several years beforehand. The Correspondence Principle is not a general physical law, but rather a recipe (or heuristic) for constructing quantum mechanical models of various systems, one at a time. It is a form of reasoning by analogy, and it demands considerable mathematical skill and good physical judgment to be employed without producing nonsense. Bohr arrived at more than one formulation of the Correspondence Principle, but the gist of it was that in some limit one can assume that a quantum-mechanical system will approximate a classical system with known behavior. The limit could be large quantum numbers (i.e., very high orbits), very small differences in frequency between spectral lines, or found by pretending that the constant of action is near-zero. The practical value of this rough rule is that a physicist who wanted to find the rules for the quantum mechanical behavior of a system could first work out the behavior of a similar system in the classical limit, and then try to guess what quantum mechanical description would converge to that classical description in the limit. The Correspondence Principle was a very useful trick for finding a workable quantum mechanical description of systems that could be approximated classically, and it is still used by physicists today. However, it left two glaring questions unanswered. First, would it ever be possible to find general laws of quantum mechanics that would apply to all possible physical systems, so that a problem could be solved without guessing? Second, could there not be any physical systems that are entirely quantum mechanical in the sense that they do not have a large-scale behavior that approximates to a classical picture?
It was obvious that physicists did not have the right to assume that all quantum mechanical systems would conveniently have a classical limit that could be used as a guide to their quantum behavior; and yet, in 1920 it was impossible to imagine what a quantum mechanical system with no classical analog could be like, since there were still no general laws of the dynamics of quantum systems. The Correspondence Principle, as useful as it was, could only be a stopgap.

The Great Debate Begins

Bohr met Einstein in 1920 and they immediately became close friends, but this does not mean that they agreed on all matters to do with physics. An epochal debate began between the two about the meaning of quantum mechanics. In this long dialogue, which was to continue for the rest of their lives, Bohr and Einstein raised several questions that would spark important advances in quantum mechanics—although not always in the way that either Bohr or Einstein themselves had intended! One of the cornerstones of Einstein's treatment of the light quantum had always been the view that light quanta behave in many respects like ordinary particles with definite positions and momenta. Both Einstein and Bohr were deeply troubled by the contradiction between the concept of light as particulate and the fact that so many optical phenomena, especially interference, only made sense if light traveled as a continuous wave. Einstein (probably more than Bohr) was also very unhappy with the inherently probabilistic way that light quanta want to behave—the fact that the best that physics seemed to be able to do was to describe the average behavior of many quanta over long periods of time, while there was no way to describe or predict the exact behavior of an individual light quantum (if that notion even made sense). But Bohr and Einstein responded to these puzzles in very different ways. Einstein felt that there was no choice but to accept the particulate nature of light even though this had to lead to the consequence that Maxwell's theory was a statistical approximation; Bohr on the other hand surprisingly took a highly conservative line and tried every means he could think of to reject Einstein's view that light quanta are particles, since he did not want to give up the idea that Maxwell's Equations are exactly valid at the classical level. In other words, Bohr was willing to give up virtually every vestige of classical behavior at the quantal level (including causation), in return for upholding classical physics at the macroscopic level of ordinary experience; Einstein, by contrast, sought a unified view of nature whereby the same principles, whatever they might be, would hold at both the classical and quantum levels.

The Compton Effect: Einstein Wins Round One

In 1924 Bohr published an enigmatic paper on the quantum mechanics of the electromagnetic field, in collaboration with his younger research associates, the Dutchman Hendrik Kramers (1894–1952) and the American John C. Slater (1900–1976). The Bohr-Kramers-Slater or BKS paper is unusual in that it contains almost no mathematics at all. It somehow managed to be very influential despite its lack of clarity. One of the key tenets of the BKS prototheory was that the conservation of energy is something that holds for electromagnetic interactions only on average. According to BKS, atoms interact without actually interchanging energy or momentum, through a "virtual radiation field" that mysteriously guides their statistical behavior.
At the level of individual light quanta the energy books need not balance, so long as everything adds up in the end. Bohr thought it should be this way so that the classical theory of the electromagnetic field would hold exactly, and not merely approximately, at the classical level. Experiment was not to be kind to the BKS theory. In 1923, the American physicist Arthur H. Compton (1892–1962) carried out relativistic calculations (assuming detailed conservation of energy and momentum on a quantum-byquantum basis) that predicted that energetic X-ray or gamma-ray quanta, when scattered off of electrons, would lose a certain amount of energy. Experiments by Compton himself and others soon confirmed his prediction. (Kramers had done essentially the same calculation as Compton, but Bohr persuaded him that he was wrong and Kramers discarded his notes—thereby losing his chance at the Nobel Prize that eventually went to Compton.) Bohr resisted for a while, but eventually it was clear to all physicists that the Compton Effect had confirmed Einstein’s view that the quantum of light interacted like a discrete particle. Einstein had thus won round one of his long debate with Bohr over the meaning of quantum theory—but more rounds were to follow. A Completely Contradictory Picture By the early 1920s quantum mechanics had evolved far beyond Bohr’s simple but powerful atomic model of 1913. Bohr, Sommerfeld, and others had produced a quantum theory of atomic structure that came fairly close to explaining the structure of the periodic table of the elements, and they were able to calculate (by methods that are now considered very inefficient and indirect) the spectra of several atoms and ions possessing relatively few The Quantum Revolution electrons. However, the uncomfortable fact remained that the BohrSommerfeld (as it was by then called) version of quantum mechanics was still little better than a process of inspired guesswork guided by the Correspondence Principle. Physicists of enormous skill painstakingly cooked up quantum mechanical models for each separate atom or ion without any clear idea of the general rules that such models should obey. They would opportunistically use whatever approximations they needed to get a good prediction even if there were contradictions with other models. There was no general method by which the spectra of an atom with any number of electrons could be predicted. Also, there was no general way of predicting the intensities and polarizations of the spectral lines of any atom, and the splitting of spectral lines by magnetic fields was still poorly understood. The question of intensity was especially important: spectral lines are more intense if the transition that produces them is more probable, and the problem of expressing quantum laws in terms of probabilities was soon going to take center stage. At a deeper theoretical level, there was still no notion of how to resolve the great contradiction between the quantization of electromagnetic energy and classical electrodynamics. Utterly lacking was a general theory of quantum mechanics from which one could deduce the quantum behavior of matter and energy from first principles. The surprising outlines of such a theory were about to emerge suddenly through the work of several extraordinarily talented young scientists. Matter Theory in China, the Middle East, and Bose Counts Quanta The heroic years of the 1920s are dominated by three figures: Werner Heisenberg (1901–1976), Erwin Schrödinger (1887–1961), and Paul A. M. 
Dirac (1902–1984). However, a good place to begin the story of this revolutionary period is the receipt of a letter by Einstein, in 1924, from an unknown young Indian physicist named Satyendra Nath Bose (1894–1974). Bose had written a short paper in which he had presented a remarkable new derivation of Planck’s radiation law. His paper (written in English) had been rejected by a leading British physics journal, whose editors thought that Bose had simply made a mistake, and he sought the help of Einstein, whom he revered. Einstein recognized the worth of Bose’s paper, and personally translated it into German and had it published. In those days a paper could be accepted for publication simply on the recommendation of a senior scientist; it did not have to run the gauntlet of skeptical referees eager to find any reason at all for rejection. It is quite possible that many of the most revolutionary physics papers of the 1920s might never have seen the light of day if they had to go through the sort of refereeing process that new work now faces. What troubled Bose was that all derivations of Planck’s Law that had been done to that date used, at a crucial point in the calculation, a classical expression for the relation between radiation energy density and average oscillator energy. Not only did this seem like cheating—a quantum result should be based on purely quantum principles—but it might even be invalid, because Bose felt that there was no guarantee that classical results (which might only be true on average) were fully correct in the quantal realm. Bose showed that he could derive Planck’s Law using a new trick for counting the number of possible arrangements of light quanta (still not called photons in 1924) with a given energy. (It was also essential to Bose’s method 46The Quantum Revolution that he, like Einstein, treated the light quantum as an object with momentum as well as position.) The essence of his trick, although this was only made clear by Dirac a few years later, was that Bose treated light quanta as if they were objects possessing what is now called permutation invariance. This means that any permutation (rearrangement of the order) of light quanta with the same energy would be indistinguishable from each other, and would therefore have to be counted as one. Compare this with how we would calculate the permutations of ordinary objects like pennies. There are, for instance, six permutations of three pennies, since each penny, no matter how perfectly manufactured, has small differences from the others that enable it to be tracked when moved around. But since light quanta are indistinguishable there is just one permutation of three quanta. Bose had demonstrated that there is something very odd about the way elementary particles such as photons have to be counted. From the point of view of modern quantum mechanics, Bose still did not quite know what he was doing—but like Planck 24 years earlier, he had somehow guessed the right answer by means of a penetrating mathematical insight. Einstein quickly published three more papers in which he applied Bose’s statistical methods to gasses of particles. In 1905 Einstein had treated radiation as if it were a gas. Now he turned the reasoning around in typical Einsteinian fashion and treated molecular gasses like gasses of radiation quanta. Einstein predicted that a gas that obeyed Bose’s peculiar counting rule would, below a certain critical temperature, condense into a collection of particles all in a single quantum state. 
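Bose's counting rule is easier to feel with a toy count (my own illustration in Python, not Bose's actual derivation). It contrasts the ordinary counting of distinguishable objects, like the pennies mentioned above, with the counting of indistinguishable quanta; the standard combinatorial formula for placing identical quanta in cells is assumed.

```python
# Toy illustration (my own, not Bose's actual derivation) of how counting
# changes when quanta are treated as indistinguishable.
from itertools import permutations
from math import comb

# Three distinguishable pennies can be ordered in 3! = 6 ways ...
pennies = ["penny_A", "penny_B", "penny_C"]
print(len(set(permutations(pennies))))          # -> 6

# ... but three indistinguishable quanta of the same energy give one arrangement.
quanta = ["quantum", "quantum", "quantum"]
print(len(set(permutations(quanta))))           # -> 1

# Bose-style counting: the number of ways to place n identical quanta into
# g phase-space cells is C(n + g - 1, n), far fewer than the g**n ways
# available to distinguishable, Boltzmann-style particles.
n_quanta, g_cells = 4, 3
print(comb(n_quanta + g_cells - 1, n_quanta))   # -> 15 indistinguishable arrangements
print(g_cells ** n_quanta)                      # -> 81 distinguishable arrangements
```

Einstein's condensation prediction rested on exactly this way of counting.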
His prediction turned out to be correct. Such states, now called Bose-Einstein condensates (BECs), were first observed when several investigators in the late 1920s and 1930s discovered the phenomenon of superfluidity. (This will be described in more detail later on.) Gaseous BECs were first produced in 1995. The gaseous BECs so far created in the lab consist only of a few hundred thousand atoms at most, but there is no theoretical limit to the size of a BEC in either liquid or gas form. Bose and Einstein had correctly predicted the existence of a form of matter that is purely quantum in its behavior even on the macroscopic scale. It is a form of matter to which the Correspondence Principle cannot apply, and which could therefore be expected to have strongly nonclassical behavior on arbitrarily large scales. This does not mean that one could not go on using the Correspondence Principle as a rough-and-ready guide to the construction of models, but after the work of Bose and Einstein one no longer had the right to assume that the Correspondence Principle will always work. There was another nonclassical feature of Einstein’s quantum gasses that would turn out to be very important, but that no one understood in 1924 (and that is perhaps not fully understood today). In a classical gas that can be treated by Boltzmann’s nineteenth-century methods, the molecules are statistically independent like successive tosses of dice; in a quantum gas and many other quantum systems there are correlations between the properties of the particles no matter how far apart they may be, and this fact deeply troubled Einstein. Uncertain Synthesis He was later to ironically describe this mysterious mutual influence between elementary particles as “spooky action at a distance” (Isaacson 2007, p. 450) and he never reconciled himself to its existence. Pauli’s Exclusion Principle One of the most talented of the constellation of young physicists who created quantum mechanics in the 1920s was the Austrian-born physicist Wolfgang Pauli (1900–1958). He contributed several key advances in quantum physics and was rewarded with the Nobel Prize in 1945. However, Pauli was also a sharp and relentless critic of work (including his own) that he thought was not clear enough, and some historians of physics have argued that Pauli, especially in his later years, exerted a detrimental effect on the advancement of physics through his scathing criticism of the not-yet-perfectly formulated ideas of younger physicists. Perhaps the last triumph of the Old Quantum Theory, and one of its most enduring legacies, was the Exclusion Principle, formulated by Pauli in 1924. In its simplest form, the Pauli Exclusion Principle states that no two electrons in an atom can have exactly the same quantum numbers. This means that the electrons in a given atom have to fill up all the possible orbits in the atom from the lowest energy orbits upward, and in one stroke this gave an explanation of the structure of the Periodic Table of the Elements, and many facts about chemical bonding and structure. It also explained the simple fact that there is such a thing as solid matter, for it is the Exclusion Principle that prevents matter from collapsing into a single quantum state (like a Bose-Einstein condensate) and which may even be responsible for the existence of space itself. But how many quantum numbers are there? 
By the early 1920s, Bohr's atomic theory recognized three quantum numbers for electrons in the atom (representing the diameter, orientation, and eccentricity of an electron's orbit); jumps between the different possible quantum numbers determined the energies of the various spectral series. The trouble was that atoms subjected to a magnetic field demonstrated a slight splitting of their spectral lines, called the anomalous Zeeman effect, that could not be accounted for by Bohr's theory. Physicists in this period tried many ways of constructing quantum numbers that could account for the enormous variety of line splitting that occurred in multi-electron atoms, and they sarcastically referred to their efforts as "Term Zoology and Zeeman Botany." Pauli, Alfred Landé (1888–1976), and a brilliant 19-year-old assistant of Sommerfeld's named Werner Heisenberg proposed that there is an additional quantum number that could take on precisely two values. Thus, for every set of three possible values of the ordinary quantum numbers there were two extra "slots" that could be occupied by an electron. This showed promise in explaining the splitting of spectral lines in a magnetic field. Oddly, though, if the other quantum numbers came in units of 1, 2, 3, . . . , the new number had to be half-integral: it could only take on values of +/– 1/2. The apparently obvious interpretation of the new quantum number was that it somehow labeled possible states of angular momentum of the electrons; in other words, its existence seemed to show that electrons were spinning objects. If a charged object rotates about its own axis, it generates a magnetic moment, meaning that it acts like a magnet with a definite strength. The most obvious way to account for the magnetic moment of the electron and other particles, therefore, was that they rotate about their own axes. The observed splitting of spectral lines would occur because the intrinsic magnetic moment of the electron would interact with the magnetic moment generated by the electron's orbit about the nucleus. (A current loop generates a magnetic field.) However, Pauli at first insisted on describing this additional quantum number merely as a "classically indescribable two-valuedness" (Jammer 1989, p. 138), and he kept using it mainly because without an extra quantum number the Exclusion Principle could not explain the Periodic Table. Pauli thought that it was necessary to avoid trying to find any physical picture or interpretation of the new quantum number, except that it had to be a label that is intrinsic to the electron and not merely a consequence of its motion in the atom. He was especially critical of any suggestion that the electron had an intrinsic angular momentum, because a rotating electron could not be anything like an ordinary spinning object. Given the known mass of the electron, and the fact that its radius, if it has a definite radius at all, is so small that it still cannot be observed today, Pauli calculated that the electron would have to be spinning much faster than the speed of light in order to have the angular momentum it was observed to have. And this, he believed, was absurd because it would violate the theory of relativity. But the notion of electron spin could not be so easily dismissed.

The Discovery of Electron Spin

The fact that objects on the atomic scale have an intrinsic spin had already been demonstrated experimentally in 1922 by Walter Gerlach (1889–1979) and Otto Stern (1888–1969).
Stern and Gerlach shot silver atoms through a nonuniform magnetic field that increased in a certain direction. In such an arrangement the particles will deflect due to the interaction of their own magnetic moments with the external field. Allow the particles to hit a detector screen, such as a photographic plate. If the orientation of the rotation of the particles varies randomly then classical physics says that the deflected particles will make a continuous smear on the detector screen. What Stern and Gerlach and others found, however, is that for atoms and elementary particles such as the electron, there will be a small number of discrete spots on the detection screen; for the electron there will be just two. It seems that the intrinsic angular momentum of atoms and the elementary particles of which they are composed is quantized, meaning that it can take on only certain discrete values. The Stern-Gerlach experiment was hailed as the direct observation of the quantization of a physical property other than radiant energy. (While the Bohr theory depended on the quantization of Uncertain Synthesis orbital parameters, this could only be indirectly inferred through the existence of spectral lines.) Another strikingly nonclassical feature of the electron’s intrinsic angular momentum is that the electrons will split into two directions regardless of the orientation of the applied magnetic field in the detector; this is called space Despite Pauli’s objections a number of physicists, including Compton and Ralph Kronig (1904–1995), had speculated that the electron might have an intrinsic spin, but the idea was first developed quantitatively in 1925 by two Dutch graduate students, George Uhlenbeck (1900–1988) and Samuel Goudsmit (1902–1978). Their theory of spin gave an excellent account of the magnetic behavior of electrons in the atom, except that its estimate of line splitting was out by a factor of two. In 1926 Llewellyn H. Thomas (1903–1992) explained the discrepancy in terms of an effect now called Thomas precession, which is due to the relativistic time dilation of the electrons. Spin causes line splitting because the magnetic moment of the electron either adds to or subtracts from its orbital magnetic moment, depending on the value of the spin. Pauli finally accepted that the concept of spin was valid. (Although Pauli was often a sharp critic of the work of others, he was also not afraid to admit he had been wrong.) The profound difficulty he had raised, that if the electron really was a spinning object then it was like no classical object at all, was not resolved but merely set aside. Once again physicists ignored a glaring discrepancy when doing so allowed them to make progress on another front. Eventually Pauli himself would create a formal theory of spin that would be consistent with fundamental quantum mechanics. But a lot had to happen before that would be possible. Matrix Mechanics Modern quantum mechanics burst rapidly on the scene in the years 1925 to 1927 through at least three major lines of investigation: matrix mechanics, wave mechanics, and the “transformation” theory of Paul Dirac. At first these approaches seemed very different, but they were eventually shown to be different ways of saying the same thing, or almost the same thing. These three approaches will be described separately to begin with, but this is a risky oversimplification since these developments occurred in parallel over a few years and strongly influenced each other in crucial ways. 
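Returning for a moment to the Stern-Gerlach result described above, the following small simulation (my own sketch, not drawn from the text) contrasts what classical physics and quantum mechanics lead one to expect on the detector screen; the numerical values are arbitrary and only the qualitative contrast matters.

```python
# Sketch (my own illustration) of the Stern-Gerlach contrast described above:
# classically a randomly oriented magnetic moment gives a continuous smear of
# deflections, while a spin-1/2 particle lands in just two discrete spots.
import random

random.seed(1)

def classical_deflection():
    # Deflection proportional to the projection of a randomly oriented
    # moment on the field gradient: any value between -1 and +1 is possible.
    return random.uniform(-1.0, 1.0)

def quantum_deflection():
    # A spin-1/2 particle is deflected up or down, never in between.
    return random.choice([+0.5, -0.5])

classical = [classical_deflection() for _ in range(10_000)]
quantum = [quantum_deflection() for _ in range(10_000)]

print("distinct classical deflections:", len(set(round(x, 2) for x in classical)))
print("distinct quantum deflections:  ", sorted(set(quantum)))
```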
Heisenberg’s Sunrise Werner Heisenberg (1901–1976) was born in Würzburg, Germany, and studied physics under Sommerfeld at the University of Munich in the disordered years immediately after Germany’s defeat in World War I. One of his strongest intellectual influences was his youthful reading of the Athenian philosopher Plato’s Timaeus. This long dialogue, written before 350 b.c., is a rambling speculation on the origin and nature of the cosmos. Heisenberg was fascinated 50The Quantum Revolution by the crude but imaginative atomic theory sketched by Plato, who suggested that the structure and properties of matter could be understood in terms of the five regular or “Platonic” solids. Heisenberg recognized that there was no scientific basis for Plato’s speculations; however, he was inspired by Plato’s vision that it should be possible to understand the physical world in terms of mathematical symmetries. Heisenberg was gifted with exceptional intellectual quickness, and under Sommerfeld’s steadying guidance he was publishing useful contributions to the Old Quantum Theory before the age of 21. In 1922 he met Bohr at a conference, the “Bohr Festival,” held in Göttingen, Germany. A long conversation convinced Bohr that Heisenberg had unusual talent, and he invited Heisenberg to come to Copenhagen. First, however, Heisenberg went to Göttingen and worked with Max Born (1882–1970). Born had studied under the great Göttingen mathematician David Hilbert (1862–1943) and for Born it was very important to be clear and rigorous about the mathematics used in physics. Born was to display intellectual leadership that was crucial to the development of quantum mechanics in the following years, and he got Heisenberg involved in exploring the idea that the continuum methods of classical physics should be replaced by a mathematics written in terms of finite differences rather than differential quantities. This was inspired by the fact that energy and angular momentum had already been shown to be quantized. By now it was clear to Born (and Heisenberg) that an entirely new approach was needed in order to have any hope of understanding the quantum. Born was already calling it “quantum mechanics,” though no one yet knew what that could mean. By 1925 Heisenberg had spent time working in Copenhagen and had published papers with Kramers, stretching the Correspondence Principle as far as it could go. Much of their work involved the attempt to model atomic systems with virtual oscillators, just as Planck had done in 1900, although with much more powerful mathematical tools. In June Heisenberg was back in Göttingen trying to find the right quantum description of certain kinds of oscillators, but he was nearly incapacitated by hay fever. He took a vacation on the treeless island of Heligoland overlooking the North Sea, where he hoped that the sea air would give him some relief from pollen. He kept working, and began to see a new approach. The key was that he decided he should work only with observable quantities. The Bohr theory had depended on classically describable electron orbits, but these orbits are unobservable, like the unobservable ether of the old prerelativistic electrodynamics that Einstein had shown was irrelevant. 
Heisenberg's idea, which had occurred to no one else, was to assume that classical equations of motion such as Newton's Law were correct but to reinterpret the quantities in terms of which they were written (position, momentum) as a special kind of sum called a Fourier series, with the terms of the series expressed as functions of the observable intensity and polarization of the light emitted by the atom. Fourier analysis had been invented by Joseph Fourier (1768–1830) in the early nineteenth century, and it had become one of the most useful tools of theoretical physics by the time Heisenberg studied it with Born in Göttingen. (Fourier, incidentally, was the first scientist to predict global warming as a result of carbon dioxide emissions.) Fourier showed that virtually all mathematical functions that could be useful in physics or engineering can be represented as a sum, or superposition, of sine and cosine waves with the right amplitudes and phase factors. Heisenberg set out to write position and momentum as Fourier sums of complex-valued quantities that Born came to call transition amplitudes. Each such amplitude is a function of two integers, the quantum numbers of the states that the virtual oscillator jumps or transitions between. The result was that position and momentum were represented as square arrays of complex numbers. Heisenberg had to work out the algebra of these arrays, so that he would know how to write the dynamical equations in terms of them. To his surprise he found that he could only get the right equations if the arrays had the very odd property that they failed to commute—that is, the product xy of the arrays for x and y would not equal yx. Using his strange arrays of complex numbers, Heisenberg derived the existence of the zero point energy (a minimum energy that all systems have even in their ground states) for which no clear explanation had been given before, and showed the energy levels have to be quantized. He calculated through the night, and finally (using the assumption of conservation of energy) derived energy levels that agreed with experiment. He climbed a promontory on the island, watched the sunrise, and "was happy" (van der Waerden 1967, p. 25) because he knew he had found the key to Born's elusive "quantum mechanics."

Figure 4.1: Werner Heisenberg. Photograph by Friedrich Hund, AIP Emilio Segre Visual Archives.

The Three-Man Work

Heisenberg's paper "On the Quantum-Theoretical Reinterpretation of Kinematic and Mechanical Relations" was soon in circulation, and Born had (with some difficulty) realized that Heisenberg's odd arrays of numbers were nothing other than matrices. It may seem odd that Born and Heisenberg, who were highly trained in mathematical physics, did not immediately realize this fact, since matrix algebra is now often taught in high school. Matrix theory had been developed in the nineteenth century by the British mathematician Arthur Cayley (1821–1895), but in the early 1920s it was still merely an abstruse topic in pure mathematics that was not normally learned by physicists. Born was only able to recognize what Heisenberg had done because of a brief encounter with matrices in his student days. However, he was not skilled in matrix manipulation. Fortunately his assistant, the 22-year-old Pascual Jordan (1902–1980), was expert in matrix algebra and soon became an important contributor to quantum mechanics in his own right.
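To see in miniature what such arrays of numbers do, here is a small numerical sketch (my own illustration in Python with numpy, not Heisenberg's actual calculation) for the simplest system, a harmonic oscillator: position and momentum become matrices, their product depends on the order of multiplication, and the energy matrix built from them is diagonal, with the quantized levels and the zero-point energy appearing automatically. Units with the constant of action, mass, and frequency set to 1 are assumed.

```python
# A sketch of matrix mechanics for the harmonic oscillator (my own illustration,
# with numpy and truncated matrices; not Heisenberg's 1925 calculation itself).
# Units are chosen so that hbar = mass = frequency = 1.
import numpy as np

N = 12  # truncation size of the (in principle infinite) matrices
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)            # lowering operator as a matrix
x = (a + a.T) / np.sqrt(2.0)            # position matrix
p = 1j * (a.T - a) / np.sqrt(2.0)       # momentum matrix

# The arrays fail to commute: xp differs from px (by i*hbar on the diagonal,
# apart from the last entry, which is an artifact of the truncation).
print(np.round((x @ p - p @ x).diagonal()[:5], 6))   # -> [0+1j, 0+1j, ...]

# The energy matrix H = p^2/2 + x^2/2 comes out diagonal, with the quantized
# levels 1/2, 3/2, 5/2, ... -- the 1/2 is the zero-point energy.
H = (p @ p + x @ x) / 2.0
print(np.round(np.real(H.diagonal()[:5]), 6))        # -> [0.5 1.5 2.5 3.5 4.5]
```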
Born and Jordan quickly published a paper together, extending Heisenberg’s methods, and then all three collaborated on a monumental paper, published early in 1926. In this paper, often called the “three-man work,” they further developed matrix theory and applied it to several key problems. This paper definitively set forth matrix mechanics, which is the version of quantum mechanics based on the algebraic manipulation of matrices that represent observable quantities such as position, momentum, and energy. Detailed calculations showed that the new matrix mechanics was very successful in predicting the anomalous Zeeman Effect, other forms of line splitting, and line intensities. The three authors even produced a new derivation of Planck’s Law, taking advantage of Bose’s counting rules. Shortly after the publication of the three-man paper, Pauli used the new mechanics to rederive the entire Bohr theory of the hydrogen atom. In 1927 he showed how to construct a spin operator, which describes the spin of the electron in three-dimensional space. This operator is built up of four simple two-by-two matrices now called the Pauli spin matrices, and these have very wide application throughout quantum mechanics and, most recently, in quantum computation. There was no question that the new matrix mechanics was enormously effective. It had swept aside almost all the difficulties that had plagued the old Bohr-Sommerfeld approach. But its founders were aware that they had almost entirely lost contact with the physical picture of what might be going on inside the atom. Quantum mechanics was starting to look as if it was nothing more than a highly effective (although complicated) mathematical formalism for calculating observable results such as probabilities and energies, with little or no way of telling what that formalism actually meant—or, if a picture of what underlies the quantum rules could ever be uncovered, it would be unlike anything that the classical mind had ever dreamed of. Wave Mechanics Louis de Broglie: If Waves Are Particles Then Particles Are Waves Prince Louis Victor de Broglie (pronounced roughly “de Broi”) (1892–1987) was a scion of an old aristocratic French family. As a young man he explored Uncertain Synthesis several fields; his interest in physics was sparked by reading a report on the First Solvay Conference of 1911. He luckily escaped the deadly trenches of World War I and instead served his country as an electronics technician, which must have stimulated his thinking about electromagnetism. His older brother Maurice de Broglie (1875–1960) was an accomplished experimental physicist. Under his brother’s guidance, Louis became familiar with X-rays and the photoelectric effect. Louis’s greatest talent was a gift for spotting the obvious—or rather, for spotting what would eventually become obvious to everyone else. He thought deeply about the mysterious wave-particle duality that had been pointed out by Einstein as early as 1909, and arrived at a question that must have seemed almost childish at the time: if electromagnetic waves are also particles, then might not particles of seemingly solid matter (such as electrons) be somehow also waves? 
If this were so, then beams of electrons would have a wavelength and a frequency just like light, and this would offer a ready explanation of Bohr’s quantization conditions: an electron in its orbit about the nucleus would be like a standing wave, such as the wave in a plucked guitar string, and only an integral number of wavelengths could fit into an orbit. This very simple insight could have been seen (but was not) by anyone from about 1913 onwards. Using facts about momentum and the behavior of waves from Einstein’s special relativity, de Broglie derived simple but elegant formulas for the wavelike properties of particles, and also found yet another a new derivation of Planck’s blackbody law. He showed that any object with momentum p has a de Broglie wavelength λ, equal to h/p, where h is Planck’s constant. The quantum mechanical properties of matter become important when the de Broglie wavelength of an object is comparable to or larger than its size, while the wavelength for a classical object such as a car is utterly negligible. De Broglie published his ideas in three short papers in 1923 and then collected them together in his doctoral thesis of 1924. His work was praised warmly by Einstein, who said that de Broglie had “lifted a corner of the great veil” (Isaacson 2007, p. 327). (As with Bose, Einstein also helped to get de Broglie’s highly unconventional work published.) De Broglie’s insight brought the quantum story full circle, but in a way that only compounded the mystery of the wave-particle duality: all forms of matter and energy (be it light or electrons) are both particle and wave. But how could matter possibly be both particle and wave? And what were the laws that governed the structure and behavior of matter waves? If de Broglie’s picture of electrons as waves was correct, then electrons that were fired through small openings comparable to their wavelengths should exhibit wavelike diffraction and interference phenomena. These predictions were not directly confirmed until 1927, when the Americans C. Davisson (1881–1958) and L. Germer (1896–1971) demonstrated diffraction effects when they scattered electrons off a polished nickel crystal. Similar results were obtained by the British experimenter G. P. Thomson (1892–1975), the son of J. J. Thomson. Louis de Broglie was awarded the Nobel Prize in Physics 54The Quantum Revolution in 1929, by which time the complete duality of wave and particle was accepted as a cornerstone of quantum theory. Schrödinger: Music of the Orbitals Erwin Schrödinger (1887–1961) was born in Austria and studied at the University of Vienna. By the time Heisenberg took his vacation on Heligoland, Schrödinger was professor of physics in Zurich, Switzerland. Schrödinger was a thinker of wide-ranging interests, and in his later years he made important contributions to biology. Schrödinger found de Broglie’s insights about matter waves to be intuitively appealing. However, the problem was that de Broglie had only told half of the story, in that he had only described the kinematics of quantum waves. This means that he had shown in general terms how matter waves can be described in spacetime but had not given their dynamics. In other words, de Broglie had not said anything about what produced his matter waves. Schrödinger knew that if there are waves then there has to be a governing wave equation, which typically takes the form of a partial differential equation whose solutions are possible wave forms. 
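Before following Schrödinger's search for that equation, it is worth pausing on how different the de Broglie wavelength is for an electron and for an everyday object. The short calculation below is my own illustration of the relation quoted above, using standard constants and arbitrarily chosen speeds.

```python
# de Broglie wavelength lambda = h / p for two objects (my own worked numbers,
# illustrating the relation quoted above; the chosen speeds are arbitrary).
PLANCK_H = 6.626e-34          # J*s
ELECTRON_MASS = 9.109e-31     # kg

def de_broglie_wavelength(mass_kg, speed_m_per_s):
    """Wavelength in metres of an object with the given mass and speed."""
    return PLANCK_H / (mass_kg * speed_m_per_s)

# An electron moving at about a million metres per second: roughly 0.7 nanometres,
# comparable to the size of an atom, so its wave nature cannot be ignored.
print(de_broglie_wavelength(ELECTRON_MASS, 1.0e6))

# A 1000 kg car at highway speed: about 2e-38 m, hopelessly small compared to the car.
print(de_broglie_wavelength(1000.0, 30.0))
```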
Schrödinger was highly skilled in the mathematics of classical mechanics, and he set to work to find the right equation. He repaired to a cottage in Arosa, Switzerland, in the company of an unknown young woman. The mysterious lady of Arosa seems to have stimulated his creativity, and, like Heisenberg, he made one of those quantum leaps of the theoretical imagination that are so hard to analyze or explain. Schrödinger took an important cue from the work of the nineteenth-century mathematician William Rowan Hamilton (1805–1865), one of the greatest of Ireland's sons. Hamilton had shown that Newton's Laws of mechanics could be rewritten in a form that made them look remarkably like the laws of optics. Schrödinger constructed a wave equation that he essentially guessed by analogy with Hamilton's nineteenth-century version of Newtonian mechanics.

Figure 4.2: Erwin Schrödinger. Photograph by Francis Simon, courtesy AIP Emilio Segre Visual Archives.

Schrödinger's Equation can be written in many forms, depending on the structure of the problem it is applied to. The functions that satisfy Schrödinger's wave equation are called wave functions or sometimes ψ-functions, since they are usually written using the Greek letter ψ (psi). The most general time-dependent Schrödinger Equation says, roughly, that the rate of change of the wave function with respect to time is proportional to the result of the Hamiltonian operating on the wave function. The Hamiltonian (borrowed and generalized from Hamilton) is an operator that represents the energy of the system. An operator is a mathematical machine that transforms functions into functions. Applying the Schrödinger Equation to a particular problem is largely a question of knowing what Hamiltonian to apply, and this is often a matter of inspired guesswork. Thus it is wrong to say that modern quantum mechanics eliminates all of the guesswork that was endemic to the Old Quantum theory; however, it concentrates the need for guesswork to a much smaller area. In January 1926 Schrödinger published the first of a series of papers with the title "Quantum Mechanics as an Eigenvalue Problem." To find the eigenvalues of a vibrating system is to find its characteristic modes of vibration; like a guitar string, any vibrating system will have certain basic and harmonic frequencies. Bohr was supposed to have shown that no such model could apply to the atom; however, Schrödinger suggested that the energies of the emission lines corresponded to beat frequencies between the characteristic vibration modes of the electrons in the atom. The attraction of this conception to Schrödinger was that he thought it would get rid of the need for what he later called "this damned quantum jumping" (Heisenberg 1971, p. 79), which offended his classically trained sensibilities. Schrödinger thought he had shown that a quantum "jump" would simply be a continuous (although no doubt very rapid) transition from one vibration mode to the next. While eigenvalues played an important role in the matrix mechanics of Heisenberg, Born, and Jordan, Schrödinger was the first to give them a possible physical meaning. Schrödinger showed that with his methods several problems in quantum mechanics could be solved, including the spectrum of the hydrogen atom.
The way it is done is to write the Hamiltonian for the electron in the atom; this is simply a quantum-mechanical version of the classical expression for the total energy of the electron, expressed as a sum of the kinetic energy of the electron and the potential energy it possesses due to its electrostatic interaction with the nucleus. The resulting partial differential equation can be solved by a class of eigenfunctions called spherical harmonics, which are the normal modes, or natural modes of vibration, of an elastic sphere. The various possible values of the spherical harmonics are the eigenfunctions or eigenmodes of the Hamiltonian, and they give the familiar orbitals, such as the s and p orbitals, of chemistry. For a while Schrödinger believed that the waveforms he had described gave a classical and continuous distribution of electrical charge around the nucleus; this hopeful interpretation would not last long. He reproduced all of the results that had been obtained so laboriously in the Bohr theory, and so obscurely in matrix mechanics. In principle his equation can be used to calculate the orbital structures of any atom or molecule at all; in practice this is limited by computational complexity. Large organic molecules such as proteins have very complicated orbital structures, and so solving Schrödinger's Equation to find their wave functions has to be done with computers (not available to Schrödinger in 1926!) and various kinds of approximations have to be made that require skill and judgment. However, Schrödinger's wave mechanics was useful in a way that matrix mechanics was not, and it immediately spawned a host of applications in chemistry and many areas of physics. Schrödinger hoped that the success of his continuum methods would restore the classical picture of physics.

Figure 4.3: Typical Electron Orbitals. The wave function Ψ is a solution of Schrödinger's Equation and the heavy curve shows the probability function |Ψ|² for finding an electron. Illustration by Kevin deLaplante.

One problem with this interpretation of the wave function is that wave functions tend to spread out in space, sometimes very rapidly, while electrons themselves are always found in discrete locations. Another clue that the wave function would not easily admit of a classical interpretation was that the wave function is given by a complex-valued function. Complex numbers are numbers of the form a + ib, where a and b are ordinary real numbers, and i is the square root of –1. While such functions have a definite mathematical meaning and very important applications throughout mathematics, they are hard to visualize—and in particular they cannot in fact represent the density of anything physical at all. But it would be a few months longer before this would become painfully clear even to Schrödinger. The pioneers of matrix mechanics were at first horrified by Schrödinger's wave mechanics, while it delighted those (including Einstein and Planck) who preferred a return to something that at least resembled the old classical physics.

On the face of it, no two approaches to the same sets of physical problems could seem more different than matrix and wave mechanics. The former was expressed in abstract algebra and made little attempt at picturing atomic phenomena, and it treated matter and energy as essentially discontinuous. It claimed to be able to do little more than calculate probabilities of transitions.
Wave mechanics, on the other hand, almost seemed like a visualizable continuum theory, similar to those that might be used to represent vibrating objects, waves, or fluids—the sort of theory that classically trained physicists such as Einstein and Planck were used to and comfortable with. A great apparent advantage of the Schrödinger Equation to the older guard was that it was entirely deterministic; a given wave function would evolve in a unique and perfectly definite way, and there was—or so it was hoped by many for a few months at least—no more need for talk of random jumping, or particles shooting off in any direction at all with no good reason for doing so. A clue that all was not all that it seemed was provided by Schrödinger himself, for later in 1926 he published a paper showing that matrix mechanics and wave mechanics were mathematically equivalent; they were really just different ways of saying the same thing, or nearly the same thing. The choice of which to use would be largely a question of practicality or Dirac: Elegant Brackets Paul Adrien Maurice Dirac (1902–1984) was born in Bristol, England, of Swiss and English parentage. His first degree was in electrical engineering, because his father wanted him to study something practical. However, he switched to mathematical physics after becoming entranced by the elegance of the Minkowski metric (the formula for the interval in Einsteinian space time), which he heard about in a lecture on the theory of relativity given by the philosopher Charles D. Broad (1887–1971). He did much of his early work under the supervision and encouragement of the British physicist and astronomer Ralph Fowler (1889–1944). Probably more than any of the founders of quantum mechanics, Dirac was a creative mathematician of very great ability. While Bohr and Einstein were very competent applied mathematicians guided by physical intuition, Dirac made his great discoveries largely out of an exquisite feeling for mathematical simplicity. What matters, he said (Cropper 1970), was to get beauty into your equations—not always easy even for Dirac. Fowler gave Dirac a copy of Heisenberg’s paper. Dirac realized that the essential feature of matrix mechanics was noncommutativity, and Dirac certainly knew his matrix algebra. Relying on similarities with a structure from classical mechanics called the Poisson bracket, Dirac showed that the quantum mechanical behavior of quantities such as position or momentum can be defined by their commutator. The commutator of p (momentum) and q (position) is just pq – qp. The commutator is always a multiple of Planck’s constant, and this gives another answer to the question of what quantum mechanics is: since the commutator is zero if the constant of action is zero, quantum mechanics is simply physics where the size of Planck’s constant matters—because that is when certain quantities (which are said to be conjugate to each other) will fail to commute. 58The Quantum Revolution Dirac generalized Heisenberg’s matrices to linear operators that transform the state. A linear operator is a mathematical structure that maps vectors into vectors. Linear operators can be represented by matrices, but there are many possible matrix representations of a given operator, depending on what types of observations we make on the system; Dirac called these different coordinate systems “representatives.” By mid-1926, Dirac had independently created a generalized and mathematically clearer version of matrix mechanics. 
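Dirac's remark that the commutator is what carries the quantum content can be checked directly in a small sketch (my own illustration with numpy, not Dirac's calculation): ordinary numbers, his c-numbers, always commute, while the matrices standing for position and momentum, his q-numbers, do not, and their commutator is proportional to the constant of action. Units with that constant set to 1, and truncated harmonic-oscillator matrices, are assumed.

```python
# A small sketch (my own, with numpy) of Dirac's point about commutators: ordinary
# "c-numbers" commute, while the matrices ("q-numbers") standing for position and
# momentum do not, and their commutator pq - qp is proportional to Planck's constant.
import numpy as np

p_classical, q_classical = 2.7, 0.4
print(p_classical * q_classical - q_classical * p_classical)   # -> 0.0 for c-numbers

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # lowering operator for an oscillator
q = (a + a.T) / np.sqrt(2.0)                  # position as a q-number (matrix)
p = 1j * (a.T - a) / np.sqrt(2.0)             # momentum as a q-number (matrix)

commutator = p @ q - q @ p
# Apart from the last diagonal entry (an artifact of cutting the matrices off),
# the commutator is -i times hbar times the identity matrix.
print(np.round(commutator.diagonal()[:-1], 6))
```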
Dirac also introduced a very elegant and simple system of notation for quantum mechanics. The state of a physical system is represented by a vector called a “ket,” and every ket has a sort of mirror image called a “bra.” Put the two together, and one has a “bra-ket” that represents the transition amplitude from the ket state to the bra state. Dirac’s bras and kets can be very easily manipulated and they greatly simplify calculations in quantum mechanics. In later papers published in 1926 Dirac further developed his algebra of commutators, and made a distinction between what he called c-numbers and q-numbers. The former are classical quantities (such as position or momentum) given by ordinary real numbers, while q-numbers are quantum mechanical linear operators that can be represented by matrices. He also showed that the quantum mechanics of continuous quantities such as position and momentum required the use of a mathematical device commonly called the Dirac deltafunction (though it had in fact been introduced by Gustav Kirchhoff in 1882). This is an idealized “pulse” function whose value is zero everywhere except at one point, and whose integral over all of space is one. Up to this point matrix mechanics had been able to deal only with discrete quantities such as energy levels, but physicists still preferred to assume that some quantities such as position and linear momentum can be treated as continuous. (Whether space and time really are continuous, or whether this is just a convenient approximation, is still a current topic of investigation.) By studying the quantum mechanics of continuous quantities, Dirac arrived at Schrödinger’s Equation and thus showed that Schrödinger’s very useful equation, as he had written it in terms of continuous wave functions, is a special case that arises out of the formalism of the more abstract theory of linear transformations when continuous representatives are applicable. Dirac’s theory thus formed a bridge between matrix and wave mechanics, and the formalism of quantum mechanics is now usually given in terms of his notation and terminology. Born’s Momentous Footnote: The Probability Interpretation of the Wave Function A decisive breakthrough in understanding the wave function came in 1926, when Max Born, in a paper on the scattering of electrons from atoms, observed that the most obvious interpretation of the wave function is that it represents the probability of finding the electron at a given location. More precisely, he added almost off-handedly in a footnote, its square represents probability, and this observation is now called the Born Rule. (One might say jokingly that Born won a Nobel Prize for a footnote.) Wave functions are also often referred Uncertain Synthesis to loosely as probability waves, but this is a misnomer. The wave function is complex-valued, and so cannot stand for a probability by itself. However, if a complex number is squared up it gives a real number (called its modulus). The wave function can be normalized, which means that it is multiplied by a constant that keeps the modulus between 0 and 1; this allows the modulus to be interpreted as a probability. The wave function itself is referred to more accurately as a probability amplitude, which is, roughly speaking, a complex square root of a probability. A study of the wave mechanics of scattering seems to have led Born to his probability interpretation. 
He showed that the wavefronts of scattered particles would have the approximate form of expanding spheres—and yet the particles would always be detected as discrete objects traveling in certain definite directions. One never directly detects a wave; rather, the wave gives the probability of finding a particle. Suppose a headline in the newspaper says, "Crime Wave Sweeps City"; this is just a way of describing an increased frequency of discrete criminal acts. There is no wave separate from the acts themselves. With Born's probability interpretation, it no longer seemed possible to uphold Schrödinger's realistic interpretation of the wave function, although Schrödinger himself resisted mightily for a while. Indeterminism could not be gotten rid of as easily as Schrödinger and Einstein had hoped.

Heisenberg's Microscope and the Uncertainty Principle

In 1927 Heisenberg traveled to Copenhagen and endured intense discussions with Bohr on the meaning of quantum mechanics. One of the points that especially troubled them was that they could not see how to reconcile the existence of apparently continuous electron trajectories with the fundamental laws of quantum mechanics. There are several experimental contexts where it seems as if free electrons (that is, electrons moving outside the atom) have trajectories just like bullets or baseballs, and yet quantum theory treats all detection events as discrete. Heisenberg realized that it was necessary to examine the conditions under which an electron can be observed. In order to find where an electron is, we have to bounce some particles off it, and as de Broglie had shown, all particles have a wavelength that is shorter the higher the energy of the particle. Suppose we use photons. It is a basic law of optics that the resolving power of a lens is determined by the wavelength of the light shone through it. We could pin down the electron quite narrowly if we used high-energy photons, such as gamma rays, but these would disturb the electron's motion and thereby change the very property we are trying to observe. If we try to use lower-energy quanta (such as ordinary light) in the hope of disrupting the particle's motion less, the quanta would have a much lower resolving power, and we would have a correspondingly larger uncertainty in the position of the particle. Heisenberg showed that if we know exactly where the electron is, it could have any velocity at all, even greater than that of light. On the other hand, if we know exactly how fast the electron is going, it could be anywhere at all. Heisenberg even went so far as to suggest that the experimenter creates orbitals by attempting to observe them. In any realistic case there is always an uncertainty in both the position and the momentum, and the reciprocal relationship between these uncertainties is called Heisenberg's Uncertainty or Indeterminacy Principle. Uncertainties are usually symbolized using the Greek letter ∆ (capital delta); Heisenberg's Indeterminacy Principle can then be expressed as ∆(position) times ∆(momentum) is greater than Planck's constant of action.

Figure 4.4: Heisenberg's Microscope. The shorter the wavelength of the gamma rays used to examine the electron, the sharper the determination of position but the larger the uncertainty in momentum. Illustration by Kevin deLaplante.

An interesting twist of history is that Heisenberg had almost failed his doctoral oral examination in 1922 because he did not know enough about the resolving power of lenses.
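The quantitative content of these ideas can be seen in a short calculation (my own sketch in Python with numpy, not an example from the text) for the simplest possible case, a Gaussian wave packet: the squared wave function is normalized and read as a probability density in the sense of Born's rule, and the resulting spreads in position and momentum multiply out to about half the reduced constant of action, the minimum allowed by the modern form of Heisenberg's relation (the rougher statement above uses the full constant of action).

```python
# Numerical sketch (my own) tying together the Born rule and the uncertainty
# relation for a Gaussian wave packet. Units: hbar = 1, so the minimum
# uncertainty product in the modern formulation is hbar/2 = 0.5.
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
sigma = 1.3                                  # arbitrary width of the packet

psi = np.exp(-x ** 2 / (4.0 * sigma ** 2))   # an (unnormalized) wave function

# Born rule: |psi|^2, once normalized, is a probability density.
prob = np.abs(psi) ** 2
prob /= np.sum(prob) * dx
print(np.sum(prob) * dx)                     # -> 1.0 (total probability)

# Spread in position, straight from the probability density.
mean_x = np.sum(x * prob) * dx
delta_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob) * dx)

# Spread in momentum from the derivative of the normalized wave function:
# <p^2> equals the integral of |d psi/dx|^2 when <p> = 0, with hbar = 1.
psi_n = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
dpsi = np.gradient(psi_n, dx)
delta_p = np.sqrt(np.sum(np.abs(dpsi) ** 2) * dx)

print(delta_x * delta_p)                     # -> about 0.5, i.e. hbar/2
```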
Abashed, he had done some homework on the subject and later observed that this helped him in his work on uncertainty. Bohr argued that Heisenberg’s microscope is to an extent misleading. It is very easy to carelessly imagine that the electrons really do have definite positions and momenta at all times, and that the uncertainty relations are merely due to the fact that because of the finite size of the quantum of action, a certain minimum jiggling of the apparatus, and thereby a minimum experimental error, is inevitable in any experiment. The fact that this “obvious” interpretation of the uncertainty relations is wrong is one of the hardest things to grasp about quantum mechanics. However, by 1930 several authors, including Schrödinger, had shown formally that the uncertainty relations follow from the commutation relations that had been written explicitly by Dirac: any two observables that do not commute (for example, spin components in perpendicular directions) have to obey an uncertainty relation. The formalism has no way of even expressing the concept of a particle that simultaneously has sharp values of both position and momentum. Uncertain Synthesis What this means, as Heisenberg and other physicists began to realize by the late 1920s, is that we do not observe continuous electron trajectories at all. What we really observe is a sequence of discrete snapshots of an electron. If these snapshots are very close together then we are naturally inclined to suppose that the electron is following a continuous path. But in fact we have no warrant, either experimentally or theoretically, to conclude that the electron exists in between observations of it. Classical mechanics was supposed to be about particles—tiny, continuously existent chunks of matter—moving around under the influences of forces given by definite, deterministic laws. Instead, quantum theory tells us that the classical picture is only a larger-scale approximation that emerges under certain circumstances, like the apparently continuous image that is revealed if we pull back far enough from a digital image made up of thousands of pixels. Heisenberg declared that the classical picture is the picture we get when the quantum pixels, so to speak, blur together, and Schrödinger’s Equation is the rule that tells us the probability that a pixel will appear in a particular location. In the seesaw battle between continuity and discontinuity, discontinuity had taken the lead again. Heisenberg’s Uncer­tainty relations can be taken as a modern quantitative version of the ancient Paradox of the Arrow stated by the Greek philosopher Zeno of Elea (ca. 450 b.c.). Zeno sought to demonstrate the unreality of motion, but his argument could easily be adapted to show the unreality of rest. Consider an arrow in flight, said Zeno; if at any moment it truly occupies a definite position in space, we cannot say that it is moving, since if it moves it is changing its position. On the other hand, Heisenberg might have added, if the arrow truly is moving, it is constantly changing its position, so that at no moment is it precisely anywhere in particular. Heisenberg’s Uncertainty Relations for position and momentum express in mathematical form the paradox inherent in the very concept of motion. The idea of an uncertainty inherent in all natural processes is also found in Plato’s Timaeus, the work that inspired the young Heisenberg. 
Plato speaks of an inscrutable factor that he calls the Errant Cause—in effect, Plato’s own Uncertainty Principle—which is a sort of zero-point energy, a tendency of everything in the natural world to be in a perpetual state of restless motion. Heisenberg does not mention Plato’s Errant Cause in his memoirs (1971), but could he have been influenced by Plato’s idea that everything in the physical world has an uncertainty in its very nature? Heisenberg speaks repeatedly of his search for the inner order of nature, and yet he more than any other scientist revealed that nature is founded on the tension between order and a disorder that can never be made to go away. The Copenhagen “Orthodoxy” Out of the intense debates about the meaning and application of quantum mechanics from 1927 to 1935, one view quickly became dominant: the so-called Copenhagen Interpretation of Niels Bohr. Some have argued that the victory of the Copenhagen Interpretation was due as much to Bohr’s powers of persuasion (if not intimidation) as its intellectual virtues. There are stories that Bohr would browbeat his students and colleagues in debate, sometimes reducing them to near-collapse. (It is said that Schrödinger, following a long debate with Bohr, took to his bed and regretted that he had ever had anything to do with the quantum; Kramers was hospitalized with exhaustion, and even the redoubtable Heisenberg was once reportedly reduced to tears.) However, the students and coworkers of Bohr unfailingly spoke of him in terms of greatest affection and respect and insisted that his zeal in debate arose entirely from an intense desire to find the truth. The fact remains that Bohr and his followers hammered out (sometimes painfully) a way of doing quantum mechanics that was a delicate compromise between the revolutionary and the conservative, and that, like many compromises, worked well enough to allow physicists to get on with the job of applying quantum mechanics to a host of new problems. It remains to be seen whether the Copenhagen Interpretation of quantum mechanics will stand for all time. The cornerstone of the Copenhagen Interpretation is Bohr’s Principle of Complementarity, which he first announced following heated discussions in 1927 with Heisenberg. Bohr thought that Heisenberg’s discovery of the Uncertainty Relations was a great advance, but he also believed that Heisenberg, The Quantum Revolution in his attempt to interpret his discovery, had given too much primacy to the particle picture. The Principle of Complementarity states that it is not enough to point out that pairs of canonically conjugate observables fail to be simultaneously measurable. Instead, for Bohr, the breakdown of commutativity was merely an aspect of a larger fact, which was that any part of physics where quantum effects are important requires two mutually contradictory but complementary modes of observation and description. Both modes are necessary in order to make all of the predictions that can be made about physical reality, and yet each mode excludes the other; that is, they cannot both be applied simultaneously. The types of experiments in which (say) position can be measured exclude the types of experiments in which momentum can be measured. 
In other words, for Bohr it was not enough to say that we cannot measure position precisely; rather, the issue was that there are limitations on what we can mean by “position” and “momentum.” For Bohr, causal accounts of phenomena (that is, accounts in terms of dynamical quantities such as forces, momenta, and energy) are complementary to space-time accounts of phenomena (that is, accounts in terms of positions, times, and velocities). An important illustration of complementarity is the wave-particle duality itself: sometimes we have to treat matter and energy as if it is composed of waves, and sometimes we have to treat matter and energy as if it is composed of particles, but it does not make sense to do both at once. In fact, it is physically impossible to observe wave properties (such as interference) with precisely the same measurements in which particle properties (such as momentum) can be One wants to ask, “But is an electron really a wave or a particle?” Bohr insisted that this question is not meaningful. He thought that we can only ask what something is when we can specify an experimental context, and the experimental contexts that allow us to observe wave-like properties exclude those that allow us to measure particle-like properties. At the same time, to do all of the physics with electrons that is possible requires both wave-like and particle-like measurement procedures. “But surely,” the response might be, “an electron must really exist even if we can’t describe it without experimentally defined terms. We don’t just make it up!” Bohr would have insisted that the concept of the independent existence of the electron is not meaningful. He would have agreed that we don’t just make electrons up; rather, he would have said, electrons as they are observed in various sorts of experiments are manifestations of irreducibly quantum mechanical interactions between observer and observed, which obey the probabilistic laws of quantum mechanics. And at that point it would be understandable if the questioner, like Schrödinger, took to bed in exhaustion. The Quantum-Classical Divide The Copenhagen Interpretation also contains an important rule about the nature of measurement: the cash value of any quantum calculation must be a prediction that can be understood in terms of unambiguous classical observations. No one could quarrel with this statement if all it meant was that a measurement procedure cannot make sense to humans if it cannot be expressed in procedures that humans can grasp. However, Bohr intended to make a statement about physics itself that would be true for any beings anywhere in the universe doing quantum physics, since by “classical” he apparently meant nothing other than the physics of Newton, Maxwell, and Einstein. It is possible that Bohr had too narrow a notion of what would constitute a “classical” observation procedure. How do we know that there might not be new types of measurements or observations that are inherently quantum mechanical, but that, like Bose-Einstein condensation, can be grasped on the human scale? In other words, how do we know that our conception of what is classical cannot evolve in surprising ways? The fact that some quantum mechanical effects (such as Bose-Einstein condensation) can manifest themselves on the macroscopic scale suggests that this might be the case, but this remains an open question. The Principle of Complementarity therefore contains an echo of Bohr’s early 1920s attempt to defend the exact accuracy of classical electromagnetism. 
He had failed, with the BKS theory, to protect the absolute validity of classical electrodynamics, but now at least he thought he could show that classical physics within its own sphere was absolute. It could seem surprising that Bohr had to resort to such a radical position as complementarity in order to make room for his deeply conservative beliefs about classical physics. Some remarks of Bohr's also hint at a statistical mechanics account of measurement. He said that every measurement procedure must be brought to a close by an "irreversible act of amplification" (Wheeler and Zurek 1983, p. 7). An example would be the exposure of a grain of photographic emulsion by a photon. Clearly, the kind of irreversibility Bohr had in mind here is thermodynamic or statistical, like the shattering of a wine glass on the floor, but Bohr did not develop this notion in detail.

Dirac: Two Kinds of Particles

While Bohr, Schrödinger, and Heisenberg wrestled with the meaning of quantum mechanics, Dirac (who had little interest in philosophical debates) kept constructing pretty equations. By 1928, he had worked out his own version of quantum mechanics, which focused on noncommutativity as the feature that distinguished quantum from classical physics and described observations using the language of linear transformations of state vectors. There was one major deficiency, he felt, in all of the formulations of quantum mechanics up to that point, including his own—they were not relativistic. This meant that they were accurate only for relative velocities that are small compared to the velocity of light and did not take into account the invariance of the speed of light, the rock upon which special relativity is founded. Dirac thought that it was time to seek an equation for the state function for the electron that could be written in covariant form. This means that it would be fully consistent with relativity and would treat time and space similarly, since in relativity time and space coordinates can be transformed into each other. From this point of view the Schrödinger Equation is defective because it is not homogeneous, which means that its derivatives are not all of the same order. It is first order in time, but second order in space. There is a relativistic wave equation that is all second-order, usually called the Klein-Gordon Equation, although it seems to have first been discovered by Schrödinger. However, it does not describe the electron accurately. (It was eventually found to be a valid description of the state vector evolution for particles with spin-0—but that is getting ahead of the story.) Dirac then decided to see whether or not he could write a wave equation for the electron that would be first-order (first derivatives) in all four coordinates, space and time. The problem would be to find state functions with the right sort of mathematical structure to satisfy such an equation, and then try to solve the equation and see if it gave physically meaningful results. Some inspired algebraic manipulation showed Dirac that he could represent the states of electrons using spinors, which are four-component complex-valued vector-like objects that turned out to be constructible out of the Pauli spin matrices. (Spinors were invented by the distinguished French mathematician Élie Cartan, 1869–1951.) Using spinors, Dirac was able to write a relativistic wave equation for the electron, now called the Dirac Equation. It is essentially a covariant version of the Schrödinger Equation.
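For readers who want to see the equations being talked about, they can be written as follows in modern covariant notation (a standard presentation, added here rather than taken from the original text). The relativistic relation between energy and momentum, E² = p²c² + m²c⁴, leads to the second-order Klein-Gordon Equation, while Dirac's equation is first order in all four coordinates:

\[
\left( \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2} \right)\phi = 0
\qquad \text{(Klein-Gordon)},
\]
\[
\left( i\hbar\,\gamma^\mu \partial_\mu - mc \right)\psi = 0
\qquad \text{(Dirac)},
\]

where the four \(\gamma^\mu\) are 4 × 4 matrices built out of the Pauli spin matrices and \(\psi\) is a four-component spinor. Because the energy-momentum relation involves a square, its solutions come with both signs, \(E = \pm\sqrt{p^2c^2 + m^2c^4}\), a point that becomes important below.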
It can be adapted to many other sorts of particles moving at relativistic velocities and is one of the basic tools of quantum field and particle theory. Dirac's picture of the electron also has the satisfying feature that it predicts the electron's intrinsic spin; it is no longer necessary to add spin into quantum theory by hand as it had been up to that point. The way this works is that in order to write a wave equation that was relativistically covariant, Dirac had to assume that the electron had two components, and these correspond nicely to the two possible spin states of the electron. This is the kind of result that theoretical physicists love—deriving important facts from a small number of general principles.

Figure 5.1: Paul Dirac. Photograph by A. Bortzells Tryckeri, courtesy AIP Emilio Segre Visual Archives.

Particles and Antiparticles

A very odd feature of Dirac's theory was that the possible energy values turn out to be given by the square roots of a relativistic expression. Since square roots can be positive or negative, this seemed to predict the existence of negative energy states. Dirac boldly suggested that this was not a mistake. Instead, he argued, we can assume that the vacuum is an infinitely deep sea of mostly occupied negative energy states, often now called the Dirac Sea. By the Pauli Exclusion Principle each possible energy state could be occupied by only one electron. Normally we do not know about these energy states, precisely because they are occupied and because an energy state can only be observed if an electron can drop into it and emit a photon. However, if an electron is knocked out of a negative energy slot by a passing gamma ray it leaves a hole behind. In order to keep the electric charge balanced, the hole has to be positively charged, and the hole will move around exactly as if it were a particle. If the electron falls back into the hole then both it and the hole will disappear, and a gamma quantum will be emitted. At first Dirac hypothesized that the hole where the electron had been could correspond to the proton, which at the time was the only other positively charged particle known. In 1930 several physicists pointed out that if the proton was Dirac's missing electron, then it would be possible for the proton and the electron in the hydrogen atom to suddenly annihilate each other, releasing two photons. All matter would disappear in a great flash of light, a process that is fortunately not observed. Furthermore, the hole corresponding to the electron would have to have the same mass as the electron, which is less than 1/1800 the mass of the proton. Finally, in 1931, Dirac took what now seems to be the obvious step and proposed that there must be a particle distinct from the proton, positively charged, but having the mass of an electron. Barely a year later American physicist Carl Anderson (1905–1991) detected positively charged particles in cosmic ray showers. Because of the amount of curvature their tracks showed in a magnetic field he deduced that they had to have very nearly the mass of the electron. However, they curved in the opposite direction to electrons, showing that they were positively charged. Anderson dubbed his new particles positrons—positive electrons, with precisely the same mass as the electron, but positively charged. Anderson himself did not realize at first that he had confirmed Dirac's theoretical prediction, because he was too busy getting his delicate apparatus to work to study Dirac's abstruse papers.
The discovery of the positron provided a simple interpretation for the fact that Dirac's spinors had to have four components: there are two components for the electron (one for each spin state), and two components for the positron. The confirmation of Dirac's prediction had other important implications. First, it suggested that the vacuum is not nothing; rather, the vacuum (which classical physics naively describes as "empty" space) could be crammed with particles that we simply aren't able to observe normally. Einstein and others noted that Dirac had, in effect, restored the ether of nineteenth-century physics, although in a strange, quantum form. Second, it showed that particle number is not conserved. Particles can be created and annihilated; the gamma ray bouncing (or scattering) off a negative electron will seem to split into a positron and an electron, and after these have careened around for a while they will annihilate each other, leaving a gamma photon behind. To put it another way, particle number is not (unlike mass-energy or charge) a conserved quantity. The phenomenon of particle creation and annihilation would come to have huge importance in quantum mechanics.

Figure 5.2: The Dirac Sea. All negative energy states are normally occupied by exactly one electron. A gamma ray can knock an electron into a positive energy state, creating a hole that looks like a positron. Illustration by Kevin deLaplante.

The discovery of the positron also suggested that other particles may have antiparticles as well, and as early as 1933 Dirac hypothesized that there had to be an antiproton and that there could even be stars or planets far away in space that were composed of antimatter. While the discovery of the positron was a satisfying confirmation of Dirac's theory, it was also troubling, because it spoiled the nice simplicity of particle theory of the late 1920s. For a while it had seemed that it might be possible (as soon as a few more niggling technical details were worked out) to explain the whole structure of matter in terms of just two particles, the proton and the electron. How many more "elementary" particles were lurking in the mysterious nucleus of the atom, or in the vacuum itself, which was beginning to look like something with a very complex structure indeed? The answer would turn out to be—lots! In 1936, de Broglie, with his usual gift for spotting the obvious, articulated what was becoming apparent to many physicists: for every particle there is a corresponding antiparticle with opposite quantum numbers. With one stroke de Broglie nearly doubled the number of particles, although it would take a few more years to experimentally confirm his guess.

Two Kinds of Statistics

In 1926 the precocious Italian physicist Enrico Fermi (1901–1954), together with Jordan and Dirac, made an important advance in understanding the statistics of quanta. (Fermi was to make many contributions to physics, including leading the construction of the first nuclear reactor in 1942.) They showed that there are two kinds of quanta that obey importantly different statistics. In fact, there are three kinds of particle statistics. Boltzmann statistics are the classical statistics of distinguishable particles; this was described in the nineteenth century by Boltzmann and is still a useful approximation when particles can be treated as independent entities. However, Boltzmann statistics break down when quantum effects become important.
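For reference, the three kinds of statistics (named in full just below) can be summed up in their occupation-number formulas, which give the average number of particles n(ε) to be found in a state of energy ε at temperature T. The following is a standard statistical-mechanics summary, with μ the chemical potential and k Boltzmann's constant; it is added here and is not part of the original text:

\[
n_{\text{Boltzmann}}(\varepsilon) = e^{-(\varepsilon-\mu)/kT},
\qquad
n_{\text{Fermi-Dirac}}(\varepsilon) = \frac{1}{e^{(\varepsilon-\mu)/kT} + 1},
\qquad
n_{\text{Bose-Einstein}}(\varepsilon) = \frac{1}{e^{(\varepsilon-\mu)/kT} - 1}.
\]

The two quantum formulas differ from each other only by the sign in the denominator, and both reduce to the Boltzmann formula when the occupation numbers are very small.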
When the de Broglie wavelengths of the particles begin to overlap, the particles become indistinguishable and begin to correlate in unexpected ways, and they can no longer be counted like classical objects. Fermi-Dirac statistics are the statistics of particles that obey Pauli's Exclusion Principle. Such particles are now called fermions. The defining feature of fermions is that each possible quantum state can be occupied by no more than one particle. Electrons and protons, and in general, particles that can make up matter, are fermions. The stability of ordinary solid matter itself is a consequence of the fact that fermions obey the Exclusion Principle. Particles obeying Bose-Einstein statistics, now called bosons, are particles that, in direct contrast to fermions, obey what might be called the "inclusion principle," which means that they have a higher probability of occupying a given state the more particles are already in that state. The photon was the first boson to be identified, and there would soon be many more. The laser is possible because photons will all go into precisely the same state if given a chance, making it possible to generate a beam of perfectly coherent light. Bosons typically appear as field quanta, which transmit interactions between fermions, the components of matter. The photon is the quantum of the electromagnetic field. Think of fermions as rugged individualists, and bosons as conformists who prefer to disappear into a crowd. The difference in their behavior is controlled by a mere plus-or-minus sign in the distribution function (the formula stating the number of particles having a given energy). At a more formal level, the difference between fermions and bosons lies in the behavior of their wave functions. The wave function for a number of fermions is antisymmetric under particle exchange; this means that if two particles are interchanged in the wave function, the sign (plus-or-minus) of the wave function changes. By contrast, the wave function for bosons is symmetric, meaning that it stays the same if particles are interchanged. In 1940, Pauli proved a key result called the Spin Statistics Theorem: particles with integral spin (0, +/–1, +/–2, etc.) are bosons, while particles with half-integral spin (+/–1/2, +/–3/2, etc.) are fermions. Spin is measured in multiples of Planck's reduced constant ħ = h/2π, first introduced by Dirac in 1926. (The symbol ħ is pronounced "h-bar".) All fermions have distinct antiparticles; for instance, the antiparticle of the electron is Anderson's positron. Neutral bosons such as the photon, however, are their own antiparticles—and that is why de Broglie only nearly doubled the number of particles by observing that every particle has an antiparticle.

Things Are Made of Particles, but Are Particles Things?

By the time that particle statistics were clarified by Fermi and others, there could no longer be any doubt that the elementary particles of which the world is, presumably, made are unlike ordinary objects in many ways. First, it is not clear that they can persist through time the way Mt. Everest can. As Heisenberg argued, even if a particle is following a detectable trajectory, the trajectory can only be defined by a sequence of discrete detection events, and we cannot be sure that the particle even exists between detections. Second, all quanta are indistinguishable, which expresses the fact that a particle such as an electron has no other distinguishing features than its quantum numbers.
Ordinary objects possess an indefinitely large amount of detail; two pennies, for instance, can always be told apart if they are examined closely enough. Third, particles obey very different sorts of statistics than do ordinary objects. For the case of bosons this difference should be observable at the macroscopic level, as Einstein first predicted, since there is no limit to how many bosons can go into one quantum state. Yet another challenge to common sense is forced on us by the fact that the structure of elementary particles cannot be described by comprehensible classical models. While by 1926 electrons had been shown beyond a doubt to possess an intrinsic angular momentum, it is impossible to model an electron like an ordinary spinning object. As Pauli showed, it would have to be spinning far faster than the speed of light. So far it has been impossible to experimentally define a radius for the electron with even the most powerful of modern particle accelerators, and so particle theory usually treats the electron as a mathematical point (even though we know, by Heisenberg, that this does not make physical sense) with a definite rest mass, intrinsic spin, and electrical charge. It can be described with great accuracy using the formalism developed in its most clear form by Dirac, but it is like nothing that we can picture or hold in our hands. Somehow, despite these facts, ordinary matter is built up out of nothing but quantum mechanical combinations of extraordinary quantum matter. It is the job of the quantum physicist to show how this is done; and by about 1930 the tools were mostly at hand to do this, although it was less clear than ever why these tools worked. As quantum mechanics increased in predictive power, the duality between the quantum and classical pictures of the world had only sharpened.

Life in Hilbert Space

John von Neumann (1903–1957) was a Hungarian-born mathematician renowned for his phenomenal memory and powers of calculation. He made important contributions to quantum physics, mathematics, logic, computing, theoretical economics, and the development of the hydrogen bomb. By 1932 von Neumann had distilled the work of the founders of quantum mechanics into a unified axiomatic version of quantum theory, which means that he set forth a set of rules or axioms from which the rest of the theory follows mathematically. There are other ways of doing quantum mechanics, but von Neumann's axiomatic formulation, expressed in Dirac's efficient notation, is probably the most widely used version of nonrelativistic quantum mechanics, and it is usually what is taught to university physics majors. On von Neumann's view, the mathematics of quantum mechanics is a kind of linear algebra. The basic object of study is the state vector, which represents a possible preparation state of the physical system under study. The state vector can be written as a column vector with complex-valued components, and it is usually represented as a "ket" in Dirac notation. State vectors live in complex-valued linear spaces called Hilbert Spaces, after the influential Göttingen mathematician David Hilbert. There is not one Hilbert Space but many; every experimental arrangement has its own. Hilbert Space is merely a mathematical device that encodes the number of degrees of freedom of the system and its symmetries; no one thinks that there really is such a place. The state vector for any system is a sum, or superposition, of components, each representing a possible state that the system could be in.
For instance, the spin state of an electron is represented by a two-component vector, with one component representing spin up and the other representing spin down. Each component of a state vector is multiplied by a complex-valued constant called a phase factor. State vectors are transformed into other state vectors by mathematical machines called linear operators. The behavior of operators is defined by their commutation relations, and, as noted, the crucial fact is that some operators do not commute; that is, one gets a different result if the measurements they represent are performed in reverse order. (The notion of noncommutativity is not so counterintuitive in itself. Toasting bread and then buttering it produces a different and more satisfactory result than buttering bread and then toasting it.) Some operators rotate a state vector, while others merely stretch it, that is, they multiply it by a constant called a scalar. If an operator simply stretches a vector, then the vector is called an eigenvector of the operator, and the factors by which the operator multiplies the eigenvector are called the eigenvalues of that operator. Operators can be represented by square matrices, and an operator will be represented by different matrices in different "representations" (coordinate bases that are defined by different types of observations). Recall that Heisenberg's first version of quantum mechanics was written in terms of matrices. If an operator is transformed in such a way that its matrix representation is diagonalized (which means that only its diagonal components are non-zero) then the diagonal values are the eigenvalues of the operator. Quantities that can actually be observed, such as position, momentum, or energy, are represented by Hermitian operators (after the French mathematician Charles Hermite, 1822–1901). A Hermitian operator is simply an operator whose eigenvalues are real numbers; such operators are also called observables. The set of possible eigenvalues for an observable is often called the spectrum of that observable. A state vector is an eigenstate of an observable if the observable acting on that vector leads to the observation of one definite eigenvalue of the observable. The eigenstates of an observable can serve as the basis vectors for the state space of the system; basis vectors are just the unit vectors in terms of which all other vectors can be decomposed. The connection between theory and observation is much less direct in quantum mechanics than it is in classical mechanics. Quantum mechanics gives rules for the changes that state vectors and observables undergo, but we never perceive state vectors or observables as such; they merely serve as devices for calculating eigenvalues and the probabilities of their occurrence, which are the things that can actually be measured. Another useful quantity that can be calculated is the expectation value of an observable, which is the average value of its eigenvalues, weighted by the probabilities of their detection. Expectation values are the long-run average values that we can expect to observe in a series of similarly prepared experiments. When the system goes from a given initial state to a final state, via a certain measurement procedure, there is a quantity called the amplitude (probability amplitude, or transition amplitude) for getting to that final state.
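All of this machinery can be made concrete in a few lines of linear algebra. The following sketch (in Python with NumPy; the example and every name in it are added for illustration and are not from the original text) uses the two-component spin state of an electron to show noncommuting observables, real eigenvalues, an expectation value, and an amplitude:

    import numpy as np

    # Two observables for the spin of an electron (the Pauli x and z matrices,
    # in units of hbar/2).  This example is purely illustrative.
    spin_x = np.array([[0, 1], [1, 0]], dtype=complex)
    spin_z = np.array([[1, 0], [0, -1]], dtype=complex)

    # Noncommutativity: applying the operators in the two orders gives different results.
    print(np.allclose(spin_x @ spin_z, spin_z @ spin_x))   # False

    # A Hermitian operator has real eigenvalues: the possible results of a measurement.
    eigenvalues, eigenvectors = np.linalg.eigh(spin_z)
    print(eigenvalues)                                      # [-1.  1.]

    # A state vector ("ket"): an equal superposition of spin-up and spin-down along z.
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

    # Expectation value <psi|spin_z|psi>: the long-run average of repeated measurements.
    print(np.vdot(psi, spin_z @ psi).real)                  # 0.0

    # Amplitude for finding the system in the spin-up eigenstate, and the Born-rule
    # probability obtained by squaring its modulus.
    spin_up = eigenvectors[:, 1]
    amplitude = np.vdot(spin_up, psi)
    print(abs(amplitude) ** 2)                              # 0.5

Run as written, the sketch prints False (the two spin operators do not commute), the real eigenvalues –1 and 1, the expectation value 0.0, and the probability 0.5.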
Amplitudes are complex numbers, and by themselves have no agreed-upon physical interpretation, but the squares of their moduli are the probabilities of getting various possible results. This is an abstract and generalized version of Born's Rule. Phase differences between amplitudes become crucial when probabilities are calculated using the Born Rule, since they determine the interference between the various components of the state vector. State vectors and amplitudes obey the Superposition Principle, which states that any linear combination (superposition) of allowed state functions is an allowed state function. (There are some exceptions to this rule, called superselection rules.) The Superposition Principle allows for the possibility of interference between physical states that from the point of view of classical mechanics would be entirely independent (such as particles outside each other's light cones). There is an especially important observable called the Hamiltonian. It represents the structure of energy in the system, but like all operators it does so in an indirect way, since its eigenvalues represent the possible energy values that the system can have. The differences in energy eigenvalues then give, by Bohr's rules, the frequencies of the possible spectral lines an atom can emit, while the probabilities of the transitions give the intensities of those lines. There is more, much more, to the mathematical structure of von Neumann's quantum mechanics, but enough has been sketched here to give some familiarity with the most commonly used vocabulary of the field.

Collapse of the Wave Function

In von Neumann's picture of quantum mechanics we encounter yet another odd duality: there are two ways that the state vector can evolve. If the system is not under observation, the state vector evolves smoothly and deterministically according to Schrödinger's Equation. Mathematically, Schrödinger evolution is a rotation of the state vector in the system Hilbert Space; such transformations are reversible, like any rotation, and are said to be unitary. When the system is measured, however, the state vector abruptly collapses (or reduces) to an eigenstate of the observable. This process is called the collapse of the wave function, and it is represented mathematically by a projection. Since several vectors can project onto one eigenstate, a projection is in general irreversible, and information is lost. Because of the loss of information, one cannot always tell from a given experimental result what the preparation state of the apparatus was. Hardly any physicists believe that von Neumann's collapse postulate is literally true. There are several objections to it. First of all, it seems too much like calculating the wrong answer (that the system will remain in a superposition), rubbing it out, and penciling in the observed experimental outcome. Second, it seems mathematically clumsy to have two types of system evolution. Third, there are problems with finding the right way to describe state collapse in relativistic spacetime. There is also increasing evidence that state collapse can sometimes be reversed, so it may simply be false that some components of the state vector just go poof! like a soap bubble. There are several no-collapse versions of quantum mechanics, but none yet stand out as the obvious replacement for the von Neumann picture.

The Double Slit Experiment

There is no better way to capture the essence of the discoveries of the mid-1920s than through the double slit experiment.
It expresses the mysteries of the wave-particle duality in a very clear way. Although it is usually described as a thought experiment, it is based on observations that have been confirmed innumerable times in real quantum mechanical experiments. The purpose of the experiment is to reveal the difference between classical and quantum behavior. First, we set up a machine gun. There is a sheet of steel between the machine gun and a detector, which could simply be a wall that can absorb the shots. In the sheet of steel there are two closely spaced slits that are only slightly larger than the diameter of a bullet. When the gun is fired most of the bullets will go through either hole, but some will ricochet away and some will be deflected slightly when they glance off the edges of the holes. The result will be two humped distributions of bullet impacts. This is typical classical particle behavior. The second setup is designed to demonstrate how classical waves behave. Set up a tank with water in it. Have a bob moving up and down and thereby generating concentric wavelets, and have a barrier with two closely spaced holes that are roughly similar in width to the wavelengths of the ripples in the tank. Dream up some way of detecting the waves that get through the barrier. (The right side of the water tank could be a gently sloping beach, and bits of driftwood in the water could show how far up the beach the waves have gone.) The waves that go through the barrier will spread out from each hole in roughly circular patterns that overlap, and when they hit the detector they will display a classic interference pattern. The peaks form where constructive interference between the wavelets occurs because the wavelets were in phase with each other. The troughs, or nodes, in the interference pattern form where destructive interference occurs because the wavelets were out of phase with each other. Interference effects such as this guided Thomas Young in the early 1800s to argue that light travels as a wave. Now, try the same arrangement with elementary particles such as electrons or photons. Shoot them through twin slits in a barrier, with the slits roughly comparable in size to the de Broglie wavelengths of the particles. Like the bullets, each particle will hit the detector screen (which could be a photographic plate so that there will be a record of the results) as if it were a localized, discrete object, and it will leave a small spot showing where it landed. It seems that it has been shown that electrons are particles. Not so fast! It will be seen that the pattern of particle detections forms a wave-like interference pattern.

Figure 5.3: The Double Slit Experiment. The top curve shows the distribution of bullet impacts. The middle curve shows the distribution of driftwood on a beach due to water waves. The lower curve is a typical interference pattern of electron impacts on a detector screen. If we try to determine which hole the electrons go through, we destroy the interference pattern and get a bullet-like distribution. Illustration by Kevin deLaplante.

This is the wave-particle duality: particles are detected as if they are discrete, localized objects, but the probability of their detection is a waveform. One is tempted to think that this could be explained by some sort of interaction between the particles as they go through the slits.
To see that this is not quite right, suppose that we turn down the particle intensity so low that only one particle is going through the apparatus every week on average. As before, the particles will be detected, one by one, as discrete spots on the screen. But the truly extraordinary fact is that over a long time the detection events will, again, build up the same familiar interference pattern that appeared when there were lots of particles going through the apparatus at once. It is as if each particle interferes with other particles that were emitted long ago or that will be emitted in the future. Or perhaps it is that each particle somehow interferes with itself. (Dirac described the phenomenon this way.) It would seem that in order for interference to be possible, the particles have to go through both slits at once. Suppose we now decide to find out which slit each particle goes through. The only way we can do this is to somehow intercept or interact with the particles as they go through the slits. We could, for instance, try shining light off them. We have to give the light a short enough wavelength for it to be able to reveal which hole the particles go through. This can be done, and it will, in fact, reveal individual particles going through one slit or the other, but not both at the same time. But when we can tell which slit each particle goes through, the interference pattern disappears, and we get a double-humped distribution just like the distribution for the machine gun bullets. The interference pattern appears only when we cannot tell which slit the particles go through. If we can tell which slit the particles go through, we get a classical "machine gun bullet" distribution. The double slit experiment illustrates the odd connectedness of quantum phenomena. How does one electron "know" how to interfere with itself or with particles emitted at other times? It is not even clear that the question makes sense. The experiment also illustrates not only the inherently probabilistic nature of quantum mechanics, but an important difference between classical and quantum probabilities. What is the chance of a given major league baseball team winning the World Series? Suppose one knew nothing about the teams except how many there are; then the probability of a team winning would be simply 1 in 30, because there are 30 major league teams. However, if one knew more about the teams (such things as, for instance, their track records, or which players they have) one would be able to say that certain teams have a higher probability of winning than others. Estimates of classical probability are always conditional on the background information available, and the more background information one can get, the more accurately probabilities can be estimated. Classically, there is presumed to be no limit, other than obvious practical limits, to the amount of background information available and therefore no limit to how closely the probabilities of various events can be estimated. A major difference—some might say the major difference—between quantum mechanics and classical physics is that there is in general a maximum amount of information we can get about a system before changing (usually irreversibly) the very nature of the system. A large number of quantum systems can be prepared in exactly the same way, but they will not always behave the same way; this is what is meant by saying that quantum mechanics is indeterministic. Quantum mechanics is very good at calculating the probabilities that particles will behave in various ways with high accuracy.
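The way those probabilities are calculated is worth spelling out, because it contains the whole puzzle in miniature (a standard textbook summary, added here rather than quoted from the text). If ψ₁(x) and ψ₂(x) are the amplitudes for a particle to arrive at point x on the screen via slit 1 and via slit 2, then when nothing in the apparatus records which slit was used, the amplitudes are added first and squared afterward:

\[
P(x) \;=\; \lvert \psi_1(x) + \psi_2(x) \rvert^2
\;=\; \lvert\psi_1\rvert^2 + \lvert\psi_2\rvert^2 + 2\,\mathrm{Re}\!\left[\psi_1^{*}(x)\,\psi_2(x)\right],
\]

and the last term is the interference pattern. If the apparatus does record which slit the particle used, the probabilities themselves are added, \(P(x) = \lvert\psi_1\rvert^2 + \lvert\psi_2\rvert^2\), and the interference term is gone.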
However, as the double slit experiment illustrates, if we try to learn more than a certain amount about the particles we change the very nature of the experiment, and we get an entirely different sort of result. Quantum mechanics is probabilistic in principle, not merely by dint of practical necessity but because it appears that in many quantum mechanical experiments there is just no more information to be had. It was this that Einstein objected to more than anything else.

A Historical Puzzle

The historian of science Paul Forman has argued controversially (1971) that the sudden appearance of quantum mechanics, with its emphasis on discontinuity and uncertainty, was a reflection of the breakdown of old social and political certainties in the turbulent years of the Weimar Republic in Germany following World War I. Forman's thesis is too complex to fully evaluate here. However, the history of quantum physics seems to reveal that most of the pioneers of quantum physics were not trying to justify preconceived ideas about how physics ought to be, but instead were doggedly following the leads given them by experimental results and mathematical necessity. In many cases they were surprised and even dismayed at what nature revealed to them; Heisenberg himself later spoke of his discovery of matrix mechanics as "almost frightening" (Heisenberg 1971, p. 69). Advances in physics are made by a combination (in proportions varying from scientist to scientist) of philosophical analysis, mathematical skill, experimental ingenuity, and physical intuition—and (very important) a willingness to accept unexpected results. Perhaps youth plays a role. Most of the decisive breakthroughs in the period 1920–1930 were made by scientists younger than 30 years of age, and often younger than 25. This was noted at the time, and the phenomenon was jocularly described as Knabenphysik—"boy physics." Schrödinger was near 40 and Born in his early 40s when they made their major contributions to quantum mechanics, and thus they were the old men of the team. Great innovations in science, especially in mathematics and theoretical physics, are rarely made by older people. Is this merely because of aging?—or do older people get too committed to comfortable ways of thinking? Not enough is known about the nature of scientific creativity to answer these questions. The advance of science, especially in an extraordinarily creative period such as 1925–1935, reveals yet another duality that is peculiar not just to physics but to all of science. Scientists, and academics generally, tend individually to be people of somewhat conservative character, however bold they may be in their speculations, and they usually work at universities, which have dual (Bohr might have said complementary) mandates. On the one hand, the task of a university is to preserve and transmit existing knowledge; on the other, a university exists to foster innovation and the search for new knowledge. Tensions arise from the fact that in order to seek new knowledge, researchers have to admit that their old knowledge, which they may have invested an important part of their lives in mastering, is in some respect incomplete or even wrong. It is sometimes very hard for academics to do this.
The distinguished American physicist John Archibald Wheeler (1911–), who himself worked with Niels Bohr, once said that science advances by "daring conservatism." There seems to be no general recipe, however, for knowing when one is being too daring or too conservative, and one can see the scientists who created quantum physics, especially in the period up to 1935, struggling to find this balance. Whether or not Forman is exactly right, there were historical forces at work in the years from 1900 to 1935 that made it possible for a group of exceptionally talented academic scientists to take the intellectual risks that made quantum mechanics possible. It would be nice to have a recipe for this historical magic so that it could be replicated whenever it is needed.

Elements of Physical Reality

This chapter covers the foundational debates about the meaning of quantum mechanics that occurred in the years 1927 to 1935. Most physicists of the time regarded these debates as largely irrelevant—at best, the sort of thing that one chatted about in the pub after a hard day at the lab—and preferred to press on and apply the powerful new techniques of quantum mechanics to the vista of unsolved problems in physics and chemistry that opened out before them. But the philosophical debates of 1935 would turn out to be the front-line research of the first years of the twenty-first century.

Early Causal Interpretations of Wave Mechanics

The most obvious way to respond to the puzzle posed by the double slit experiment is to imagine that the wave function that determines the probabilities of particle detection really does describe some sort of actually existing wave that guides the particles to the proper spots on the detection screen. It was clear by 1927 or 1928 that electrons themselves cannot be nothing but waves, since they are always detected as highly localized particles even though the probabilities of their detection follow a wave-like law. But perhaps the wave function is a physically real thing that guides or pilots the particles by some mechanism to be determined, rather than merely a description of probabilities. This is called the pilot wave interpretation of quantum mechanics, and early versions of it were explored by several physicists in the late 1920s. All pilot wave theories have two features in common: they are continuum theories (because they attempt to explain particulate behavior in terms of wave-like structures) and they are deterministic, because the behavior of waves and particles in these theories is governed by partial differential equations of types that lead to definite, unique solutions. For the latter reason pilot wave theories are examples of causal interpretations, and their authors hoped that if they could be made to work they would get rid of that "damned" indeterministic quantum jumping that Schrödinger had complained about. It was also hoped that causal versions of quantum mechanics would give a spacetime picture of quantum processes and thus satisfy an instinct for mechanical explanation that was frustrated by the new quantum mechanics, which was increasingly expressed in highly abstract mathematics.

Figure 6.1: Niels Bohr and Albert Einstein. Photograph by Paul Ehrenfest, courtesy AIP Emilio Segre Visual Archives.

An early causal interpretation of wave mechanics was offered by Erwin Madelung (1881–1972), who in 1926 outlined a hydrodynamic interpretation of wave mechanics.
Hydrodynamics, the physics of fluids, is based on partial differential equations describing fluid flow. Madelung started from Schrödinger's suggestion that the wave function described a continuous distribution of charge and treated this charge distribution as an electrified fluid. Madelung had to reconcile the well-established existence of electrons as discrete particles with this model, and he proposed that electrons were in some unclear way dissolved into his hypothetical electrical fluid. His work was not found to be convincing. Still, Madelung's mathematics, differently interpreted, has appeared in other causal rewritings of quantum theory. Louis de Broglie offered a more sophisticated causal model in 1927. At first he hoped to be able to show that the electron could be understood as a singularity in the wave field. What this means is that the electron would be a sort of knot or eddy whose structure would be determined by a dynamic law (probably nonlinear) acting on the wave function. De Broglie called this the theory of the double solution, because the same wave equation would have two sorts of solutions, one for waves, and one for highly localized concentrations of energy that would behave like particles. He was not able to arrive at a mathematical law that could do this ambitious job, and instead, at the 1927 Solvay Conference, he proposed a provisional pilot wave theory according to which the particles were carried along by the quantum wave field like chips of wood in a stream. The theory was inherently nonlocal, since the particles, in order to be able to behave in a properly quantum mechanical way, had to somehow sense the positions and momenta of all other particles in the system, instantaneously. De Broglie's theory did not get a warm reception. Pauli argued that de Broglie had failed to give an accurate account of scattering. Wave mechanics says that when a particle scatters off a target, the wave function of the scattered particle expands outward from the target in spherical ripples, whereas particles are always detected within very narrow locations and directions. How could de Broglie explain this discrepancy? He did not have a clear answer—in 1927. Apart from Pauli's specific complaint, followers of the Copenhagen Interpretation had two kinds of objections to causal theories, one radical and one conservative. First, the Copenhagenists thought it was just mistaken to return to a picture in which positions and momenta had exact meanings independent of the experimental context. This view would soon be reinforced by a mathematical proof by John von Neumann that apparently demonstrated, to everyone's satisfaction at the time, that it is mathematically impossible for there to be a hidden variables theory that can reproduce the statistical predictions of quantum mechanics. Second, they could not accept the fact that the sort of causation contemplated in causal interpretations seemed to be inherently nonlocal, a sort of action at a distance. In the classical realm, Bohr insisted, relativity (the ultimate classical theory) could not be challenged. De Broglie abandoned his theory and became, for a while, a vocal advocate of the Copenhagen Interpretation. The causal interpretation of quantum mechanics would be revived by David Bohm 25 years later, in a form that would be less easily dismissed.
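The mathematical core that Madelung's and de Broglie's proposals share (and that Bohm would later revive) can be stated briefly; the following is a compressed modern summary, added here and not part of the original text. Writing the wave function in polar form, ψ = R e^{iS/ħ}, the Schrödinger Equation splits into two real equations: one is a continuity equation for the "fluid" density ρ = R², and the other is a classical-looking equation of motion with an extra, ψ-dependent quantum term. In the pilot wave reading, each particle simply moves with the local velocity of this flow:

\[
\frac{\partial \rho}{\partial t} + \nabla\!\cdot\!\left(\rho\,\frac{\nabla S}{m}\right) = 0,
\qquad
\mathbf{v} \;=\; \frac{\nabla S}{m} \;=\; \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\nabla \psi}{\psi}\right).
\]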
Schrödinger's Cat and the Measurement Problem

In 1935 Schrödinger published a paper entitled "The Present Situation in Quantum Mechanics." Although it contained no new results, it was a landmark paper that raised questions that were to be debated for decades afterward. Schrödinger invited us to consider a "fiendish device" in which an unfortunate cat is imprisoned in a box with a closed lid, so that the cat cannot be seen during the first part of the experiment. The cat's box is connected to a radioactive source that has a 50 percent probability of decaying within one hour. If the source decays, the alpha-particle it emits is detected (by a device such as a Geiger counter) and a valve is electronically triggered that releases deadly prussic acid into the box, killing the cat instantly. The quantum mechanical description of this setup says that the radioactive atom is in a superposition of states, one for it to be decayed and one for it to be not-decayed. Because the cat is coupled to a system that is in a superposition, its wave function gets entangled with that of the apparatus, and it is also in a superposition of states according to quantum formalism—one for it being dead, the other for it being alive. And yet, if the experimenter opens the lid of the box after an hour has passed, the experimenter will certainly not see the cat in a curious half-alive, half-dead state. Rather, the experimenter will either see a cat that is definitely dead (with 50% probability) or a cat that is definitely alive. In the mathematical language of quantum mechanics, the cat goes from an entangled state to a mixture (a system whose possible states can be described using classical probabilities), merely because someone opens the lid of the box.

Figure 6.2: Schrödinger's Cat. The release valve for the poison is controlled by a quantum state in a superposition. Quantum theory says the cat is also in a superposition, but if the box is opened after one hour the cat will be either definitely alive or definitely dead. Illustration by Kevin deLaplante.

This thought experiment illustrates Bohr's insistence that any quantum mechanical experiment will always result in a definite, classical result. It could also be taken as a confirmation of von Neumann's collapse postulate—when an observation is made, the wave function collapses into one and only one eigenstate of the system. It also shows that interesting things can happen when quantum systems are coupled to macroscopic systems; the cat in effect acts as an amplifier of an event at the subatomic scale, and there is no theoretical limit to how large an amplification factor can be achieved. (A random subatomic event could be set up to trigger a powerful nuclear bomb, for instance.) What Schrödinger really wanted to demonstrate, however, was the arbitrariness of the quantum-classical divide. The indeterminacy in the state of the radioactive sample is transferred to the cat, but it disappears when the box is opened and the cat (alive or dead) is observed by an experimenter. But at the same time the formalism of the theory says that as soon as the experimenter interacts with the system, the indeterminacy is transferred to him as well and he goes into a superposition of states—except that this is not, of course, what an actual human observer experiences. Schrödinger had thus defined what became known as the measurement problem. Loosely speaking, it is simply the problem of understanding what happens during the process of measurement.
More precisely, it is to explain how superpositions can turn into definite classical outcomes. Several authors have at various times shown that it is mathematically impossible for superpositions to evolve into classical states according to the Schrödinger Equation. Why is it that we always seem to get definite, very classical-looking results when quantum mechanics describes things as a blur of all possible states they could be in? Is the formalism of quantum mechanics wrong when it comes to describing measurement, or are things not as they seem to human observers?

The Mystery of Entanglement

Einstein had suspected as early as 1905 that his light quanta were not statistically independent in the way that classical particles ought to be, and by the time the formalism of quantum mechanics was taking shape, in 1926 and 1927, it was very clear that quantum mechanical systems are interconnected in ways that defy classical intuitions. In 1935 Schrödinger introduced the term "entanglement" to describe this odd interconnectivity or mutual influence, and he described it as not "one but rather the characteristic trait of quantum mechanics" (1935, p. 555).

The Mathematical Basis of Entanglement

The formal basis of entanglement is fairly easy to understand in terms of the basic linear algebra of von Neumann's Hilbert Spaces. Suppose there is a system composed of two particles. (It could be any number at all.) Each individual particle can be in a number of possible eigenstates of any observable, and these eigenstates can be used as basis vectors for the state space of each particle. The state space for the pair of particles is called a tensor product space of the spaces of the individual particles, and its basis vectors are possible states of the pairs of particles. The most general tensor product state is a linear combination (sum) of tensor product basis states of multiparticle systems; that is, the tensor product space is built up out of all possible linear combinations (superpositions) of the states of each individual particle. If a tensor product state is factorizable then the particles are statistically independent. However, it is straightforward to show that tensor product states in general cannot be factored into products of the states of the individual particles of the system; there are almost always cross-terms that translate into correlations between the particles that cannot be explained classically. From a purely mathematical point of view, therefore, entanglement is a consequence of the nonfactorizability of tensor product states, which in turn is a consequence of the superposition principle. Entanglement can also be thought of as an interference phenomenon, like the interference of waves in the double slit experiment. Just as there are amplitudes for individual particles to go into various states, there are amplitudes for pairs of particles to go into various correlated states. Entanglement comes about when these amplitudes are out of phase and therefore interfere. From this point of view, therefore, entanglement is simply an interference phenomenon. However it may be expressed formally, the upshot is that entangled particles cannot be treated as separate entities. Their properties are mixed up with the properties of their partners.
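A two-particle example makes the point about factorizability concrete (a standard illustration, added here and not drawn from the original text). Using |0⟩ and |1⟩ for two basis states of each particle, the state

\[
\tfrac{1}{2}\bigl(\lvert 0\rangle + \lvert 1\rangle\bigr)\otimes\bigl(\lvert 0\rangle + \lvert 1\rangle\bigr)
= \tfrac{1}{2}\bigl(\lvert 00\rangle + \lvert 01\rangle + \lvert 10\rangle + \lvert 11\rangle\bigr)
\]

is a superposition but still factorizes, so the two particles are statistically independent. By contrast, the state

\[
\tfrac{1}{\sqrt{2}}\bigl(\lvert 01\rangle - \lvert 10\rangle\bigr)
\]

cannot be written as any product \((a\lvert 0\rangle + b\lvert 1\rangle)\otimes(c\lvert 0\rangle + d\lvert 1\rangle)\): that would require \(ac = 0\) and \(bd = 0\) while \(ad\) and \(bc\) are both nonzero, which is impossible. These are the cross-terms that cannot be removed, and they are the formal signature of entanglement.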
An entangled state of two particles is not merely two separate particles connected with some sort of odd force field (although some causal interpretations would attempt to treat them this way); rather, they are more like quantum mechanically conjoined twins that do not have fully separate identities. Historian of physics Don Howard argues that Bohr's Principle of Complementarity was a response to entanglement. Although Bohr did not use the word "entanglement," he was well aware of the phenomenon. Bohr's view seems to have been that when an experimenter makes a measurement of, say, position, his state gets entangled with the state of the apparatus. To the experimenter the apparatus is in a definite classical state (like Schrödinger's cat), but in fact the experimenter's state (because of its entanglement with the apparatus) cannot be fully distinguished from the state of the apparatus. If the experimenter should then choose to measure momentum the experimenter gets entangled in a different way, but there is no such thing as being entangled with both definite position and definite momentum states. Hence measurements of position and measurements of momentum are complementary but cannot be combined into one single process—or so Bohr seems to have thought.

Quantum Logic

An entirely new way of thinking about quantum mechanics was introduced in 1936 by von Neumann and the mathematician Garrett Birkhoff (1911–1996). They thought that it might be possible to learn something about the inner workings of quantum mechanics by trying to express it as a logic, that is, as a formal system of reasoning. Perhaps the fact that we live in a quantum world could be explained if it can be shown that there is a deeper and more general logic to which ordinary classical logic is an approximation, just as classical mechanics is an approximation to quantum mechanics. Birkhoff and von Neumann showed that they could treat statements about possible measurement results as propositions, and represent the workings of their logic by a mathematical structure called a lattice. Ordinary classical logic can be represented by a so-called orthocomplemented lattice, while quantum logic is represented by a non-distributive lattice. A distinguishing feature of quantum logic is that it fails to obey the classical distributive law of logic. Classically, saying that the master is dead and either the butler did it or the maid did it is exactly equivalent to saying that the master is dead and the butler did it or the master is dead and the maid did it. (This is called the distributive law because the "and" distributes over the "or.") Quantum mechanically, however, the distributive law fails if statements are made about noncommuting observables. For example, if spin-x is 1 and either spin-y is 1 or spin-y is –1, we cannot conclude that either spin-x is 1 and spin-y is 1 or spin-x is 1 and spin-y is –1. This is because spin-x and spin-y do not commute, so the last two statements cannot be made. While quantum logic has contributed relatively little to practical physics so far, it has been a very important stimulus for investigations into the foundations of quantum theory, and it may yet be reborn in the new field of quantum computing.

Einstein Digs in His Heels

Albert Einstein had been one of the great pioneers of quantum mechanics.
As late as his work with Bose, when he was in his mid-40s, he had been willing to forge ahead and seize at new technical results even if it was not clear how they could be reconciled with the classical worldview that he increasingly came to prefer. However, by the late 1920s Einstein ceased to participate in the development of quantum mechanics and instead became its harshest critic. He readily acknowledged that the new quantum mechanics had great empirical effectiveness. However, he felt that it could not possibly be the final form that physical theory would take, and he began to lose patience with the effort of living with contradictions in the hope that they would someday be resolved. He gradually got out of touch with recent technical developments in quantum mechanics, until he was investing almost all of his intellectual effort in his lonely search for a unified field theory that would resolve all contradictions between quantum and classical, particle and wave. Like de Broglie and Schrödinger, he thought that it should be possible to find equations that would describe elementary particles in the form of what he called "singularities" of the field. These would not be true mathematical singularities (such as what happens if one tries to divide by zero), but rather highly localized knots or concentrations of energy that would move about in a particle-like manner under the guidance of the wave field. But Einstein's vision was grander than de Broglie's, for his ultimate goal was to find a classical local field that would encompass all of the forces of nature, just as Maxwell had unified electricity and magnetism into a single field. Occasionally, however, Einstein found time to take stinging potshots at quantum mechanics—and especially the Copenhagen Interpretation of his friend Niels Bohr.

"God Does Not Play Dice"

One of Einstein's strongest objections to quantum mechanics was its apparently inherent indeterminism. On several occasions Einstein famously quipped that "God does not play dice with the universe." In response Bohr gently reminded Einstein that perhaps it is not for us to say what God will do. But Einstein had been troubled from the beginning by the inherently probabilistic nature of quantum physics, even as he pioneered its development by means of his own skillful use of statistical reasoning. Quantum mechanics can give extremely accurate estimates of the probabilities that an alpha particle will be emitted from a nucleus, for example, within a certain period of time and in a certain direction, but it has no way at all of telling us exactly when or in what direction the alpha will be emitted. Einstein was convinced that this marked an incompleteness in quantum mechanics: it cannot be the whole story about what is going on inside that nucleus.

Realism and the Separation Principle

Einstein was a realist in the sense that he believed that there is something about the physical world that is independent of the way humans perceive it or think of it. Like Planck, he thought of the mission of the scientist as an almost religious quest to understand this independent reality. A foundation of his conception of realism was what he called the Separation Principle, the statement that physical systems that are distant from each other in space at a given time are entirely physically independent from and distinguishable from each other at that time.
Of course, one system can have an influence on another at a later time by such means as sound waves or light signals, but Einstein believed that any way of transmitting an influence from one system to another has to take a definite amount of time, and in no case can it go faster than light. Like his hero Newton, he thought that the very notion of action at a distance was physically absurd, and he sarcastically referred to the quantum failure of separability as "spooky action at a distance" or as a form of "telepathy." Einstein had two reasons for his belief in separability. The first was, of course, his theory of relativity, which showed beyond a shadow of a doubt, or so he thought, that faster-than-light motion of physical influences is impossible. But there was a deeper reason for his skepticism about the apparent failure of the Separation Principle in quantum mechanics: the conception of observer-independent reality can only be maintained rigorously, he insisted, if it is physically possible to separate observers from the systems they study. In the last 25 years of his life Einstein several times stated that he regarded the Separation Principle as necessary for the very possibility of science itself. If we could not separate objects and study them in isolation, he argued, how would science be possible? There are two answers to this. First, there is no reason to think that the world is structured for the convenience of human scientists. Second, science gets along just fine on the basis of partial knowledge of the parts of nature; one does not need to know everything about a physical system in order to make useful predictions about it. Einstein may, therefore, have simply been expecting too much of physics. Nevertheless, he was one of the first, if not the first, to grasp how enormous is the challenge to the classical worldview raised by quantum mechanical entanglement.

Einstein's Causal Wave Theory, and Why He Wouldn't Publish It

In 1927 Einstein, like Madelung and de Broglie, attempted his own causal version of wave mechanics and produced a mathematically sophisticated theory based on Schrödinger's Equation. Its aim was to remove indeterminism by giving a recipe for determining particle velocities uniquely in terms of the wave function. Einstein was dismayed to discover that his theory, too, violated the Separation Principle. If he tried to describe the motion of a system of particles built up out of subsystems that he assumed did not interact to begin with, he found that the wave function for the composite system could not be written merely as the product of wave functions for the individual systems, but that inevitably cross-terms appeared indicating that the particles could not be treated as separate entities within the composite system. Einstein withdrew the paper from publication, convinced that his result must be wrong.

Einstein versus the Uncertainty Principle

Einstein did not doubt that Heisenberg's Uncertainty Relations were accurate in practice and represented a profound insight. (In 1932 he nominated Heisenberg for the Nobel Prize.) Nor did he doubt the utility of statistical methods; after all, Einstein himself was one of the great masters of statistical mechanics. Rather, he wanted to show that Heisenberg's ∆p's and ∆x's were merely the result of (perhaps unavoidable) inaccuracies in measurement, not a sign of indeterminacies in the very nature of positions and momenta themselves.
He wanted to show that it was contradictory to suppose that a particle did not always have a definite position and momentum, even if we might not be able to determine those quantities with arbitrary accuracy in practice. Einstein was well aware that in the formalism of wave mechanics, as it had taken shape by the late 1920s, one cannot even express the notion of a particle having simultaneously precise position and momentum. It would be a mathematical contradiction in terms, like asking for a square circle. However, Einstein did not worry about this, since he felt that it was more important to get the physical picture right and repair the formalism afterward. Many of Einstein’s greatest breakthroughs had been sparked by simple but elegant thought experiments. Throughout this period Einstein devoted a great deal of his considerable ingenuity to searching for ways to show that there was more information available in an experimental setup than allowed for by the Uncertainty Relations. Somehow, Bohr would always find an error in his arguments, and then Einstein would try again. Embarrassment in Brussels One of Einstein’s most ingenious attempts to defeat the Uncertainty Relations was presented at the Solvay Conference of 1930 in Brussels, Belgium. It was based on another one of his deceptively simple thought experiments. Suppose there is a box containing one and only one photon. (The term “photon” was by this time in current use.) Suppose also that the box is equipped with a shutter that is controlled by a very precise timer. We design the timer so that it briefly opens the shutter for a time interval that can be set as narrowly as we want. If the photon flies out through the shutter we know the time interval within which it did so to arbitrary accuracy. We weigh the box before and after the photon leaves. The box will be slightly less massive when the photon has left, and by the relativistic equivalence of mass and energy we can determine the energy of the photon. Because Einstein took it that the box could be weighed to arbitrary accuracy, the energy of the photon could thereby be determined to arbitrary accuracy as well, and it would be possible, therefore, to violate the version of Heisenberg’s Uncertainty rules that states that the product of the uncertainties in energy and time must always be greater than Planck’s constant. Bohr could not see an immediate answer to Einstein’s argument, and he spent a sleepless night struggling to find the error. In the morning he appeared, 88The Quantum Revolution triumphant, and showed that there will be inevitable uncertainties in the timer reading and measurement of photon mass that are of exactly the right amount to save Heisenberg’s formula. In order to weigh the box it has to be suspended in a gravitational field, and when the photon is emitted the box recoils, introducing uncertainties in its position, momentum, and in the reading of the timer clock attached to it. The timer uncertainty comes from nothing other than Einstein’s own formula for the gravitational red shift of the rate of a clock when it moves in a gravitational field. Bohr generously emphasized how much had been learned from Einstein’s clever example. (Much can sometimes be learned from making an interesting mistake.) Another important implication of Bohr’s analysis of Einstein’s thought experiment is that there are deeper connections between quantum mechanics and gravitation than meet the eye. 
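Bohr’s rebuttal can be compressed into a few lines of algebra. What follows is a standard reconstruction rather than a quotation of his reasoning, with factors of order one dropped; the symbols Δq (the uncertainty in the box’s height during the weighing), T (the duration of the weighing), and g (the gravitational acceleration) are introduced here only for this sketch.

```latex
% A rough sketch of Bohr's reply to the photon-box argument (order-one factors dropped).
% \Delta q: uncertainty in the box's vertical position while it is weighed
% T: duration of the weighing; g: gravitational acceleration
\begin{align*}
\Delta p &\gtrsim \frac{\hbar}{\Delta q}
  &&\text{reading the pointer to $\Delta q$ blurs the box's momentum}\\
g\,\Delta m\,T &\gtrsim \Delta p
  &&\text{resolving a mass change $\Delta m$ requires its weight to exceed $\Delta p/T$}\\
\frac{\Delta T}{T} &= \frac{g\,\Delta q}{c^{2}}
  &&\text{Einstein's red shift: the clock's rate depends on its height}\\
\Rightarrow\quad \Delta E\,\Delta T &= (\Delta m\,c^{2})\,\Delta T \;\gtrsim\; \hbar
  &&\text{combining the three lines above}
\end{align*}
```

The uncertainty in the clock reading turns out to be exactly large enough to protect Heisenberg’s energy-time relation.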
The irony of the story is that 10 years earlier it had been Einstein who victoriously defended the more radical quantum reading of particles while Bohr had tried to protect the classical picture of electromagnetism. Now the tables were turned, with Bohr defeating Einstein’s attempts to argue away quantum uncertainty by means of a principle from Einstein’s own theory of gravitation. Bohr had decisively won round two of the Bohr-Einstein debates. But there was another round to follow.

The Einstein-Podolsky-Rosen Bombshell

In 1935, Einstein, in collaboration with younger colleagues Boris Podolsky (1896–1966) and Nathan Rosen (1909–1995), published his last and greatest attempt to undermine quantum uncertainty. This time his arrow struck home—although it did not hit the exact target he had been aiming for. The title of their paper was “Can Quantum-Mechanical Description of Physical Reality be Considered Complete?” (It is usually called just the EPR paper, after the names of its authors, or perhaps after the phrase “elements of physical reality.”) The EPR paper is one of the most widely cited scientific papers of the twentieth century, but its argument is subtle and impossible to describe fully without the use of mathematics. The paper was actually written by Podolsky, because Einstein was not comfortable writing in English, while the detailed calculations were carried out by Rosen, who was expert in the wave mechanics of entangled states. It is unfortunate that Einstein did not write the paper himself, because his own prose (whether in his native German or in a competent English translation) is invariably crystal clear. Later on Einstein expressed annoyance at the way Podolsky had written the paper, because he felt that the simple point of the argument was “buried in erudition.”

Einstein and his coauthors began by introducing their notion of the completeness of a physical theory. To be complete, they declared, a theory must somehow represent every element of the physical reality it supposedly deals with, and it must do so in a way that treats these elements as existing independently of the experimenter. To test the completeness of a theory (such as quantum mechanics), one has to know what the elements of physical reality are that it is supposed to describe. It might be very hard to come up with a complete list of such elements. But EPR declared that there was one method that would be sufficient to identify an element of reality (even if it might not give the whole list): if it is possible to predict the value of a physical quantity with certainty and without having disturbed, influenced, or disrupted the system in any way, then there must be an element of physical reality corresponding to that quantity. If a theory could not predict the value of that quantity with certainty, it would therefore not be complete in this sense. EPR’s object was to prove that quantum mechanics fails to predict the value of a quantity that they intended to show was predictable on other reasonable grounds.

Figure 6.3: The EPR Apparatus. Particles A and B are outside each other’s light cones and therefore cannot influence each other—or can they? If Alice measures the momentum of A, she knows the momentum of B. If Alice measures the position of A, she knows the position of B. Does this mean that B has definite position and momentum before Alice makes any measurements? Illustration by Kevin deLaplante.
The basic structure of the apparatus in the EPR thought experiment (and many variants of this basic design have been described in the literature since 1935) begins with a source of two or more entangled elementary particles. These particles interact dynamically with each other. There are several ways in which this could happen. They could, for example, have decayed from other particles or simply collided with each other. This dynamic interaction entangles their wave functions. The particles are then allowed to fly off in opposite directions to a considerable distance, where they interact with detectors that measure some of their physical properties. EPR asked us to consider an entangled wave packet for two particles that is prepared in such a way that both the total momentum and the difference in position of the two particles are conserved. This is a tricky point that is often glossed over in nontechnical explanations of the EPR experiment. We know that the position and momentum for each individual particle fails to commute and therefore obeys an uncertainty relation. However, for the type of entangled wave packet they described, the total momentum of the system commutes with the difference in position; this means that both of these quantities have definite values throughout the experiment and in quantum mechanical terms therefore can be said to have simultaneous reality. The argument cannot go through without this fact in hand. Now, we let the two particles fly off to a great distance from each other. Let the detectors be staffed by the ubiquitous quantum mechanical experimenters Bob and Alice. (We will meet them again.) To simplify matters we shall assume that Bob, Alice, and the particle source are all at rest with respect to each other. The left particle enters Bob’s laboratory at precisely 12:00 noon. There is no way that he could measure both its position and its momentum at 90The Quantum Revolution precisely the same time, for position and momentum do not commute, but he is free to choose one or the other. If he chooses to measure the position of his particle at precisely 12:00 noon then he automatically knows the position of the other particle at that time, because the difference in those positions remains constant. On the other hand, suppose Bob decides instead to measure the momentum of his particle at precisely 12:00 noon. Then he would automatically know the momentum of the other particle at 12:00 noon because the total momentum of the two particles is known. According to quantum mechanics, no one can find the position and momentum of the distant particle at 12:00 noon by a single measurement. But it has just been shown that at 12:00 noon Bob could have inferred either the position or momentum of the distant particle at that time. Here’s the twist: EPR took it as obvious that because the particles are quite distant from each other, nothing done to one at precisely 12:00 noon could influence the other at exactly that time—because otherwise a causal influence would have had to travel from one to the other at infinite speed. Therefore, neither of the measurements that Bob could have carried out can influence the real physical condition of Alice’s particle at 12:00 noon. Therefore, EPR concluded, Alice’s particle must (at 12:00 noon) possess exact values of both position and momentum, even though the Heisenberg Uncertainty Relation says that it does not. 
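One step in the argument deserves to be made explicit: the claim that the position difference and the total momentum can both be sharply defined at once. In operator language (the hats and subscripts here are ours, not the book’s), this is a one-line calculation, because operators belonging to different particles always commute:

```latex
% The position difference and total momentum of the EPR pair commute:
\begin{equation*}
\bigl[\hat{x}_A-\hat{x}_B,\ \hat{p}_A+\hat{p}_B\bigr]
 =[\hat{x}_A,\hat{p}_A]-[\hat{x}_B,\hat{p}_B]
 =i\hbar-i\hbar=0 .
\end{equation*}
```

Since the commutator vanishes, no uncertainty relation prevents the entangled wave packet from having a definite position difference and a definite total momentum throughout the experiment, which is the formal fact that the EPR argument leans on.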
Quantum mechanics therefore does not give a complete picture of all the elements of physical reality belonging to Alice’s particle. EPR in effect posed a dilemma (although they did not put it in exactly these words): either quantum mechanics is incomplete (in the sense that it cannot tell us about certain elements of physical reality) or else it is nonlocally causal, or both. (A nonlocally causal system is simply one that permits action at a distance.) Since nonlocal causality was, in Einstein’s view, an absurdity, quantum mechanics must be incomplete. To put it another way, Einstein took the apparent nonlocality of quantum mechanics to be a symptom of its incompleteness.

The paper’s publication caused a furor in Copenhagen, and Bohr labored mightily to produce a response. A few months later he published a long article with the same title as the EPR paper. It is written in Bohr’s usual obscure style, and experts disagree about what he was actually trying to say. Bohr was in full agreement with Einstein on one point: there is no way that what is done to one particle has a direct, instantaneous influence on the other. That would be truly absurd, they thought. Instead, Bohr seemed to be saying that EPR made the mistake of supposing that complementary measurement procedures (measurements of position and momentum) could be taken to apply to the same reality—whereas in fact the notion of an independent reality has no experimental meaning, and therefore no meaning at all. Bohr could hardly disagree that entanglement violates classical expectations of separability. However, it is mistaken to try to seek a deeper “explanation” of entanglement, because that would involve trying to impose classical concepts on the quantum level of the world. Quantum mechanics is in essence just a set of recipes for calculating predictions about phenomena that can only be observed with instruments (such as Stern-Gerlach devices, mirrors, photographic plates, etc.) that can be manipulated by humans at the classical (macroscopic) scale. The very concept of an explanation of quantum phenomena just does not make sense, because anything that could count as an explanation would have to be in classical terms. Classical stories about quantum objects always come in complementary pairs (such as the wave and particle pictures); this means that there is no consistent picture of a particle as either just a wave or just a particle, even though there is a consistent recipe that tells us when to use either wave or particle concepts in making verifiable physical predictions. To put it in simpler terms: Bohr’s reply to EPR is that they had an unreasonably stringent requirement for completeness; in fact, quantum mechanics is as complete as it can be. There is no more information to be had than quantum mechanics can give us.

For a time, Bohr convinced most physicists that they could ignore Einstein’s worries about nonseparability, but the questions raised by EPR had not truly been resolved, and a minority of especially thoughtful physicists continued to think about them. With the perspective of 70 years of hindsight, it can be seen that there are no technical errors in EPR’s calculations, but their assumption that Bob’s measurements could not change the state of Alice’s particle is now known to be incorrect.
Whether there is some mysterious faster-than-light influence (as in various causal interpretations of quantum mechanics) or whether it “just happens,” if a position measurement is made on Bob’s particle then the momentum measurement on Alice’s particle will probably come out differently than it would have had Bob not made his measurement. It would be nearly 30 years before this was demonstrated by J. S. Bell.

Schrödinger (in 1935) also published a detailed study of entanglement in response to the EPR paper. Although his analysis was insightful, he made a mistake that EPR did not make: he predicted that entanglement would diminish as the correlated particles moved away from each other, just as ordinary interactions (such as gravitation or electromagnetic interactions) diminish with distance. Again, it would be 30 years or more before experiment would prove this wrong.

Eventually the EPR paper forced scientists to take seriously the fact that there is still much to be learned about entanglement; in particular, it drew attention to the nonseparability of entangled states. The EPR paper was the stimulus for work by David Bohm and J. S. Bell (to be described later) that led to the direct experimental confirmation of quantum nonseparability. Bohr was right that Einstein had failed to prove the incompleteness of quantum mechanics, because Einstein’s high standard of completeness was not something that could reasonably be expected of any theory of quantum phenomena. Quantum mechanics, in a precise technical sense, is as complete as it can be, although this is another key point that would not be demonstrated formally for another 30 years. But while Bohr thus scored some points, round three of the Bohr-Einstein debate must in the end go to Einstein, because he showed that the puzzle of entanglement cannot be made to go away by soothing words. In many respects what EPR said was mistaken, although subtly and instructively so. However, as we enter the twenty-first century, there are few facts about physics more interesting, challenging, and potentially useful than quantum entanglement. Once again, Einstein was right—in the long run.

Creation and Annihilation

Particle Physics before World War II

By 1930 the existence of only three presumably “elementary” particles had been confirmed (if by an elementary particle is meant something out of which more complex forms of matter, atoms and molecules and radiation fields, can be built): the photon, the electron, and the proton. The existence of a neutral particle in the nucleus that would be roughly the mass of the proton had been hypothesized by Rutherford around 1920, because there had to be some way to account for the extra mass of all nuclei beyond hydrogen. It was known that electrons could be emitted from radioactive nuclei in beta decay, and so it was natural to assume that enough electrons to produce the correct nuclear charge were squeezed into the nucleus. However, Pauli and Heisenberg showed that this would not work. (The electron’s de Broglie radius is too large for it to be squeezed into the nucleus; the electron can visit the nucleus, but it cannot live there.) Rutherford’s suggestion of a new neutral particle remained the best contender, but it had not yet been proven. Otherwise, physicists had little inkling of the complexity that was about to burst upon them.

The Neutron

The prediction by Dirac and discovery by Anderson of the first antiparticle, the positron, has already been described.
In 1932 the British physicist James Chadwick (1891–1974) confirmed the existence of a neutral particle, slightly heavier than the proton, in emissions from beryllium that had been bombarded by alpha particles from radioactive polonium, and he named it the neutron. Chadwick’s discovery of the neutron was the key that unlocked the door to modern nuclear physics, with all its enormous potential for both good and harm. Only a few months later, in 1932, Heisenberg used Chadwick’s neutron to construct the first quantum mechanical nuclear model. The main mechanism 94The Quantum Revolution he proposed was an exchange force produced by protons and neutrons passing electrons around like basketball players tossing a ball. And in September, 1933, a Hungarian-born émigré named Leo Szilard (1898–1964) was sitting in a London hotel lobby, reading a newspaper report of Lord Rutherford’s recent pronouncement that any thought of releasing useable amounts of power by nuclear transformations was “moonshine” (Rhodes 1988, Ch. 1). Szilard had already made a reputation for himself with numerous inventions, including collaboration with Einstein on the invention of a new type of refrigerator, and fundamental theoretical contributions to thermodynamics. He had been an instructor in physics at the University of Berlin. However, when Hitler seized emergency powers in 1933 Szilard fled Europe, as thousands of his fellow Jewish scholars and scientists were dismissed from their posts under the Nazis’ new racial laws. Szilard thought that Rutherford had to be wrong, and it struck him immediately that because the neutron was electrically neutral it could penetrate the nucleus of the atom. If it could then cause the breakdown of the nucleus into smaller fragments including more neutrons, a chain reaction could be triggered in which the rate of reactions would grow exponentially, releasing huge quantities of energy. At this time no one had yet demonstrated that a neutron could split the atom as Szilard had imagined, but he knew that it was well within the realm of possibility. Szilard patented his concept for the nuclear chain reaction, and eventually turned it over to the British Admiralty. With his characteristic foresight he realized, years ahead of almost everyone else, that the first applications of nuclear power would be military. Beta Decay and the Neutrino Another important property of the neutron is that it when it moves freely outside the nucleus it is unstable, decaying with a half-life of around 15 minutes. Its decay products were apparently a proton and an electron. (Later it would be found that the neutron can also decay into an antiproton and a positron.) This explained the energetic electrons, or beta rays, that were produced in certain kinds of nuclear reactions, and so this process was called beta decay. But there was a puzzle, which was noted even before Chadwick’s discovery of the neutron: the energy of beta particles follows a continuous spectrum (all values allowed over a certain range), which implied that some energy and momentum was going missing in the reaction. Bohr (as with the old BKS theory) was willing to consider that energy conservation might be violated in beta decay. However, Pauli thought that a less radical explanation was required, and in 1930 he proposed that a very light, neutrally charged particle of spin-1/2 was also emitted during beta decay, with just the right amount of energy to balance the books. 
Fermi in 1931 jocularly dubbed the new particle the neutrino (“little neutron”), and in 1934 he produced an elegant quantum mechanical description of beta decay. Part of Fermi’s theory stated that the electron and the neutrino produced by beta decay shared the decay energy randomly, which accounted nicely for the continuous spectrum of the beta particle. Physicists Creation and Annihilation found it hard to doubt the neutrino’s existence, because they did not want to give up energy conservation. However, the neutrino is very hard to detect, because its probability of interaction with ordinary matter is so low. It was finally detected in 1956 in delicate experiments performed by Frederick Reines (1918–1998), Clyde Cowan (1919–1974), and others. Yukawa and the Strong Force Before the neutron was discovered, physicists supposed that protons in the nucleus were bound together electrostatically by electrons. Once it was realized that the nucleus was a combination of positively charged and neutral objects, there was no choice but to assume the existence of a powerful nuclear force that bound protons and neutrons together despite the electrostatic repulsion of the protons. This force would be almost like a glue that bonds very strongly when neutrons and protons are nearly touching, but that falls off rapidly (indeed, exponentially) with distance. The hardworking Japanese physicist Hideki Yukawa (1907–1981) made the next step. He realized that the attempts to model the nuclear interaction on beta decay had failed, and decided that he should consider the possibility that a new kind of field, not previously studied, is responsible for the nuclear force. Just as electromagnetic forces depend on the interchange of photons, there had to be a particle that was tossed around like a ball by protons and neutrons and would account for their strong short-range interaction. In conversation, the senior Japanese physicist Yoshio Nishina (1890–1951) made the suggestion that such a particle might obey Bose-Einstein statistics. At first Yukawa thought that this was too radical, but eventually he realized that he had to let the necessary characteristics of the nuclear force field tell him what sort of particle it used. The key was that the range of a force should be inversely proportional to the mass of the particle that carries the force. Physicists were learning that the vacuum is like a bank from which the currency of energy can be borrowed temporarily. Enough energy to create the particle can be borrowed from the vacuum so long as it is paid back in a time interval allowed by the uncertainty principle for time and energy; the known range of the force thus determines the time the particle can exist, and thus its energy. Yukawa estimated the mass of the new boson to be about 200 times the mass of the electron, or around 100 to 150 MeV. (Particle mass-energies and the energies of nuclear and atomic processes are commonly measured in electron volts, the energy acquired by an electron when it has accelerated through a potential difference of one volt; an MeV—usually pronounced “em-ee-vee”—is a million electron volts.) Yukawa thus predicted the existence of a particle that by the late 1930s was being called the mesotron or meson, the “intermediate particle,” since it was intermediate in mass between the electron and the proton. 
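The arithmetic behind Yukawa’s estimate is short enough to show. A minimal sketch in Python; the assumed range of about 2 femtometers is an illustrative stand-in, not a figure quoted in the text:

```python
# Yukawa's logic: a force of range r is carried by a particle whose rest energy
# is roughly (hbar * c) / r, because the energy "borrowed" to create it must be
# repaid within the time allowed by the uncertainty principle.
HBAR_C_MEV_FM = 197.3       # hbar * c in MeV * femtometers
ELECTRON_MASS_MEV = 0.511   # electron rest energy in MeV

range_fm = 2.0              # assumed range of the nuclear force (illustrative)
carrier_rest_energy = HBAR_C_MEV_FM / range_fm

print(f"carrier rest energy: about {carrier_rest_energy:.0f} MeV")
print(f"in electron masses:  about {carrier_rest_energy / ELECTRON_MASS_MEV:.0f}")
# Roughly 100 MeV, i.e. a couple of hundred electron masses --
# squarely between the electron and the proton, as Yukawa predicted.
```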
“Who Ordered That?”

In 1937 Carl Anderson and Seth Neddermeyer (1907–1988) (who would play a key role in the design of the atomic bomb at Los Alamos) discovered in cosmic ray showers a particle that for a while was suspected to be the carrier of the nuclear force that Yukawa had predicted. Some background on cosmic rays will be helpful: Primary cosmic rays are extremely energetic particles (probably protons) of unclear source, entering the Earth’s atmosphere from outer space. Fortunately, primary cosmic rays never reach the ground since they collide with the nuclei of oxygen or nitrogen atoms in the atmosphere. (Astronauts in orbit sometimes see bright flashes caused by primary cosmic rays blowing up nuclei inside their eyeballs.) Primary ray collisions produce showers of secondary cosmic rays, and thousands of these elusive particles sleet through our bodies every second. Anderson spotted his new particle when he and Neddermeyer were studying secondary cosmic rays. It had roughly the right mass to be Yukawa’s particle. The new particle came in a negative and positive version, presumably antiparticles of each other, and it was unstable, with a half-life of about two microseconds.

By about 1946 it was realized that Anderson’s “meson” was a very poor candidate as carrier of the strong force, because its negative version would pass through a nucleus with hardly any probability of interacting at all. It is also a spin-1/2 fermion, which (as Nishina had suspected) should rule it out from the beginning as a possible carrier of a force field, and its lifetime was about 100 times too long to be Yukawa’s particle. To make a very complicated story short, by about 1950 further studies had shown that there are two kinds of “meson” in cosmic ray showers. In addition to Anderson’s particle of 1937, another meson (this time the quotes can be removed) was discovered by Cecil Powell (1903–1969) and collaborators by exposing photographic emulsions to cosmic rays at high altitude. Powell’s meson, which became known as the pi-meson or pion, is in fact the nuclear force carrier that Yukawa had predicted, and Yukawa was duly awarded the Nobel Prize in 1949. The pion is a spin-0 boson that comes in three versions, with positive, negative, or zero electric charge. Anderson’s particle of 1937 became known as the mu-meson or muon. The muon behaves exactly like an electron, only it is heavier and unstable. (The term “meson” is now reserved strictly for bosons that carry force fields.) Primary cosmic rays collide with the nuclei of atoms high in the atmosphere and release showers of pions, which in turn quickly decay into muons, and it is largely muons that are detected at ground level. The discovery of the muon was completely unexpected, and it prompted an incredulous quip by American physicist Isidor I. Rabi (1898–1988): “Who ordered that?” (Kragh 1999, p. 204). Although there is now a place for the muon in the modern scheme of particles, there still is no satisfactory answer to Rabi’s question.

Four Forces

By the late 1930s it was accepted that there are four forces in nature: gravitation (described by Einstein’s general theory of relativity), electromagnetism, Yukawa’s strong nuclear force, and the mysterious weak force or weak interaction that is responsible for beta decay. They vary greatly in strength: the strong force is 10³⁸ times as strong as gravitation (the weakest force) but it falls off exponentially and is only important at distances of 10⁻¹³ cm
or less. Gravitation becomes the dominant force for very large masses and over large distances, because all of the mass and energy in the universe partakes in the gravitational interaction. Recent explorations in quantum gravity suggest that gravitation may again become dominant at very small scales, much smaller than any scale we can presently probe experimentally. It would become the dream of physicists (still not fully realized) to unify all these forces into one under the guidance of quantum mechanics. Before World War II there were provisional but still useful and illuminating theories of the weak force (due to Fermi) and the strong force (due to Yukawa). However, while efforts continued to find the right equations for meson fields, most quantum physicists concentrated their efforts on understanding the electromagnetic field—a task that got off to a quick start, and then ran into some surprisingly difficult challenges. Quantum Fields First Attempts to Quantize the Electromagnetic Field So much had been accomplished in the few years between Heisenberg’s vacation on Heligoland and the publication of the Dirac Equation that the quantum physicists (most of whom were very young) were brimming with Hochmut (pride) by the late 1920s. In a perhaps forgivable moment of overconfidence, even the judicious Max Born said, “Physics as we know it will be over in six months.” As in the 1890s, there were just a few more loose ends to tidy up—such as the quantum mechanics of the electromagnetic field. The first step toward quantum field theory was relativistic quantum mechanics, which began in the late 1920s almost as soon as Schrödinger’s Equation appeared. The Dirac Equation has already been mentioned; it is a relativistic wave equation for the electron and other spin-1/2 particles. The Klein-Gordon Equation was also arrived at by several authors from the mid-1920s onward. (Richard Feynman was to derive it as a teenager, just for fun.) It is the wave equation for spin-0 bosons. Dirac published the first paper on what he called quantum electrodynamics in 1928, and he was soon joined in developing the new theory by Heisenberg, Pauli, Jordan, and others. Quantum electrodynamics, or QED as it is often now called, is the quantum theory of the electromagnetic field, written in a way that respects the constraints of Einstein’s special theory of relativity. A distinguishing feature of quantum field theory is the nonconservation of particle number. Particles are created and destroyed—or, more precisely, particles can transform into other particles, and these transformations can involve splitting and recombination in such a way that the total number of particles is a variable quantity. This fact emerged from Dirac’s Equation, which showed that positrons can be bumped out of their negative energy states in the Dirac Sea by a passing gamma ray, a process that can also be described as the splitting of a gamma photon into a positron and electron. Quantum field theory describes an unending dance of the creation of pairs of matter particles 98The Quantum Revolution (fermions) and their antiparticles, and then the annihilation of the pair with the release of field quanta (always bosons). Mathematically this process is represented by creation and annihilation (or destruction) operators, which, like all quantum operators, are defined by commutation relations. A creation operator increases the number of particles by 1, and an annihilation operator decreases the number of particles by 1. 
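The bookkeeping rule just described can be made concrete with a small matrix model. The sketch below is a toy, with an arbitrary cutoff of eight number states; it builds creation and annihilation operators for a single mode and checks their defining commutation relation:

```python
import numpy as np

N = 8  # number of particle-number states kept in the toy model (an arbitrary cutoff)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator: a|n> = sqrt(n) |n-1>
a_dag = a.conj().T                          # creation operator:    a†|n> = sqrt(n+1) |n+1>

vacuum = np.zeros(N)
vacuum[0] = 1.0
one_particle = a_dag @ vacuum               # acting on the vacuum creates one quantum, |1>

commutator = a @ a_dag - a_dag @ a          # the defining relation [a, a†] = 1
print(np.allclose(commutator[:-1, :-1], np.eye(N - 1)))  # True (only the cutoff edge misbehaves)
```

Away from the artificial cutoff the commutator is the identity matrix, which is all that “increase the particle number by 1” and “decrease it by 1” amount to in matrix form.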
It is often useful to think of a quantum system as a collection of harmonic oscillators (an idea that goes back to Planck); in this case, creation and annihilation operators are called raising and lowering operators, because the creation of a particle is equivalent to raising or lowering the energy state of a virtual oscillator. In simple terms, quantum mechanics becomes quantum field theory when the creation and annihilation of particles is taken into account. In the early years of quantum field theory, the process of turning field variables into creation and annihilation operators was called second quantization. The move from ordinary quantization to second quantization is a move from quantum mechanics of a single particle to that of many particles. The term “second quantization” is now largely a historical curiosity, but the procedure is not. Physically, the creation and annihilation of particles is a consequence of Einstein’s equivalence of mass and energy in combination with the laws of quantum mechanics. If a particle possesses a certain amount of energy in addition to its rest mass, it can emit a particle or particles with that amount of energy, so long as all conservation laws are respected. For instance, the electrical charges and spins have to add up. In quantum mechanics, if something can happen (according to conservation laws) then there is an amplitude for it to happen, and therefore a probability (sometimes vanishingly small, sometimes not) that it will happen. Hence, so long as there is enough energy in a system to allow for the creation of particles or their annihilation and transformation into other particles, then it will sooner or later occur. One of the first successes of early QED in the hands of Dirac and others was that it gave methods for calculating the amplitudes for spontaneous and induced emission that Einstein had identified in 1917, and thus allowed for the most general derivation of Planck’s radiation law yet found. Early QED was most successful in dealing with free fields, fields that are not in the presence of matter; describing how the electromagnetic field interacts with matter proved to be a much harder task. Early in the 1930s Dirac predicted another bizarre quantum phenomenon, vacuum polarization. It is a consequence of pair creation. Consider an electron—a bare electron—sitting in the vacuum. Its charge will induce the creation of virtual pairs of electrons and positrons in the vacuum around it, for the vacuum itself is a polarizable medium. Virtual particles are those that exist for such a short period of time that the time-energy uncertainty principle forbids them to be observed; they would later be called vacuum fluctuations. The charge induced out of the vacuum will partially shield the electrical charge of the electron; this means that the charge that we actually observe is not the bare charge of the electron at all, but a net or effective charge. Creation and Annihilation Another early implication of quantum electrodynamics is that the energy hidden in the vacuum may be vast, many powers of 10 greater than the energy density even of nuclear matter. This has prompted unsettling speculations that the whole physical world we know is merely a higher-order perturbation, like foam on the surface of the sea, on a complex vacuum the structure of which we glimpse only dimly. Infinitely Wrong By the early 1930s QED ran headlong into the brick wall of the infinities. 
Calculations that looked perfectly valid seemed to predict that a host of electrodynamic quantities that have definite, measurable values are divergent, which means that they blow up to infinity. An important case is the self-energy of the electron, the energy that it possesses as a result of its electromagnetic interactions with its own field. A simple example of a divergent function is the classical formula for the electrical potential of a point charge, V(r) = 1/r, which becomes infinite when r = 0. This problem had never been entirely resolved within classical electrodynamics, but it was assumed that the electron had a finite radius and so no one worried about it very much. In quantum electrodynamics there was no getting away from the infinities, however, despite valiant and ingenious efforts by virtually everyone who worked on quantum mechanics at the time. In simple terms the problem is that there are an infinite number of ways that an electron can interact with itself. Similar infinites also appeared in the early field theories of mesons that were being written by Yukawa and several others. Faced with the divergences, many of the more senior quantum theorists (such as Bohr, Dirac, and Heisenberg) were by the late 1930s (with World War II looming) becoming convinced that another large conceptual revolution was inevitable, one so radical that it would require the abandonment of space­ time itself as a fundamental concept. Heisenberg in 1938 suggested that it would be necessary to replace the spacetime continuum with a discrete structure based on a fundamental quantum of length, just as Planck had based quantum mechanics on a fundamental, indivisible quantum of action. But the younger generation of quantum physicists in the late 1930s—notably Hans Bethe (1906–2005), Victor Weisskopf (1908–2002), Yukawa, and Yukawa’s good friend Sinichiro Tomonaga (1906–1979)—simply kept calculating, and eventually a surprisingly effective conservative solution began to emerge. Renormalization: A Brilliant Stop-gap? Progress in fundamental physics slowed during World War II, but immediately after the war physicists enjoyed unprecedented prestige (and funding) because of the success of the atomic bomb. The first item of unfinished business was the puzzle of the infinities of quantum electrodynamics. In quantum field theory quantities have to be calculated using a perturbation series, because it is impossible to solve the equations exactly. Anyone who has 100The Quantum Revolution taken first-year calculus is familiar with the idea of a Taylor series, in which a function is approximated by finding its value at a point and then writing a certain kind of infinite series. A perturbation series in quantum electrodynamics starts with a zeroeth term, which will be in terms of known quantities, and then expands around this term in a power series, usually in powers of Sommerfeld’s mysterious fine structure constant (about 1/137). The first term, the free field solution, is exact but not all that interesting since it does not describe any particle interactions. The second term, although not quite exact, corresponds fairly well to a lot of observable physics. The third and higher terms diverge. For example, calculations of the self-energy of the electron come out infinite, even though it has a perfectly definite mass. 
Mathematically the deep reason for the infinities is that space and time are assumed in quantum field theory to be continuous, so that at finer and finer scales there are more and more ways for the system to behave, all of which have to be taken account of. It is reminiscent of the ultraviolet catastrophe that plagued blackbody theory before Planck. This is why the notion of making space and time discrete occurred naturally to Heisenberg. But the theorists in the 1940s found that there is a less radical approach: there is a systematic way of replacing infinite quantities whenever they show up with their finite, observed values. This is called renormalization: erase an infinity whenever it occurs and replace it with a finite value.

Two brilliant young American physicists emerged as leaders in the battle to tame the infinities: Julian Schwinger (1918–1994) and Richard P. Feynman (1918–1988). The Japanese physicist Sinichiro Tomonaga had also developed a renormalized version of quantum electrodynamics in 1943, but because of the war this did not become known outside Japan until much later. Thus Feynman and Schwinger were repeating Tomonaga’s work, although from different mathematical perspectives, and all three shared the Nobel Prize in 1965. With renormalization, QED emerged as a tool that could predict virtually all measurable electrodynamic quantities to precisions of 10 decimal places or better. An important example was the slight shift in the spectral lines of hydrogen discovered by the American Willis Lamb (1913–). Electromagnetic attraction and repulsion is explained in terms of the exchange of photons between charged particles. QED is sometimes said to be the most accurate physical theory ever written, and it became the model for quantum field theories that would be developed in the future. It was, however, not what anyone had expected in the mid-1930s, but a compromise between quantum and classical methods that has been found to work surprisingly well. It is a compromise because quantum fields act out their probabilistic antics against a classical Minkowski space background. If Heisenberg’s advice to quantize spacetime had been adopted, then it would have been necessary to reconstruct relativity theory. No one was ready to do that in the 1940s, and few are ready to do so even now.

Local Quantum Field Theory

The field theory that had emerged through the work of many physicists from the late 1920s to about 1950 is often called local quantum field theory, because it contains special “patches” to ensure that it is consistent with relativity. The most important of these patches is a rule called “microcausality” or “local commutativity.” This was first introduced by Bohr and Leon Rosenfeld (1904–1974) in the early 1930s and used by Pauli in his axiomatic construction of quantum field theory around 1940 when he proved the spin-statistics theorem. Microcausality states that all quantum observables acting at a spacelike separation must commute even if they are observables such as position and momentum that would normally not commute if applied to the same particle. (To say that two observations are at a spacelike separation is to say that they are outside each other’s light cones, so that they are acting at points that could only be directly connected by a signal moving faster than light. See the Minkowski diagram in Chapter 2, Figure 2.2.) Microcausality was introduced into quantum field theory in order to ensure that its predictions would not conflict with relativity.
Otherwise, or so it seemed to physicists in the 1930s, if commutativity fails at spacelike separations then it would be possible to do a series of measurements on distant but entangled particles in such a way that faster-than-light signaling would be possible. The logic of microcausality is subtle, however. Given the pervasive nature of nonlocality in quantum physics, is it really safe to assume that it must never conflict with relativity? This question remains open.

Feynman’s Diagrams and the Path-Integral Formulation of Quantum Mechanics

Richard Feynman had unusual mathematical skill, but his greatest virtues as a physicist were his physical intuition and his delight in finding simple and elegant approaches to problems that had baffled everyone else. While most physicists were dazzled by Schwinger’s mathematical virtuosity, Feynman found another way of thinking about QED that is probably as visualizable as any theory of quantum fields is ever going to be, and relatively easy to calculate with. The key idea was to represent every possible particle interaction with a spacetime diagram. Photon worldlines (their trajectories through spacetime) are represented by wavy lines, while particles such as electrons and positrons are represented by straight lines. Each diagram represents a possible way that the system can evolve and thus represents a term in the perturbation series for calculating the transition amplitudes for various processes. Though some ways turn out to be much more probable than others, they all have to be added in to get the right transition amplitudes.

Figure 7.1: Feynman Diagrams. The probability amplitude for the electron to get from A to B has a contribution from every possible Feynman diagram for that process. Illustration by Kevin deLaplante.

Feynman’s methods are relatively easy to use compared to those of Schwinger; in fact, there was at first a suspicion that they were too simple to be correct. However, the British-born physicist Freeman Dyson (1923–) showed the mathematical equivalence of Feynman’s diagrammatic approach with the operator approach of Tomonaga and Schwinger, and they were soon adopted by physicists and applied widely in particle theory. Feynman diagrams are an application of an elegant rewriting of quantum theory by Feynman, who claimed that he could not understand standard quantum theory and had to recreate it on his own. Schrödinger’s Equation plays a secondary role in Feynman’s version of quantum mechanics; the principal actor is the probability amplitude. Problems are solved by adding up the amplitudes for all possible routes that a system can take. This results in a path integral, which gives the amplitude or propagator for a process (such as a particle decay or interaction) to occur. Feynman’s path integral interpretation was a natural development of work he did in the 1940s in collaboration with John A. Wheeler, his thesis advisor. In 1941 Wheeler arrived at an apparently crazy idea: there is only one electron in the universe! This is natural when thinking about what Dirac’s pair creation looks like in spacetime.

Figure 7.2: There Is Only One Electron in the Universe! A positron going forward in time is just an electron going back in time. Wheeler suggested that all electrons and positrons are the same particle, pinballing off photons backwards and forwards through time. Illustration by Kevin deLaplante.
Picture an electron going forward in time: it scatters off a photon and goes back in time, where to ordinary human observers it looks like a positron going forward in time. Then it scatters off another photon, goes forward again for a while, then pinballs off another photon, and so forth, until its worldline has threaded itself through all of spacetime. This perhaps fanciful vision led Wheeler and Feynman to the idea of taking seriously the apparently far-fetched notion of influences from the future. They revived an old notion due to the great German mathematician Karl Friedrich Gauss (1777–1855), who unsuccessfully tried to construct a theory of electromagnetism based on action at a distance, unmediated by any field—the very thing that Newton and Einstein had considered to be absurd. Wheeler and Feynman showed that it is mathematically possible to construct a version of electrodynamics that is entirely equivalent to ordinary classical electrodynamics, but in which the net force on each charge is a combination of retarded and advanced effects. Retarded forces are ordinary forces that propagate forward in time; advanced forces are direct influences of the future upon the past. The Feynman-Wheeler action at a distance theory was classical, but it influenced Feynman’s spacetime picture of quantum mechanics, in which the amplitude for any process must include contributions from all possible routes through spacetime including the far future. The fact that in our ordinary experience influences from the past seem to dominate is purely a statistical matter.

Figure 7.3: Richard P. Feynman. AIP Emilio Segre Visual Archives, Physics Today Collection.

Summing Up Quantum Field Theory

The complex history of quantum field theory can be summarized as follows: From about 1928 to shortly after World War II, quantum mechanics grew into quantum electrodynamics (the relativistic quantum theory of the electromagnetic field). This in turn was generalized into local quantum field theory, a very powerful and effective approach to the physics of particles and fields, which began to be applied (with varying degrees of success) to the other three forces in nature. As the predictive power of the theory increased, the process of abstraction that had begun in 1913 with Bohr’s unobservable stationary states reached new heights. Even Feynman’s version of quantum electrodynamics is of far greater mathematical complexity and abstraction than the wave mechanics of Schrödinger. To the extent that quantum field and particle processes can be visualized at all, it is only by means of analogies that can never be taken too seriously. This is perhaps another reason why Einstein and several of the other aging founders of quantum physics would have little to do with the new quantum field theories. Probably most particle and quantum theorists now would consider accurate visualizability (and possibly even mathematical consistency) to be a luxury or even a forlorn hope; they are happy if their theories can yield calculable and testable predictions, and even that is no longer a given in modern particle theory. Some theorists today believe that any quantum theories of the future will be modeled on the highly successful local quantum field theories that emerged in the 1940s and 1950s. Others believe that local quantum field theory is only provisional, like the Old Quantum Theory.
The problem is not only that there are jobs that today’s quantum field theories cannot yet do, such as predict particle masses. There are at least two deeper theoretical worries that some authors believe cannot be swept under the carpet.

First, there is the need for renormalization. Some physicists believe that so long as renormalization can be done in a consistent way, there is nothing wrong with it at all. It has been possible to have much greater confidence in the mathematical soundness of renormalization after its mathematical basis was clarified by Leo P. Kadanoff (1937–) and Kenneth Wilson (1936–). Others believe that renormalization is merely an ingenious stopgap, and that there should eventually be a way of calculating a quantity such as the self-energy of the electron without having to, in effect, rub out the wrong answer like a schoolboy and write in the correct answer by hand. Feynman himself described renormalization as a “shell game” that he suspected is probably “not mathematically legitimate” (1985, p. 128). Should this shell game be okay just because (for certain fields) it is possible to always get away with it? Also, the fact that the gravitational field has so far proven to be nonrenormalizable (which will be discussed in more detail in Chapter 12) points to the need, say some critics, for not being satisfied with renormalization as the best solution to the infinities in any sort of field theory. Of course, another possibility, although it is crazy, is that the calculations are correct and the self-energy of the electron really is infinite because of its potential interactions with all other particles in the universe, except that we can only detect a finite part of its energy experimentally. If anything like this is true then there would never be a hope of writing a truly exact theory of any quantum field, although there would be many ways of finding useful approximations.

Second, the fact that quantum field theory is written on a Minkowski background worries many physicists. In his very first paper on quantum theory in 1905, Einstein warned that Maxwell’s Equations might be merely an approximation that is valid only when the quantum graininess of the electromagnetic field can be ignored. But Einstein then set out to construct his theory of special relativity based on the assumption that Lorentz invariance is exact. As far back as the 1930s Schrödinger speculated that it may be necessary to quantize the Lorentz transformations (though he did not say how this could be done). The field theorists of the 1940s and 1950s displayed great technical brilliance in showing that it is possible to write a powerful and accurate quantum field theory that is Lorentz-invariant and that respects the causal structure of the Minkowski background spacetime. It is ironic that the younger physicists in the late 1930s and 1940s were mostly the ones to perfect a highly technical conservative solution to the problem of the infinities—a solution that was conservative in that it respected Einstein’s laws of relativistic spacetime. It was the older founding generation of quantum physicists who thought that far more radical approaches were needed to cope with the infinities of quantum field theory, such as quantizing space and time, or doing away with space and time altogether.

Many physicists today would prefer not to question the framework of local quantum field theory.
Others suspect that in the long run Heisenberg will again turn out to have been right, and local quantum field theory itself will be revealed as only an approximation (an example of what is called an effective field theory), which breaks down at some high energy level where the metric of spacetime can no longer be treated as a classical, continuous structure. Whether local quantum field theory can continue to serve as the basic framework for quantum mechanics, or whether it has to be replaced by something else, is one of the most pressing theoretical questions that physics faces as it moves into the twenty-first century.

Quantum Mechanics Goes to Work

This chapter will cover several of the important applications of quantum mechanics that grew out of the theoretical breakthroughs of the 1920s. Some of these applications were of great scientific interest in their own right, and some changed the very world we live in.

Nuclear Physics

If the 1920s were the years in which quantum theory leaped ahead, the 1930s were the great years of nuclear physics, when, sparked by the discovery of the neutron, and using the tools of quantum theory, it went from Rutherfordian table-top experiments to the discovery of nuclear fission.

Gamow Tunnels Out of the Nucleus

The flamboyant Russian physicist George Gamow (1904–1968) engineered one of the first great successes of the new quantum mechanics when he provided a partial explanation for nuclear decay. (Gamow would later make important contributions to cosmology as well, when he was one of the first to predict, in 1948, the existence of the cosmic microwave background radiation—the faded “flash” of the Big Bang.) Gamow visited Copenhagen in 1928, and Bohr, with characteristic generosity, made it possible for Gamow to stay in Copenhagen for a year. In 1928 Gamow devised a very simple but surprisingly accurate model that described alpha decay using the new tools of wave mechanics. Gamow knew that the alpha particle was somehow trapped within the nucleus by a very short-range force that overcomes the strong electrostatic repulsion of the protons, because otherwise the electrical repulsion of the positively charged protons would blow the nucleus apart instantly. This was six or seven years before the work of Yukawa, but Gamow did not have to know the details of how the mysterious nuclear force works in order to provide a basic quantum mechanical explanation of alpha decay.

What Gamow did would now be considered a very typical application of wave mechanics, although in 1928 he was breaking new ground. The powerful but short-range force that holds the nucleus together defines a potential barrier, a hill of energy that the alpha particle has to either get over or go through. The alpha is therefore like a marble in a dog dish. Classically, if the alpha does not have enough energy to roll out of the dish, it will stay there forever. From the viewpoint of wave mechanics, the reason that a marble cannot roll out of a dog bowl is just that its wave function does not have a tail that extends outside the bowl, and this is because the de Broglie wavelength of an ordinary marble is much smaller than the marble.

Figure 8.1: Barrier Penetration. A classical marble in a dog bowl cannot spontaneously jump out (top). However, a quantum marble can tunnel out of the bowl if the tail of its wave packet extends outside the bowl (bottom). Illustration by Kevin deLaplante.

Gamow showed (by means of skillful approximations) that the tail of the alpha’s wave packet extends outside the nucleus, so that there is a probability for the alpha to tunnel through the barrier. He thus defined the phenomenon of barrier penetration, which was soon found to occur in many quantum mechanical contexts.
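How sensitive tunneling is to the height and width of the barrier can be illustrated with a deliberately crude model. The sketch below uses a simple rectangular barrier and made-up numbers; it is not Gamow’s actual Coulomb-barrier calculation:

```python
import math

HBAR_C_MEV_FM = 197.3    # hbar * c in MeV * femtometers
ALPHA_REST_MEV = 3727.0  # rest energy of an alpha particle in MeV

def transmission(barrier_mev, energy_mev, width_fm):
    """Rough tunneling probability through a rectangular barrier: T ~ exp(-2 * kappa * L)."""
    kappa = math.sqrt(2.0 * ALPHA_REST_MEV * (barrier_mev - energy_mev)) / HBAR_C_MEV_FM
    return math.exp(-2.0 * kappa * width_fm)

# Illustrative numbers only: a 20 MeV barrier, a 5 MeV alpha, two barrier widths.
for width_fm in (20.0, 30.0):
    print(f"width {width_fm:.0f} fm -> T ~ {transmission(20.0, 5.0, width_fm):.1e}")
```

Widening the barrier by half knocks the transmission probability down by many further powers of ten, which is the wave-mechanical reason why alpha emitters have half-lives ranging from fractions of a second to billions of years.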
Barrier penetration is another example of quantum mechanics allowing something to happen that would be impossible classically. Once the alpha gets outside, electromagnetic forces take over, and the strong electrostatic repulsion between the positive alpha and the nucleus propels the alpha away from the nucleus at very high energies. Strictly speaking, therefore, the energy of radioactive alpha decay that the Curies puzzled over is electromagnetic in nature, not nuclear. The mysterious new force that held the protons together, though, would take a long time to understand. The phenomenon of barrier penetration by quantum tunneling is very well described by wave mechanics, but it is still not entirely clear what is going on. How long, for instance, does it take the alpha particle to penetrate the potential barrier? Recently, Günter Nimtz of the University of Cologne and others have provided evidence that in some cases wave packets can tunnel at speeds faster than light, but whether this can be used for the transmission of information remains very controversial.

Splitting the Nucleus . . .

Quantum mechanics played a crucial role in understanding the nature of nuclear fission, which is the splitting of heavy nuclei into lighter nuclei of intermediate atomic weight. This book is not the place to tell the whole story of nuclear energy and the creation of the atomic bomb, which has been told many times before. (See Rhodes 1988 for a good introduction.) The aim here is to describe certain ways in which quantum mechanics made atomic energy possible.

In 1933 German supremacy in science came to an abrupt end with the Nazi expulsion of Jewish scholars and scientists and any others, especially intellectuals, who might be critics of the regime. This coup effectively lobotomized Germany and guaranteed that Hitler would lose the world war he was planning. Only a few top-rank scientists (including Heisenberg) stayed behind, and even Heisenberg was branded by the Nazis as a “white Jew” because of his long association with Jewish scientists. Despite the upheavals of the time, physicists continued to probe the nucleus with the new tool that Chadwick had provided. Important work was done by Irène Joliot-Curie (1897–1956) (daughter of Marie Curie) and her husband Frédéric Joliot-Curie (1900–1958), who in 1934 were the first to transmute elements artificially (using alpha-particle bombardment). An Italian group led by Fermi in the 1930s also transmuted elements, demonstrated that slow (“thermal”) neutrons could be easily captured by other nuclei, and discovered neutron activation, the process in which neutrons transform other elements into radioactive forms. At first most physicists working with neutrons thought that they were going to create transuranic elements, elements heavier than uranium, by reacting uranium nuclei with neutrons. The first person to realize that the uranium nucleus might be splitting into smaller fragments was Ida Noddack (1896–1978), who in 1934 published an article charging that Fermi had misinterpreted his own results and that his uranium nuclei were splitting into fragments of intermediate size.
Noddack’s ideas were ignored for decades, but it is clear now that she was essentially right. One of the most active research groups was in Berlin, led by Otto Hahn (1879–1968) and Lise Meitner (1878–1968), one of the few women physicists in Europe before World War II. Meitner’s particular skill seems to have been finding theoretical interpretations of Hahn’s experiments. Hahn and Meitner brought into their group the chemist Fritz Strassman (1902–1980), because it was apparent that an analytical chemist were needed to identify the nuclei they were producing. Meitner, who was of Jewish heritage, lost her position and was forced to flee Germany in 1938. Hahn was unable or unwilling to protect Meitner. She ended up in Sweden, where she kept in correspondence with Hahn, who continued to seek her help in understanding the bizarre results that he and Strassman were getting. Meitner realized that Hahn was observing the fission of the uranium nucleus, but a theoretical picture was needed. In conversations with her nephew Otto Frisch (1904–1979), who had worked in Copenhagen with Bohr on nuclear physics, she found a way to apply a model of the nucleus that had been proposed by Gamow. Not enough was known about the strong force at that time to create a fully quantum mechanical treatment of the nucleus. Gamow’s model, which had been further developed by Bohr and others, was an ingenious compromise between classical and quantum concepts. It treated the atomic nu­ cleus as if it were a drop of liquid held together by surface tension like a drop of water. The “molecules” of the liquid were the nucleons (neutrons and protons), and the surface tension was supplied by the strong force. The liquid drop model is a good example of a semiclassical model, a blend of quantum and 110The Quantum Revolution classical elements that allows the practical description of something for which a full quantum theory is not yet available or too complicated to be practical. Meitner and Frisch calculated that a uranium nucleus, destabilized by a slow neutron, would split into two smaller nuclei of roughly the same mass, releasing the extraordinary energy of 200 MeV in the process. Most important, a few more neutrons would be released, opening the way to Szilard’s nuclear chain reaction. Frisch borrowed the term “fission” from cell biology to describe this splitting process. Hahn would later receive a Nobel Prize for his part in the discovery of nuclear fission; Meitner did not. Figure 8.2: Lise Meitner. AIP Emilio Segre Visual Archives. Quantum Mechanics Goes to War In 1939 Leo Szilard and Eugene Wigner (1902–1995) drafted a letter for Einstein to sign warning President Roosevelt of the risk that the Nazis might develop an atomic bomb. The fact that Heisenberg had remained in Germany (for reasons that he never made entirely clear) and ended up heading the German bomb project was one of the factors that made Szilard take the threat of a German bomb seriously. In 1940 Frisch and Rudolph Peierls (1907–1995) described a method by which a workable atomic bomb could be created. They were the first to show conclusively that only a few kilograms of enriched uranium brought to a critical mass would be sufficient to destroy a city. Roosevelt ordered the creation of a group to study the new threat, and this evolved into the Manhattan Project, an industrial and scientific effort on a completely unprecedented scale led by many of the physicists who had helped to create quantum theory, including Fermi and Bohr himself. 
It brought about the construction of what would now be considered fission bombs of very modest yield, and the abrupt end of World War II with the atomic bombing of two Japanese cities, Hiroshima and Nagasaki. . . . And Putting Nuclei Together Again Fission is the process in which heavy nuclei split; fusion is the process in which light nuclei such as hydrogen, helium, and lithium fuse together to Quantum Mechanics Goes to Work form heavier nuclei. In fusion the source of the energy released is the strong nuclear force. If nucleons are pushed close enough together they attract each other, releasing several million electron volts of energy per nucleon. Just as electrons in orbit around the nucleus release photons (in the few eV range) when they undergo the atomic transitions first identified by Bohr, nucleons release photons in the gamma range, the most energetic electromagnetic radiation known. Maria Goeppert-Mayer (1906–1972) and Hans Jensen (1907– 1973) won a Nobel Prize for their work in describing the energy levels within the nucleus in terms of their quantum mechanical shell model of the nucleus and thereby explaining the gamma-ray spectra of various radioactive nuclei. Fritz Houtermans (1903–1966) and Robert Atkinson (1898–1982) in 1929 noted that the atomic weights of intermediate-weight nuclei such as carbon were slightly less than the sum of the weights of lighter nuclei. They used Einstein’s equivalence between mass and energy to estimate the energy equivalent of the mass lost when light nuclei fuse together. On this basis they argued that the joining together, or fusion, of light nuclei could account for the energy production of stars, which were known to be composed mostly of light elements such as hydrogen and helium. The combination of the intense gravitation of a star and its high internal temperature would allow light nuclei to overcome their electrical repulsion and get close enough together that the nuclear forces could cause them to fuse, releasing very surprising amounts of energy. The concept of fusion eventually led to an explanation for the nucleosynthesis in stars of all elements heavier than hydrogen. The challenge of finding safe and efficient sources of energy is becoming acute because of impending fossil fuel depletion and climate change caused by carbon dioxide emissions. Unfortunately, it has only been possible so far to release large amounts of fusion energy by means of thermonuclear bombs. Fission energy cannot be the long-term solution for humanity’s energy needs because it produces radioactive waste and because there is only so much uranium and thorium to go around. However, some scientists argue that fission should allow humanity to buy time until we figure out how to control fusion, which would be an ideal source of energy because the light elements that fuel it are very abundant and it produces little if any radioactive waste. Most approaches to nuclear fusion have treated it as a problem in applied plasma physics. Plasmas are gasses so hot that most or all of their molecules are ionized; they are electrically conductive and display complex collective, fluid-like behavior that is still poorly understood. Plasma is sometimes called the fourth state of matter, along with gasses, liquids, and solids. The sun is composed of plasma mostly made of hydrogen, and in fact, the larger part of matter in the universe is in the plasma state. (A candle flame is a familiar example of a plasma.) 
Because plasmas are electrified they can be manipulated by electromagnetic fields, although with difficulty. The main approach in controlled thermonuclear fusion research since the late 1940s has been to create a “magnetic bottle” in which a hot plasma of light elements could be trapped long enough to allow it to fuse. Since we don’t know (yet!) how to produce a 112The Quantum Revolution gravitational field as intense as that of the sun, the plasma has to be extremely hot, much hotter than the plasma in the interior of the sun, in order to provide enough kinetic energy for the nucleons to be forced together and fuse. This has proven to be an extraordinarily difficult technical problem for several reasons, especially because of the many kinds of instability that are endemic to plasmas. It is ironic to compare the slow progress made in fusion research with the explosive growth of semiconductor electronics. In the early 1950s no one had any idea how fast and compact computers were soon to become; popular science magazines depicted the personal computer of half a century later as clanking behemoths that would fill a room. On the other hand, confident predictions were made that nuclear fission would soon provide “meterless” power (that is, electricity so cheap to produce that it could be given away) and controlled fusion would not be far behind. Currently most of the world’s fusion research resources are concentrated into ITER, the International Thermonuclear Experimental Reactor to be built in France. It will use a toroidal (donut-shaped) magnetic bottle design, called a tokamak due to the Russian physicist and peace activist Andrei Sakharov (1921–1989). ITER is not expected to reach breakeven before about 2015, and even then it will be a long way from providing power to the grid. (Breakeven is the point at which the reactor generates more energy than it consumes.) So far, plasma physics is one of the few areas of physics where quantum mechanics is relatively unimportant, since most plasmas are so hot. But there is a lot of room for new approaches in the search for nuclear fusion, and perhaps the quantum can someday play a role after all. Chemistry Becomes a Science One of the most immediate and dramatic application of quantum mechanics from the late 1920s onward was to chemistry. Pauli’s Exclusion Principle, the theory of spin, and the Schrödinger Equation provided the tools to define the structure of electron orbitals and the Periodic Table. The new challenge was to understand the nature of the chemical bond itself from the viewpoint of quantum mechanics. Before quantum mechanics, chemistry was largely an empirical science, which means that it amounted to trying various combinations of compounds to see what would happen. There was little or no principled understanding of why certain atoms would bond and in what way. As soon as Schrödinger’s wave mechanics had been formulated, physicists and chemists began to apply it to understanding the chemical bond. The first quantum theory of the covalent bond was developed by Fritz London (1900–1954) and Walter Heitler (1904–1981) in 1927. Their work had a strong influence on American chemists, notably Linus Pauling (1901–1994), who became the dominant figure in theoretical chemistry from about 1930 to 1950. Pauling (who would win Nobel Prizes in both Chemistry and Peace) greatly advanced the quantum theory of the chemical bond and, with his coworkers, effectively turned chemistry into a branch of applied quantum mechanics. 
Quantum Mechanics Goes to Work By the late 1940s biochemistry was branching into an entirely new discipline, molecular biology, the study of biologically important compounds at the molecular level. The star accomplishment of molecular biology is the discovery of the structure of DNA (deoxyribonucleic acid) and the deciphering of the genetic code by Francis Crick (1916–2004), James Watson (1928–), Rosalind Franklin (1920–1958), and Maurice Wilkins (1916–2004). Crick had been a physicist before he got interested in biology, and his expertise in wave mechanics was very helpful. He had been fascinated by a suggestion by Schrödinger (in his book What is Life? published in 1944) that genetic information was transmitted by some sort of quasi-periodic molecular structure. Rosalind Franklin was expert in X-ray diffraction techniques, another application of wave mechanics. She scattered X-rays through DNA crystals, and from the resulting diffraction patterns it was possible to infer the fact that the crystal has a helical structure. Watson and Crick showed that given the possible bonds that could be formed by the components of DNA (the nucleotides and ribose sugars), there was only one way they could fit together to define a helix. The order of the nucleotides within the helix can be varied, and this is the physical basis of the genetic code. In principle, any aspect of chemical structure and reaction dynamics can now be described and predicted using quantum mechanics. In practice, the detailed calculations required in order to predict such things as the structure of a protein molecule or the exact steps in a chemical reaction are still a challenge, often requiring the use of clever approximation techniques and computers. But quantum mechanics says that there is an answer, although it may take some skill to find it. The Electron Microscope Another almost immediate application of the wave mechanics of Schrödinger and de Broglie was the electron microscope. The resolution of a microscope is a function of the wavelength of the waves it uses, and because electron wavelengths are much shorter than those of visible light, electron beams can resolve much smaller objects. The German physicist Ernst Ruska (1906–1988) and others developed the first prototype electron microscope (EM) in 1931; it was called a Transmission EM because the beam of electrons is fired directly through the sample. Later the Scanning Electron Microscope was developed, which allows surfaces to be imaged down to near-atomic scales, and the Scanning Tunneling Electron Microscope, which can image individual atoms. The electron microscope opened up many doors in cell biology. By the early 1960s practical electron microscopes were available that allowed scientists to study the ultrastructure of cells, and this led to a huge leap in understanding of cell biology. An important example is the theory of serial endosymbiosis championed by American cell biologist Lynn Margulis (1938–). Margulis proposed that many organelles within the cell, such as the mitochondria and chloroplasts, are actually bacteria that eons ago became gridlocked into a symbiotic 114The Quantum Revolution relationship with a cellular host. Before the electron microscope this could not be confirmed, because even with the best optical microscopes a mitochondrion appears as little more than an indistinct blur. However, with electron microscopy it can be immediately seen that mitochondria and chloroplasts look exactly like bacteria. 
Margulis and others have since shown convincingly that eukaryotic cells (nucleated cells, such as those the human body is composed of) can be best understood as vast symbiotic colonies of bacteria. The one drawback of the electron microscope in biology is that the highenergy beam is lethal to any living organism. Biological samples have to be fixed and prepared in various ways before they can be viewed. Another revolution in biology might occur if it became possible to image cellular ultrastructure in a living cell. Super Matter Another duality that manifested itself in the 1920s and 1930s is the fact that many kinds of matter can exist in two phases: ordinary matter and matter where quantum mechanical effects are dominant. The transition between these two phases is usually abrupt. The Dutch physicist Kamerlingh Onnes (1853–1926) discovered that mercury immersed in liquid helium at a temperature of only 4.2°K above absolute zero loses its electrical resistance. This phenomenon has since been discovered in many other conductive materials at cryogenic (near absolute zero) temperatures. Superconductors also exhibit the Meissner effect, which means that magnetic fields cannot penetrate their surfaces. (This effect is named after its discoverer, Walther Meissner, 1882–1974.) Superconductors behave like ordinary conducting materials until their temperature is lowered to a critical transition temperature, below which they abruptly switch to the superconducting state. Superconducting is still not fully understood, but it is known to be due to the tendency of charge carriers such as electrons to form Bose-Einstein condensates under the right conditions. Pairs of electrons (which are fermions) of opposite spin can be weakly coupled in certain kinds of metals and form Cooper pairs (named after Leon Cooper, 1930–). These pairs have a net spin of zero and thus, as a composite object, behave as bosons. This illustrates a general quantum rule, which is that if particles get combined by a dynamic interaction, the statistics of the combination is the statistics of the sum of the spins of the parts. One of the Holy Grails of modern applied quantum mechanics would be a room-temperature superconductor. All superconductors to date have to be maintained at cryogenic temperatures which makes them very expensive and awkward to handle. Room temperature superconductors would enable a revolution in electrical and electronic technology. Quantum Mechanics Goes to Work Another manifestation of Bose-Einstein condensation is superfluidity, discovered by the Russian physicist Peter Kapitsa (1894–1984) and others in 1937. The phenomenon was first observed in liquid helium-4, but it also occurs in helium-3 where it has a slightly different explanation. (Helium-4 atoms are bosons, while helium-3 atoms form Cooper-like pairs.) A superfluid will flow without any viscosity (fluid friction) whatsoever, and it has zero entropy and thermal conductivity. If superfluids are set into rotation their motion is quantized, and they form quantized vortex filaments and vortex rings that behave remarkably like fermions and bosons respectively. The Laser The laser (“light amplification by stimulated emission of radiation”) will be described under “super matter,” since it is also a large-scale manifestation of Bose-Einstein statistics. The laser was an offshoot of the maser (“microwave amplification by stimulated emission of radiation”) created by Charles H. Townes (1915–) in 1953. 
In the late 1950s Townes, Arthur Schawlow (1921–1999), Gordon Gould (1920–2005), Theodore Maiman (1927–2007), and several others applied similar techniques to visible light and produced the first lasers. (The term “laser” itself was coined by Gould in 1957.) The function of a laser is to emit a beam of coherent light, which means light of a uniform frequency and phase. (By contrast, the light from an ordinary light bulb is incoherent.) It exploits the process of stimulated emission of radiation, identified by Einstein in 1917. (It is not clear whether Einstein himself had any idea that stimulated emission would have practical uses; he was mainly concerned with the theoretical problem of finding equilibrium conditions between matter and the radiation field.) The laser works by a kind of chain reaction: an optical cavity with mirrors at both ends (one partially silvered) is filled with a material (solid, liquid, or gas) that can be pumped to an excited state. The material is chosen so that it will fall back into its ground state, emitting light of a definite frequency; this light reflects back and forth within the cavity, stimulating the emission of further photons in the same state, because photons obey Bose-Einstein statistics. The beam is emitted through the halfsilvered mirror. The applications of lasers are too Figure 8.3: The Laser. The lasing medium (ruby crystal, CO2, or neon gas, etc.) is pumped to an excited state by the numerous and well-known to detail energy source. Spontaneously emitted photons stimulate here, except to note that excep- the emission of more photons of the same quantum state, tionally high-powered lasers are producing a beam of coherent light. Illustration by Kevin being explored as possible means deLaplante. 116The Quantum Revolution of igniting nuclear fusion through a process called inertial confinement. If it works, this will involve compressing and heating a pellet of fusionable material by means of laser beams, and quantum mechanics will have made controlled fusion possible after all. Spin Doctoring In 1928 Heisenberg wrote the first quantum mechanical theory of ferromagnetism, the familiar sort of magnetism that makes compass needles swing to the north. There are several types of magnetism, but virtually all magnetic phenomena now can be understood in terms of the quantum mechanics of One of the most important applications of quantum mechanics is nuclear magnetic resonance (NMR), which was discovered by the American Isidore I. Rabi (1898–1988) in 1938. The nuclei of atoms that have an odd number of nucleons will have a net magnetic moment (a measure of magnetic field). If they are exposed to a strong external magnetic field, they can absorb a quantum of field energy in such a way that they line up in an excited state. They can then be tweaked by additional radio-frequency fields, and they will give off characteristic signals in the radio range. These signals, properly processed, give a great deal of information about the structure of the material being probed. By 1970 NMR researchers could image a test tube of water. Since then NMR has been developed into the powerful technique of Magnetic Resonance Imaging (MRI), which can provide highly detailed images of soft tissues in the human body for diagnostic and research purposes. MRI is not only highly accurate but, unlike X-rays, safe, since there so far do not seem to be harmful effects from the exposure of the human body to the strong magnetic fields required for MRI. 
Recent work in neuroscience utilizes MRI for “real-time” imaging of the brain while it is in the process of performing tasks. The Semiconductor Revolution Einstein’s insights of 1907 on specific heats grew into the field of solid state physics, or condensed matter physics, as it is now often called. It is impossible to do justice here to everything that has been accomplished in this field. The application of quantum mechanics to semiconductors will be briefly mentioned, however, because this led to the semiconductor revolution, one of the more apparent ways in which quantum mechanics has shaped the modern Semiconductors are metalloid elements such as germanium and silicon, roughly in the middle of the Periodic Table, which have electrical conductivities part-way between metals and insulators. Important contributions to the quantum theory of semiconductors were made by Rudolph Peierls in 1929. The operation of semiconductors can be understood in terms of quantum energy levels. All solids have a band gap, which is the energy required for the atomic electrons to jump free of the nucleus and serve as conductors of elec- Quantum Mechanics Goes to Work tricity. Metals have small band gaps, insulators have large band gaps, and semiconductors have band gaps that are intermediate in size. The conductivity of a semiconductor can be sensitively controlled by applied electric fields (as in an FET, or field-effect transistor) and by the addition of impurities (doping), which increases the number of charge carriers, which are either positively charged holes or electrons. A p-type semiconductor has excess holes, and an n-type has excess electrons. The first working transistor was unveiled at Bell Labs in 1948 by John Bardeen (1908–1991), William Shockley (1910–1989), and Walter Brattain (1902–1987), although they may have drawn some inspiration from much earlier designs by Oskar Heil (1908–1994) and Julius Lilienfeld (1881–1963). (Bardeen would become the only person to win two Nobel Prizes in Physics; his other was for work on superfluids.) One of the many advantages of solid state electronic devices over their bulky vacuum tube predecessors is that they can be miniaturized. Their first crude-looking transistor has evolved into modern integrated circuit chips containing billions of microscopic transistors. Semiconductor physics has enabled the modern electronic revolution. It is amusing to read science fiction written as late as the 1950s in which futuristic electronic devices still use vacuum tubes. What glaring failures of the imagination are we guilty of today? Symmetries and By the early 1960s the list of “elementary” particles had grown into the hundreds. Once renormalization had allowed physicists to tame the electromagnetic field (even if they still did not really understand it), the dominant problem was to understand the strong force (responsible for nuclear binding) and the weak force (responsible for beta decay). This long process culminated in the creation of the so-called Standard Model of elementary particles, which will be sketched below. The Standard Model is a qualified success: using it, a skilled practitioner can predict the properties (with certain important exceptions to be described) of all particles that are presently observable, and it contains within itself the unification of electromagnetism and the weak interaction into the electroweak gauge field. The Standard Model had settled into more or less its present form by the early 1980s. 
What seemed like the next natural step was the unification of the electroweak force and the strong force into a Grand Unified Theory (GUT). This was attempted during the late 1970s and early 1980s, taking the most obvious mathematical route. However, GUT failed in its most important prediction. Since the mid-1980s, particle theory has mostly gone off in another direction, pursuing a new dream of unification called string theory. But string theory has its own problems, as described below. The Tools of the Particle Physicist It is not possible here to give anything more than the sketchiest presentation of detector and accelerator physics. Rutherford used natural alpha particles as probes of the nucleus. However, naturally occurring alphas have energies only up to about 7 MeV. Physicists realized that if they could hit the nucleus with a harder hammer they could get a better look at what was inside, but, as always in quantum mechanics, there is a tradeoff. Probing the nucleus with higher energy particles can reveal finer details of structure, because the 120The Quantum Revolution wavelength of a higher-energy probe particle is shorter. However, the shorter the wavelength of the probe, the more energy it imparts to the target, and thus the more it changes the very nature of the target. Strictly speaking it is not correct to say that a 10 GeV (ten billion electron volt) proton hitting another proton detects certain particles within the target particle; rather, the incident proton in combination with whatever was inside the target particle causes certain observable products to appear. Bohr would have insisted that it is not even meaningful to ask what was inside the target before it was probed, while Heisenberg might have preferred to say that the target particle possesses certain potentialities (that can be expressed in the mathematical language of quantum theory) that determine its possible interactions with the incident particle. Whatever the proper interpretation, it is mistaken to suppose that a quantum mechanic looks inside a nucleon the way an auto mechanic looks under the hood of a car. There are two main kinds of accelerators, the linear accelerator, which uses high voltage to accelerate charged particles in a straight line, and the cyclotron, which accelerates particles in a circular path. There is no known way (apart from gravity) to directly accelerate neutral particles such as the neutron, although they can be produced by various reactions and then allowed to interact with targets. Linear accelerators evolved from the Cockroft-Walton and van de Graaf machines of the early 1930s to the two-mile-long Stanford Linear Accelerator in California (SLAC), opened in 1962. Linear accelerators remain an important tool, but the highest energies are produced by cyclotrons and their descendents. The invention of the cyclotron is generally credited to the American physicist Ernest O. Lawrence (1901–1958) in 1929. Lawrence’s idea was that charged particles could be confined to a circular path by magnets and accelerated by pulsed electromagnetic fields. As the energy gets higher, the diameter of the particle track has to go up, and cyclotrons evolved rapidly from Lawrence’s first desktop device by the end of World War II into machines several feet in diameter. After World War II cyclotrons and their descendents would evolve into miles-wide monsters costing billions of dollars to construct. 
Particle energies climbed in the billion-electron volt (written BeV, or GeV for “giga” electron volt) range by the 1960s, and the study of elementary particles became known as high energy physics. One design limitation of cyclotrons is the need for powerful magnetic fields. With the introduction of practical supercooling it became possible to use superconducting magnets, which allows for much higher beam energies. The largest operational accelerator in the world at this writing is Fermilab (near Batavia, Illinois), with a beam energy of 2 TeV (tera- or trillion electron volts). Accelerator construction hit a financial and political wall in the early 1990s. In the United States, $2 billion had been already spent on the 54-mile diameter Superconducting Supercollider (SSC) in Texas when it was canceled by the U.S. Congress in 1993. The reasons for the cancellation were cost and doubt that it would yield scientific results of sufficient interest compared to Symmetries and Resonances other projects on which the money could be spent. The SSC was designed to collide protons with an energy of 20 TeV (tera-electron volts). The cancellation of the SSC meant that since the mid-1980s particle physicists have been largely unable to test their theories, which depend on predictions at energy levels beyond any accelerator currently in operation. This will change soon, however, for the Large Hadron Collider (LHC) near Geneva, Switzerland, is expected to come on line in 2008. The LHC will have beam energies of 14 TeV. This colossal machine (financed and operated by a consortium of 38 countries including the United States and Canada) is the great-grandchild of Rutherford’s improvised table-top device of 1910 that demonstrated the existence of the nucleus by scattering alpha particles through a scrap of gold foil. Despite the enormous sophistication and power of modern particle accelerators, what they mostly do is just what Rutherford did: fire an energetic particle at a target and see what scatters off. Particle detection is a complex process. Many particle detectors take advantage of the particles’ electric charges and therefore cannot directly image a neutral particle. (If a pair of particles branches off apparently from nothing, however, that is a sign of the decay of a neutral particle.) An important early detector was the Wilson cloud chamber, designed in 1911 by C.T.R. Wilson (1869–1959). The chamber contains supersaturated water vapor. Charged particles such as alpha particles or electrons will ionize water, leaving a trail of mist through the chamber. If a magnetic field is applied, positively charged particles will curve one way, negative the other, and the curvature of the track is a function of the mass of the particle. Cloud chambers were used by Anderson to detect the positron and muon in cosmic ray showers. In the 1950s hydrogen bubble chambers were introduced. A volume of liquid hydrogen is allowed to expand just as a jet of particles are fired into it from an accelerator, and myriad tracks of bubbles are produced in the hydrogen by the charged particles in the beam. Two generations of technicians have strained their eyesight recording the positions of bubble chamber tracks so that the particle trajectories could be reconstructed by computer analysis and analyzed for evidence of the creation of new particles. 
Bubble chambers have been largely replaced in high energy physics by devices such as the wire chamber, drift chamber, and spark chamber, but the Figure 9.1: Typical Bubble Chamber principle is the same: energetic charged particles can Tracks. A and B are beam particles. C is a collision of a beam particle be tracked because of their ability to ionize parts of with a proton. D is the decay of a neutheir surrounding medium. tral particle emitted from C. E is the Most particles that were being searched for have decay of a gamma ray into a positronextremely short lifetimes (half-lives), and so they of- electron pair. Illustration by Kevin ten are detected indirectly by their decay products. A deLaplante. 122The Quantum Revolution high energy proton-proton collision, for example, can produce jets containing hundreds of by-products, the vast majority of which will be uninteresting, and the analysis of the tracks is a daunting task. The aim is to produce a threedimensional reconstruction of the collision and its products. Sometimes high energy physicists will have a definite prediction to test, and they will be looking for particles with expected properties such as half-life, charge, and so on; sometimes they just blast away and see what comes out. Taming the Particle Zoo Gauge Theory While particles multiplied like rabbits in high energy experiments, quantum theory moved to new heights of mathematical abstraction with the publication in 1954 by C. N. Yang (1922–) and Robert Mills (1927–1999) of a new kind of field called a gauge field. It is impossible to describe the meaning of a gauge field adequately here. It is a generalization of the quantum theory of the electromagnetic field, and Yang and Mills intended that it could take account of symmetries that had been noted in the study of the strong nuclear force. One of the important consequences of gauge theories is that they predict the existence of new particles that mediate the forces described by the fields. But the Yang-Mills theory seemed to predict particles that did not exist, and it was so complex that it was not clear that it could be renormalized. The Yang-Mills theory languished until the surprising proof in 1971 by Martinus Veltman (1931–) and Gerard t’Hooft (1946–) that it is renormalizable after all. It was immediately possible to apply it successfully to the strong and weak forces, as described below. The history of quantum field theory can therefore be summarized by saying that it started as the attempt to formulate a relativistic quantum theory of the electromagnetic field; it suffered a temporary setback due to the divergences; it was rewritten as a renormalizable theory, generalized as gauge theory, and is now applied with some (though not unqualified) success to all particle fields—except gravitation, which is a very special problem that will be discussed later. The Heyday of High Energy Physics By the early 1960s hundreds of short-lived particles, called resonances, had been detected in high energy collisions. Some resonances have half-lives as short as 10–23 seconds, and are only detectable indirectly by their decay products. The shorter the half-life, the less certainty there is in the mass of the resonance, by the uncertainty principle. Decay modes can be very complicated, with a resonance shattering into numerous fragments via the strong force, and then many of the fragments undergoing further weak force decay. 
Much of particle theory during the 1950s amounted to little more than desperate attempts to classify the denizens of the particle zoo. But eventually patterns would begin to emerge. Symmetries and Resonances Disparity and a New Symmetry One of the most interesting discoveries in the 1950s was parity violation, which was found to occur in the weak decay of certain unstable nuclei. A parity flip is a change of handedness, as in mirror reflection. It had been widely assumed that all particle interactions were invariant under change of parity; that is, if a decay were mirror-reflected it would look the same. However, it was found by C. S. Wu (1912–1997) and others that if Cobalt-60 nuclei are lined up the same way they will emit beta rays in a preferential direction, which means that the process would not look the same in a mirror. The operator that reverses parity is symbolized as P, while the operations that flip electrical charge and time order are symbolized C and T respectively. It was eventually discovered that while some weak interactions violate parity, and some violate two of C, P, or T, all interactions in physics apparently obey CPT conservation. This means that in a particle interaction such as a decay, if one could reverse all charges, mirror-reflect the process, and run it backwards in time, one would observe precisely the same interaction. This is a bit difficult to test in practice, however. But most physicists accept CPT conservation as a fundamental property of quantum fields, because it can be shown that if CPT were violated then Lorentz invariance would be violated. Symmetries, Made and Broken Group theory became increasingly important in particle theory from the 1950s onward, as physicists sought clues to the dynamics of particles by trying to classify the symmetries they obeyed. Groups are a way of describing the symmetries of an object, which are the operations under which it remains unchanged. For instance, a square rotated through 90˚ looks just the same as it did before it was rotated. The Lorentz transformations of special relativity, as an important example, can be understood as the result of the operation of a Poincaré group. The concept of symmetry breaking also became important. As example of symmetry breaking is the way ice might freeze in an unsymmetrical way if there is a small impurity in the water. Many current particle theories predict that all interactions are the same at some extremely high energy, but that as the universe “froze out,” symmetry breaking led to the different types of interactions we observe at our energy levels. The concept of symmetry breaking gives a useful way of understanding why we do not observe what fundamental theory says we should observe. Probing the Proton When SLAC went on-line in 1962 the door was opened to a deeper understanding of particle structure through deep inelastic scattering. In this process high-energy particles such as electrons are scattered through heavy particles such as protons. These experiments revealed that protons and neutrons have 124The Quantum Revolution an internal structure, just as Rutherford’s scattering experiments revealed the presence of the atomic nucleus. Various names for these new particles inside the nucleon were proposed— partons and aces, for instance—but eventually the name “quark,” proposed by Murray Gell-mann (1929–), was the one that stuck. 
Gell-mann named his hypothetical elementary particles “quarks,” after a line in James Joyce’s Finnegan’s Wake, in which sea-gulls flying overhead call, “Three quarks for Mr. Mark!” This started a trend of whimsical names for particles and their properties; there were so many new particles being discovered or hypothesized that bemused physicists simply did not know what else to call them. An important characteristic of quarks is that they possess fractional electrical charges of either +/–1/3 or +/–2/3 of the charge of an electron. Up-quarks have a charge of +2/3, while down-quarks have a charge of –1/3 . On the quark model, strongly interacting particles are classified as hadrons, and these can be divided into two groups, baryons (neutrons, protons, and their heavier unstable analogues), and mesons such as the pion. Baryons are fermions and are composed of three quarks, and hadrons are bosons and are composed of two quarks. The proton, on this scheme, is built up out of two up quarks and a down quark, and the neutron is an up and two downs. The positively charged version of Yukawa’s pion is made of an up-quark and an anti-down quark. Gell-mann used the quark model and symmetry considerations to argue that groups of hadron resonances could be classified either as octets, sometimes called the eight-fold way, or as decuplets, arrays of 10 particles. One member of a decuplet, the omega-minus particle, had not yet been discovered, and when it was found in 1962 this gave an excellent verification of the quark Hunting of the Quark Evidence mounted through the 1960s and 1970s that quarks are the best explanation for the structure of hadrons and baryons. A problem, however, was that it did not seem to be possible to produce free quarks. Finally, it became clear that quarks are forever confined within baryons or hadrons. Within a particle such as a nucleon the quark can move almost as a free particle (this is called asymptotic freedom), but if a quark is expelled from the nucleus by a collision it polarizes the vacuum around itself, pulls other quarks out of the vacuum, and combines into a baryon or hadron too quickly to be observed as a free particle. It is as if quarks have little if any mutual attraction when they are very close to each other (inside a nucleon) but attract each other with a force that increases rapidly to a high constant value the farther apart they are. Most particle physicists are now convinced that the quark does exist, because of the tremendous explanatory and predictive power of the quark hypothesis. But it still seems discomfiting to have particle theory dependent on an object that physicists apparently have no hope of ever observing Symmetries and Resonances The Standard Model With the discovery of the quark, and the proof that gauge theories are renormalizable, it became possible to produce a unified theory of the structure of elementary particles that is now called the Standard Model. The main conclusions of the Standard Model will be sketched here. It is the best picture we have so far of the ultimate constituents of the physical world. It would be bewildering if not impossible to trace all of the changes in particle terminology and classification since the time of Yukawa. (For example, the term “neutron” was briefly applied to the particle that we now call the “neutrino.”) Therefore, this section describes the particle zoo using terminology that has been current since the 1970s. 
However, beware that books and articles written before this time may not use exactly the same language. The Standard Model says that matter and energy are described in the language of quantum gauge field theory. All particles are divided into fermions and bosons. The field quanta are bosons, and they mediate forces between particles of matter, which are fermions. This field-theoretic picture is just the wave-particle duality in a more mathematically abstract form. There is a complementarity between continuous field and discrete particle points of view. As with the double slit experiment, the cash value of a field theoretic calculation always can be expressed as a function of the probabilities of detecting particles, or other probabilistic quantities such as expectation (average) values of observables, or scattering cross-sections (probabilities of particle interaction). There are two groups of fermions: quarks and leptons. Leptons include the massless neutrinos. (There is recent evidence that neutrinos may have a very small mass, but this remains controversial.) The family of quarks and leptons is divided into three generations, and in each generation there are six quarks (plus their corresponding antiquarks), a lepton, and the lepton’s partner neutrino. Neutrinos move at the speed of light. As described above, all hadrons (baryons and mesons) are built up out of various combinations of quarks and antiquarks. If quarks and leptons have a finite diameter it is less than 10–18 m. They are often treated as point particles—even though by Heisenberg’s Uncertainty Relations the notion of a point object does not make physical sense! There are three flavors of lepton, the electron, muon, and tau lepton, and three corresponding flavors of neutrinos, the electron neutrino (postulated by Pauli in 1930), the mu neutrino, and the tau neutrino. All particles in this scheme have now been observed in accelerators, although some of the heavier particles in the third generation were not produced until quite recently. The three generations of particles are almost carbon copies of each other, except that the higher generations are heavier. This strongly suggests that all three generations are simply different energy states of one fundamental structure, but it remains unclear what that could be. There are two kinds of forces in the Standard Model, the color force (a generalization of Yukawa’s strong force), and the electroweak interaction. The color force is mediated by bosons called gluons, and the electroweak interaction is mediated by the photon and a series of heavy bosons (the so-called 126The Quantum Revolution intermediate vector bosons). The detection of the predicted intermediate vector bosons in the early 1980s was one of the last great triumphs of the Standard Model. The field theory of the color force is called quantum chromodynamics; it is a Yang-Mills theory. Several attempts have been made to unify quantum chromodynamics and the electroweak field, but (as discussed below under “Protons Refuse to Decay”) they have not been successful. The Standard Model is the work of literally thousands of theorists, experi­mental high energy physicists, and technicians over a period of nearly 40 years, with the expenditure of billions of dollars, and it is a bit difficult to assign priority to the researchers involved. Apart from those already mentioned, Nobelists Stephen Figure 9.2: Table of “Elementary” Particles in the Standard Weinberg (1933–), Sheldon Gla­ Model. 
Matter is composed of spin-1/2 quarks, which come show (1932–), and Abdus Salam in three generations, each with its corresponding lepton and (1926–1996) played an especially neutrino. There are two flavors per generation. Field quanta important role in creating elecare bosons. The “color” force between quarks is mediated by gluons, the electroweak force is mediated by photons troweak theory. In summary, the short version of and intermediate vector bosons, the Higgs field (if it exists) is mediated by the Higgs boson, and gravity is mediated the history of the Standard Model by the so-far unobserved graviton. Simple! Illustration by goes like this: quantum electrodyKevin deLaplante. namics was generalized into gauge theory by Yang and Mills. When it was proven that gauge theories are renormalizable, they were applied to the quantum “color” fields that mediate the forces between quarks. Yukawa’s early theory of the strong force follows from QCD as a low-energy approximation. The resulting field theory was called quantum chromodynamics (QCD). Electromagnetism and the weak force were unified into the electroweak field. Protons Refuse to Decay A number of particle theorists since the 1970s have attempted to define so-called Grand Unified Theories (GUTs) which would unify quantum chromodynamics with electroweak theory. The assumption is that at very high energies, all interactions are the same, and the differences between the three Symmetries and Resonances non-gravitational forces are due to symmetry-breaking. Most GUTs predict proton decay since they treat leptons and quarks as different states of the same particle. During the 1980s several experimental groups searched for proton decay in large tanks of highly purified water or various hydrocarbons (which contain many protons). Although the predicted decay modes would be very rare, with enough protons a few decays should have been observed. However, from these experiments it is now possible to say that if the proton does decay, its half-life must be substantially greater than 1033 years. This rules out a number of GUTs such as the model proposed in 1974 by Sheldon Glashow and Howard Georgi (1947–), but the idea remains alive. Too Many Empirical Constants? Despite the great success of the Standard Model, it requires the use of approximately 50 empirical parameters, which, like the old Rydberg constant of pre-Bohr spectroscopy, can be determined directly or indirectly from experiment but which have no theoretical explanation or derivation. In particular, there is no way of calculating the masses of the elementary particles. There is presently no more understanding of why the mass of the electron is .511 MeV than physicists before Bohr understood why the H-alpha line of hydrogen has a wavelength of exactly 656.3 nm. Bohr was able to make the Rydberg constant “go away” in the sense that he found a derivation for it from the dynamics of his theory. No theory of particle structure will be truly satisfactory until it can make some of those empirical constants go away, especially the mass spectra. Frontiers of Particle Physics One of the guiding hypotheses that have guided research in elementary particles since the early 1970s is supersymmetry, which says that there should be a complete symmetry between fermions and bosons: for every fermion (such as the electron) there should be a corresponding boson. 
This idea was introduced by Pierre Ramond (1943–) in 1970, not merely out of a love of mathematical symmetry, but because it was essential to make string theory allow for the exis­ tence of fermions. A great deal of intellectual effort has been invested in trying to predict the properties of the supersymmetric “twin” particles or sparticles that supersymmetry says should exist. However, so far, there is absolutely no experimental confirmation of supersymmetry, and some particle physics are beginning to doubt that the idea was viable in the first place. It is, however, essential to most versions of superstring theory, and so the idea of supersymmetry remains very attractive to many particle theorists. Several physicists in the 1980s explored the possibility that leptons and quarks could be understood as composite particles, built up out of a 128The Quantum Revolution hypothetical really elementary particle dubbed the preon. However, no compelling theoretical formulation and no experimental evidence has been found for preons, and the idea has largely been put on the back burner. One barrier to preon theory is that because of the Uncertainty Relations, any particle that could be small enough to fit inside the electron would have to have an enormously larger mass than the electron. Most attempts to find a unifying basis for the Standard Model therefore seek to understand particles in terms of radically different kinds of structures that would be something other than merely more particles inside particles. Search for the God Particle The missing link of the Standard Model is the Higgs particle or Higgs boson, proposed by British particle theorist Peter Higgs (1929–) in 1964. It is sometimes called the “God particle” because it plays such a central role in the Standard Model. According to theory, the Higgs boson is the quantum of a field that pervades all of space, and other particles such as leptons acquire mass by polarization of the Higgs vacuum field. So far, the Higgs has not been detected, and it must have a mass above 100 GeV. It is hoped that the Large Hadron Collider (LHC) coming on line at CERN will have beam energies that could be high enough to produce the Higgs particle, and some physicists are now taking bets about whether or not it will be discovered. If it is not then the Standard Theory is due for further revisions, an outcome that would surprise no one. Heisenberg’s Last Word In 1976 the aging Heisenberg published an article in which he commented from his long experience on the state of particle theory. He argued that the most important result of particle physics over its 50-year history since Dirac predicted the existence of the positron was that there are no truly elementary particles (in the sense of an entity that cannot be changed into something else). All particles are mutable, subject only to conservation laws such as electrical charge, momentum, and mass-energy. He also pointed to the importance of symmetry principles and symmetry-breaking as guides to which particle reactions are possible. Heisenberg also noticed the similarity between the state of particle physics in 1976 and the Old Quantum Theory during its later years, with its recourse to well-educated guesswork: spotting mathematical regularities, guessing that these regularities have wider applicability, and trying to use them to make predictions. Such guesswork is very useful; a good example is Balmer’s skill in spotting the mathematical structure hidden in the hydrogen spectrum. 
But what is needed for real understanding, Heisenberg argued, is a theory of the underlying dynamics, such as was provided by Schrödinger’s Equation in 1926. The key, Heisenberg stated, is that the table of particle masses form a spectrum just as atomic energies form a spectrum, and a spectrum must imply a dynamical law whose solution, together with the right boundary conditions, would give as eigenstates the spectral values (particles) we observe. If quantum mechanics applies then there must be some kind of Symmetries and Resonances eigenfunction whose eigenvalues are the masses of the elementary particles, just as the spherical harmonics give the energies and structure of the electron shells in atoms. One would like to be able to understand the leptons and neutrinos, for instance, as merely different eigenstates of the same particle; the challenge is to find the operator they would be eigenfunctions of. Heisenberg also made an interesting comparison between the atomism of Democritus (according to which the world can be broken down into atoms— indivisibles—and the Void) and Plato’s atomism, according to whom the physical world is to be understood in terms of mathematical symmetries. Plato, argued Heisenberg, was closer to the truth. Heisenberg cautioned, however, that one must distinguish between a “phenomenological” symmetry (an approximate symmetry that is merely descriptive) and a fundamental symmetry built into the laws of physics (such as Lorentz symmetry). Heisenberg suggested that accelerator physics could be approaching the asymptotic regime, a region of diminishing returns in which fewer and fewer new particles will be discovered regardless of the energies applied. It was high time, Heisenberg concluded, for physicists to move beyond the gathering and classification of particles, and attend to the problem of finding the right dynamical laws. Strings and Things The closest thing to an attempt to answer Heisenberg’s demand for a theory of the underlying dynamics of particles is string theory, which has become the most popular particle model since the 1980s. The essential idea of string theory is to replace the point-like model of quarks and leptons with a one-dimensional elastic string. The different possible particles would then be the eigenmodes (possible quantized vibrations) of these strings and then— in principle—it ought to be possible to calculate particle properties the way spectral energies, intensities, and selection rules can be calculated for atomic orbitals. String theory that incorporates supersymmetry is often called superstring theory. String theory was sparked by the publication in 1968 by Gabriele Veneziano (1942–) of a new formula that was very successful at describing the scattering of strongly interacting particles. Over the next few years several other theorists including Leonard Susskind (1940–) realized that the Veneziano formula suggested that the force between quarks behaved rather like a quantized string, and they developed the idea in more detail. An important addition to the string picture is that one can think of both open strings and closed, looplike strings. The two ends of an open string could be an electron-positron pair, and when they annihilate they form a closed loop, or photon. With further refinements string theory began to resemble a fundamental theory of all particles, not merely another way of describing the color force. 
Strings obey a very elegant law of motion: closed loop strings trace out tubes in spacetime that move in such a way as to minimize the area of the tube. Furthermore, strings seemed to automatically allow for the existence of the elusive graviton, the hypothetical quantum of the gravitational field. 130The Quantum Revolution Despite their promise, these ideas were not taken very seriously at first. In 1984 there occurred the “first superstring revolution,” when John Schwarz (1941–) and Michael Green (1946–) proved that string theory is mathematically consistent so long as it is 10-dimensional. This discovery sparked a nearstampede of physicists to superstring theory, and soon most particle theorists, especially of the younger generation, were working on it. The notion of 10 dimensions, 9 spatial and 1 for time, may seem bizarre. We only observe 3, says the string theorists, because the rest are compactified, meaning roughly that they are rolled up into tiny tubes too small to be observed. Unfortunately, there was no one obvious way to do this, and the door was opened to many possible string theories, which soon seemed to proliferate more rapidly than hadron resonances. It was in the 1980s that string theory began to be criticized by some senior physicists, such as Feynman and Glashow, who were unhappy that string theorists did not seem to be trying very hard to make any testable predictions. Despite these worries, string theory has continued to thrive. Almost everyone who works in particle theory these days is a string theorist, and string theorists often display a remarkable confidence that they are just a few complex calculations away from the Theory of Everything. This is despite the fact that up to now virtually the only evidence in favor of string theory is its theoretical consistency, and in particular the fact that it seems to provide a natural place for the graviton. However, one needs a very high level of mathematical training to appreciate these facts. This in itself is not necessarily a sign that something is wrong with the theory, however; theoretical physics has always been difficult. Very recently intense controversy about string theory has flared up again. It has been heavily criticized by several prominent physicists, notably the distinguished particle and gravitational theorist Lee Smolin (1955–). He and others argue that string theory is an approach to elementary particle physics that initially had a lot of promise, but that has become an enticing dead end, the modern equivalent of the epicycles of pre-Copernican astronomy which could explain everything but predict nothing. The grounds of the criticism of string theory are simple: string theory has so far made virtually no testable quantitative predictions and there is therefore no way to check whether the theory is on the right track. All of the other successful advances in quantum physics described in this book were recognized as advances precisely because they were able to two things. First, they could explain facts that previously could not be explained, such as the structure of the hydrogen spectrum. Second, they also predicted phenomena that no one would have thought of looking for without the theory, such as Bose-Einstein condensation and various elementary particles such as the neutrino and Gell-mann’s omega-minus. There are two reasons for string theory’s lack of predictive success. 
Both the defenders and the critics of string theory will agree that since the mid-1980s it simply has not been possible to test experimentally the predictions of any theory, string or otherwise, that attempts to predict phenomena beyond the energy limits of the Standard Model. Following the cancellation of the Superconducting Supercollider there have not been any accelerators powerful enough to do the job. If energies in the 10 TeV range could be probed it would immediately eliminate many candidate particle models, or perhaps uncover something entirely new that would force current theory to change direction. The other problems with string theory are theoretical. First, in strong contrast to quantum mechanics in its early years, string theory cannot calculate definite predictions in many cases; for instance, there is no way to calculate the mass of an electron or proton, or derive any of the numerous empirical constants needed to make the Standard Model work. The mathematics is just too hard. Second, string theory turns out to be strongly under-determined, in that there are in fact innumerable string theories (Smolin estimates 10^500), all equally mathematically valid. This has encouraged some string theorists such as Susskind to postulate the existence of a "landscape" of possible string theories, with the one that works for our world being essentially an accident. Smolin charges that at this point the theory has lost most of its contact with reality. String theorists say, give us more time. Smolin and other critics say, you've had time enough, and more funding should be given to alternative approaches. The author of this book once heard a talk given by the distinguished particle theorist Howard Georgi. He joked that in the absence of experimental data, theoretical physicists get excited into higher and higher flights of theoretical fancy like so many elementary particles pumped into excited energy states, but when experimental results come along (perhaps confirming a few theories, but likely falsifying most) the theorists fall back into their humble ground states. Perhaps the Superconducting Supercollider would have been able to settle a lot of speculative dust, but it was cancelled. By the time this book is in your hands, the LHC at CERN will be up and running and may have generated some data that will cause string theorists to fall back into their ground states again, no doubt emitting many papers in the process. In 1995, Edward Witten (1951–), today's leading string theorist, proposed that there is a yet-to-be-discovered theory behind string theory, which he called M-theory, an 11-dimensional picture that would unify the different versions of string theory and perhaps serve, in effect, as the long-sought Theory of Everything. (Witten's proposal is sometimes called the second superstring revolution.) But Witten does not yet know exactly how M-theory would work, any more than Born knew how quantum mechanics would work when he argued, in the early 1920s, that there had to be such a theory. Perhaps Witten will find the key himself, or perhaps he must wait for his Heisenberg, whoever that will be, to see another sunrise on Heligoland.
"The Most Profound Discovery of Science"

Throughout the 1950s and 1960s the majority of physicists who used quantum mechanics applied it to an ever-growing variety of areas in pure and applied physics—elementary particle physics, semiconductors and electronics, chemistry and biochemistry, nuclear physics, masers and lasers, superfluids, and superconductors. But a few physicists and philosophers of physics, including some of the highest ability, continued restlessly to probe the foundations of the theory. They realized that the foundational questions that had been raised by Einstein, Schrödinger, and others in the 1930s had merely been set aside, mostly because of World War II and the excitement of the new developments in particle theory and quantum electrodynamics, but had not been solved.

David Bohm: The Search for Wholeness

David Bohm (1917–1992) was born in Wilkes-Barre, Pennsylvania, and did his doctoral work under the direction of J. Robert Oppenheimer (1904–1967), the "father" of the atomic bomb. Bohm possessed an exceptional combination of physical intuition and mathematical ability, and a deep fascination with the foundational problems that many other physicists preferred to ignore. Shortly after World War II Bohm wrote a paper in which he laid out the basic ideas of the renormalization theory that was soon to be developed so successfully by Feynman, Schwinger, and Tomonaga. However, when he submitted his paper to Physical Review it was rejected after a critical referee report by Pauli, and Bohm let the idea drop. In the late 1940s Bohm made important contributions to plasma physics. This is the study of gases that are so hot that they become a soup of ionized particles and respond collectively to electromagnetic fields in complex ways that are still poorly understood. Bohm thought deeply about the basis of quantum mechanics, and in 1951 he published a text on quantum theory in which he explained the EPR paradox in a novel way. He described the thought experiment in terms of spin measurements on particles such as electrons. Pairs of electrons are emitted at the source, and measurements on them have to obey certain global conservation laws (for example, their total spin remains constant), just as in the original EPR experiment. They travel down the arms of the apparatus and encounter Stern-Gerlach detectors, which can be set at various angles to measure their spins. Bohm's spin-measurement version of the EPR experiment paved the way to versions of the EPR experiment that could actually be performed.

Figure 10.1: David Bohm. Library of Congress, New York World-Telegram and Sun Collection, courtesy AIP Emilio Segre Visual Archives.

Bohm's text of 1951 states the orthodox Copenhagen view in an especially clear way, but Bohm was very unsatisfied with the claim that no deeper account could be given of quantum mechanics. In 1952 he published a monumental paper in which he advanced what still is by far the most thoroughly worked-out causal interpretation of quantum mechanics. Bohm's interpretation of quantum mechanics is a "no-collapse" pilot wave theory, depending in part on mathematical steps very similar to those taken by Madelung and de Broglie years earlier. He showed that hidden in the structure of Schrödinger's Equation there exists a curious nonlocal force field that Bohm called the quantum potential.
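In the standard textbook presentation of Bohm's 1952 construction (the notation below is an editorial addition; the book itself does not display the formulas), the wave function is written in polar form, and substituting that form into Schrödinger's Equation yields an extra energy term, the quantum potential Q, together with a guidance rule for the particle velocity:

\[
\psi(\mathbf{x},t) = R(\mathbf{x},t)\,e^{iS(\mathbf{x},t)/\hbar},
\qquad
Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}R}{R},
\qquad
\mathbf{v} = \frac{\nabla S}{m}.
\]

Because Q depends on the shape of the amplitude R (through its second derivative) rather than on its overall strength, it need not fall off with distance, which is the mathematical root of the nonlocal behavior described in the paragraphs that follow.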
All particles in Bohm's theory have continuous trajectories just like classical objects, and the Heisenberg Uncertainty relations are purely statistical uncertainties that do not imply that particles do not have exact positions and momenta. There is a rule called the guidance condition, which sets the initial positions of the trajectories like the gates that line up racehorses before the gun is fired. The quantum potential maintains the particle correlations once the race has begun. Bohm set the guidance condition in such a way as to make the particles behave the way they ought to according to orthodox theory, because he wanted to prove that a hidden variables theory could reproduce the predictions of standard wave mechanics. However, there could conceivably be other guidance conditions; this possibility does not seem to have been explored. The quantum potential is a function of phase differences in the wave functions of the correlated particles; it can therefore be distance-independent, depending on the phase structure of the system. It has physical units of energy; therefore, part of the total energy of systems of elementary particles is tied up in the quantum potential of their wave functions. It can be shown that if the wave functions of the particles are entangled, then the quantum potential is entangled as well—which means that it cannot be broken up into separate energies linked to each of the localized particles in the system. The quantum potential, in other words, is a nonlocal form of energy, a property of the entangled state as a whole. It is like the energy of an atomic orbital, which is not localized until the orbital decays and emits a photon in a particular direction. A further peculiar feature of the quantum potential is that it implies the existence of a nonlocal force (although one of a rather complicated mathematical structure), because any potential, when differentiated with respect to distance, gives a force. And yet most physicists are very reluctant to draw this conclusion; like Einstein, they hesitate to take seriously any sort of "spooky action at a distance," and tend to change the subject when this topic comes up in discussion. De Broglie realized that Bohm had found ways around most of the objections that had been raised to his own, less developed causal theory. Bohm showed that the answer to Pauli's worry about scattering was that the outgoing wave will in fact break up into packets that correspond to the motion of the outgoing particles. De Broglie regained confidence in his old approach and developed a relativistic generalization of Bohm's theory. A defect of Bohm's theory is that (like the Schrödinger Equation on which it is based) it treats time and space classically and is valid only for nonrelativistic velocities. De Broglie found a way of writing causal quantum mechanics in a covariant, four-dimensional format. De Broglie's approach has been largely ignored, perhaps unfortunately so, and Bohm himself was uncomfortable with it because it predicts an odd kind of backwards causation (i.e., events in the future influencing events in the past). Apart from the interest shown by de Broglie and a colleague of de Broglie's, Jean-Pierre Vigier (1920–2004), the reaction to Bohm's theory in the months and years immediately after its publication was almost entirely negative.
Even Einstein, who was personally sympathetic to Bohm, and who might have been expected to welcome Bohm's apparent restoration of determinism, rejected Bohm's approach as "too cheap" (Cushing 1994, p. 146). What Einstein apparently meant by this was that one can rather easily account for distant quantum correlations if one imagines that some sort of odd action at a distance connects the particles, but that this would be unacceptable simply because any sort of action at a distance is unacceptable. The criticism directed by Pauli and other orthodox Copenhagenists toward Bohm was scathing. J. Robert Oppenheimer, who was at that time director of the Institute for Advanced Study in Princeton, commented that "if we cannot disprove Bohm, then we must choose to ignore him" (Peat 1997, p. 133). Part of what motivated Oppenheimer's cynical remark was that Bohm had refused to testify against friends to the House Committee on Un-American Activities, and in the rabidly anti-Communist climate of the time it was politically risky to be seen supporting Bohm. Bohm lost his position in Princeton and ended up teaching in São Paulo, Brazil. Later he worked in Israel and ended his career at Birkbeck College in London. In Israel, Bohm and his student Yakir Aharonov (1932–) demonstrated the existence of an odd manifestation of quantum nonlocality now called the Aharonov-Bohm Effect. They showed that a magnetic coil can influence interference fringes of electrons in a double slit experiment even though the magnetic field is zero outside the coil, in the region through which the electrons actually pass. Toward the end of his life, Bohm explored a new interpretation of quantum mechanics that he called the "implicate" or "enfolded order." Suppose the three-dimensional physical space that we ordinarily experience is, in fact, merely a virtual reality like the world in a computer game. Think of what it is like to play "Space Wars," a computer game that particle physicists used to run on their PDP-12s late at night when they were supposed to be analyzing bubble-chamber data. In this game the planets and spaceships seemingly obey Newton's laws of motion and his law of gravitation, and in order to win you have to steer your rocket and launch your missiles in accordance with these laws. The little spaceships on the screen are not really moving under the influence of gravitation or inertia, however, but rather under the command of the instructions encoded in the computer program that defines the game. The program is really what is in control, not Newton's laws; the behavior of the rockets on the screen is merely a logical implication of the inner program. Bohm hinted that maybe something like this is happening in the real world as well. Bohm did not mean to suggest that ordinary physical reality is just a computer game; rather, he meant that there might be some inner logic, some program or set of laws controlling the world, that is not inherently in space and time at all, but that produces space and time as a byproduct of certain rules that we have yet to uncover. The inner rules would not look at all like ordinary laws of physics as we presently understand them, although perhaps we could work out those laws from the rules of quantum mechanics that we presently are familiar with. If anything like this is true, then there might be perfectly deterministic "code" underlying the apparently random and indeterministic behavior of quantum mechanical particles. Suppose an electron is emitted from point A, and could be absorbed at either point B or point C.
In general there is a probability that it will appear at either B or C, and if we try to assume that the electron follows a definite classical trajectory after its emission from A, we will in many cases get a wildly incorrect answer for the probabilities that it will arrive at B or C. It is as if the electron simply jumps from A to either B or C according to its own mysterious whim. (Recall Schrödinger's disgust at this "damned quantum jumping.") The electron's path is both discontinuous (because of the jump) and indeterministic (because given everything we can know about how it was emitted from A, we cannot tell for sure whether it will end up at B or C). (Don't forget, on the other hand, that the probability amplitude evolves deterministically, according to Schrödinger's Equation.) Now, if there is an "inner program" that controls how the electron jumps around, then the jumping is deterministic after all—there is one and only one way the electron can jump—even if we humans have no direct access to the program itself. After all, the programmer of "Space Wars" could have allowed the rockets in the game to make hyperspace jumps as well, if he had wanted to. The implicate order might also help to explain nonlocality. Imagine a fish swimming in an aquarium. The image of the fish can be refracted (bent) through the water in such a way that if we look at a corner of the aquarium we might seem to see two strangely similar fish performing closely correlated motions. But in fact there is only one fish. Perhaps in an EPR experiment, when we seem to see two particles behaving in a way that is more correlated than local physics could allow for, there is really only one particle encoded in the "inner" order. Bohm never worked out a detailed mathematical theory of the implicate order, and it remains a tantalizing suggestion. But if it could be made to work it might restore some semblance of the determinism sought by de Broglie and Einstein, although at the price of reducing ordinary space and time to a sort of virtual reality. Was Bohm right that we can underpin the probabilistic predictions of quantum theory with a deterministic, realistic theory so long as we are willing to use the quantum potential? Is it really true that quantum physics is just classical physics with a quantum potential added in? It would be as if everything really commutes after all; if Bohm were right, the only reason conjugate quantities do not seem to commute is because of unavoidable statistical fuzziness caused by the quantum potential of the apparatus interfering with the quantum potential of the system under observation. Some very recent work by a number of physicists suggests that the quantum potential itself is a mathematical consequence of quantum uncertainty; in other words, if there were no uncertainty, there would be no quantum potential. If this is correct then we could hardly hope to explain quantum uncertainties on the basis of the quantum potential, and Bohm's theory would have to be treated as yet another very useful semiclassical approximation to the "true" quantum theory that still eludes us. But these investigations remain in a very preliminary stage. Bohm made us aware, as no one else had, of the fact that quantum nonlocality applies to the dynamics of quantum systems as well—that is, the energy of quantum systems, especially entangled states, is nonlocal.
Philosophically, perhaps the most important lesson that Bohm taught us (apart from the fact that the most respected experts in a field can sometimes be wrong) is the unbroken wholeness of the physical world. Although Einstein deplored the fact, it seems that quantum mechanics shows that nothing is ever entirely separate from everything else. This is a physical fact that we have yet to fully acknowledge, let alone understand. Whether it validates any particular religious or mystical view is an entirely different question.

Bell's Theorem Tolls

John S. Bell (1928–1990) was an Irish-born particle physicist who made what American physicist H. P. Stapp famously called "the most profound discovery of science" (Bub 1997, p. 46).

Figure 10.2: John S. Bell (on the left) with particle theorist Martinus Veltman. CERN, courtesy AIP Emilio Segre Visual Archives.

From Bell's writings one gets the impression that he was the sort of person who did not like to take things on authority. Although much of his work was on conventionally respectable physics, he thought deeply about the foundations of quantum mechanics. In particular, Bell wondered whether Einstein, Podolsky, and Rosen could have been right when they hinted at the completion of quantum mechanics by a theory of hidden variables. John von Neumann had produced a proof that a hidden variables interpretation of quantum mechanics was mathematically impossible. It was hard to imagine that von Neumann could have made a mathematical mistake, and yet Bohm seemed to have done precisely what von Neumann said was impossible: he had shown that quantum mechanics (at least, nonrelativistic wave mechanics) was consistent with a picture in which particles do have definite trajectories, so long as their motions are coordinated by the quantum potential and the guidance condition. Bohm had therefore shown that a hidden variables picture of quantum mechanics is indeed mathematically possible, but in the way that Einstein himself would least have liked—namely, by invoking nonlocal (that is, faster-than-light) dynamics. So Bell set out to solve two problems. First, where had the usually impeccable von Neumann gone wrong? Second, and much more important, did any hidden variables completion of quantum mechanics have to have this disturbing nonlocal character? In 1964, Bell published a short paper in which he demonstrated the result that had so impressed Stapp: the mathematical predictions of quantum mechanics for entangled EPR states are inconsistent with local realism, which is the supposition that elementary particles could be programmed at the source with instructions that would be sufficiently complex to tell them how to respond to all possible experimental questions they could be asked in such a way as to give the predicted correlations of quantum mechanics, but without the particles being in any sort of communication after they leave the source. In short, if there is any explanation at all of how entangled particles stay as correlated as they do, it has to be nonlocal. In order to demonstrate this result, Bell adapted David Bohm's 1951 version of the EPR experiment, in which spin measurements are made on pairs of electrons emitted from a source in an entangled state called the singlet state.
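The singlet state itself is not written out in the text; in standard notation (an editorial addition, with A and B labeling the two electrons) it is

\[
\lvert \psi \rangle \;=\; \frac{1}{\sqrt{2}}\Bigl( \lvert \uparrow \rangle_{A}\lvert \downarrow \rangle_{B} \;-\; \lvert \downarrow \rangle_{A}\lvert \uparrow \rangle_{B} \Bigr).
\]

Measured along any common axis, the two spins are always found opposite, while each spin taken on its own looks like a fair coin toss; Bell's argument turns on how these correlations change as the detector angles are varied.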
Bell took the novel step of considering correlations taken with more than one combination of detector settings, and he showed how to define a correlation function, which expresses the relations between the measurement results at each detector in a simple way. Bell showed on very general mathematical grounds that if local realism applies—that is, if each particle can carry within itself all the information it needs in order to know how to respond to all the experimental questions it could be asked—then the correlations must obey a certain mathematical inequality, now called Bell's Inequality. Bell then calculated the correlation coefficients using standard quantum mechanics, and from his result it is easily shown that the quantum mechanical prediction violates the Bell Inequality. In other words, no local hidden variables theory can explain quantum correlations, but nonlocal theories (theories that countenance some sort of faster-than-light influence between the distant particles) are not ruled out. Several physicists in the 1970s generalized Bell's Inequalities and tried to confirm his predictions directly by experiment. It is not an easy experiment to do, and the first experimental confirmation of Bell's Theorem that is considered to be decisive was performed by Alain Aspect (1947–) and coworkers in the early 1980s. Correlated photons are sent in opposite directions along the arms of the device, where they strike polarizers. Very rapid switches change the relative angle of the polarizers while the photons are in flight, presumably ensuring that no information about one detector setting can reach the photon in the other arm of the apparatus before it hits its own detector. The Aspect experiment was thus a delayed choice experiment, which means that the choice of detector setting is made after the particles leave the source. The Aspect experiment confirmed the quantum mechanical predictions to high accuracy.

Figure 10.3: The Aspect Experiment. Pairs of entangled photons are emitted from the source S. Polarization filters F1 and F2 randomly either reflect or transmit the photons after they are emitted from S. Photomultipliers P1 and P4 detect transmitted photons, and P2 and P3 detect reflected photons. The coincidence counter C keeps track of the correlations between reflected and transmitted photons. (Many details are omitted!) Illustration by Kevin deLaplante.

Since then there have been many further tests of Bell's Theorem; in all cases quantum mechanics violates the Bell Inequality appropriate to the experimental design. The author of this book once heard the distinguished physicist Gordon Fleming of Penn State University reminisce about the period between the publication of Bell's prediction and its confirmation by the Aspect experiments. Fleming, a field theorist with strong interests in the problems connected with nonlocality, observed that during the 1960s and 1970s many physicists of his acquaintance were almost "schizophrenic" (his term) in their attitude toward Bell's Theorem: they very much wanted Bell to be proven wrong because they thought that nonlocality was crazy; on the other hand, they used quantum mechanics in their work all the time and they knew perfectly well that it, and Bell, would be proven right. Now Bell has been proven right, but the desire to have it both ways—to have quantum mechanics and to have physics be local—remains. The title of this section is (with apologies) a double pun that appears in many variants in the foundations of quantum mechanics literature. It is based on the famous phrase "for whom the bell tolls," used by novelist Ernest Hemingway and originally penned by the English seventeenth-century poet John Donne. One meaning of the pun is that Bell's Theorem tolls to mark the refutation of local realism; another sense of the joke is that the logic of Bell's argument is an example of a valid argument structure called modus tollens, which has the form: If p then q; not-q; therefore not-p. Modus tollens is the logic behind scientific falsification, whose importance was emphasized (some would say over-emphasized) by philosopher of science Karl Popper (1902–1994). The way falsification works is simple: a theory makes a prediction; if experiment and observation show that the prediction is false, then the theory is false and requires repair in whole or part. Science is perpetually correcting itself by means of the feedback from experiment to theory. Bell showed that local realism predicts that quantum mechanical correlations will satisfy the inequalities that now go by his name; Aspect and many others demonstrated that the inequalities fail experimentally; therefore local realism is false. It is still unclear what, if anything, we can replace it with. It seems unfortunate that Bell did not receive the Nobel Prize—or should we say "no-Bell" Prize?—before his sudden death in 1990. (There are in fact two no-Bell prizes in physics, since many people feel that British astronomer Jocelyn Bell should have won the Nobel for her part in the discovery of pulsars in 1968.) Bell's Theorem can also be described as a failure of common cause explanations of quantum correlations. Suppose Alice and Bob happen to be siblings, and suppose that at 12:00 noon on a certain day they are standing next to each other. A friend notices that there are strong facial resemblances between Alice and Bob. These need not be due to any influence from Bob to Alice or vice versa at 12:00 noon. The similarity between their features could be due primarily to their genetic heritage, which can be traced back to a common cause in the past—in this case, their parents. Now suppose that Alice says, "Hello, Bob!" and Bob replies, "Hi to you, Alice." Bob's reply to Alice is not due to anything in his heritage alone, but requires for its full explanation the fact that Alice spoke to him. In the Bohm-EPR-Bell experiments, it is just as if the distant particles are "speaking" to each other, for they behave in ways that cannot be fully explained in terms of their "heritage" at their common source. One might ask why it is not possible, no matter how unlikely, that there is some oddity in Alice and Bob's common heritage such that Alice just happens to say "Hello, Bob!" at noon on a certain day, and Bob just happens to reply as he does. What Bell proved is that while this might work for people, it is mathematically impossible to explain the apparent communication between the quantum particles this way. The fact that the two particles in the EPR experiment were emitted from a common source in a certain well-defined quantum state cannot be sufficient explanation for all of the details of how they behave when they encounter the detectors. Finally, why had von Neumann made what Bell was later to characterize as a "silly" mistake? He certainly had not made any errors in his calculations.
Rather, Bell showed that von Neumann had made a crucial assumption that ruled out from the beginning the very sorts of functions he needed to consider. In other words, he had implicitly assumed the proposition he had set out to prove. This error is technically known as a "circular argument," or "begging the question," and it is one of the easiest conceptual errors to fall into whenever the proposition we are supposed to be proving is something we are so convinced of that we don't quite know how to think without it. There is still a small but active literature that seeks to find loopholes in Bell's argument or the experiments verifying his theorem, but none of those that have been suggested has been generally convincing. (Some critics of Bell have argued that detector inefficiencies could somehow be giving a false impression that the Inequalities are violated.) The problem for critics of Bell's argument is not only the very solid experimental evidence in support of his predictions, but the fact that the calculation of the inequality-violating correlation coefficients follows directly from the core formalism of quantum mechanics. If Bell were wrong then the quantum mechanics that has worked so well since 1926 is deeply wrong, and that just does not seem to most physicists to be a likely possibility. It is important to see that the confirmation of Bell's Theorem is not necessarily a vindication of causal interpretations of quantum mechanics such as those proposed by de Broglie or Bohm. It is, by itself, strictly a negative result: it rules out any sort of locally realistic explanation of the correlations of entangled states, but it does not, by itself, tell us what actually makes those correlations come out the way they do. Based on what is known today, it is logically possible that entanglement could have no explanation at all beyond the mathematical formulas that predict its manifestations. And some physicists prefer this way of thinking about it because then they do not have to take "spooky action at a distance" seriously.

Is There a "Bell Telephone"?

Peaceful Coexistence

Bell himself admitted that he found his theorem to be profoundly disturbing, because it seems as if the correlated particles are connected by some sort of influence moving faster than light, which Bell feared would imply that the theory of relativity might be wrong. Bell was therefore another case (like Planck, Schrödinger, and Einstein) of an innovator in quantum physics who was unhappy with what he had discovered because it shattered his conservative expectations of the way physics should be. The prevailing view since the late 1970s is that despite the threat of quantum nonlocality, relativity and quantum mechanics stand in a relation of what philosopher of physics Abner Shimony (1928–) ironically called "peaceful coexistence" (Shimony 1978). The phrase "peaceful coexistence" was borrowed by Shimony from the sphere of international relations, and it suggests a state of mutual tolerance between political jurisdictions (such as the United States and the former Soviet Union) whose underlying ideologies are utterly at odds. Shimony and several other authors in the 1970s and 1980s argued that peaceful coexistence between relativity and quantum mechanics is assured because of the no-signaling theorem, which claims that one cannot use quantum nonlocality to signal controllably faster than the speed of light.
Shimony, with tongue in cheek, suggested that quantum nonlocality should be called not action at a distance, but passion at a distance. Shimony (building on work by philosopher of science Jon Jarrett) made a careful distinction between what he called Controllable Nonlocality (sometimes also called Parameter Dependence), which would be the ability to control the nonlocal influence by means of local detector settings, and Uncontrollable Nonlocality (sometimes called Outcome Dependence), which is the demonstrated fact that correlations in entangled systems cannot be explained by common causes. Shimony and most other authors believe that Controllable Nonlocality is ruled out by the no-signaling theorems, and that Uncontrollable Nonlocality is sufficient to explain the violations of Bell's Inequalities. In order to see what sort of information transmission (or "transmission") is possible with an EPR apparatus, let us arrange an EPR setup as follows: Alice and Bob will be at rest with respect to each other, but a large distance apart, and equipped with highly efficient Stern-Gerlach detectors. We will put a source of pairs of correlated electrons exactly halfway between Alice and Bob, and also at rest with respect to them, and we will have the source emit entangled pairs of electrons at regular intervals. When an electron enters the magnetic field of the Stern-Gerlach device it will be deflected either up or down. Alice and Bob will record the results as they receive particle after particle, and they may from time to time change the angles of their detectors. What each experimenter will see will be an apparently random sequence of ups and downs, like a series of coin tosses. It will turn out that if they compare results after a long experimental run, they will find that the correlations between their results will violate a Bell Inequality. (For electrons the correlation coefficient is given by −cos θab, where θab is the angle between Alice's and Bob's detectors.) This means that it is mathematically impossible for their results to have been due to preexistent properties of each electron that they detected. It seems as if information is being transmitted or exchanged, faster than light speed, between the electrons in each pair of particles. This fact gives Alice and Bob an idea: is there any way that they could send messages to each other faster than the speed of light using the EPR apparatus? Suppose that they try to test this by making the following arrangement: Bob will hold his detector at a constant angle throughout the experimental run, while Alice will turn her detector back and forth in such a way that the correlation coefficient jumps from 1 to 0 so that she can spell out a message in Morse Code. Will Bob be able to read the message? No; the most we could say is that Bob would probably detect a different random sequence of results than he would have received had Alice not manipulated her detector, but there is no way for Bob to tell this from his local measurements alone. The violations of locality appear only in the correlations between Alice's and Bob's results. Quantum signaling with an EPR apparatus can be compared to a telephone line over which all Alice and Bob hear is static, and yet when Alice tries to speak to Bob this somehow induces correlations in the static.
This means that if we made a recording of the apparently meaningless crackles received by Bob, and compared them to the apparently meaningless crackles heard by Alice, we would find that the crackles were correlated in a nonrandom way, such that it would be possible to decode what Alice was trying to say by comparison of her noise with Bob's noise. In other words, while Alice cannot send a message directly to Bob, she can encode a message in the correlations, and this fact is the basis of quantum cryptography. The key point is that it is possible for Bob to read the message, but he has to have Alice's results in order to do so, because the message, as noted, is built into the correlations; his results and hers, by themselves, still look like purely random sequences of ups and downs. And the only way that Bob can get Alice's results is by normal, slower-than-light means of transmission. Now at this point Alice gets annoyed and decides to try something drastic. She introduces some extra magnets into the apparatus in such a way that she can force the electrons she receives always to go either up or down at her command. Surely, she reasons, if she and Bob have their detectors set at a suitable relative angle, his electrons will go down whenever hers go up, and vice versa, and she can send him a message. She finds, to her dismay, however, that if she tries to do this, the Bell-Inequality-violating correlations disappear, and Bob just gets uncorrelated static no matter what Alice does with her detector. It is exactly like the double slit experiment, where if we try to determine which slit the electrons go through, the interference pattern disappears. The no-signaling theorem is the statement that this will always happen: the general laws of quantum mechanics guarantee that there is no arrangement of detectors that will allow Alice to utilize quantum nonlocality in order to send Bob a message faster than the speed of light.

A Quantum Fly in the Ointment

Numerous authors from the 1970s onward have published versions of the no-signaling theorem, and probably most physicists consider it to have been established beyond a reasonable doubt. However, there is a small but growing band of dissenters who question the logic of the no-signaling proofs. Briefly, the critics are worried that the standard no-signaling proofs rely upon specialized or restricted assumptions, which no doubt seemed reasonable to their authors, but which automatically rule out signaling from the beginning without giving the possibility a fair hearing. Such arguments, say the critics, simply assume what they had to prove and therefore do not rule out faster-than-light signaling in general. For example, many no-signaling proofs depend crucially on the assumption that the energy of entangled states is localized to the particles. However, as noted in the last chapter, Bohm showed that this is incorrect, although even Bohm himself did not seem to have fully grasped the implications of his own discovery. It is quite possible that the existence of the quantum potential does, in principle, allow for the possibility of a "Bell telephone," although no one at this stage has the slightest idea how it might actually be constructed.
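Whatever the fate of such speculations, the orthodox behavior described above is easy to check numerically. The short Python sketch below is an editorial illustration (none of its names or numbers come from the book): using the singlet prediction −cos θ quoted earlier, it samples outcome pairs from the quantum joint distribution, shows that a CHSH-style combination of correlations exceeds the local-realist bound of 2, and shows that Bob's own up/down frequencies stay at 50 percent no matter how Alice sets her detector, which is exactly the no-signaling point.

# Editorial sketch: singlet correlations violate a Bell (CHSH) bound,
# yet Bob's local statistics carry no trace of Alice's detector setting.
import numpy as np

rng = np.random.default_rng(0)

def sample_pair(a, b, n):
    """Draw n outcome pairs (+1/-1) for detector angles a, b (radians) on a singlet."""
    p_same = np.sin((a - b) / 2) ** 2          # quantum probability the outcomes agree
    same = rng.random(n) < p_same
    alice = rng.choice([1, -1], size=n)        # Alice's outcome is an unbiased coin
    bob = np.where(same, alice, -alice)        # Bob agrees or disagrees accordingly
    return alice, bob

def E(a, b, n=200_000):
    alice, bob = sample_pair(a, b, n)
    return np.mean(alice * bob)                # estimated correlation, ~ -cos(a - b)

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
chsh = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("CHSH value:", chsh)                     # magnitude ~ 2*sqrt(2) = 2.83, above the bound of 2

# No signaling: Bob's marginal stays ~0.5 whichever angle Alice chooses.
for alice_angle in (0.0, np.pi / 3, np.pi / 2):
    _, bob = sample_pair(alice_angle, b1, 200_000)
    print("Alice at", round(alice_angle, 2), "-> Bob's spin-up fraction:", round(np.mean(bob == 1), 3))

Note that the sampling simply reproduces the quantum joint distribution for each fixed pair of settings; it is not itself a local hidden-variable model, which is precisely the kind of model Bell showed cannot exist.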
Causal Loopiness

Figure 10.4: Bob Phones Alice on the Bell Telephone. In (a), Bob phones Alice faster than light. Alice's return call can arrive at Bob's worldline before he made his call, allowing for paradoxes in which Alice prevents Bob from calling her if and only if she does not. In (b), there is no risk of paradox if there is a "preferred" frame S which limits the speed of faster-than-light interactions. Illustration by Kevin deLaplante.

A major worry about the possibility of faster-than-light signaling is that it might allow for causal paradoxes of an especially nasty sort. Suppose Alice and Bob are moving away from each other at very high speed, suppose they have quantum EPR devices that can signal at any speed faster than light, and suppose Bob wants to ask Alice out for a date. (We'll drop the assumption that they are siblings.) He sends a faster-than-light signal at point b2 which reaches Alice at point a1. What Bob does not know is that Alice has doctored Bob's quantum sending apparatus in such a way that she can turn it off with one of her own faster-than-light signals. Alice decides that she does not want to accept Bob's offer, and so she sends a signal to Bob that reaches his worldline at the earlier point b1 and turns off his sending apparatus. It is therefore impossible for him to transmit his request at b2. But wait!—Alice would never have sent her signal from a1 to b1, and thereby turned off Bob's apparatus, unless she had received Bob's signal from b2. So Bob's apparatus gets turned off at point b1 if and only if it is not turned off, and this is a logical contradiction. Some physicists believe that the risk of such paradoxes, which apparently could occur whenever there is a closed causal loop, is sufficient to rule out the possibility of faster-than-light transmission of information, especially of a controllable kind. However, there are a few conceivable ways around the paradox. Suppose it is not the case that Alice and Bob's faster-than-light signals could be sent at any velocity. Suppose there is a velocity that, although much faster than light, is still a maximum velocity for faster-than-light quantum signals. If Bob sends his message from b1, then Alice's return message, no matter how quickly she tries to send it, will reach Bob's worldline at a point b2, which is later than b1 according to Bob's proper time, since neither her signals nor his can go below the line S. (See Figure 10.4 (b).) There is no risk of paradox. Many physicists are uneasy about this scenario, because the existence of S may involve a subtler violation of the principle of relativity in that it apparently defines a "preferred frame" in which the laws of physics might take a special form. On the other hand, it could be that the precise angle that S takes as it cuts through spacetime could be determined by cosmological factors, in which case there would be no violation of relativity so long as there was a proper four-dimensional description of the process. However, no one has worked out a detailed theory showing how quantum "signals" would be guided by influences from the whole universe. A very recent model by Aharonov, Anandan, Maclay, and Suzuki (2004) seems to allow for a limited sort of nonlocal signaling, but this is still under investigation and has not yet been digested by the physics community. Aharonov's model depends on the (still controversial) possibility of "protective measurements" that do not fully collapse the wave function, and these are not covered by the standard no-signaling arguments.
Bell, Boole, and Pitowsky

A deep logical and mathematical analysis of Bell's Theorem was carried out in the late 1980s and early 1990s by the Israeli philosopher of physics Itamar Pitowsky (1950–), who showed that the Bell Inequalities are in fact special cases of mathematical inequalities first written down by the great British mathematician George Boole (1815–1864) in the 1850s. Boole was one of the founders of modern symbolic logic. The mathematics of any physical or mathematical system that has two but only two distinct "truth" values (which we can call True and False, or 1 and 0) is called Boolean algebra. Boole tried to define what could be meant by the notion of logical consistency, and he showed that it can be expressed mathematically by means of inequalities on correlation coefficients. This is simpler than it sounds. Suppose Bob and Alice are examining the contents of a large urn or vat containing a large number of balls. The balls are made of several different materials and are colored differently. Bob and Alice remove the balls one by one, note their color and material, and toss them back in the urn again. Provided that nothing Alice and Bob do causes any changes to the composition or color of the balls, Bob and Alice will observe, as they build up statistics on the balls, that certain simple inequalities will hold. For instance, they will find that the frequency with which they pull out red wooden balls will be less than or equal to the sum of the frequencies with which they pull out red balls (of any composition) and wooden balls (of any color). (That's because there could be red balls made of other materials, or wooden balls of some other color.) This is an example of what Boole called a "condition of possible experience," and that's really all there is to the Bell Inequalities, although of course they can be expressed in much more general mathematical terms. Boole's inequalities could fail, however, if Alice and Bob altered the balls in various ways before they threw them back in the urn, or if the result that one person gets is somehow dependent on the result that the other person gets. Pitowsky showed that if local realism or common cause explanations hold, then observing the EPR particles is like examining balls from a Boolean urn without changing them. The quantum mechanical violation of the Bell Inequalities is thus a sign that quantum mechanics describes something that is inherently non-Boolean; that is, it is something that cannot have a fixed set of properties that are independent of how we investigate it. However, Pitowsky and many other contemporary authors balk at accepting the message of Bohm's causal interpretation, which is that there literally is a nonlocal force (almost like the "Force" of Star Wars) that permeates the whole universe and correlates quantum particles, no matter how far apart they may be. They prefer what might be called the "no-interpretation interpretation" of quantum mechanics: one can hope to find no deeper explanation of why particles are correlated in quantum mechanics than the mathematics of the theory itself. The non-Booleanity of quantum mechanics was evident as early as the first work on quantum logic by Birkhoff and von Neumann in the mid-1930s, but it was proven in an especially decisive way by Simon Kochen and Ernst Specker in 1967. There is now a class of results known as Kochen-Specker Theorems; such theorems are often called no-hidden-variables or "no-go" theorems.
Bell's Theorem of 1964 has been shown to be an example of a Kochen-Specker paradox when the quantum system under study is an entangled state spread out through space. Schrödinger outlined the essential point of the Kochen-Specker Theorem in his cat-paradox paper of 1935, although he did not give a formal proof. The Kochen-Specker Theorem is technically complex, but the upshot can be expressed fairly simply. Suppose we are studying a quantum-mechanical system such as a nucleus or pairs of electrons emitted from the source of an EPR device. There is a long list of experimental questions we could ask about the particles in these systems, which might include such questions as, what energies do they have? What are their spin components in various directions? What is their angular momentum? What are their electrical charges? To simplify matters, we could frame these questions in such a way that they must give either "yes" or "no" answers. For instance, we could ask questions such as, "Is the spin in the z-direction of Particle 1 up?" "Is the energy of Particle 6 greater than 0.1 electron volts?" and so forth. Now, write a long list containing all these possible questions about the system. These will include questions about noncommuting observables. The essential content of the Kochen-Specker Theorem is that (except for a few quantum mechanical systems with an especially simple structure) it is mathematically impossible to go through this list and assign either a "yes" or a "no" answer to each question in a way that would not lead to a contradiction, where a contradiction means having to answer both "yes" and "no" to at least one question on the list. (Mathematically, the problem of simultaneously evaluating all of the propositions on the list is something like trying to smooth out a hemisphere onto a flat surface without a fold; it can't be done!) To put it another way, there is no consistent valuation (assignment of truth values) to all the questions on the list; knowing the answers to some questions on the list precludes knowing the answers to other questions on the list. So our uncertainty about the values of some physical parameters belonging to quantum systems does not come about merely because we don't happen to know those values; it is because they cannot all have yes-or-no values at one go. This is what is meant by saying that the logic of quantum propositions is non-Boolean. To see how odd this is, compare it to a simple classical example. Suppose Alice wants to know how tall Bob is and what he weighs. Let's also say that Alice finds out that Bob is definitely taller than six feet. Alice naturally assumes that weighing Bob will not change the height that she just determined. But suppose that Alice measures Bob's weight and finds that he is definitely less than two hundred pounds, and then discovers that she no longer knows whether or not he is more than six feet tall; that is what the logic of quantum propositions is like. And it should be obvious by now that this is a consequence of noncommutativity: certain measurement operations cannot be performed independently of others. For instance, if the z component of spin of an electron is known with certainty, then the x component of spin could be either up or down with equal probability. The breakdown of Booleanity at the quantum level is due to the existence of noncommuting observables, and this in turn is due to the existence of Planck's quantum of action.
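For readers who would like to see the noncommutativity behind the "height and weight" analogy written out, here is a brief editorial sketch in Python (none of it appears in the book itself), using the standard Pauli matrices for a spin-1/2 particle.

# Editorial sketch: spin observables along z and x do not commute, and a state
# with definite spin-z gives 50/50 odds for a spin-x measurement (Born's Rule).
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

commutator = sigma_z @ sigma_x - sigma_x @ sigma_z
print(commutator)                 # nonzero matrix (equal to 2i times sigma_y)

up_z = np.array([1, 0], dtype=complex)                    # spin definitely "up" along z
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)       # eigenvectors of sigma_x
down_x = np.array([1, -1], dtype=complex) / np.sqrt(2)

p_up_x = abs(np.vdot(up_x, up_z)) ** 2
p_down_x = abs(np.vdot(down_x, up_z)) ** 2
print(p_up_x, p_down_x)           # 0.5 and 0.5: knowing spin-z exactly leaves spin-x a coin toss

Answering the z-question with certainty thus forces the x-question to have no definite answer, which is the small-scale version of the consistent-valuation problem the Kochen-Specker Theorem generalizes.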
The fact that quantum mechanical systems are inherently non-Boolean tells strongly against Einstein's hope of rewriting physics in terms of an independent reality, and it also throws into question the plausibility of causal theories of quantum mechanics such as Bohm's, which hope to underpin quantum statistics with an underlying Boolean mechanism. This is not to say that Bohm and de Broglie were entirely mistaken; in particular, their emphasis on the role of the nonlocal quantum potential could be quite important. But if the message of Kochen and Specker is as universal to quantum systems as it increasingly seems to be, then whatever is right about causal versions of quantum mechanics can only be approximately right. EPR sought to show that quantum mechanics is incomplete, by assuming that it could not be nonlocal. What is increasingly apparent is that quantum mechanics is both incomplete (because it is mathematically impossible for it to be any more complete than it is) and nonlocal.

Bits, Qubits, and the Ultimate Computer

The Ultimate Computer

From the mid-1970s onward physicists began to suspect that quantum mechanics could allow the construction of a radically new type of computer. The first design for a quantum computer was published by Paul Benioff in 1980. In 1982 Richard Feynman wrote an influential paper in which he approached the problem indirectly, by wondering whether an ordinary classical computer could simulate the behavior of a quantum mechanical system. He concluded that it could not, for two reasons. First, there is a barrier posed by complexity. In order to be useful, a computer has to be able to predict the behavior of systems that are more complex than it is. And yet, if we try to model the evolution of a number of particles, the complexity grows so rapidly as we increase the number of particles that any conceivable classical computer could not predict the behavior of most quantum systems as quickly as those systems can themselves evolve. Second, quantum entanglement (as in the EPR experiment) shows that quantum mechanical systems are using information to compute their own behavior that could not have been encoded within the particles themselves. And yet quantum mechanical systems quietly go on their way evolving according to the Schrödinger Equation, untroubled by Feynman's arguments. Feynman then inverted the problem, and suggested that it might be possible to construct a computer using quantum mechanical principles such as superposition to do calculations much more quickly than classical computers could, or perhaps solve some problems that classical computers cannot solve at all. The concept of quantum computing was generalized in papers published in 1985 by the British physicist David Deutsch (1953–). Deutsch outlined a theory of a quantum mechanical version of the Turing machine, the universal computer designed by Alan M. Turing (1912–1954), one of the pioneers of modern computing theory and logic. There are several ways to model a Turing machine as Turing originally conceived of it. The essential idea is that it is some sort of device with a memory register that records the internal state of the machine. The machine reads an infinitely long input-output device, which can be thought of as a tape. The tape is divided into discrete cells, and each cell has some data recorded in it.

Figure 11.1: Classical Turing Machine. C1, ..., C4 are possible computational circuits in the computing head of a Turing machine. All circuits are independent, and one is chosen at random for each computation. The probability of getting the output A is the sum of the probabilities that each possible path will be used for the computation. Illustration by Kevin deLaplante.
The information in the cells and in the machine's internal state is given in discrete, digital form. The machine is programmed with instructions aimed at performing a computation, and the instructions take the form of precise rules for replacing one bit of information in a cell with another, depending on what information is in the cell. When the machine arrives at the result it was programmed to get, it halts. (The machine might have been programmed to calculate the square root of an integer to some definite number of decimal places.) Turing showed that this seemingly mundane device is universal in that it can perform any algorithm whatsoever. (An algorithm is simply a definite set of rules or a recipe for carrying out a computation that produces a specific result.) All computers are logically equivalent to Turing's generalized computer, in the sense that whatever their differences in hardware they are doing no more computation than what the universal Turing machine can do. This implies that any algorithm that can be performed on one computer can be performed on another, although perhaps not as efficiently. Turing also showed that his machine, although universal, has one important limitation. It cannot always tell in advance whether or not it will halt. That is, presented with a given computational task, it cannot determine, before it tries to complete the task, whether it will be able to do so. The only way in general to find out whether a Turing machine can do a given computation is to run it on the machine and see what happens. (There are many relatively simple problems for which the halting problem can be solved, of course; the question is whether it can be solved for all possible computations.) The inability of a Turing machine to solve the halting problem is closely related to the powerful incompleteness theorems of the Austrian logician Kurt Gödel (1906–1978), which (roughly speaking) say that no single Turing machine could generate all of the true theorems about the natural numbers. The key difference between Deutsch's quantum Turing machine and a classical Turing machine is that in the quantum machine there is interference between possible computational pathways within the machine. Like Schrödinger's cat, the quantum computer goes into a superposition of computational states, and in each component of the superposition a version of the required computation is taking place. In simple terms the effect is massive parallelism, which allows for a huge speed-up, in principle at least. Deutsch was also able to show that his quantum Turing machine was universal in the same sense as Turing's classical version; that is, all known types of computations can be performed on it. The catch is that the results of a quantum computation, like all quantum processes, are inescapably probabilistic. Thus, while a quantum computer can come up with an answer long before an equivalent classical computer could, there would only be a certain probability that the answer would be correct.

Figure 11.2: Quantum Turing Machine. C1, ..., C4 are possible computational circuits in the computing head of a quantum Turing machine. All computational paths are used simultaneously. The amplitudes for the paths interfere whenever there is no way to tell which path was used to get the final result. The probability of getting output A is given by Born's Rule (square of the sum of the amplitudes). Illustration by Kevin deLaplante.
But this might be good enough for many purposes: getting an answer (like a weather forecast) that had only a 90 percent chance of being right, but getting it when you need it, might be more useful than getting an answer that is 99.9 percent likely to be right but too late to be useful. Deutsch argues that the practical design of quantum computers essentially amounts to arranging the phases of the various amplitudes of the system in such a way as to produce the desired result with the desired degree of reliability. Just as classical information is parceled out in bits, quantum information comes in qubits and ebits. A qubit is simply a one-particle superposition, while ebits are multi-particle entangled states, and they are sometimes also called "Bell" states because of the role such states play in Bell's Theorem. Qubits and ebits are operated upon by unitary matrices, just as in the old matrix mechanics of Heisenberg and Born. (A unitary matrix is one that represents a rotation in Hilbert Space.) In quantum computation these operators are treated as quantum logic gates. They are generalizations of the classical Boolean logic gates, such as AND and OR gates, that run in computers everywhere today. From the strictly theoretical point of view, a quantum computer simply is a linear operator or series of operators designed to process quantum information in certain ways. Quantum logic gates can perform operations that do not exist in standard Boolean circuit theory or Boolean logic, such as the square root of NOT. This is a matrix that when squared gives the matrix that negates an input qubit. Quantum computation therefore offers a new way to think about quantum logic as a natural generalization of classical Boolean logic. One of the most difficult problems in mathematics is factorization. This is simply the process of breaking down an integer into its prime factors; for instance, showing that 527 = 17 × 31. Finding the factors of numbers less than (say) 1,000 is usually pretty easy, but the difficulty of factorization mounts rapidly with the size of the number. Factorization has important applications to cryptography (code-breaking), since the security of some of the most widely used encryption systems depends upon the difficulty of factoring a large integer. In 1994 the American mathematician Peter Shor (1959–) devised a quantum algorithm that factors integers dramatically faster than any classical method yet found. So far the highest number that anyone has been able to factor using Shor's algorithm is 15, but there is little question that his method is valid. Shor's algorithm and some other algorithms recently discovered prove that quantum computers could (if they could ever be built) enormously speed up many calculations; however, the prevailing opinion is that they cannot solve any type of problem that a classical Turing machine cannot solve. Still, the full potential of quantum computers remains an open and controversial question, which is the subject of intense research.
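The "square root of NOT" gate mentioned above is easy to exhibit concretely. The short sketch below is an editorial illustration using one standard choice of the matrix (the book itself does not print it); applying it twice recovers the ordinary NOT (bit-flip) operation, while applying it once puts a definite bit into an even superposition, something no classical Boolean gate can do.

# Editorial sketch: one standard form of the "square root of NOT" gate.
import numpy as np

NOT = np.array([[0, 1],
                [1, 0]], dtype=complex)            # the classical bit-flip, as a matrix

SQRT_NOT = 0.5 * np.array([[1 + 1j, 1 - 1j],
                           [1 - 1j, 1 + 1j]])      # a unitary matrix M with M @ M = NOT

print(np.allclose(SQRT_NOT @ SQRT_NOT, NOT))       # True: applying it twice flips the qubit

qubit = np.array([1, 0], dtype=complex)            # the definite "0" state
halfway = SQRT_NOT @ qubit
print(np.abs(halfway) ** 2)                        # [0.5, 0.5] -- Born's Rule probabilities

Shor's algorithm itself is far too involved to reproduce here; the point of the example is simply that quantum gates include operations with no counterpart in Boolean circuit theory.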
It does not seem to be completely out of the question to imagine that a quantum computer might be able to solve the halting problem for itself (although again the answer would no doubt be a matter of probabilities), because there is a sense in which quantum mechanical systems have access to their own futures. It is easy to solve a problem if you already know the answer. But this possibility remains highly speculative.

There remains one major barrier to constructing practical quantum computers that could implement Shor's Algorithm for arbitrarily large integers and carry out other computational tasks that we cannot even imagine now. A quantum computer can only do what it does if it is a coherent superposition of computational states. However, to get the answer out of the computer it is necessary to make a measurement, and (at least according to the standard von Neumann account) this will collapse the wave function of the computer. The challenge, therefore, is to find a way of extracting information from the computer without destroying it in the process. Some progress has been made with quantum devices of very small size, but the general problem remains to be solved, and it may pose a challenge to the orthodox understanding of the measurement process. Like string theory, many of the claims of quantum computation are not yet verified experimentally. Unlike string theory, however, quantum computation is a straightforward application of well-established rules of quantum mechanics, and few doubt that it will work if a few practical problems can be solved—especially finding a way to get the information out of the computer without collapsing it.

Too Many Worlds?

How could a quantum computer, which presumably is going to be instantiated on a rather small physical system of finite volume, perform calculations so quickly? Deutsch has controversially suggested that the answer to this question lies in one of the most startling interpretations of quantum mechanics, suggested by the American physicist Hugh Everett III (1930–1982). In 1957, working under the direction of his Ph.D. supervisor John A. Wheeler, Everett produced a novel solution to the measurement problem of quantum mechanics. This problem is to explain how it is that quantum states in superpositions appear to experimenters who interact with them as if they have definite classical outcomes. Like many physicists, Everett was unhappy with the von Neumann collapse postulate and the arbitrary quantum-classical divide, and he proposed that the simplest way to resolve the measurement problem was to suppose that reality is nothing more than a quantum wave function, and that the wave function does not collapse. As Schrödinger emphasized, if a classical measuring device interacts with a system that is in a superposition, the wave function of the measuring device (and the experimenter who runs it) becomes correlated with each component of the superposition. (Technically, the resulting wave function is a tensor product state of the observer and the observed system.) What bothered Schrödinger is that real observers do not see superpositions, but only definite results (such as either a definitely alive cat or a definitely dead cat). Everett's answer was simplicity itself: the observer together with his or her apparatus actually does split into two components (one who perceives a dead cat, the other who perceives a living cat)—but each version of the observer is correlated with the corresponding component of the cat.
That is, the new state is a superposition of two states, one with an observer perceiving a live cat, and one with an observer perceiving a dead cat. Each observer seems to perceive a definite cat-state and not a superposition. However, these two components do not interact with each other in any way. It is exactly as if the universe has split in two. Every time one system becomes correlated with another system that is in a superposition, the world splits into as many versions as there are components of the observed system, and so on, ad infinitum.

Everett at first called his theory the relative state formulation of quantum mechanics. By this he meant that every observer could be in a number of different states, each one defined relative to a state of the system being measured. This later became known as the Many-Worlds Interpretation of quantum mechanics, and Deutsch prefers to call it the multiverse interpretation, because on this view there literally is a colossal multiplicity of universes, multiplying exponentially or faster into more universes, with each universe playing out every possibility that is consistent with the laws of physics. It is a dizzying vision, but Deutsch himself seems to take it literally despite its conflict with common sense. The multiverse view is preferred by some especially mathematically oriented physicists because it does away with von Neumann's arbitrary-seeming collapse postulate. Most physicists are agnostic or dismissive towards the multiverse view.

However, Deutsch believes that there is an empirical argument for the multiverse theory. He points to the fact that a huge speed-up of calculations is possible with a quantum computer and argues that there must be somewhere that those calculations are taking place. Quantum computing, Deutsch argues, is just a kind of massively parallel computation, with all the speed-up advantages of parallelism. Any computation is something that takes place on a physical platform, and if all those bits are being crunched then there has to be some physical thing that is crunching them. Deutsch thinks it is clear that they are not taking place in the spacetime that we perceive, because Feynman and others showed that this is impossible. There just isn't enough room for it. So the calculations must be taking place in parallel universes. He challenges anyone who cannot accept the multiverse theory to come up with another explanation of quantum computing; if you don't believe in the multiverse, challenges Deutsch (1997), then where are all those calculations taking place?

Quantum Information Theory: It from Bit?

Recent intense interest in quantum computation has drawn attention to the nature of information in quantum theory. Some recent authors have argued that quantum mechanics is nothing other than a new form of information theory, and that the whole world, or at least the world as we can know it, is nothing but information. Although this idea has become current since the quantum computing revolution of the 1990s, it was expressed as far back as the 1960s by John Archibald Wheeler, who suggested that if we better understood the relationship between information theory, logic, and physics, we would see how to deduce "it from bit." Does it make sense to think that the world could be made of information? There is a certain mystique surrounding information, but mathematically it is a very simple idea.
Classical information theory was formulated in the late 1940s by Claude Shannon (1916–2001) of Bell Laboratories. Shannon studied the efficiency of classical communications devices such as telephones and radio, as part of a general mathematical analysis of communication. Consider a binary system (one that can be in one of two states), like a coin that can be heads or tails. There are 2³ = 8 possible combinations of heads or tails for three coins. One needs to know three facts to specify the state of the three coins (that is, whether each is a head or a tail). But 3 is just the logarithm to base two of the total number of combinations of the three coins. Shannon argued that the logarithm (to some convenient base, usually 2) of the number of arrangements of a system is the information contained in that system; classical Shannon information, therefore, is merely a logarithm. The great mathematical convenience of logarithms is that they make it easier to think about quantities of information, for logarithms are additive: if the number of possibilities (often called the multiplicity) in a system is multiplied, the increased information capacity is found by simply adding the logarithms of those numbers.

The qubit is a natural quantum generalization of a classical bit of information. The classical bit can be in two distinct states, while the quantum bit is in a superposition of states. The problem is that no one has so far found an obvious interpretation of the qubit as a logarithm, and it is therefore unclear that the interpretation of quantum states as measures of information has gone as far as it can.

Rolf Landauer (1927–1999) was a senior research scientist at International Business Machines (IBM) who made important contributions to computational theory. Landauer insisted that "all information is physical," by which he meant that if there is information present then it has to have been encoded in some form of mass or energy. There is no such thing as "pure" information except in the ideal world of mathematics. It is possible that this casts doubt on Wheeler's idea that the world could be built up out of pure information, since there is no information, according to Landauer, without a physical substrate to encode it in.

Landauer made an important contribution to a question that had dogged physicists and communications engineers for many years: what is the minimum cost in energy of a computation? The operation of electronic components such as transistors produces waste heat, and waste heat is a barrier to making circuits smaller and more powerful. Circuit manufacturers constantly strive to produce components that waste less energy. However, there had been a long debate in physics and computing theory about how far they can hope to go with this. In 1961 Landauer proved that even if there were circuit components that were ideally efficient, any computation in which bits of information are discarded must inevitably waste a minimum amount of heat. It is the destruction of information itself that costs energy; every bit of information lost leads to the production of a minimal amount of waste heat. Consider the logic gate known as an OR gate: this transforms any of the inputs (1,1), (1,0), or (0,1) into the output 1. Information is lost in the operation of the OR gate, since from the output 1 we cannot tell which of the three possible inputs was used.
And therefore by Landauer's Principle the operation of an OR gate will inevitably result in the loss of a small amount of energy, no matter how efficient we make the components. Surprisingly, a number of computer scientists in recent years have shown that it is theoretically possible to construct fully reversible logic circuits in which unneeded bits are shunted to one side and recirculated. This prevents the loss of energy predicted by Landauer, and it means that an ideally efficient computer could operate with no heat losses at all (except, again, one has the problem of getting useful information out of it). Remarkably, reversible computing can be done, at least in principle, with classical circuit elements, although it would be difficult to build and unnecessarily complicated for most practical purposes. An ideal quantum computer is also fully reversible, because it is simply an example of a quantum system evolving in a unitary way. A quantum computer is therefore similar to a superfluid or superconductor. In liquid helium, for instance, it is possible to set up a frictionless circulation pattern that will flow forever so long as it is not interrupted, and so long as the temperature of the fluid is kept below the critical point. This fact again emphasizes why it is difficult in practice to build a quantum computer, because like other coherent states such as superconductors they are very sensitive to disturbances from their surroundings.

Entanglement as a Resource

Schrödinger had speculated that entanglement would fade away with distance, but all experimental evidence to date suggests that entanglement is entirely independent of distance. This is precisely what theory indicates as well, because entanglement is purely a function of phase relationships within the wave function of the system. While phase coherence can vary with distance (depending on the structure of the wave packet) it does not necessarily have to decrease with distance. As far as it is known now, the entanglement in the singlet state, for instance, could persist to cosmological distances if the particles were not absorbed by something along the way.

It has been widely noted in the past 10 years or so that entanglement has properties remarkably similar to energy. Entanglement can be converted into different forms and moved around, in ways remarkably like energy. Quantum information theorists often speak of entanglement as a resource that can be used to store or transmit information in various ways. There is not yet a general agreement about how to define units of entanglement, however. Very recent thinking suggests that Landauer's Principle could be used to show that entanglement does have energy associated with it. When particles are entangled they are correlated, which means that if something is known about one particle, it is possible to infer information about other particles in the system. Information that manifests itself through correlations is sometimes called mutual information. Another way of stating Bell's Theorem is that the distant particles in an EPR experiment can possess more mutual information than could have been encoded in their correlations at their common origin. If one particle is measured, then by the von Neumann rule the entanglement disappears and any nonclassical correlations disappear. This means that information is lost, and on the face of it this means that waste heat has to be released, by Landauer's Principle.
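For a rough sense of the scale involved, Landauer's bound sets the minimum heat dissipated per erased bit at k_B T ln 2. The short sketch below (plain Python with standard values of the constants; my own illustrative calculation, not one taken from the book) evaluates it at roughly room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant, in joules per kelvin
T = 300.0            # roughly room temperature, in kelvin

# Landauer's bound: minimum heat released when one bit of information is erased.
heat_per_bit = k_B * T * math.log(2)
print(f"{heat_per_bit:.2e} J per erased bit")   # about 2.9e-21 J
```

Tiny as this number is, it is not zero, and that is the point of the argument the text now continues.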
Since the local energies of the particles (their kinetic and potential energies) do not necessarily change, the waste heat has to be coming from somewhere else. Because it is produced precisely when entanglement is destroyed, it seems sensible to suppose that entanglement itself has an energy associated with it. This energy will be a property of the entangled state as a whole and will not be localized to the particles, just like Bohm's quantum potential for entangled states. (In fact, entanglement energy and the quantum potential might be the same thing.) This is front-line research and it is potentially controversial, especially since there has been recent criticism of the accuracy of Landauer's argument. These questions are now in the process of careful reexamination; stay tuned!

Other Curious Quantum Creatures

From the 1960s onward several other startling new applications of quantum mechanics appeared in the literature. None of these developments involved any change in the fundamental structure of quantum theory that had been laid down in the 1920s and 1930s by Dirac, von Neumann, and others, although some pose a challenge to orthodox measurement theory. They all show that we have only begun to see the possibilities inherent in quantum mechanics. A few interesting developments are sketched here. (See Aczel 2002, Johnson 2003, McCarthy 2003, or Milburn 1997 for more detail.)

Quantum Cryptography

As described in the last chapter, Alice and Bob cannot use an EPR device to signal faster than light as far as we know (although there are some theoretical doubts about this point). Alice, however, can build a message into the correlations, because the correlation coefficient between her results and Bob's depends on the relative angle of their detectors. The catch is that the message can only be read by someone who has both sets of results. Each set of results in isolation looks like a totally random sequence of ups and downs, like the results of a toss of a fair coin repeated many times, but each result set is the key for the other. Quantum mechanical entanglement allows for the most secure method of encryption known, because it depends on quantum randomness. Suppose Eve decides to listen in to Alice and Bob's communication by intercepting some of the particles sent out from the source. Her eavesdropping can be detected by the tendency of the correlations to obey a Bell Inequality, because eavesdropping will destroy the correlations, just as in the double slit experiment we destroy the interference pattern if we try to tell which slit the electrons go through. But Eve might decide it was worth it if she can get away with a little bit of the message, even at risk of getting caught in the process. It is also remotely conceivable that the quantum potential of Bohm could be used to eavesdrop, but this remains an open and highly speculative question. However, quantum cryptography is one of the most active areas of current research.

The GHZ State

In 1989 Daniel Greenberger (1933–), Michael Horne (1943–), and Anton Zeilinger (1945–) (GHZ) described a theoretical spin state of three entangled particles, which permits an especially vivid illustration of Bell's Theorem without the use of inequalities. The assumption of local realism about the GHZ state produces an outright contradiction with the quantum mechanical predictions for the three-particle state using, in principle, only one measurement.
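The GHZ contradiction just described can be checked numerically. The sketch below (Python with NumPy; the state and the spin observables are the standard textbook choices, not expressions quoted from this book) uses the three-particle state (|000⟩ + |111⟩)/√2 and spin measurements along the x and y axes:

```python
import numpy as np

# Pauli spin matrices for measurements along x and y.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def triple(a, b, c):
    """Joint observable a (x) b (x) c acting on three spins."""
    return np.kron(np.kron(a, b), c)

# The GHZ state (|000> + |111>)/sqrt(2) as an 8-component vector.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

for label, op in [("XYY", triple(X, Y, Y)),
                  ("YXY", triple(Y, X, Y)),
                  ("YYX", triple(Y, Y, X)),
                  ("XXX", triple(X, X, X))]:
    value = np.real(ghz.conj() @ op @ ghz)
    print(label, round(value, 6))

# Each of XYY, YXY, YYX has the definite value -1, while XXX has +1.
# If each particle carried predetermined local values x_i, y_i = +/-1,
# the product of the first three observables would equal x1*x2*x3,
# so local realism demands XXX = (-1)*(-1)*(-1) = -1, contradicting
# the quantum mechanical value +1.
```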
The GHZ state has very recently been created in the laboratory, and it is the most direct verification yet of the failure of local realism.

Quantum Teleportation

One of the most amazing applications of entanglement is quantum teleportation. This was theoretically predicted by IBM Research Fellow Charles Bennett (1943–) and several others in 1993, and has been demonstrated by a number of research groups since then. The invention of teleportation was stimulated by the No-Cloning Theorem, which states that it is impossible to copy a quantum state if the original state is to be preserved. (The No-Cloning Theorem was arrived at in response to a faster-than-light signaling scheme proposed by physicist and author Nick Herbert in the 1970s, which involved beating the light barrier by copying quantum states. Herbert's method won't work as he designed it, but its history shows that much useful thinking can follow from a productive mistake.) Several physicists realized that a state can be copied and moved anywhere else in the universe if the original is destroyed, and so long as the sender of the transmission does not try to read the message, because that would collapse and thereby destroy the state of the message.

The key to quantum teleportation is the use of two channels of information, an ordinary classical channel (any normal means of communication, such as a cell phone, that sends information no faster than the speed of light) and an entangled state that acts as the carrier of the information to be teleported. Alice and Bob are each poised to observe the state of distant particles belonging to an entangled EPR pair. The object is for Alice to transmit the state of a particle to Bob. Alice takes the particle without looking at it and allows it to interact with her EPR particle. This entangles the target particle with Alice's EPR particle and thereby entangles it with Bob's EPR particle as well. Alice then makes a joint measurement on her EPR particle and the target particle and then phones Bob on her cell phone and gives him the (apparently random) result. Alice's measurement collapses her local state and thereby erases the information about the target particle, but if Bob makes a certain kind of measurement, and then combines his (apparently random) result with the information Alice gave him by cell phone, he can infer the state of the teleported particle. This process is therefore very similar to quantum cryptography, in which a highly nonclassical quantum effect can be exploited only with the aid of a classical channel of information. Physicists who work with quantum teleportation hasten to add that it will be a very long time, if ever, before it is possible to teleport an astronaut off of the surface of a hostile planet.

Figure 11.3: Quantum Teleportation. Particles A and B are entangled EPR pairs emitted from source S. X is the unknown particle whose state is to be teleported from Alice to Bob. In (a), Alice performs a measurement on A and X together, which entangles X with A and B. In (b), Alice sends her measurement results to Bob via a classical channel. In (c), Bob measures B and combines Alice's data with his own to reconstruct X. The original X is collapsed by Alice's measurement; hence X has not been cloned; rather, B has been transformed into X by Bob's measurement. Illustration by Kevin deLaplante.

Quantum Non-Demolition
Recently the distinguished Israeli physicist Yakir Aharonov has challenged one of the most basic dogmas of quantum mechanics, the view that any measurement on a superposition of states collapses the wave function into a single pure state. Aharonov and others are exploring the possibility that there could be nondemolition measurements that could extract information from a quantum state without collapsing it. A nondemolition measurement involves an adiabatic perturbation of the system being measured; this means that the system is interfered with very slowly by means of a very weak interaction. There is evidence that it is possible to extract some (though likely not all) information from a quantum system by means of such very gentle measurements without completely collapsing the state. If this can be made to work reliably, it opens up the possibilities of both superluminal signaling and quantum computation. The requirements for quantum computation are remarkably similar to the requirements for signaling: in either case, one has to create a coherent entangled state that can somehow process or transmit information without being collapsed in the process. The standard arguments against faster-than-light signaling in entangled quantum states have no relevance to this process, since they do not allow for the possibility of the kinds of measurements that Aharonov and others are considering. A superluminal signaling device might therefore turn out to be nothing more than a quantum computer that extends over a large distance in space. The major difference is that superluminal signaling is regarded by most physicists as highly undesirable because it would (they think) mark the demise of special relativity, whereas quantum computing is regarded as a highly desirable outcome. If Aharonov and others who are investigating nondemolition measurements are right, it may turn out that we cannot have quantum computation without having a Bell telephone as well.

Unfinished Business

Quantum mechanics is by far the most successful physical theory ever devised, and it is also the most revolutionary, because it poses a profound challenge to conceptions of space, time, causality, and the nature of reality itself that have seemed beyond question since the beginning of the modern scientific era. Many authors have observed, though, that if quantum mechanics is revolutionary, it is an unfinished revolution. This story concludes by describing some of the unfinished business facing today's young physicists.

Quantum Mechanics and the Mind

One of the most intriguing frontiers is the possible interactions between quantum mechanics and neuroscience. This line of investigation was stimulated by Eugene Wigner (1902–1995), who, along with Hermann Weyl (1885–1955), pioneered the use of group theory in quantum mechanics and field theory, and whose many contributions to quantum physics earned him a Nobel Prize. In 1961 Wigner published an intriguing essay, "Remarks on the Mind-Body Question," in which he explored the possibility that the collapse of the wave function is brought about by consciousness.
Schrödinger's cat paradox illustrates the puzzling fact that the dividing point between the quantum and the classical description seems to be entirely arbitrary: when the box is opened the experimenter (let us say it is Bob) sees the cat in a definite state, and yet the theory says that Bob and the cat both go into a superposition (a tensor product, technically) of the cat's state and the experimenter's. As Everett pointed out, there is no inconsistency in these two descriptions as far as they go: Bob might be in a superposition, but in each component of it he seems to perceive a definite cat state. But now, argued Wigner, suppose that Bob's friend Alice enters the room. If Bob asks her what she saw, Wigner argues that she is not going to report that she perceived a superposition; rather, she either definitely saw Bob with a dead cat or definitely saw him with a living cat. Human beings never have conscious experience of quantum superpositions, Wigner insisted. He thought that this showed that the quantum buck stops at the point at which the wave function interacts with a conscious mind. Wigner believed that he had presented an argument for dualism, which is the claim that mind and body are of essentially different natures, so that mind is not subject to ordinary (quantum) physical law. Dualism has a long history in philosophy and religion, but the working hypothesis of most modern neuroscientists is that mind is purely a manifestation of physical activity within the brain and sensory system of a living being.

Other contemporary scientists who have explored the possibility that quantum mechanics could be important for understanding the mind, such as H. P. Stapp, Stuart Hameroff, and Roger Penrose, work within the materialistic camp; that is, they do not advocate dualism, but instead argue for the importance of quantum mechanics in understanding the physics of the human neurosystem. Roger Penrose (1931–) is a multitalented British mathematician, best known for his work with Stephen Hawking on general relativity. Penrose believes that quantum mechanics is needed to explain not only consciousness but the ability of the human mind to solve problems creatively. He suggests that microtubules, tiny strand-like objects with a very regular, periodic structure, which occur within neurons and other cells, could be the site of macroscopic-scale quantum coherence; in effect, Penrose proposes that the brain may be in part a quantum computer. The majority of physicists and neuroscientists doubt that quantum mechanical coherence could play a role in the operations of the brain, for the simple reason that the brain is too hot. Quantum-coherent states of matter, such as Bose-Einstein condensates, superfluids, and superconductors, are typically very cold, whereas the human brain operates at temperatures around 37˚C. However, the nonlocal correlations observed by Alain Aspect were in systems at normal room temperature, and there is no reason to think that quantum mechanical correlations in general are temperature-dependent. The question of whether quantum mechanics could have anything to do with whatever it is that allows brains to generate the conscious mind remains open.

Quantum Cosmology

Quantum cosmology is the application of quantum mechanics (which arose out of the study of the smallest possible physical entities) to the largest object we know, the universe itself.
This story begins with the remarkable discovery of cosmic microwave background radiation in 1965 by Arno Penzias (1933–) and Robert Wilson (1936–). Penzias and Wilson were telecommunications engineers with Bell Labs, and they were trying to eliminate an annoying hiss that was being picked up by their microwave antennas. They discovered that the hiss was due to a microwave signal reaching the Earth from all directions in space, and found that the spectral distribution (the curve of its energy as a function of frequency or wavelength) of this background radiation follows Planck's curve for the spectral distribution of a blackbody. The observed universe is therefore a blackbody cavity; in other words, the universe as a whole is a quantum mechanical object. There is therefore a deep connection between the physics of the small and the physics of the very large.

As an important example, it is highly likely that quantum mechanics may play a role in explaining cosmic acceleration. Probably the most surprising scientific discovery in the past ten or twenty years was the finding in 1998 by several teams of astronomers that the Hubble expansion of the universe is actually accelerating. It would be as if you threw a baseball straight up in the air and saw it accelerate upwards, to your surprise, rather than fall back down. Unless energy conservation is being violated on a massive scale, there has to be some presently-unseen source of dark energy that is causing the universe to accelerate. There is still no convincing explanation of the nature of dark energy, except that it almost certainly has something to do with the quantum mechanics of the vacuum itself. See Kirshner 2002 for an introduction to dark energy and its impact on modern cosmology.

Cosmologist Andrei Linde (1948–) has offered a startling speculation that shows how deep the connection could be between the laws of quantum mechanics and the history of the universe. The second law of thermodynamics tells us that the entropy of the universe must always be increasing, as the universe interacts with itself over and over and steadily randomizes itself. There are two linked puzzles faced by any version of the Big Bang cosmology. First, according to the Big Bang theory the universe must have started from a very low entropy state, but on the face of it this seems to be a violation of the Second Law of Thermodynamics. What physical mechanism could have gotten the entropy of the universe so low to begin with? Second, and more basic, how could something, namely a whole universe, come from nothing? Linde's clever but disturbing suggestion is that the whole universe itself might be merely a quantum fluctuation in the vacuum. As Einstein showed long ago with his theory of Brownian motion, fluctuations can and do occur, and they amount to localized pockets of temporarily lowered entropy. It is a purely probabilistic process; even in a totally undisturbed quantum vacuum there is a probability (no matter how small it might be) that an entire universe could pop out of pure nothingness if one waits long enough. It only had to happen once! Whether or not Linde's ingenious speculation is right, it is clear that the nature and origin of the universe itself has become a problem in quantum mechanics.

The Quest for Quantum Gravity

Most physicists today agree that the central problem facing modern physics is to clarify the relation between quantum mechanics and relativity and in particular to construct a quantum theory of gravity.
But the quest for quantum gravity poses technical and conceptual challenges that may be among the toughest faced by physics so far. Understanding gravity better than we do now is not merely of theoretical interest. It is conceivable that quantum gravity might some day lead to the ability to control gravitation (perhaps making controlled fusion possible), or to other effects that we cannot presently imagine or that still belong only to the realm of science fiction. Most physicists prefer not to go so far out on the limb of speculation, but such possibilities have to be kept in mind. One thing that the history of quantum mechanics has demonstrated is that purely theoretical attempts to resolve contradictions or fill gaps in understanding can lead to unexpected practical consequences. There are, in the long run (although sometimes only the very long run), few more practically important enterprises than theoretical physics.

Early Efforts

One of the first approaches to quantum gravity was to treat it as a problem in quantum field theory. This meant writing a perturbation series starting with the flat-space metric (the function that determines the geometry of spacetime) as the first term and trying to find first and higher-order corrections. By the mid-1930s several theorists were able to show that if there is a particle that mediates gravitation, it has to be a spin-2 boson, massless and therefore moving at the speed of light. This hypothetical quantum of the gravitational field was dubbed the graviton. No such particle has ever been detected, and it would be very difficult to do so because its interactions with matter would be so weak.

Standard quantum field-theoretic methods in quantum gravity are recognized as provisional, since they are background-dependent, meaning that like most kinds of quantum field theory they assume a fixed Minkowski spacetime as a backdrop. This is inconsistent with the message of Einstein's general relativity, which teaches that mass-energy and spacetime geometry are inextricably entwined. One of the earliest to realize this was the brilliant Russian scientist Matvei Bronstein (1906–1938), who in 1936 outlined an early quantum theory of gravitation and argued that it may be necessary to go beyond spatiotemporal concepts in physics. Bronstein, tragically, was murdered by the Soviet secret police at the age of 32 during one of Stalin's purges.

It is impossible to give here a comprehensive picture of the many ways in which quantum gravity has been explored since the 1930s until now. Prominent names in this field include Bryce DeWitt (1923–2004), John Archibald Wheeler (1911–), Abhay Ashtekar (1949–), and numerous workers in string theory including Edward Witten (1951–). The fundamental problem with any quantum theory of gravitation that has been attempted so far is that such theories are all nonrenormalizable. Unlike the electromagnetic and Yang-Mills gauge fields, it seems to be impossible to juggle the infinities in quantum gravity so that they either cancel out or can be ignored. The physical basis for this mathematical problem is the nonlinearity of gravitation. The simple fact is that gravitation itself gravitates. The gravitational field has energy and thus has a gravitational effect, while the electromagnetic field, although it transmits electromagnetic interactions, is not itself electrically charged. (To put it another way, the photon itself is not electrically charged, while the graviton must itself gravitate.)
Electromagnetic and Yang-Mills fields, if they are written the right way, can be made to add up linearly; gravitational fields add up nonlinearly. This introduces a whole new level of mathematical complexity beyond anything dealt with in quantum field theory, and so most approaches to quantum gravity so far have been linearized approximations to a physics that is profoundly nonlinear.

A Historical Perspective on Background Dependence

In 1905 Einstein erected the special theory of relativity on the assumption that the speed of light is an invariant, a quantity that must be the same for all observers in all possible states of uniform motion. Einstein did this because he believed that Maxwell's equations were more fundamental than Newtonian dynamics, so that the Newtonian picture should be modified to be consistent with electromagnetism. For many years physicists had been trying without success to explain electrodynamics in terms of classical mechanics; in 1905 Einstein turned the problem around and, instead of trying to make electrodynamics fit classical mechanics (a round peg in a square hole if there ever was one), modified classical mechanics in order to fit electrodynamics. The speed of light in vacuum should be a universal constant because it appears as such in Maxwell's Equations. Einstein's approach was brilliantly successful, and up to the present time it has been assumed by most (though not all) physicists that quantum mechanics has to be kept consistent with the theory of relativity. However, as in 1905, it may be necessary to turn the problem around and, just as Einstein rewrote Newtonian theory to make it consistent with electrodynamics, rewrite our spacetime theories to make them consistent with quantum mechanics. There is more and more evidence that the world is quantum all the way down.

However, there is a contradiction in twentieth-century physics that was apparent in Einstein's pioneering papers of 1905, but never resolved. In 1905 Einstein also suggested that the wave-like behavior of light is only a statistical phenomenon. If this is right, then Maxwell's theory itself, and (a crucial point) the symmetries that it obeys, could well be only statistical averages. If this is the case, then there might not be any reason at all to suppose that detailed quantum interactions are exactly Lorentz invariant. There is a parallel to the challenge faced by Boltzmann and Planck in the late nineteenth century: the rules of thermodynamics were originally formulated as exact differential laws applying to definite mathematical functions, but it became apparent they had to be understood statistically and were thus not exact (except for the First Law of Thermodynamics, to which no exceptions have been found, a fact that certainly would have pleased Planck).

As described earlier, physicists put off the problem of quantizing spacetime until the 1930s, when the infinities of quantum electrodynamics made it impossible to ignore the possibility that the smoothness of the background metric might break down at small enough distances or high enough energies. During the 1930s both Heisenberg and Schrödinger explored the possibility that space itself might be quantized, meaning that there would be a fundamental quantum of length just as there is a quantum of action. This would automatically cut off the divergences, at least in quantum electrodynamics.
However, it surprisingly became possible to again put off the problem, because of the success of renormalized quantum field theory. Many physicists now argue that there is no way to avoid the ultimate breakdown of background-dependent theories, since there is a distance range, 10²⁰ times smaller than the nucleus, at which gravitation has to equal or exceed all other known forces in strength. This distance is called the Planck length, because it is based on Planck's proposal in 1899 that physics could be expressed in combinations of fundamental "natural" units that (unlike the meter, inch, or second) would be independent of the accidents of human history. Corresponding to the Planck length (or time) is a conjugate Planck energy, around 10¹⁶ TeV. This is vastly beyond the range of any conceivable Earth-bound particle accelerator, so anything we can say about processes at the Planck scale would have to be tested by their indirect effects—at least, for the time being!

Very recently a number of physicists have been exploring the possibility that Lorentz invariance might break down at very high energies, perhaps near the Planck energy. This implies that the vacuum would be dispersive at such energies, meaning that the speed of light would vary slightly at very high frequencies. (Lorentz invariance implies that the vacuum is a nondispersive medium, which means that the speed of light is the same for all frequencies.) Attempts are now being made to write versions of Einstein's special relativity that could take high-energy dispersivity of the vacuum into account, and these new versions of relativity—called Doubly Special Relativity—may disagree with the predictions of standard special relativity at high enough energy. (For an accessible introduction to recent work on Doubly Special Relativity, see Smolin 2006.)

In some respects the quantum mechanics that grew up from 1925 to 1932 represents a retreat from Heisenberg's bold vision on Heligoland. In his great paper "A Quantum-Theoretical Reinterpretation of Kinematic and Mechanical Relations," Heisenberg rewrote position and momentum as linear operators built up out of transition amplitudes between observable energy states. A particle only has a (discrete!) spectrum of possible positions when it is observed in an experimental context in which its position matrix is diagonal. Heisenberg thereby demoted position from its privileged position as the unchanging background of physics, Newton's absolute space, and made it just one of many quantum observables, any one of which can be written as functions of the others. However, shortly thereafter, by finding ways to treat continuous observables quantum mechanically, Dirac made it possible for position and momentum to be treated more like classical variables than perhaps they really are. By 1927 Heisenberg himself had retreated to a more conservative position in which space and time are continuous quantities.

Carlo Rovelli has recently argued that in order to satisfy Einstein's principle of general covariance, the foundation of general relativity, we have to construct a picture in which no observable (time, space, energy, or momentum) is privileged in the sense of being an independent c-number (classical) parameter. Instead, Rovelli insists, there should be a complete democracy of coordinates, in which all observables would be intertranslatable, and which ones are most useful would depend simply on the experimental context.
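As a check on the Planck-scale numbers quoted above, the Planck length and Planck energy follow directly from ħ, G, and c. The sketch below is my own back-of-the-envelope calculation in Python, using standard values of the constants rather than anything computed in the book:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)      # ~1.6e-35 m
planck_energy = math.sqrt(hbar * c**5 / G)      # ~2.0e9 J

tev = planck_energy / 1.602176634e-19 / 1e12    # convert joules to TeV
print(f"Planck length: {planck_length:.2e} m")
print(f"Planck energy: {tev:.2e} TeV")          # about 1.2e16 TeV
```

A typical nuclear radius is around 10⁻¹⁵ m, which is indeed very roughly 10²⁰ times larger than the Planck length, consistent with the figures in the text.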
Another barrier to quantum gravity is that in the mid-1920s Pauli showed that there are severe technical barriers to constructing a quantum mechanical time operator, and no universally acceptable way has yet been found of getting around these difficulties. The result is that time is still treated like a c-number while virtually all other measurable quantities are q-numbers (quantum operators). Penrose has argued that there is no hope of constructing true quantum gravity until a way is found to construct a genuine quantum mechanical time operator, because in any relativistic theory space and time must be comparable and interchangeable. This problem also remains unsolved.

Loops or Strings?

Physics today is experiencing a tension between continuity and discontinuity that is very similar to the situation in which Planck and his contemporaries found themselves around 1900. The dominant approach to quantum gravity in the past 15 or 20 years has been string theory, based on continuous background spacetime, since it seems to predict the graviton in a natural way. However, another theory, called loop quantum gravity, is being worked on by an increasing number of theorists led by Rovelli and Smolin. This approach divides space and time up into discrete cells (Smolin calls them "atoms of space") with a quantized spectrum of possible volumes. There is a smallest possible non-zero volume, just as any discrete spectrum of eigenvalues has a smallest possible value, and the attempt to probe this volume with a high energy probe would simply create more cells. The cells can be combined into spin networks, based on ideas due to Roger Penrose, and each possible combination of spin networks is a possible quantum state of space itself. The key point is that Smolin's spin networks are not structures within space, like the particles of conventional quantum field theory; instead, space itself is built up out of them. The fact that space is discretized eliminates, in principle at least, the need for renormalization. Despite its great conceptual attractiveness, loop quantum gravity still has not produced much more in the way of testable predictions than has string theory, and both approaches (as well as some others that cannot be described here) are being pursued vigorously.

Gravitation and Thermodynamics

Work in the past 35 years has shown that there are profound connections between gravitation and thermodynamics. The Israeli physicist Jakob Bekenstein (1947–) noticed in the 1970s that there is a very odd parallel between the area of a black hole and the behavior of entropy. A black hole is a mass that has collapsed to within its own event horizon, a mathematical surface surrounding the hole over which the escape speed from the object is the speed of light. The area of a black hole is the area of its event horizon. If two black holes coalesce, their total area is always equal to or greater than the sum of the areas of each separately, and this is just like the entropy of two volumes of gas, which when the volumes are mixed must always be greater than or equal to the sum of the entropies of the separate volumes of gas. It seems as if black hole area, like entropy, can never decrease. Bekenstein proposed a formula for the entropy of a black hole as a function of its area. The physical meaning of this is that the entropy of a black hole is a measure of the amount of information that has disappeared inside it. A black hole of a given mass is the highest possible entropy state for that mass.
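In its modern form (with the numerical factor fixed by Hawking's later calculation; the expression below is taken from the standard literature rather than from this book), the Bekenstein–Hawking entropy of a black hole with horizon area A can be written as:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3}{4\,\hbar\, G}\, A ,
\qquad
A \;=\; 4\pi r_s^2 ,
\qquad
r_s \;=\; \frac{2GM}{c^2}
\quad \text{(for an uncharged, non-rotating hole of mass } M\text{)} .
```

For a black hole of one solar mass this works out to very roughly 10⁷⁷ in units of Boltzmann's constant, vastly more entropy than the sun possesses in its present form, which is one way of seeing the claim in the text that a black hole is the highest-entropy state for a given mass.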
Many scientists have speculated that as the universe ages its entropy must gradually increase until all temperature differences in the universe are smoothed out and there is nothing left but a uniform diffuse gas, everywhere. This is called the "heat death" of the universe. Black hole thermodynamics implies that if there ever is a heat death of the universe, it will be much more dramatic. Since the highest entropy state of matter is a black hole, the "heat death" of the universe would have to amount to collapse into a vast black hole. In the 1960s Hawking and Penrose had proven, using the austere mathematics of general relativity, that gravitational collapse is inevitable: any mass or energy if left to itself must eventually become a black hole. There is therefore a deep consistency between Einstein's general relativity and thermodynamics, and it is possible (though not yet proven) that gravitation itself is nothing other than a manifestation of the tendency of entropy to increase.

Bekenstein's insights paved the way to the quantum mechanical treatment of black holes. Following Bekenstein, it is known that a black hole has a definite entropy. Furthermore, a black hole is an object that (because of its intense gravitational field) absorbs all radiation that falls upon it and therefore fulfills Kirchhoff's old definition of a blackbody. In order to be in thermal equilibrium with its surroundings it must, as Kirchhoff showed long ago, emit radiation as well as absorb it. It is just a short step to conclude that a black hole has to have a temperature and has to radiate energy with a Planck spectral distribution, and this is precisely what was shown mathematically by Stephen Hawking in 1974. Physics has thus returned to its roots in the work of Kirchhoff and Planck, with another one of those predictions that seem obvious in retrospect but that were surprising at the time.

Hawking further showed that the temperature of a black hole is inversely proportional to its mass. A black hole with the mass of the sun will have an undetectably low temperature, while black holes with the mass of a proton would radiate energy with a peak in the gamma range and would in effect detonate in a flash of lethal gamma radiation in a fraction of a second. Hawking arrived at this startling conclusion following speculation that there could be mini-black holes left over from the Big Bang, still floating about in the universe. He showed that proton-sized mini-black holes would evaporate in a flash of gamma radiation almost as soon as they were created. The mechanism of black hole radiation is called the Hawking effect. It is based on the quantum field-theoretical fact that virtual particle pairs are constantly being created and destroyed in the vacuum. If this happens near the event horizon, one particle of the pair can fall into the hole, while the other carries away some of the hole's energy. The reason a black hole is black is that its escape speed is equal to the speed of light, and therefore from a classical point of view no matter or energy can get out of the event horizon of a black hole once it has fallen in. Although physicists do not usually like to put it this way, the Hawking effect is a mechanism whereby mass-energy can escape a black hole and thus a mechanism whereby quantum mechanics allows mass-energy to exceed the speed of light, at least briefly. However, no way has yet been found of getting close enough to an event horizon to check if Hawking was right.
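The "undetectably low temperature" of a solar-mass black hole mentioned above can be estimated from the standard Hawking formula T = ħc³/(8πGMk_B). The sketch below is my own illustrative calculation in Python with textbook constants, not a figure quoted from the book:

```python
import math

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K

def hawking_temperature(mass_kg):
    """Hawking temperature of a non-rotating black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

solar_mass = 1.989e30  # kg
print(f"{hawking_temperature(solar_mass):.1e} K")  # about 6e-8 K, far colder
                                                   # than the microwave background
```

Because the temperature scales as 1/M, a much lighter hole is correspondingly hotter, which is the basis of the "detonating" mini-black holes described in the text.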
Another startling gravitational-thermodynamic prediction is the Unruh effect, named after Canadian physicist William Unruh (1945–). Building on suggestions by Paul Davies (1946–), Unruh showed that an observer accelerating through an apparent vacuum will detect electromagnetic radiation having a Planck blackbody distribution, with a temperature proportional to the acceleration. The effect is too small to detect for Earthly acceleration rates, and so, like the Hawking effect, Unruh's prediction has not yet been directly verified.

Is information that falls into a black hole lost forever? Suppose one particle of an EPR pair falls into a black hole: recent theoretical work suggests that the quantum mechanical correlations between the particles will persist even when one of them has disappeared beyond the event horizon, when presumably any information that it could share with its partner particle would have to be transmitted faster than the speed of light. This could be further indication that quantum mechanics allows faster-than-light transmission of information, or it could simply indicate that all our classical intuitions about the nature of information are wrong.

Figure 12.1: The Hawking Effect. Vacuum polarization due to the intense gravitational field near the event horizon causes pair creation. One particle falls in while its antiparticle escapes to infinity, causing the black hole to radiate like a blackbody with a temperature inversely proportional to its mass. Illustration by Kevin deLaplante.

Figure 12.2: The Unruh Effect. An accelerated observer detects radiation in the vacuum with a Planck spectrum and a temperature proportional to the observer's acceleration. For an observer in free fall the temperature of the Unruh radiation is zero. Illustration by Kevin deLaplante.

The work of Hawking and others shows that there are very deep connections between quantum mechanics, thermodynamics, and the structure of spacetime itself, and it may well be that the ultimate theory of quantum gravity could be a quantum statistical mechanics of granular spacetime.

The Topology of Spacetime

Topology is the branch of mathematics that deals with the ways in which geometric structures connect. The connectivity of a geometric object is, roughly speaking, the number of holes in it, and a multiply connected structure is one that has holes. From the topological point of view, a coffee cup with a handle and a donut are equivalent, even though metrically (in terms of measurable shape) they are quite distinct. Led by John Archibald Wheeler, a number of theorists, before the advent of string theory, explored the idea that at a deep level spacetime itself is multiply connected. Wheeler suggested that the seething play of vacuum fluctuations can be described as quantum foam, and he also proposed the idea that a charged particle such as the electron could be understood as lines of electrical force trapped in the topology of spacetime (Misner, Wheeler, and Thorne 1973). This means that the electron would be a sort of vortex or wormhole in space, with some of Faraday's field lines threaded through it like thread through the holes of a button. The wormhole would not be able to pinch off like a classical wormhole because it would be quantized. Wheeler's elegant idea continues to intrigue physicists, but no one has yet found a way to make it work in mathematical detail.

Figure 12.3: Stephen Hawking. AIP Emilio Segre Visual Archives, Physics Today Collection.
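To see why the Unruh effect described near the start of this section is "too small to detect for Earthly acceleration rates," one can put an everyday acceleration into the standard formula T = ħa/(2πck_B). The short Python sketch below is my own illustration with textbook constants, not a calculation from the book:

```python
import math

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
k_B = 1.380649e-23       # J/K

def unruh_temperature(acceleration):
    """Unruh temperature seen by a uniformly accelerated observer."""
    return hbar * acceleration / (2 * math.pi * c * k_B)

print(f"{unruh_temperature(9.8):.1e} K")   # roughly 4e-20 K for 1 g of acceleration
```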
Nonlocality of Energy in General Relativity

Another challenge for quantum gravity is the nonlocality of gravitational energy. Einstein constructed general relativity in the hope of setting up a local field theory of gravitation that would replace Newton's action at a distance picture of gravitation. However, nonlocality crept into general relativity through the back door. Any spacetime geometry has a certain energy, but (except for special cases) this energy is partially or wholly nonlocal; that is, it cannot be localized at spacetime points throughout the geometry. This fact follows from the Equivalence Principle, the basic insight on which general relativity was founded. Locally a gravitational field can be made to disappear, in that an observer falling freely in a gravitational field cannot detect the field. This means that the energy of the field must not be detectable locally either, and yet the energy of the field does not go away just because it can be transformed away locally. Is there a connection between the nonlocality of energy in general relativity, and quantum nonlocality? This remains an open question.

Who Ya Gonna Call?

Lee Smolin's controversial critique of string theory is only part of a larger set of worries he has about the state of physics. Very recently (2006), Smolin has leveled the radical charge that physics has made less progress in the last 30 years than in any comparable period since the eighteenth century. He blames this in part on what he considers to be an obsession with string theory, but he argues that there are other systematic barriers to progress in the way modern theoretical physics is done. Above all else, Smolin (and a few other senior quantum gravity researchers, such as Carlo Rovelli) feel that innovation is hobbled by the failure of most modern physicists to think philosophically about their work. Many of the great pioneers of modern physics (notably Einstein, Bohr, Heisenberg, and Schrödinger) were not only very technically skilled but possessed broad humanistic educations and a strong interest in philosophy. Many of their key advances were stimulated by thinking that could only be described as philosophical in the sense that it involved a willingness to challenge deep assumptions about the meaning of such concepts as time, space, measurement, or causation. And like all good philosophers, Einstein and Bohr were not above taking intellectual risks, some of which (as this history shows) turned out to be wrong but instructive. Perhaps we simply need to let our young physicists make some interesting mistakes.

The Feynman Problem

Faced with a bewildering variety of nonclassical and often bizarre quantum effects—quantum teleportation, superfluidity, nonlocality, and so on—it is hard to tell what really is the deepest puzzle about quantum mechanics. Richard Feynman, who understood quantum mechanics about as well as anyone ever has, argued that the most insistent mystery about quantum mechanics is simply this: how can anything that is both so simple and so utterly basic be so completely lacking in an explanation? It may seem odd to describe quantum mechanics as "simple," because the mathematical applications of the theory can be dauntingly complicated. What Feynman meant is that the basic rules of quantum mechanics can be stated in a few lines using only high school mathematics. A rudimentary grasp of complex numbers and probability theory is all that is really needed.
Here is a nontechnical rendition of Feynman's statement of the basics of quantum mechanics (see Feynman, Leighton, and Sands 1965 for more detail):

1. Every physical process can be thought of as a transition from a preparation state (call this the input, to use more modern jargon) to a number of possible outcome states, or outputs. That is, we set up the system in a certain condition, something happens to it, and then we observe what result we got.

2. There can be many ways in which a physical system can undergo a transition from its input state to a given possible output state.

3. For every possible route the system can take from input to a possible output there is a complex number, called the transition amplitude, probability amplitude, or simply amplitude, for that route.

4. If there is a way of telling which route the system took to a particular output, then the probability of getting that output is found by taking the amplitude for each possible route, squaring its modulus (which gives a real number), and then adding the resulting probabilities together.

5. If it is impossible to tell which route the system took in order to get to a particular output without disturbing the system in such a way that it changes the possible outputs or their probabilities, then we find the probability of getting a particular output by adding the complex amplitudes together and then taking the squared modulus of the sum to get the probability.

As Feynman said, that's all there is to it, and no one has any deeper explanation of how or why this works. All the rest of quantum mechanics is merely an elaboration of these rules, using the rich mathematics of complex numbers. Rule 4 is just the classical way of adding up probabilities for mutually exclusive possible events: if there is a .2 probability that a certain bird will fly from its nest to its feeding ground via the forest, and a .3 probability that this bird will fly from its nest to its feeding ground via the river, then the probability that it will fly from its nest to the feeding ground via either the river or the forest is just .5. In classical probability theory, there is no such thing as a probability amplitude; we just add the probabilities directly, and probabilities have a simple interpretation in terms of frequencies of events. If, however, we are talking about an electron that has been fired through a double slit apparatus, we use Rule 5 if we do not know which slit it went through. This means that we will get interference terms if the amplitudes are not perfectly in phase, because we add the amplitudes before we square up to get the probabilities. If, on the other hand, we slip another detector in the apparatus that tells us which slit the electron goes through, we can use Rule 4, since we have destroyed the interference.

The Feynman problem is simply to explain Rules 1 through 5. Where do probability amplitudes come from, and why do they superpose that way? Feynman of course knew that it has a lot to do with noncommutativity. As Dirac and Heisenberg showed, quantum nonclassicality manifests itself in the noncommutativity of certain possible measurement operations. However, there is still not a clear explanation of why noncommutativity should lead to the mathematics of probability amplitudes that was discovered by Schrödinger, Dirac, von Neumann, and others. And asking this question only leads us to the further question of why there is noncommutativity in the first place.
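Before moving on, Rules 4 and 5 are easy to illustrate numerically. The toy calculation below (plain Python; the two amplitudes and the route names are invented purely for illustration and are not from the text) takes two routes to the same output and compares the "which-path-known" probability of Rule 4 with the interfering probability of Rule 5:

```python
# Two invented amplitudes for two routes to the same output.
# Toy numbers chosen for illustration; they are not taken from the text.
amp_forest = 0.5 + 0.0j           # amplitude for route 1
amp_river = -0.5 + 0.0j           # amplitude for route 2 (opposite phase)

# Rule 4: if we can tell which route was taken, add the probabilities.
p_distinguishable = abs(amp_forest) ** 2 + abs(amp_river) ** 2
print(p_distinguishable)          # 0.5

# Rule 5: if we cannot tell, add the amplitudes first, then square the modulus.
p_indistinguishable = abs(amp_forest + amp_river) ** 2
print(p_indistinguishable)        # 0.0, complete destructive interference
```

The difference between the two answers is exactly the interference term that shows up in the double slit experiment.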
All observables would commute if Planck's constant were zero, but it is not. To borrow Rabi's phrase and apply it to Planck's constant, "Who ordered that?" We are not much further ahead on this question than Planck was.

It is possible that quantum information theory might lead to a solution of the Feynman problem: if quantum information is logarithmic the way classical information is, then Rule 5 could be the simple consequence of the fact that when we multiply complexities, we add their logarithms. But what sort of complexity would a probability amplitude be a logarithm of? That question remains unanswered. Some philosophers of science have speculated that there could be no explanation for quantum statistics, because there is nothing more basic in terms of which it could be explained. Others respond that it is hard to imagine that we have found the final formulation of quantum mechanics when there are still so many gaps in the theory, so many unsolved problems, so many temporary props holding up the structure.

Another challenging view, explored by philosopher Colin McGinn, is that a genuine explanation for the basis of quantum mechanics could well be beyond human cognitive capacity, just as the differential calculus is beyond the grasp of any dog. This concern has to be taken seriously. We certainly would be foolishly arrogant if we did not concede the possibility that there are things that will forever be beyond the ability of any human to understand. At the same time, however, it should be obvious that we have no principled way of telling what those things are, since we would have to be smarter than we are in order to define the limits of our own understanding. We can see that dogs cannot understand certain things because we are generally smarter than dogs, but we cannot be smarter than ourselves. Maybe some day we will create a quantum computer that is smarter than we are, and it might be able to tell us what subjects not to bother trying to understand. But in the meantime, we might as well keep on trying!—especially since, as Feynman suggested, we really ought to be able to figure out the basis for a set of rules that can be expressed in such a simple way. Despite everything that has been learned since 1875, the present situation in physics is remarkably like the way it was when an idealistic young scientist named Max Planck dedicated his life to understanding the nature of light.

Timeline

ca. 450 b.c.: Zeno of Elea sets forth a series of paradoxes attempting to show that the concept of motion is inconsistent. Democritus of Abdera argues that the world is made of atoms (tiny indivisible particles of matter) and the Void.
ca. 385 b.c.: The Athenian philosopher Plato, in his Timaeus, speculates that the properties of matter could be explained in terms of the symmetries of the five regular ("Platonic") solids, but states that the perfection of the physical world is inevitably marred by the Errant Cause, an early Indeterminacy Principle.
ca. 330 b.c.: Aristotle, a former pupil of Plato's, describes a qualitative theory of change, motion, and time that was to dominate physical thought for over 1,500 years. Aristotle states that time is nothing more than a "measure of motion."
1660–1680: Isaac Newton and G. W. Leibniz invent the calculus, which was to become the most important mathematical tool of physics.
1686: Newton publishes his Mathematical Principles of Natural Philosophy, setting out basic laws of mechanics and a theory of gravitation that were to become the backbone of physics for centuries to come.
1700–1850: Newtonian mechanics is developed by several mathematicians and physicists (notably Lagrange, Laplace, and Hamilton) into a powerful analytical tool, which until the end of the nineteenth century is assumed to be universally applicable.
1704: Newton publishes his Opticks, in which he describes his experiments that establish many of the laws of refraction and dispersion of light. Newton speculates that both light and matter are composed of "corpuscles," tiny discrete particles.
1801: Thomas Young demonstrates the interference of light and argues that light is best understood as a wave. This view of light becomes dominant in the nineteenth century. Spectroscopy (the study of light spectra) begins with the invention of the spectroscope by Joseph von Fraunhofer; throughout the nineteenth century Robert Bunsen, Kirchhoff, and others discover emission and absorption spectra allowing the identification of many elements; the first empirical laws governing spectra are defined.
1820–1850: Many of the basic laws of electromagnetism are developed experimentally by several researchers, including Ampère, Henry, Øersted, and Faraday. Faraday outlines the concept of the electromagnetic field.
1820–1880: The laws of classical thermodynamics are defined by several scientists, notably Carnot, Mayer, Clausius, Kirchhoff, and Helmholtz. These laws include the First Law (conservation of energy) and the Second Law (entropy must always increase).
1859: G. Kirchhoff defines the concept of the blackbody (an object that absorbs all electromagnetic radiation incident upon it) and proves that the emission spectrum of a blackbody is a function only of its temperature. However, he is not able to predict the shape of the curve.
1860–1900: Physicists, notably Maxwell and Ludwig Boltzmann, begin to understand thermodynamics in statistical terms. Boltzmann argues that entropy is a measure of disorder, which implies that the Second Law is not exact.
1861–1865: James Clerk Maxwell presents his equations describing the electromagnetic field as a unified structure. Maxwell argues that light is nothing other than transverse electromagnetic waves of a certain frequency range.
1885: Johann Balmer writes a formula expressing the wavelengths of the visible lines of the hydrogen spectrum in terms of the squares of integers; the formula is generalized by Johannes Rydberg in 1888; the spectrum depends in part on an empirical constant, which became known as the Rydberg constant.
1888: Heinrich Hertz demonstrates experimentally the existence of electromagnetic waves, thus verifying Maxwell's mathematical theory of electromagnetism.
1895: Wilhelm Roentgen discovers X-rays, high-energy electromagnetic radiation that can penetrate most solid matter.
1896: Henri Becquerel discovers that salts of uranium will fog a photographic plate, thus demonstrating the existence of spontaneous radioactivity.
1897: J. J. Thomson discovers the electron, the first elementary particle to be identified.
1898–1902: Marie and Pierre Curie isolate the radioactive elements polonium and radium.
1898–1907: Rutherford and coworkers discover the fact that radioactive elements transmute into other elements, emitting alpha radiation (which Rutherford showed was the ion of helium) and beta radiation (later shown to be comprised of electrons and positrons). Rutherford also announces his law of radioactive decay, according to which elements have a half-life and decay at an exponential rate.
1900: Paul Villard detects gamma rays emitted by uranium; these are shown by Rutherford and A. E. Andrade to be electromagnetic radiation that is more energetic than X-rays. Max Planck discovers a formula for the spectral distribution of the radiation emitted by a blackbody at a given temperature; he then shows that this formula can be explained on the assumption that the radiation field emits and absorbs radiation only in discrete "quanta" of energy given by E = hν, where ν (nu) is the light frequency and h is a new physical constant, a fundamental "quantum" of action.
1905: Einstein's "year of miracles": he pioneers special relativity and establishes the equivalence of mass and energy, shows that Brownian motion is a statistical effect demonstrating the existence of molecules, and describes a theory of the photoelectric effect based on the assumption that light is transmitted in particulate form. He also speculates that Maxwell's theory may hold only as a statistical limit, and is the first to realize that light quanta may be correlated in ways that throw doubt on their separability.
1906–1909: Rutherford, together with Ernest Marsden and Hans Geiger, discovers the atomic nucleus by means of scattering experiments.
1907: Einstein publishes his first papers on the quantum theory of specific heats (thus founding solid state physics) and finds an explanation for the breakdown of the nineteenth-century Dulong-Petit law of specific heats.
1908: Hermann Minkowski generalizes Einstein's special relativity into a coherent geometric picture of four-dimensional spacetime (often called Minkowski space).
1909: Einstein argues that a complete theory of light must involve both wave and particle concepts.
1911: First Solvay Conference; the quantum becomes much more widely known to physicists. Kamerlingh Onnes discovers superconductivity.
1913: Niels Bohr publishes the first version of his quantum theory of the atom. He assumes that spectral lines are due to quantum jumps between stationary states of the electrons orbiting Rutherford's positively charged nucleus, and derives the Rydberg constant. H. G. Moseley demonstrates that atomic number is simply the positive charge of the nucleus, and predicts the existence of several new elements.
1914: James Franck and Hertz perform an experiment showing that light is absorbed by atoms in discrete energy steps; this is a further confirmation of the quantum principle.
1914–1924: The Bohr theory is elaborated under the impetus of Arnold Sommerfeld and with the collaboration of many other physicists into the Old Quantum Theory. This approach enjoys some success in calculating spectral properties of simpler atoms, but by 1924 it is clear that it has outlived its usefulness.
1916: Einstein publishes his general theory of relativity, which describes gravitation as a manifestation of the curvature of space and time.
1916–1917: Einstein develops the quantum statistical mechanics of light quanta, arguing for the quantization of light momentum as well as energy and introducing the concepts of spontaneous and induced emission of light, the latter of which would become the basis of laser physics.
1920: Bohr announces the Correspondence Principle, which states that quantum systems can be expected to approximate classical systems in certain limits, such as the limit of large (orbital) quantum numbers. Although the Correspondence Principle is not rigorous, it is a useful guide to model construction.
1922: Discovery of the electron's intrinsic magnetic moment by O. Stern and Walther Gerlach.
The "Bohr Festival" (an informal physics conference) is held in Göttingen, at which Bohr and Heisenberg meet and begin their momentous interactions.
1923: Discovery by Arthur H. Compton of the Compton effect, which is the scattering of gamma ray quanta off electrons. Compton showed that the resulting shift in the wavelength can be explained neatly using relativistic rules for the conservation of momentum and energy, so long as it is assumed that light quanta interact as if they are discrete particles with momentum and energy. The Compton effect is a decisive confirmation of Einstein's view that light quanta behave as particles in their interactions with other forms of matter.
1923–1924: Louis de Broglie generalizes the wave-particle duality by suggesting that particles have wave-like properties just as light waves have particle-like properties. He derives laws relating the energy, momentum, wavelength, and wavenumber of "matter waves," and predicts that particles such as electrons should exhibit wave-like interference and diffraction effects. These predictions were confirmed in the late 1920s.
1924: Bohr, together with H. Kramers and J. Slater, publishes an abortive but influential theory in which the authors argued (incorrectly) that energy is conserved only on average in emission and absorption events; this theory also includes a qualitative notion of the field of virtual oscillators, which is a precursor of quantum electrodynamics. Wolfgang Pauli announces his Exclusion Principle, according to which no two electrons can have precisely the same set of quantum numbers. This gives an immediate explanation for many facts about the structure of the Periodic Table, so long as it is allowed that the electron has an extra quantum number (later identified as spin).
1924–1925: S. N. Bose shows that Planck's Law can be derived from a new statistical law that assumes that light quanta are indistinguishable and inclined probabilistically to aggregate in the same energy state; Einstein generalizes Bose's methods to gases, and predicts the existence of Bose-Einstein condensates.
1925: George Uhlenbeck and Samuel Goudsmit present the first theory of electron spin. In June Heisenberg discovers matrix mechanics, although he does not yet realize that he is working with matrices. In December Paul Dirac grasps that noncommutativity is the most novel feature of Heisenberg's approach, and independently discovers most of the features of matrix mechanics that would be worked out by the Göttingen school in the next few months.
1926: Appearance of modern nonrelativistic quantum mechanics: the matrix mechanics of Heisenberg and the Göttingen school is developed, culminating in the "three-man work" of Heisenberg, Pascual Jordan, and Max Born; Schrödinger elaborates de Broglie's wave theory into a complete theory of wave mechanics; Dirac develops his own version of quantum mechanics based on the noncommutative algebra of linear operators. Pauli solves the hydrogen atom using matrix mechanics. Max Born argues that the wave function (or more precisely its square) is most naturally interpreted as a measure of probability. Schrödinger demonstrates the mathematical equivalence of matrix and wave mechanics.
1926–1932: Von Neumann develops his Hilbert Space version of nonrelativistic quantum mechanics.
1927: The Uncertainty Principle is stated by Heisenberg.
Bohr announces his Principle of Complementarity, according to which causal and spacetime accounts of quantum phenomena are complementary, meaning that they are inconsistent with each other but both are required in certain experimental contexts. Einstein creates a causal theory of the quantum wavefield, but refuses to publish it because it is not separable, which he is convinced is a mistake. Enrico Fermi and Dirac clarify the distinction between Bose-Einstein statistics (according to which particles tend to occupy the same energy states) and Fermi-Dirac statistics (according to which particles obey the Exclusion Principle). Dirac shows that photons obey Planck's Law because they are Bose-Einstein particles (bosons), while electrons are Fermi-Dirac particles (fermions). In October the Fifth Solvay Conference is held; de Broglie presents his first causal theory of quantum mechanics, but the Copenhagen Interpretation holds sway; foundational debates continue between Bohr and Einstein.
1928: Dirac presents his relativistic wave equation for the electron. First papers on quantum electrodynamics (QED) by Dirac, Jordan, and others. Gamow describes alpha decay in terms of barrier penetration by quantum tunneling. Heisenberg explains ferromagnetism by means of quantum mechanics.
1929: Houtermans and Atkinson propose nuclear fusion as the means by which stars release energy.
1930: Pauli predicts the existence of the neutrino, although the name is due to Fermi. Dirac reluctantly predicts the existence of a positive electron, based on his hole theory.
1931: Ruska creates a prototype electron microscope.
1932: Discovery of the positron in cosmic ray showers by Carl Anderson. The existence of the long-suspected neutron is confirmed by James Chadwick. Heisenberg creates the first theory of nuclear structure that includes the neutron.
1933: Szilard conceives of the nuclear chain reaction.
1934: Hideki Yukawa discovers a theory of the strong nuclear force that binds protons and neutrons, and predicts the existence of a meson, a new particle that would transmit the strong force. Ida Noddack argues that nuclear fragments found in neutron-nuclei studies done by Fermi and others are due to the splitting of the uranium nucleus. Fermi publishes a theory of beta decay, in which he introduces the concept of the weak force.
1935: Schrödinger describes his "cat" paradox and coins the term "entanglement." The paper of Einstein, Podolsky, and Rosen (EPR) questions the completeness of quantum mechanics and inadvertently highlights the importance of entanglement. Bohr replies to EPR, arguing that they had unreasonable expectations regarding the completeness of quantum mechanics.
1936: Quantum logic created by Garrett Birkhoff and John von Neumann.
1937: Carl Anderson discovers a particle eventually called the muon, identical to the electron except heavier and unstable, in cosmic rays; it is nearly 10 years before it is clear that the muon is not Yukawa's meson but a new and totally unexpected particle. Kapitsa and others discover superfluidity in Helium-4.
1938: Rabi discovers nuclear magnetic resonance.
1939: Discovery of nuclear fission by Hahn, Meitner, Strassmann, and Frisch.
1942: First nuclear chain reaction at the University of Chicago.
1943: Tomonaga finds a way to renormalize quantum electrodynamics, but his work is not communicated to the West until after the end of World War II.
1945: In July, the first atomic bomb is tested successfully at Alamogordo, New Mexico. On August 6 and 9 the Japanese cities of Hiroshima and Nagasaki are obliterated by atomic bombs; World War II comes to an end.
1947: The pion is discovered by Cecil Powell. It turns out to be the quantum of the nuclear force field predicted by Yukawa in 1935.
1948: Renormalization theory is created by Schwinger and Feynman; experimental confirmation of quantum electrodynamics to high accuracy. The first transistor is invented at Bell Labs by Bardeen, Shockley, and Brattain.
1951: Bohm describes a performable version of the EPR thought experiment in terms of spin measurements.
1952: Bohm publishes his causal version of quantum theory.
1953: The creation of the first masers (1953) and lasers (1957) confirms Bose and Einstein's statistics for light quanta.
1954: Gauge theory published by Yang and Mills.
1956–1957: C. S. Wu and others demonstrate the violation of parity (invariance of physical law under mirror reflection) in certain beta decays; recognition of CPT as a fundamental symmetry.
1957: Hugh Everett publishes the "many worlds" interpretation of quantum mechanics.
1961: Landauer argues that the erasure of a bit of information produces a minimum amount of waste heat (Landauer's Principle).
1962: SLAC comes on line, and deep inelastic scattering experiments show that nucleons have internal structure. Quark model developed by Gell-Mann and others; experimental confirmation with observation of the omega-minus hadron.
1964: Higgs predicts the existence of a massive boson, which should account for particle masses in the Standard Model; it so far remains undetected. Publication of Bell's Theorem, which shows that predicted quantum correlations are inconsistent with locality.
1968: The Veneziano scattering formula, which would lead eventually to string theory.
1970s: Unification of electromagnetism and the weak force by Weinberg and Salam.
1980: The first published design of a quantum computer, by Paul Benioff.
1980–1981: Confirmation of Bell's Theorem by Alain Aspect and others.
1981, 1984: Influential papers by Richard Feynman on quantum computation. He shows that no classical computer could simulate a quantum mechanical system in the time it takes the quantum system to evolve naturally according to quantum mechanics.
1983: Discovery of the W and Z intermediate vector bosons confirms the prediction of their existence by the Standard Theory.
1983–1986: The Grand Unified Theory predicts the decay of the proton, but highly sensitive experiments fail to detect proton decay.
1984: The first string theory revolution, as Schwarz and Green show the mathematical consistency of string theory; this leads to an explosion of interest in the theory.
1985: First papers on quantum computation by Deutsch, showing the possibility of a universal quantum Turing machine.
1993: Quantum teleportation predicted by Bennett, and observed in 1998 by Zeilinger and others.
1994: Peter Shor discovers a theoretical quantum computational algorithm for factoring large numbers.
1998: Astronomers discover that the expansion of the universe is accelerating. This is still poorly understood, except that it is almost certainly caused by some sort of dark energy that is quantum mechanical in nature.
2008+: The Large Hadron Collider at CERN comes on line and either confirms or does not confirm the existence of the Higgs "God particle."

Glossary

action: A fundamental quantity in physics that has units of energy times time, or (equivalently) angular momentum.
amplitude: A complex number (often a waveform) that is associated (for unclear reasons) with a transition from an initial to a final state.
angular momentum: Momentum due to rotation; in quantum mechanics angular momentum is quantized (meaning that it is observed to have a discrete spectrum).
barrier penetration: The ability of quantum objects such as the alpha particle to tunnel through a potential barrier in a way that would be classically forbidden.
Bell Inequality: A mathematical inequality between correlation coefficients that relate measurements taken on distant particles belonging to an entangled state. Such inequalities express the (generally false) postulate of local realism, which says that all of the properties of each particle in an entangled state are local to the particle.
blackbody: Any object that absorbs without reflection any electromagnetic energy that falls upon it.
blackbody radiation: In order to be in thermal equilibrium with its surroundings, a blackbody must radiate energy with a spectral distribution that is a function only of its temperature. Planck found the correct shape of the curve, and showed that it could be explained if radiant energy is emitted or absorbed only in discrete quanta by the "oscillators" in the walls of the cavity.
black hole: A gravitationally collapsed object such as a star that has fallen inside its gravitational radius; classically, no light, matter, or information can escape from a black hole.
Born Rule: The fundamental rule of quantum mechanics (first stated explicitly by Max Born in 1926) that the probability of a transition (such as an electron jump in an atomic orbital) is given by the square (modulus squared) of the amplitude (a complex number) for the transition.
Bose-Einstein condensate: A gas of bosons that at low enough temperature undergoes a phase transition into a pure quantum state; more generally, any system of bosons that settles into a pure or nearly pure state due to the Bose-Einstein "inclusion" principle.
boson: Any particle (such as the photon) that obeys Bose-Einstein statistics. Such particles are opposite to fermions in that they tend to congregate in the same quantum state.
causal interpretations of quantum mechanics: Interpretations of quantum mechanics that treat both particles and the wavefield as actually existent objects; many versions of the causal interpretation account for quantum correlations by means of nonlocal dynamics, often mediated by the quantum potential.
cavity radiation: Another term for blackbody radiation.
commutation relations: A formula expressing whether or not two observables (such as spin components) commute.
conjugate variables or canonically conjugate variables: Observables such as position and momentum that fail to commute; all observables come in conjugate pairs.
Copenhagen Interpretation of Quantum Mechanics: The view of quantum mechanics pioneered by Niels Bohr, based on the Principle of Complementarity. It says that it is not meaningful to speak of what is going on at the quantum level independently of a definite experimental context.
Correspondence Principle: A heuristic guideline stated by Niels Bohr for the construction of quantum mechanical models; it states that quantum systems will approximate to classical systems in suitable limits, usually the limits of large quantum numbers or a negligible Planck's constant.
cross section: The probability that particles will interact when they collide.
de Broglie waves: Just as light waves have particles (photons) associated with them, particles of matter such as electrons have waves, whose properties were first described by de Broglie.
Dirac's Equation: A relativistic (covariant) version of Schrödinger's Equation, first written by Dirac in 1928. It is valid for electrons and any other spin-1/2 fermion (such as quarks), and it represents the states of the particles it describes in terms of spinors.
divergence: A mathematical function diverges at a point when its value blows up to infinity at that point; the divergences in the calculated values of many electromagnetic quantities were a major challenge to quantum electrodynamics.
ebit: An entangled qubit (sometimes in the quantum computing literature also called a Bell state, after J. S. Bell) representing the state of more than one entangled particle.
eigenstate or eigenvector: A state vector in which a quantum system gives a single result when a certain observable is measured on that system.
eigenvalue: A possible result of the measurement of a quantum mechanical observable.
electron: The lightest lepton, having a rest mass of about .5 MeV, electric charge of –1, and spin +/– 1/2. The electron was the first elementary particle to be identified.
elementary particle: The smallest identifiable units of matter, which must be described using the rules of quantum theory. For detailed particle terminology, see Chapter 9.
entanglement: The tendency of particles that have interacted to remain statistically correlated after they have separated to a degree that would be impossible if the particles were physically independent.
entropy: The change in entropy was defined by Clausius as the ratio between the change in energy and temperature. In statistical mechanics, entropy is a measure of disorder masked by the apparent order of a macrostate, and is given by the logarithm of the number of microstates compatible with a given macrostate.
Exclusion Principle: In the form originally prescribed by Pauli, this stated that no two electrons in the atom can have the same quantum numbers; more generally, it states that no two particles that obey Fermi-Dirac statistics can be in the same quantum state.
expectation value: The quantum mechanical long-run average value of an observable.
fermion: Any particle obeying Fermi-Dirac statistics, which implies that no two such particles can be in exactly the same quantum state (Pauli Exclusion Principle).
general covariance: A fundamental building block of Einstein's general theory of relativity, according to which the laws of physics are the same under any mathematically possible coordinate transformation.
Hamiltonian: In classical physics, the Hamiltonian is the energy of a physical system (usually the sum of its kinetic and potential energies); in quantum physics the Hamiltonian is an operator whose eigenvalues are the possible energies of a system. A lot of the skill in applying quantum mechanics to concrete problems lies in finding the correct form of the Hamiltonian; Schrödinger's Equation can then be solved to calculate observable quantities such as energies, probabilities, or scattering cross-sections.
harmonic oscillator: A system in which a mass on a spring experiences a restoring force that is proportional to its distance from the equilibrium position. There are both classical and quantum mechanical harmonic oscillators, and many types of systems in quantum physics can be modeled as collections of oscillators.
Hawking Effect: A process predicted by Stephen Hawking, in which quantum effects near the event horizon of a black hole will cause the radiation of energy at a thermal temperature inversely proportional to the mass of the black hole.
high energy physics: The experimental and theoretical study of elementary particles.
Hilbert space: The mathematical space in which state vectors live; technically, it is a vector space over the complex numbers.
interference: The overlapping of wave functions that occurs because of phase differences; this leads to non-zero probabilities for classically forbidden processes such as violations of the Bell Inequalities.
Lagrangian: Classically, the Lagrangian is any function that represents the difference between kinetic and potential energy of a system; in statistical mechanics, it is the free energy, the energy available in the system to do work. The equations of motion (generalizations of Newton's Laws) for a system can be derived from the Lagrangian for the system. In quantum field theory, the dynamics of various kinds of fields can be derived from their Lagrangians.
laser: Acronym for Light Amplification by Stimulated Emission of Radiation. Stimulated emission is a phenomenon predicted by Einstein in 1917 and is a manifestation of Bose-Einstein statistics, according to which photons tend to crowd into states that are already occupied by other photons. Lasers are devices that produce beams of coherent light, meaning light that is in phase and at the same frequency throughout.
Lorentz covariance: A system of physical laws is Lorentz covariant if it takes the same form under Lorentz transformations of its coordinates; these are the transformations used in Einstein's special relativity, which are based on the assumption that the speed of light in vacuum is the same (invariant) for all inertial observers.
M-Theory: The hypothetical, yet-to-be-described metatheory proposed by Edward Witten, which he hopes will unify and explain string theory and thus, in effect, be a Theory of Everything.
Many-Worlds, Multiverse, or Relative State interpretation of quantum mechanics: An attempted solution of the measurement problem by Hugh Everett III according to which the wave function never collapses, but keeps on branching ad infinitum as systems interact with each other.
measurement problem: Loosely, this is the problem of understanding what happens when a quantum system is measured by a macroscopic measuring device; more precisely, it is the problem of understanding how superpositions are translated into definite outcomes by a measurement.
moment, magnetic: A measure of magnetic field strength. Particles or combinations of particles with net spin will have a magnetic moment, meaning that they will act like tiny magnets.
neutron: An electrically neutral, spin-1/2 fermion discovered by James Chadwick in 1932; protons and neutrons comprise the nucleus.
noncommutativity: Quantum mechanical observables fail to commute if the value of their product depends on the order in which the observables act on the wave function of the system. Noncommuting observables (such as position and momentum) come in pairs, called canonical conjugates, and their behavior is defined by a commutation relation that gives the value of their commutator, the operator that is the difference between their two possible products.
nonlocality (controllable): Controllable nonlocality (also called Parameter Dependence) would be the ability to instantly control remote events by local manipulations of entangled states—superluminal signaling, in other words. It is generally (although not without controversy) believed to be impossible because of the no-signaling theorems.
nonlocality (uncontrollable): Uncontrollable nonlocality (also called outcome dependence) is the dependence of distant measurement outcomes in entangled states that leads to the violation of Bell's Inequalities. It is a sign of the failure of common cause or local hidden variable explanations of quantum correlations.
nucleon: Protons and neutrons are collectively called nucleons, since they make up the atomic nucleus.
nucleus: The central dense, positively charged core of the atom, discovered by Rutherford and coworkers.
observable: A linear operator with real eigenvalues acting on state vectors in a Hilbert Space. Any observable is presumed to represent a possible measurement operation, with its eigenvalues being possible measurement results.
Old Quantum Theory: The early quantum theory of the period 1913–1924, based on Bohr's atomic theory, and characterized by the opportunistic mixture of classical and quantum methods.
path integral: A key part of Feynman's interpretation of quantum mechanics, in which the total amplitude for any quantum mechanical process is found by taking an integral over all possible spacetime trajectories for that process.
peaceful coexistence: Abner Shimony's ironic term describing the relation between special relativity and quantum mechanics. Shimony argued that peaceful coexistence is guaranteed by the no-signaling theorems, but that the underlying "ideologies" of quantum mechanics and relativity are different, since the former is nonlocal and the latter is local.
phase: Loosely speaking, the phase (expressed as an angle) of a wave or other periodic process is where the system is in its cycle; the phase of a wave function (which generally takes the form of a complex exponential) is the exponent of the exponential.
photon: The massless, spin-1 particle that is the quantum of the electromagnetic field—the light particle of Einstein's 1905 theory.
Planck's constant: The size of the fundamental quantum (indivisible unit) of action, discovered by Max Planck in 1900. Its modern value is 6.626 × 10⁻²⁷ erg seconds.
principle of relativity: The statement that the laws of physics are the same for all frames of reference regardless of their states of motion.
probability amplitude: In the formalism of quantum theory, a probability amplitude is a scalar product of a ket (representing a preparation state) and a bra (dual to a ket and representing an outcome state). The squared modulus (a real-valued number) of a probability amplitude is a probability.
quantization: The representation of a physical quantity in the form of a series of discrete (often integral or half-integral) quantities.
quantum chromodynamics: The quantum field theory in the Standard Model describing the interactions between quarks and gluons.
quantum computer: A computer that would utilize quantum interference and entanglement to greatly speed up calculations.
quantum cryptography: The use of quantum correlations in entangled states to encode information; theoretically it is the most secure form of encryption known.
quantum electrodynamics: The quantum theory of the electromagnetic field.
quantum field theory: A generalization of quantum electrodynamics that can be applied to other sorts of fields such as the "color" interactions between quarks. A distinguishing feature of quantum fields is that they allow for the creation and annihilation of particles.
quantum gravity: The "Holy Grail" of modern theoretical physics, a quantum theory of space and time that would be a natural quantum mechanical extension of Einstein's general relativity, and (it is hoped!) will explain all properties of matter in the bargain.
quantum information theory: A quantum mechanical generalization of classical information theory that allows for interference between information states.
quantum jump: First postulated by Bohr, this is the transition of an electron from one orbital to another, coupled with the emission or absorption of a quantum of radiant energy; more generally, a discontinuous transition from one quantum state to another.
quantum logic: An attempt to rewrite quantum theory as a logic of propositions about possible measurement operations. It can be distinguished from classical (Boolean) logic by the failure of distributivity for propositions about noncommuting operations.
quantum potential: Mysterious nonlocal energy studied by Bohm and de Broglie as a way of explaining quantum correlations. It can be derived from Schrödinger's equation, and is a function of phase differences.
quantum teleportation: A process in which an entangled pair of particles is used as a channel or carrier of a quantum state from one particle to its distant entangled partner. It is similar to quantum cryptography in that the teleported state cannot be "read" without further information sent by classical means.
qubit: The state vector for a single particle, thought of as a quantum bit of information.
relativity, general theory: A generalization of special relativity in which the laws of physics are the same for all reference frames regardless of their state of relative motion; as shown by Einstein, general relativity contains within itself a theory of gravitation that supersedes Newton's theory for strong fields.
relativity, special theory: A comprehensive framework for physical laws based on the assumption that the speed of light is invariant, and that the laws of physics are the same for all frames of reference moving at constant relative velocity.
renormalization: The process in which infinities are removed in a systematic way from a quantum field theory.
scattering: The process in which one or more particles are made to collide with other particles. Sometimes they scatter elastically, meaning that they just bounce off, while other times they scatter inelastically, meaning that either the target particles or the probe particles or both break down into fragments. From the time of Rutherford, scattering has been one of the most useful tools of particle physics.
Schrödinger's cat: The unwilling protagonist of a thought experiment, outlined by Schrödinger, that demonstrates the apparent contradictions between the quantum and classical views of physics.
Schrödinger's equation: The fundamental dynamical equation of quantum physics. It describes the evolution in time of the wave function of a system as determined by the system Hamiltonian. The equation as first written by Schrödinger is nonrelativistic, though still a very useful approximation for many purposes in physics and chemistry. It can be generalized in various ways.
separability: Two physical systems are separable if they can be fully described in isolation, and if the evolution of each can be considered in isolation from the evolution of the other. Separability is a more general concept than locality. Nonseparability is the tendency of elementary particles to interact or to be correlated in ways that violate classical expectations of statistical independence.
spectrum: The set of possible eigenvalues of a quantum mechanical observable; this is a generalization of the concept of the spectrum of light emitted by electronic transitions in atoms.
spinor: Vector-like object in a complex space used to represent the states of certain particles, especially spin-1/2 fermions such as the electron. In 1928 Dirac used four-component spinors to describe electrons.
spin-statistics theorem: The rule that particles with half-integral spin (such as electrons and protons) are fermions, and particles with integral spin (such as pions and photons) are bosons.
"spooky action at a distance": An ironic phrase of Einstein's, which refers to the mysterious way in which quantum particles seem to be able to influence each other at arbitrary distances.
Standard Model: The model of particles and their fields based on quantum chromodynamics, which settled into its present form in the early 1980s; its predictions are well confirmed and despite the need for numerous empirical parameters it affords a fairly good explanation of the properties of all elementary particles observed so far.
state vector: A vector in Hilbert Space with complex-valued components, which represents the structure of the preparation of a physical system. State vectors can be manipulated in various ways to calculate probabilities, expectation values, and scattering cross-sections. Why the state vector can so successfully encode information about physical systems remains a matter of debate.
stationary state: As postulated by Bohr, an electron is in a stationary state when it is orbiting the nucleus but not emitting energy.
statistical mechanics: The branch of physics pioneered by Boltzmann and Maxwell, and further developed into a quantum mechanical form by Einstein and many others, which explains thermodynamics and other large-scale behavior of matter in terms of the statistics of extremely large numbers of particles.
superconductor: A material that conducts electricity with zero resistance, due to the formation of Cooper pairs of electrons that obey Bose-Einstein statistics.
superfluid: A fluid in which most or all of the particles have the same quantum wavefunction due to Bose-Einstein statistics; such fluids exhibit remarkable nonclassical properties such as zero viscosity. All superfluids known can form only at cryogenic temperatures.
superposition: A linear combination of state vectors or wave functions, leading to constructive or destructive interference such as happens to waves in a ripple tank.
superstring or string: A quantized one-dimensional vibrating object like an elastic string that is hypothesized to account for the structure of elementary particles.
tachyon: A hypothetical particle traveling faster than light. Some physicists argue that tachyons are permitted by relativity theory, but their existence has not been confirmed observationally.
Thermodynamics: The theory of the transformations of heat, largely developed in the nineteenth century.
Thermodynamics, First Law: Energy conservation: energy may be transformed in many ways, but never created from nothing or destroyed.
Thermodynamics, Second Law: For closed systems, the statement that entropy always increases to a maximum (except for localized probabilistic fluctuations); for open systems, the statement that gradients exist that tend to maximize entropy locally. It can also be stated as follows: it is impossible (strictly speaking, highly improbable) to transform waste heat into useful work.
Thermodynamics, Third Law: As a system's temperature approaches absolute zero, its entropy approaches a constant value. Because of zero point energy the Third Law is not strictly correct.
Tunneling: See barrier penetration.
Turing machine (classical): A hypothetical digital computer first described by Alan M. Turing in 1936. It operates by changing the state of a memory register on the basis of inputs and outputs of information according to definite rules. Turing and others showed that his machine is the most general form of digital computer.
Turing machine (quantum): A quantum mechanical version of Turing's universal computer, first described by David Deutsch in 1985. The quantum Turing Machine differs from the classical version in that there can be interference between the differing computational paths it can take; this makes possible a large speed-up in calculation power, although most authors believe that the quantum Turing Machine cannot do any types of calculations that the classical Turing Machine cannot do.
Uncertainty Principle: First set forth by Heisenberg, this rule states that canonically conjugate observables (such as position and momentum, or spin components in different directions) have uncertainties (sometimes called dispersions) whose product must always be greater than or equal to Planck's constant. The Uncertainty Relations imply that if one observable could be measured with infinite precision, its conjugate would be completely uncertain.
Unruh Effect: A process predicted by William Unruh in which a body accelerating in the vacuum will detect radiation with a Planck spectrum and at a temperature proportional to its acceleration.
vacuum polarization: The tendency of a charged particle such as an electron to attract oppositely charged particles from the vacuum surrounding the particle; this implies that the "bare" charge of the electron is not directly observed, but rather its net or physical charge due to partial charge cancellation by the virtual particles surrounding it.
virtual oscillator: From the time of Planck onward it has been found that the behavior of quantum systems can often be modeled as if they were collections of quantized harmonic oscillators (see harmonic oscillator).
wavefunction: A wavefunction is a probability amplitude for the state vector to project into what Dirac called a continuous representative—a basis of continuous observables (normally either position or momentum).
wavefunction, collapse of: The process in the von Neumann formulation of quantum mechanics in which the state vector reduces, or projects, into a subspace (often just a pure state) when the system is measured. There remains considerable controversy as to whether collapse of the wavefunction represents a real physical process or is merely a mathematical approximation to a superluminal statistical reweighting of outcomes.
wavepacket: A wavefunction in which the phase factors of the various components interfere in such a way that the wavefunction takes the form of a fairly compact "lump" or bundle; wavepackets can represent the motion of particles.
zero point energy: A minimum energy possessed by all quantum systems; because of zero point energy, it is strictly speaking impossible to reach absolute zero.

Further Reading

Primary Sources

Primary sources are papers in professional journals that set forth original results.
Of the thousands of scientific papers that have been published on quantum theory and related problems, the few listed here are either among the major turning points in the history of modern physics, or represent recent work that seems (to this author) to be especially interesting or potentially important.

Historically Important Papers

Most of the papers mentioned here are available in English in the anthologies listed below.

Aspect, A., P. Grangier, and G. Roger. "Experimental Tests of Realistic Local Theories via Bell's Theorem." Physical Review Letters 47 (1981): 460–67. The first confirmation of Bell's Theorem that is generally felt to be decisive.
Bell, John S. "On the Einstein-Podolsky-Rosen Paradox." In Quantum Theory and Measurement, ed. J. A. Wheeler and W. H. Zurek, pp. 403–8. Princeton, NJ: Princeton University Press, 1983. First publication in Physics 1 (1964): 195–200. Bell's first statement of his theorem that local realism conflicts with the predictions of quantum mechanics.
Birkhoff, G., and J. von Neumann. "The Logic of Quantum Mechanics." Annals of Mathematics 37 (1936): 823–43. The first presentation of quantum logic.
Bohm, David. "A Suggested Interpretation of the Quantum Theory in Terms of 'Hidden' Variables." In Quantum Theory and Measurement, ed. J. A. Wheeler and W. H. Zurek, pp. 369–96. Princeton, NJ: Princeton University Press, 1983. First publication in Physical Review 85, no. 2 (1952). The first presentation of Bohm's causal interpretation of nonrelativistic quantum mechanics, based on the quantum potential and guidance equation.
Bohr, Niels. "On the Constitution of Atoms and Molecules." Philosophical Magazine 26 (1913): 1–15. The first presentation of Bohr's atomic hypothesis.
Bohr, Niels, H. A. Kramers, and J. C. Slater. "On the Quantum Theory of Radiation." In Sources of Quantum Mechanics, ed. B. L. van der Waerden, pp. 159–76. New York: Dover Publications, 1967. First publication in Philosophical Magazine 47 (1924): 785–802. Bohr and his collaborators demonstrate that a scientific hypothesis does not have to be right in order to be instructive.
Born, Max. "On the Quantum Mechanics of Collisions." In Quantum Theory and Measurement, ed. J. A. Wheeler and W. H. Zurek, pp. 52–55. Princeton, NJ: Princeton University Press, 1983. First publication as "Zur Quantenmechanik der Stossvorgänge." Zeitschrift für Physik 37 (1926). Born states in a footnote what is now called the Born Rule, according to which probabilities are the squared modulus of the wave function.
Born, M., W. Heisenberg, and P. Jordan. "On Quantum Mechanics II." In Sources of Quantum Mechanics, ed. B. L. van der Waerden, pp. 321–85. New York: Dover Publications, 1967. First publication as "Zur Quantenmechanik II." Zeitschrift für Physik 35 (1926): 557–615. The "three-man-work" which sets forth the first fully worked out version of matrix mechanics.
Dirac, P.A.M. "The Fundamental Equations of Quantum Mechanics." In Sources of Quantum Mechanics, ed. B. L. van der Waerden, pp. 307–20. New York: Dover Publications, 1967. First publication in Proceedings of the Royal Society A 109 (1926): 642–53. Dirac's first paper on quantum mechanics, in which he generalizes Heisenberg's approach.
———. "The Quantum Theory of the Electron." Proceedings of the Royal Society of London 117 (1928): 610–24. Dirac presents his relativistic wave equation for the electron, and shows that it admits of both negatively and positively charged particles.
Einstein, Albert. "Does the Inertia of a Body Depend upon Its Energy Content?" In Einstein's Miraculous Year: Five Papers that Changed the Face of Physics, ed. John Stachel, pp. 161–64. Princeton, NJ and Oxford: Princeton University Press, 2005. First publication as "Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?" Annalen der Physik 18 (1905): 639–41. The short paper in which Einstein announces the equivalence of mass and energy.
———. "On a Heuristic Point of View Concerning the Production and Transformation of Light." In Einstein's Miraculous Year: Five Papers that Changed the Face of Physics, ed. John Stachel, pp. 177–98. Princeton, NJ and Oxford: Princeton University Press, 2005. First publication as "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt." Annalen der Physik 17 (1905): 132–45. Einstein's first statement of his light quantum hypothesis.
———. "On the Electrodynamics of Moving Bodies." In The Principle of Relativity, ed. A. Einstein, H. A. Lorentz, H. Minkowski, and H. Weyl, trans. W. Perrett and G. B. Jeffery, pp. 37–65. New York: Dover Books (reprint). First publication as "Zur Elektrodynamik bewegter Körper." Annalen der Physik 17 (1905): 891–921. Einstein's foundational paper on the special theory of relativity.
———. "On the Motion of Small Particles Suspended in Liquids at Rest Required by the Molecular-Kinetic Theory of Heat." In Einstein's Miraculous Year: Five Papers that Changed the Face of Physics, ed. John Stachel, pp. 85–98. Princeton, NJ and Oxford: Princeton University Press, 2005. First publication as "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen." Annalen der Physik 17 (1905): 549–60. Einstein demonstrates that if liquids are composed of discrete molecules then they should experience statistical fluctuations that could account for Brownian motion.
———. "On the Quantum Theory of Radiation." In Sources of Quantum Mechanics, ed. B. L. van der Waerden, pp. 63–77. New York: Dover Publications, 1967. First publication as "Zur Quantentheorie der Strahlung." Physikalische Gesellschaft Zürich 18 (1917): 47–62. Einstein further develops the light quantum hypothesis, showing that such quanta must have particle-like momenta as well as energy.
Einstein, Albert, Boris Podolsky, and Nathan Rosen. "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" In Quantum Theory and Measurement, ed. J. A. Wheeler and W. H. Zurek, pp. 138–41. Princeton, NJ: Princeton University Press, 1983. First publication in Physical Review 47 (1935): 777–80. The famous EPR argument for the incompleteness of quantum mechanics.
Everett, Hugh. "Relative State Formulation of Quantum Mechanics." Reviews of Modern Physics 29 (1957): 454–62. The first statement of Everett's "Relative State" or "Many-Worlds" interpretation of quantum physics.
Feynman, R. P. "Spacetime Approach to Quantum Electrodynamics." In Selected Papers on Quantum Electrodynamics, ed. Julian Schwinger, pp. 236–56. New York: Dover, 1958. First publication in Physical Review 76 (1949): 769–89. A professional-level review of Feynman's diagrammatic method of doing quantum electrodynamics.
Heisenberg, Werner. "Quantum-Theoretical Re-Interpretation of Kinematic and Mechanical Relations." In Sources of Quantum Mechanics, ed. B. L. van der Waerden, pp. 261–76. New York: Dover Publications, 1967.
First publication as "Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen." Zeitschrift für Physik 33 (1925): 879–93. The first paper in which modern quantum mechanics appears.
Kochen, S., and E. P. Specker. "The Problem of Hidden Variables in Quantum Mechanics." Journal of Mathematics and Mechanics 17 (1967): 59–87. A powerful no-go theorem that restricts the possibility of Boolean accounts of quantum predictions.
Landauer, Rolf. "Irreversibility and Heat Generation in the Computing Process." IBM Journal of Research and Development 32 (1961): 183–91. The influential article in which Landauer showed that the erasure of information in a computation must produce a minimum amount of waste heat.
Planck, Max. "On an Improvement of Wien's Equation for the Spectrum." In The Old Quantum Theory, ed. D. ter Haar, pp. 79–81. Oxford: Pergamon, 1967. Trans. by D. ter Haar of "Über eine Verbesserung der Wien'schen Spektralgleichung." Verhandlungen der Deutschen Physikalischen Gesellschaft 2 (1900): 202–4. Planck's first statement of the equation that he had guessed for the blackbody radiation distribution law.
———. "On the Theory of the Energy Distribution Law of the Normal Spectrum." In The Old Quantum Theory, ed. D. ter Haar, pp. 82–90. Oxford: Pergamon, 1967. Trans. by D. ter Haar of "Zur Theorie des Gesetzes der Energieverteilung im Normalspectrum." Verhandlungen der Deutschen Physikalischen Gesellschaft 2 (1900): 237–45. First publication of Planck's derivation of the blackbody radiation law using Boltzmann's statistical concept of entropy.
Schrödinger, E. "Quantization as an Eigenvalue Problem (Part I)." In Collected Papers on Wave Mechanics, ed. E. Schrödinger, pp. 1–12. New York: Chelsea, 1978. First publication as "Quantisierung als Eigenwertproblem." Annalen der Physik 79 (1926): 361–76. The first of the series of papers in which Schrödinger develops his wave mechanical version of quantum mechanics.
Schrödinger, E. "Discussion of Probability Relations between Separated Systems." Proceedings of the Cambridge Philosophical Society 31 (1935): 555–63; 32 (1936): 446–51. A searching analysis of correlated quantum systems, in which Schrödinger coined the term "entanglement."
Wigner, Eugene P. "Remarks on the Mind-Body Question." In Quantum Theory and Measurement, ed. J. A. Wheeler and W. H. Zurek, pp. 168–81. Princeton, NJ: Princeton University Press, 1983. First published in The Scientist Speculates, ed. I. J. Good, pp. 284–302. London: Heinemann. Wigner speculates that the collapse of the wave function is somehow caused by the conscious mind of a human observer.
Yukawa, H. "On the Interaction of Elementary Particles." In Tabibito, ed. Hideki Yukawa, pp. 209–18. Singapore: World Scientific, 1982. First publication in Proceedings of the Physico-Mathematical Society of Japan 17 (1935): 27–36. Yukawa presents his theory of the nuclear interaction between protons and neutrons, and predicts the existence of a new particle (later shown to be the pion) as a carrier of the strong force.

Some Recent Interesting Work

A good way to follow recent developments in quantum physics, particle theory, and other mathematically oriented sciences is to visit http://arXiv.org/, a preprint exchange currently operated out of Cornell University. Preprints are prepublication versions of research papers; with the advent of Web-based preprint servers, in particular arXiv.org, preprints (and the new ideas they should contain) can be distributed very rapidly.
Aharonov, Y., J. Anandan, J. Maclay, and J. Suzuki. "Model for Entangled States with Spin-Spin Interactions." Physical Review A 70 (2004): 052114. Possible (though inefficient) nonlocal communications using protective measurements.
Bub, Jeffrey. "Quantum Mechanics is about Quantum Information." http:// An exploration of the possibility that quantum information is a "new physical primitive."
Deutsch, David. "Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer." Proceedings of the Royal Society of London A 400 (1985): 97–117. Probably the first paper to explicitly introduce the concept of a universal quantum computer.
Feynman, Richard P. "Simulating Physics with Computers." International Journal of Theoretical Physics 21, no. 6/7 (1982): 467–88. Feynman observes that computers as we know them could not predict the behavior of quantum systems as quickly as quantum systems can "compute" what they are supposed to do on their own. From this it is a short step to seeing that quantum mechanical computers might be able to do computations much faster than any "classical" computer.
Heisenberg, Werner. "The Nature of Elementary Particles." Physics Today 29, no. 3 (1976): 32–39. Heisenberg's last word on particle physics, in which he calls for a renewed search for the dynamical laws governing particle structure.
Jordan, Thomas F. "Quantum Correlations Do Not Transmit Signals." Physics Letters 94A, no. 6/7 (1983): 264. A succinct, orthodox explanation of why there is no "Bell telephone."
Mermin, N. David. "Hidden Variables and the Two Theorems of John Bell." Reviews of Modern Physics 65, no. 3 (1993): 803–15. A detailed, professional-level, but exceptionally clear review of the relationships between Bell's work, the Kochen-Specker Theorem, and "hidden" variable (i.e., Boolean) interpretations of quantum mechanics.
Peacock, Kent A., and Brian S. Hepburn. "Begging the Signalling Question: Quantum Signalling and the Dynamics of Multiparticle Systems." (1999). A muck-raking critique of the conventional "no-signaling" arguments.
Pitowsky, Itamar. "George Boole's 'Conditions of Possible Experience' and the Quantum Puzzle." British Journal for the Philosophy of Science 45 (1994). Pitowsky shows that the Bell Inequalities are examples of logical consistency conditions enunciated by George Boole in 1854.

Secondary Sources

The anthologies listed here contain many of the decisive papers in the history of quantum theory and related areas of physics, and often much valuable secondary material.

Einstein, A., H. A. Lorentz, H. Minkowski, and H. Weyl. The Principle of Relativity, trans. W. Perrett and G. B. Jeffery. New York: Dover Books (reprint). This contains most of the foundational papers in the theory of relativity, and is a "must" on every physicist's bookshelf.
Schrödinger, E. Collected Papers on Wave Mechanics. New York: Chelsea, 1978. This book collects the papers from 1926 and 1927 in which Schrödinger founded wave mechanics.
Schwinger, Julian, ed. Selected Papers on Quantum Electrodynamics. New York: Dover, 1958. The main papers in the "heroic" period of quantum electrodynamics, from 1927–1949; most are in English.
Stachel, John, ed. Einstein's Miraculous Year: Five Papers that Changed the Face of Physics. Princeton and Oxford: Princeton University Press, 2005. This book contains English translations of Einstein's five great papers of 1905, together with exceptionally clear and insightful commentary by the editor.
ter Haar, D., ed. The Old Quantum Theory. Oxford: Pergamon, 1967.
This contains key papers running from the two major publications in 1900 by Planck, through to Pauli’s paper on spin, together with a very useful introductory survey by the editor. van der Waerden, B. L., ed. Sources of Quantum Mechanics. New York: Dover Publications, 1967. This book reproduces, in English translation, many of the key papers in the period 1917–1926 (not including papers on wave mechanics), together with a valuable review by the editor. Wheeler, J. A., and W. H. Zurek, eds. Quantum Theory and Measurement. Princeton, NJ: Princeton University Press, 1983. This fascinating collection contains many of the most influential papers on the foundations and interpretation of quantum theory, from the 1920s to 1981, together with learned commentary by the editors. Historical Studies Here are a few academic papers on the history of physics that are especially influential or insightful. Beller, Mara. “Matrix Theory before Schrödinger: Philosophy, Problems, Consequences.” Isis 74, no. 4 (1983): 469–91. A close study of the critical period from the first papers on matrix mechanics to Schrödinger; the author (who was one of our most distinguished historians of physics) explores the idea that Heisenberg’s unusual ability to cope with contradictions may have been a “source of his astounding Forman, Paul. “Weimar Culture, Causality, and Quantum Theory: Adaptation by German Mathematicians and Physicists to a Hostile Environment.” Historical Studies in the Physical Sciences 3 (1971): 1–115. This states Forman’s controversial thesis that the emphasis on uncertainty and acausality in the quantum physics of the 1920s was a reflection of post-World War I anomie in Europe. Further Reading Howard, Don. “Einstein on Locality and Separability.” Studies in History and Philosophy of Science 16 (1985): 171–201. This is a clear and detailed study of the historical context in which Einstein evolved his statement of the “Separation Principle.” ———. “Revisiting the Einstein-Bohr Dialogue.” (2005). http://www.nd.edu/ A recent study of the importance of entanglement in the debates between Bohr and Einstein. Rovelli, Carlo. “Notes for a brief history of quantum gravity.” (2001). http:// A very accessible overview of main trends in quantum gravity research from the 1930s to the present, by a current key player in the field. Biographies and Autobiographies There are, by now, substantial biographies of most of the influential physicists of the twentieth century. I list a few here that I have found to be especially useful or interesting. Cassidy, David C. Uncertainty: The Life and Science of Werner Heisenberg. New York: W. H. Freeman, 1992. An excellent, detailed biography of the creator of matrix mechanics. Pantheon, 1992. This is a very good telling of the colorful and sometimes tragic life of the man who many would rate the most brilliant physicist since World War II. Heisenberg, Werner. Physics and Beyond: Encounters and Conversations. New York: Harper and Row, 1971. Heisenberg’s autobiography, centered around reconstructed conversations with Einstein, Bohr, Pauli, and other founders. Heisenberg describes his preoccupation with physics as an almost religious search for the “central order.” Isaacson, Walter. Einstein: His Life and Universe. New York: Simon and Schuster, 2007. There are innumerable popular biographies of Einstein; this very detailed and clear book makes good use of the most recent scholarship on Einstein’s life and thought. James, Ioan. Remarkable Physicists from Galileo to Yukawa. 
Cambridge: Cambridge University Press, 2004. This book is “for those who would like to read something, but not too much” (as the author puts it) about the lives of the major physicists; it is very good on the period from the mid-1900s onward. Further Reading Moore, Walter John. Schrödinger: Life and Thought. Cambridge: Cambridge University Press, 1989. If Hollywood were to make a movie of the history of quantum mechanics, Schrödinger would have to be played by Johnny Depp. This is an authoritative biography of the complex, creative founder of wave Pais, Abraham. ‘Subtle is the Lord . . . ’: The Science and the Life of Albert Einstein. Oxford: Oxford University Press, 1982. This is by far the best biography of Einstein for the reader who wishes to understand the detailed development of his scientific ideas, including his huge contributions to quantum theory. Pais alternates between nontechnical biographical chapters and technical expositions of Einstein’s MA: Addison-Wesley Publishing, 1997. This is an engrossing story of the sometimes-tragic life of the brilliant American physicist who founded the “causal” interpretation of quantum mechanics and paved the way for the work of J. S. Bell. Powers, Thomas. Heisenberg’s War: The Secret History of the German Bomb. New York: Knopf, 1993. During World War II Heisenberg headed the (fortunately) abortive German atomic bomb project. This is a very clear and interesting history of that troubled period. Yukawa, Hideki. Tabibito. Singapore: World Scientific, 1982. The engaging autobiography of the Japanese physicist who predicted the existence of the pion, carrier of the strong force. Histories of Quantum and Atomic Physics Cushing, James T. Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony. Chicago and London: University of Chicago Press, The late James Cushing was an exceptionally knowledgeable historian of physics. Cushing argues that the Copenhagen Interpretation has overshadowed the causal interpretations of Bohm and de Broglie mainly for historical reasons having little to do with the scientific merits of each approach. Professionals and novices together can learn a great deal about quantum physics and its history from this exceptionally clear book, even if they remain skeptical of Cushing’s controversial thesis. Gamow, George. Thirty Years That Shook Physics: The Story of Quantum Theory. New York: Doubleday, 1966. Further Reading This is a delightful survey of the heroic years of quantum mechanics from 1900 to 1935, accessible to anyone who can tolerate a little algebra. It is illustrated by Gamow’s sketches and caricatures of physicists of the time, many of whom Gamow knew personally. Jammer, Max. The Conceptual Development of Quantum Mechanics. 2nd ed. Woodbury, NY: Tomash Publishers/American Institute of Physics, This is an authoritative and detailed survey of the history of nonrelativistic quantum mechanics from Planck to EPR; it is quite good on foundational questions. Kragh, Helge. Quantum Generations: A History of Physics in the Twentieth Century. Princeton, NJ: Princeton University Press, 1999. This is an excellent and nontechnical survey of the history of physics from the late nineteenth century onward. It has good coverage of both theory and applications. Kuhn, Thomas S. Black-Body Theory and the Quantum Discontinuity 1894– 1912. Chicago and London: University of Chicago Press, 1978. 
This is a detailed and somewhat controversial historical study of the early years of quantum radiation theory, when classical physicists (Planck in particular) had to accustom themselves to the concept of Mehra, Jagdish, and Helmut Rechenberg. 1982. The Historical Development of Quantum Theory. 6 vols. New York: Springer-Verlag, 1982–2001. This series of books is the most complete and detailed history of quantum mechanics. The authors display a deep and accurate understanding of the physics and an exhaustive knowledge of its history. Pais, Abraham. Inward Bound: Of Matter and Forces in the Physical World. Oxford: Clarendon Press, 1986. This is a detailed and authoritative history of particle physics from Bequerel to the early 1980s and the triumph of the Standard Model, written by a participant in that history. Schuster, 1988. A very useful nontechnical survey of the growth of twentieth-century physics, culminating in the development and use of atomic weapons in World War II. ger, and Tomonaga. Princeton, NJ: Princeton University Press, 1994. A detailed, technical, and authoritative history of quantum electrodynamics, centering on the development of renormalization theory in the late Further Reading For the General Reader Listed here are a few books, most recent, that present aspects of the quantum story in an especially engaging and helpful way. Most of these books contain little or no mathematics, but they will still challenge you to think! Aczel, Amir D. Entanglement: The Greatest Mystery in Physics. Vancouver, BC: Raincoast Books, 2002. This is a very engaging and up-to-date introduction to the mysteries of quantum entanglement, with nice profiles of many of the physicists, from J. S. Bell to Nicholas Gisin, who have done recent and important foundational work. Brown, Julian. The Quest for the Quantum Computer. New York: Simon and Schuster, 2000. This is a detailed but very clear and thorough introduction to the fascinating fields of quantum information theory and computation; it is accessible to anyone with high school mathematics but meaty enough to be useful to professionals as well. Cropper, William H. The Quantum Physicists and an Introduction to Their Physics. New York: Oxford University Press, 1970. This book requires some calculus and linear algebra, but it explains in an exceptionally clear way the main mathematical ideas in the development of quantum theory from Planck to Dirac. Deutsch, David. The Fabric of Reality. London: Penguin Books, 1997. This is a very clear and nontechnical exposition of many big questions in science from quantum computing to time travel, informed by Deutsch’s enthusiastic advocacy of the controversial multiverse interpretation of quantum mechanics. NJ: Princeton University Press, 1985. A lucid nontechnical exposition by the master of quantum electrodynamics. Greene, Brian. The Fabric of the Cosmos: Space, Time, and the Texture of Reality. New York: Knopf, 2004. An exceptionally clear and authoritative account of modern physics and cosmology, from an expert in string theory. Herbert, Nick. Quantum Reality: Beyond the New Physics. New York: Anchor Press/Doubleday, 1985. Although a little out of date now, this remains one of the very best nontechnical introductions to quantum mechanics and its interpretations. Johnson, George. A Shortcut through Time: The Path to the Quantum Computer. New York: Random House, 2003. 
Further Reading This is a very clear review of quantum computing at about the level of a Scientific American article; highly recommended for a quick but thoughtprovoking introduction to the subject. and the Accelerating Cosmos. Princeton, NJ: Princeton University Press, An accessible introduction to the accelerating universe, which in the past ten years has changed all our thinking about cosmology. Lindley, David. Uncertainty: Einstein, Heisenberg, Bohr, and the Struggle for the Soul of Science. New York: Doubleday, 2007. This book is a very readable, up-to-date, and nontechnical survey of the intense debates about the meaning of quantum mechanics that took place in the 1920s. Some would say that Lindley is a bit too hard on McCarthy, Will. Hacking Matter: Levitating Chairs, Quantum Mirages, and the Infinite Weirdness of Programmable Atoms. New York: Basic Books, This is a very clear and fascinating introduction to the possibilities for programmable matter, which involves the use of quantum mechanics to tailor-make electronic orbitals to suit almost any purpose. Milburn, Gerald J. Schrödinger’s Machines: The Quantum Technology Reshaping Everyday Life. New York: W. H. Freeman, 1997. This is a very clear and accessible introduction to the basics of quantum mechanics and many of its recent applications in quantum computing, nanocircuits, quantum dots, and quantum cryptography. Fascinating and controversial speculations on mathematics, physics, computers, and the mind. This book contains an exceptionally clear introduction to quantum mechanics, at a level accessible to anyone with a tiny bit of high school mathematics. Science, and What Comes Next. New York: Houghton Mifflin, 2006. Smolin’s highly controversial argument that for the past 30 years theoretical physics has made virtually no progress because it has become diverted into an unproductive obsession with string theory. Smolin, Lee. Three Roads to Quantum Gravity. New York: Basic Books, An exceptionally clear nonmathematical account of cutting-edge work on the challenge of unifying quantum mechanics and Einstein’s theory of gravitation, by one of the leading researchers in the field. Further Reading Philosophy and Interpretation of Quantum Mechanics I list only a few titles here, to whet the reader’s appetite. Bell, J. S.. Speakable and Unspeakable in Quantum Mechanics. Cambridge: Cambridge University Press, 1987. This contains most of Bell’s major papers on the foundations of quantum mechanics, and much else besides; pure gold. Bub, Jeffrey. Interpreting the Quantum World. Cambridge, New York: Cambridge University Press, 1997. This is an exceptionally detailed and thorough explanation of the “no-go” results in the foundations of quantum mechanics, such as Bell’s Theorem and the Kochen-Specker Theorem. A scrappy and controversial attempt to debunk mythology about the history of science. Footnote 19, p. 61, contains Feyerabend’s insightful remarks about history and philosophy as scientific research tools. Maudlin, Tim. Quantum Non-Locality and Relativity. 2nd ed. Oxford: Blackwell Publishers, 2002. Maudlin, a philosopher at Rutgers University, argues controversially that the violation of Bell’s Inequality implies that information in entangled quantum systems is transmitted faster than light. This book contains an exceptionally clear but elementary version of Bell’s Theorem. Shimony, Abner. “Metaphysical Problems in the Foundations of Quantum Mechanics.” International Philosophical Quarterly 18 (1978): 3–17. 
The paper in which Shimony introduces the concept of “peaceful coexistence” between quantum mechanics and relativity, founded on the nosignaling theorems. Wilbur, Ken, ed. Quantum Questions: Mystical Writings of the World’s Great Physicists. Boston: Shambhala, 2001. A number of books attempt to explore the alleged parallels between quantum physics and Eastern mysticism; this is one of the more responsible. I list here a few especially sound university-level texts that would be good places to start for the determined beginner who is willing to “drill in very hard wood” as Heisenberg put it (Powers, 1993). Bohm, David. Quantum Mechanics. Englewood Cliffs, NJ: Prentice-Hall, Further Reading In this book (now one of the classic texts in quantum theory) Bohm set forth the version of the EPR experiment that would later be used by J. S. Bell to refute locality. Bohm also presents a very clear and thorough exposition of wave mechanics and the Copenhagen Interpretation (which Bohm was soon to abandon). Brand, Siegmund, and Hans Dieter Dahmen. The Picture Book of Quantum Mechanics. 3rd ed. New York: Springer-Verlag, 1995. This book is especially useful for its graphical presentation of the structure of wave-functions and wave-packets. Cohen-Tannoudji, Claude, Bernard Diu, and Franck Laloë. Quantum Mechanics. Vol. I. Trans. S. R. Hemley, N. Ostrowsky, and D. Ostrowsky. New York: John Wiley and Sons, 1977. A sound, detailed, although somewhat ponderous introduction to nonrelativistic quantum mechanics, with an especially thorough treatment of the mathematics of entanglement. Dirac, P.A.M. The Principles of Quantum Mechanics. 4th ed. (revised). Oxford: Clarendon Press, 1958. A terse but profound and thorough presentation of quantum theory by one of its creators. Lectures on Physics. Vol. III: Quantum Mechanics. Reading, MA: AddisonWesley Publishing, 1965. The legendary Feynman lectures on quantum mechanics are, by now, slightly out of date, but serious students and experienced professionals alike can still benefit from Feynman’s profound understanding of the Misner, Charles, John A. Wheeler, and Kip Thorne. Gravitation. San Francisco: W. H. Freeman, 1973. This tome (so massive that it bends spacetime) is one of the most complete and authoritative introductions to general relativity. A central theme is the fact that classical relativity must be replaced by a quantum theory of spacetime. Nielsen, Michael A., and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge: Cambridge University Press, 2000. A very detailed and well-written text in the exploding new field of quantum computation. Although it is aimed at the professional, it contains a very clear introduction to the basics of quantum theory. Rovelli, Carlo. Quantum Gravity. Cambridge: Cambridge University Press, An up-to-date and opinionated review of the search for a quantum theory of space and time. This book makes few technical concessions but is Further Reading essential reading for anyone professionally interested in current work on quantum gravity. Taylor, Edwin, and John A. Wheeler. Spacetime Physics. San Francisco: W. H. Freeman, 1966. A superbly clear introduction to special relativity, requiring only high school mathematics and a certain amount of Sitzfleisch (“sitting muscles,” as the old-time German mathematicians would have it). BC: Raincoast Books, 2002. Bub, Jeffrey. Interpreting the Quantum World. Cambridge and New York: Cambridge University Press, 1997. Cropper, William H. 
The Quantum Physicists and an Introduction to their Physics. New York: Oxford University Press, 1970. Cushing, James T. Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony. Chicago and London: University of Chicago Press, NJ: Princeton University Press, 1985. by German Mathematicians and Physicists to a Hostile Environment.” York: Harper and Row, 1971. Jammer, Max. The Conceptual Development of Quantum Mechanics. 2nd ed. Woodbury, NY: Tomash Publishers/American Institute of Physics, 1989. Century. Princeton, NJ: Princeton University Press, 1999. Landauer, Rolf. “Irreversibility and Heat Generation in the Computing McCarthy, Will. Hacking Matter: Levitating Chairs, Quantum Mirages, and the Infinite Weirdness of Programmable Atoms. New York: Basic Books, 2003. Einstein. Oxford: Oxford University Press, 1982. MA: Addison-Wesley Publishing, 1997. Schuster, 1988. Schrödinger, Erwin. What is Life? The Physical Aspect of the Living Cell. Cambridge: Cambridge University Press, 1944. Shimony, Abner. “Metaphysical Problems in the Foundations of Quantum Publications, 1967. Princeton, NJ: Princeton University Press, 1983. Absolute zero: defined, 5; and superconductivity, 114 Action: defined, 13; quantum of, 13–14 Action at a distance: in Bohm’s theory, 135; defined, 9; in Feynman-Wheeler theory, 103; and gravitation, 24 –25, 170; and the quantum, 46 – 47, 81, 86, 90, 141– 42. See also Non­ Aharonov, Yakir, 136, 145, 158–59 Aharonov-Bohm Effect, 136 Alice and Bob, 89–90, 140–41, 142–43, 144– 47, 156–57, 158, 161–62 Alpha particles. See Alpha radiation Alpha radiation, 33, 35, 36, 81, 85, 93, 107–8, 119, 121 Amplitudes, probability. See Probability Amplitudes, transition. See Probability Antielectron. See Positron Antiparticle, 67– 68, 70, 93, 96, 98; and Hawking radiation, 169 Antiproton, 68, 94 Aspect, Alain, 139– 40, 162 Atom, 6; existence of, 7, 16 –17; energy levels, 30, 116; excited state, 40; ground state, 40; hydrogen, 37; orbitals, 55–56; principal quantum number, 37; Rutherford model, 35–36; stability, 36, 38; stationary states, 37–38; Thomson model, 34–35 Atomic bomb, 27, 33, 95, 99, 108, 110, Atomic clock, 19 Atomic number, 39– 40 Avogadro’s Number, 17 Background dependence, 165–67 Barrier penetration. See Quantum Baryons, 124, 125 Bekenstein, Jakob, 168 Bell, J. S., 91, 137– 42 Bell’s Inequality, 139, 142– 43, 146 Bell’s Theorem, 139– 40, 146, 157 “Bell telephone,” 142– 44, 159 Bennett, Charles, 157 Beta decay, 94, 95, 96, 119 Beta radiation, 33–34, 123 Bethe, Hans, 99 Birkhoff, Garrett, 84, 146 Blackbody: and Einstein, 21–23; and Planck, 13–14; black hole as, 168; defined, 9–10; radiation from, 10; universe as a blackbody, 163 Black holes, 168 –69 Bohm, David, 81, 91, 133–38 Bohr, Niels: atomic theory of, 36–39, 47– 48, 52, 72, 111; and beta decay, 94; debate with Einstein, 42– 43, 85, 87–88, 90–91; and entanglement, 83–84; and field theory, 101; and Gamow, 107; and Heisenberg, 50, 59–60; influence, 29, 40, 63, 76, 171; life of, 36–37, 41– 43 Bohr-Kramers-Slater (BKS) paper, 43, 65, 94 Bolometer, 11 Boltzmann, Ludwig, 6–7, 8, 11, 13 Boltzmann statistics, 69 Boole, George, 145 Boole’s “conditions of possible experience,” 145– 46 Born, Max, 50–52, 58 –59, 76, 97, 131 Born Rule, 58, 72 Bose, Satyendra Nath, 45– 46 Bose-Einstein condensates (BECs), 46, 65, 114 Bose-Einstein statistics, 69 Bosons, 69, 127; intermediate vector, “Bra-ket” notation, 58, 71 Brownian motion, 16–17, 24, 163 Carnot, Sadi, 5 Cathode rays, 31, 32 Causal loops, 144 – 45 Cavity radiation. 
See Blackbody Charge, electrical, 55, 67, 70, 80, 93, 103, 107, 114, 123, 128, 165, 170 Chemical bond, theory of, 112 Chemistry, development of, 112–13 Classical physics, 7, 8 –9, 12, 15; and specific heat, 23; vs quantum physics, 31, 32, 35–36, 37–38, 41– 42, 61, 64–65, 70, 75, 91 Clausius, Rudolf, 5– 6 C-numbers versus q-numbers, 58, 167 Color force, 125–26, 129 Commutator, 57 Complementarity. See Principle of Completeness: of a physical theory, 88; of quantum mechanics, 85, 90, 91, Compton, A. H., 43, 49 Compton Effect, 43 Computer, 56, 112, 113, 121; classical, 149–52, 154 –55; game, 136. See also Quantum computing Consciousness, 161– 62 Continuity, 8, 61; versus discontinuity, 167 Copenhagen interpretation of quantum mechanics, 63– 65, 81, 85, 134, 135 Correspondence Principle, 41– 42, 46 Cosmic rays, 96 Cosmological constant, 26 Cosmology: Big Bang, 107, 163, 168; quantum, 162– 63 CPT conservation, 123 Creation and annihilation operators, 98 Curie, Marie, 32–34 de Broglie, L., 52–54, 68, 80– 81; causal interpretation of wave mechanics, 80– 81, 135, 141, 147 de Broglie wavelength, 53, 69, 74, 93 Delayed-choice experiment, 139 Determinism, 8 –9; and the pilot wave interpretation of quantum mechanics, 79, 136–37; and Schrödinger Equation, 57; Deutsch, David, 149–50, 153 Dirac, Paul A. M., 46, 49, 57–58, 60, 65– 68, 128, 166, 172 Dirac delta function, 58 Dirac equation, 66 Dirac Sea, 67– 68, 97 Double-slit experiment, 73–76 Doubly Special Relativity, 166 Dulong-Petit Law, 23 Ebit, 151 Eigenfunction, 55, 129 Eigenmode. See Eigenfunction Eigenstates, 72, 73, 82, 83, 128 –29 Eigenvalues, 55, 71–72, 129 Eigenvector, 71 Einstein, Albert: and Bohm’s interpretation of quantum mechanics, 135; causal wave theory of, 86; debate with Bohr, 42– 43, 87– 88; life of, 15–16, 23–24, 26–27, 84 – 85; objections to quantum mechanics, 84 – 86; radiation theory, 40– 41; and Special Relativity, 17–21, 165; and statistical mechanics, 16; the “year of miracles,” 16; and quantum gasses, 46 Einstein-Podolsky-Rosen thought experiment (EPR), 27, 88 –92; Bohm’s version of, 134 Electrodynamics, Maxwellian, 3–5, 18, 36, 38; action-at-a distance formulation by Feynman and Wheeler, 103; as statistical, 22, 165 Electrodynamics, quantum. See Quantum electrodynamics Electromagnetic field, 3– 4, 11–12, 69 Electromagnetic radiation: spontaneous emission of, 40; stimulated emission of, 40–41 Electromagnetic waves: and the ether, 18; nature of, 4; polarization of, 4 Electromagnetism, 27 Electron microscope, 113–14 Electrons, 22 (in photoelectric effect), 30–31, 34, 35, 40, 67, 102; bare, 98; Cooper pairs, 114; as lepton, 125–26; magnetic moment of, 48 – 49; mechanical energies quantized, 37; orbitals, 55–56; and quantum numbers, 47– 48; self-energy, 99; in semiconductors, 116–17; spin of, 38, 48–49, 52, 66– 67, 70, 71; trajectories of, 59, 61, 136–37 Electron volt, 95 Electrostatic attraction, 38 Electroweak force, 119, 125–26 Electroweak gauge field, 119, 126 Elementary particles, 119–31; creation and annihilation, 68, 97–98; mass, 127, 128; nature of, 70; nonconservation of particle number, 97; Standard Model, 119, 125–26; statistics of, 69 Energy, 1, 7, 98; conservation of, 1, 43; dark, 163; entanglement, 156; equivalence with mass, 20; nuclear, 94, 110, 111; quantization of, 13, 38, 51; radioactivity, energy of, 33 Entanglement, 82, 83– 84; energy of, 156; in EPR experiment, 89–92, 135, 139; and information, 155–56; of quantum potential, 135. 
See also Bell’s Theorem; Quantum computing; Quantum cryptography; Quantum Entropy: and black holes, 168; definition of, 5, as measure of disorder, 6, 13, 21–22, 163 Equilibrium, thermodynamic, 6, 10, 12 Equipartition of energy, 12, 23 Equivalence principle, 25, 171 Ether, 18 Everett III, Hugh, 152–53 Faraday, Michael, 3– 4 Fermi, Enrico, 69, 94, 109 Fermi-Dirac statistics, 69 Fermions, 69, 125, 127 Feynman, Richard P., 100, 101–3, 149, 153, 171–72 Feynman Problem, 172–73 Feynman’s Diagrams, 101–2 Fine structure, 39 Fine structure constant, 39, 100 Fission, nuclear, 110 Force: advanced and retarded, 103; centrifugal, 25, 38; exchange force, 94; fundamental forces, 96–97; gravitational, 25, 97; nonlocal, 135 Forman, Paul, 76–77 Fourier analysis, 37, 50–51 Franck-Hertz Experiment, 40 Gamma radiation, 4, 33, 43, 59, 60, 67, 97, 111, 121, 168 – 69 Gamow, George, 37, 107– 8, 109 Gates, quantum logic, 151 Gauge field theory, 119, 122, 125–26, Gell-Mann, Murray, 124 Gluons, 125–26 Grand Unified Field Theory (GUT), 119, Graviton, 126, 129, 164 Gravity, 24 –27, 88, 97, 126; nonlocal energy of, 170–71; and quantum field theory, 164 – 65; and thermodynamics, 167–70. See also Quantum gravity Greenberger-Horne-Zeilinger (GHZ) state, 157 Grossman, Marcel, 15, 25 Hadrons, 121, 124 –25 Halting problem, 150 Hamiltonian, 55, 72 Hawking, Stephen, 168, 170 Hawking effect, 169 Heat, 5–7 Heisenberg, Werner: discovery of matrix mechanics, 49–51; discovery of Uncertainty Relations, 59– 61; last word, 128 –29; life, 49–50, 59– 60, 109; nuclear model, 93-94s. See also Uncertainty Principle “Heisenberg’s microscope,” 59– 60 Helium, 33, 110, 155 Herbert, Nick, 157 Hermitian operators, 71–72 Hidden variables theories, 81, 134, 138–39, 146 Higgs field, 126 Higgs particle, 126, 128 Hilbert, David, 50 Hilbert Space, 70–73, 83, 151 Hydrogen, 31, 37–38, 39, 52, 67 Indeterminism, 59, 75, 85, 86. See also Inertia, 25, 136 Information, 20, 73, 75–76, 139, 142– 43, 151, 154–55; and black holes, 169 Implicate order, 136 Interference, 21, 42, 53, 64, 72, 73–75, 83, 150, 157, 172 Interpretations of quantum mechanics: Bohm’s pilot wave, 134–37; causal, 79–81; causal and Kochen-Specker theorems, 147–48; Copenhagen interpretation, 63–65, 81; de Broglie’s early causal theory, 80–81, 135; Einstein’s causal wave theory, 86; Many-Worlds interpretation, 152–53; “no-interpretation” interpretation, 146; pilot wave, 79 Irreversibility. See Reversibility Jordan, Pascual, 52, 55, 69, 97 Klein-Gordon equation, 66, 97 Kochen-Specker Theorems, 146– 48 Kramers, Hendrik, 43, 50 Lamb, Willis, 100 Landauer, Rolf, 154 –55 Landauer’s Principle, 155–56 Laser, 41, 115–16 Leptons, 125–27 Large Hadron Collider (LHC), 121, 128, Light: as electromagnetic waves, 4; fluctuations, 24; history of the study of, 21: invariance of speed 18 –21; as quanta, 21–23, 41, 42– 43; in Special Relativity, 18 –21; as waves, 21–23, 42, 74. 
See also Quantum: of light; Electromagnetic radiation; Laser; Light cone, 19–20 Linde, Andrei, 163 Linear operators, 58 Liquid drop model of nucleus, 109–10 Locality: meaning of, 8, 101 Local realism, 139– 40 Lorentz covariance: breakdown of, 166; in quantum mechanics, 66, 135 Lorentz transformations, 18 –19, 105 Mach, Ernst, 7 Madelung, Erwin, 80 Magnetic moment, 31 Magnetic Resonance Imaging (MRI), Mass-energy, 20, 26, 68, 98, 111, Matrix mechanics, 50–52, 56–58 Maxwell, James Clerk, 4, 6 Maxwell’s equations, 11, 22, 35–36, 42, Measurement as irreversible amplification, 65 Measurement problem, 82 Meitner, Lise, 109–10 Mercury (planet), 26 Meson, 95–96 Microcausality, 101 Mind-body problem, 161– 62 Minkowski, Hermann, 19 Minkowski spacetime, 19–21, 57, 100, 104 –5, 164 Molecular biology, 112–13 Molecules, 16–17 Momentum: indeterminacy of, 59–61, 84, 87–88, 89–90, 134; quantization, 41 Moseley, Henry G., 39– 40 M-theory, 131 Muon, 96, 125–26 Neutrino, 94–95, 125–26 Neutron, 93–94, 95, 109; activation, 109 Newton, Isaac, 3, 21, 25 Newtonian mechanics, 3, 18 Newton’s Laws: First Law, 8, 25; of gravitation, 24 No-Cloning theorem, 157 Noncommutativity, 65, 71, 147, 172–73 Nondemolition, quantum, 158 –59 Nonlocality, 20, 81, 90, 101, 136–37, 140, 142–43; controllable versus uncontrollable, 142; of gravitational energy, 170–71. See also Quantum No-signaling theorem, 142– 45 Nuclear Magnetic Resonance (NMR), Nucleus, atomic, 33–36, 95, 107–12; fission of, 94, 108 –10; fusion of, 110–12; Gamow’s model of, 109–10 Observables, 60– 61, 72, 83, 147; as c- or q-numbers, 167 Observer: as creating reality, 60; interaction with observed, 64, 82, 84, Orbitals, atomic, 54 –56 Parity, violation of, 123 Particle accelerators, 119–21 Particle detectors, 121–22 Passion at a distance, 142 Pauli, Wolfgang, 47, 94, 167 Pauli Exclusion Principle, 47– 48, 69 Pauling, Linus, 112 Pauli spin matrices, 52, 66 “Peaceful coexistence,” 142 Penrose, Roger, 162, 167 Photoelectric effect, 22 Photon, 22, 40–41, 45, 46, 68, 69, 70, 74, 87, 100, 125, 129 Physics: idealization in, 9–10; nuclear, 32, 107–12; and philosophy, 171; solid-state, 23; state of at the end of the 19th century, 1–3, 9; visualizability of, 104 Pion, 96, 124 Pitowsky, Itamar, 145– 46 Planck, Max, 1–2, 6, 9, 21, 22, 23, 46, 57, 142, 165; and blackbody radiation, 11–14 Planck energy, 166 Planck’s Law, 12–14, 21, 41, 45, 52, 53, Planck scale, 166 Planck’s constant: in blackbody formula, 22; in commutator, 57; defined, 13–14; mystery of, 173 Plasmas, 111–12 Plato, 49–50, 61, 129 Position: indeterminacy of, 59– 60, 84, 87– 88, 89–90, 134 Positron, 67, 93, 94, 97, 98, 102, 121, Preon, 127–28 Principle of complementarity, 63– 65, 83– 84, 125 Probability, 6, 7, 13; classical versus quantum, 75; quantum, 61, 75–76, 85; and quantum computing, 151; in quantum field theory, 125; of a state 7, 13 Probability amplitude, 59, 72, 83, 98, 102, 103, 136, 172 Probability function, 55 Proper quantities, 19 Proton, 33, 40, 67, 69, 93, 94, 95, 96; decay of, 127 Quantum: of action, 13–14, 60; connectedness of, 75; of energy, 13–14; field quanta, 69; of light, 21–23, 38, 40– 41, 45– 46; of light, permutation invariance of, 46; of light, probabilistic behavior of, 42– 43. See also Photon Quantum chromodynamics (QCD), Quantum-classical divide, 64 – 65, 82 Quantum computing, 149–56, 159; vs classical, 149–52 Quantum cryptography, 143, 156–57 Quantum electrodynamics (QED), 97– 100; infinities in, 99; renormalization in, 99–100, 104. 
See also Quantum field theory Quantum field theory, 97–105, 122; local, 101, 103–5; theoretical problems of, 104 –5. See also Gauge field theory; Quantum Quantum foam, 170 Quantum gasses, 46 Quantum gravity, 163– 67; and background dependence, 164 – 66; loop, 167; and quantum field theory, 164–166; and string theory, 167 Quantum information theory, 154–59, 173 Quantum logic, 84, 146 Quantum mechanics: and the brain, 161– 62; and chemistry, 112–13; common cause explanations in, 140– 41; formalism, 52, 70–72; and human cognitive capacity, 173; as information theory, 154 –55; and the mind, 162; and molecular biology, 112–13; non-Booleanity of, 146– 48; and origin of the universe, 162–63; path-integral formulation, 102; relation to historical forces, 76–77; relativistic, formulated by Dirac, 65–68; von Neumann’s formulation of, Quantum numbers, 47– 48, 51; principal, 37, 40 Quantum potential, 134 –35, 137, 157 Quantum signaling, 142– 45 Quantum teleportation, 157–58 Quantum theory, 23; Bohr-Sommerfeld version of, 43– 44; birth of modern quantum mechanics, 49–56; Old Quantum Theory, 29, 43– 44, 47 Quantum tunneling, 107– 8 Quark, 124 –27, 129 Qubit, 149, 151, 154 Radiation. See Electromagnetic radiation; Radioactivity Radioactivity: discovery of, 9, 32–34; law of radioactive decay, 33, 40 Raleigh’s Law, 12, 14 Realism, 64, 85–86. See also Local Relativity, Theory of: General Relativity, 24 –26, 163– 64; General Relativity, field equations of, 25–26; Principle of Special Relativity, 18; Special Relativity, 17–21, 104 –5 Renormalization, 99–100, 104, 119, 167 Resonances, 122 Reversibility vs irreversibility, 5–6, 11–12 Roentgen, Wilhelm, 32 Rovelli, Carlo, 167, 171 Rutherford, Ernest, 32–36 Rydberg constant, 30, 38 Rydberg-Ritz combination principle, 30, 37 Scattering: deep inelastic, 123–24; of particles, 35–36, 119–22 Schrödinger, Erwin, 54 –57, 81 Schrödinger equation, 55–57, 66, 73, 82, 102 “Schrödinger’s cat,” 81– 82, 161– 62 Schwinger, Julian, 100, 102 Science, creativity in the history of, 76–77 Second quantization, 98 Semiconductors, 116–17 Separation Principle, 85– 86 Shannon, Claude, 154 Shor’s algorithm, 150–51 Signaling, superluminal, 142– 44, 158 –59 Solvay Conferences on Physics, 24 Sommerfeld, Arnold, 39, 100 Smolin, Lee, 130–31, 167, 171 Space: 25; absolute, 166; curved, 25; quantization of, 99–100, 105, 165– 66, 167 Spacetime. See Minkowski spacetime Sparticles, 127 Specific heat, 23 Spectroscopy, 29–30 Spectrum, 4, 36; absorption spectrum, 29, 38; and Combination Principle, 30, 37; emission spectrum, 29; of hydrogen, 30, 38, 55; line spectrum, 29–30; normal, 10; spectral lines, 30, 31, 37–38, 39, 44, 47, 49 Spin, 69, 134 Spinor, 66 Spin Statistics Theorem, 69 Standard Model, 119, 125–28 Stanford Linear Accelerator (SLAC), 120, 123 Stapp, H. P., 137 State function. See State vector State function (thermodynamic), 6 State vector, 66, 71–73 Statistical Mechanics, 6–7, 16, 87, 170 Stefan-Boltzmann Law, 11 Stern-Gerlach experiment, 48 String theory, 119, 129–31, 171 Strong force, 95, 119, 122 Superconducting Supercollider (SSC), 120–21, 131 Superconductivity, 114 Superfluidity, 115 Superluminality, 19, 20, 108, 14245. See also Nonlocality; Signaling, Superposition, 81– 82, 158, 161– 62; in quantum computing, 150–53 Superposition Principle, 72, 83 Superstring theory. See String theory Supersymmetry, 127, 129 Symmetries: breaking of, 123; in particle physics, 123, 128. 
See also Szilard, Leo, 94, 110 Tachyons, 20, 21 Tau lepton, 125–26 Temperature, 5–7, 10–11, 13 Tensor calculus, 25 Thermodynamics, 5–7; and computing, 154–55; First Law of, 1, 5; and gravitation, 167–70; Second Law of, 5–7, 11, 17, 163; statistical interpretation of, 6–7, 16, 165 Thomson, J. J., 30–31, 34–36 “Three man work,” 51–52 Time: absolute, 19; influences from the future, 103, 135; proper, 19; quantization of, 105; quantum-mechanical time operator, 167 Time travel in Gödel universe, 26 Tomonaga, Sinichiro, 100, 134 Topology of spacetime, 170 Transistor, 117, 155 Turing, A. M., 150 Turing machine: classical, 150, 152; quantum, 150–51 Twin paradox, 19 Ultraviolet catastrophe, 12 Uncertainty principle, 59–61, 87, 88, 90, 98, 122, 128, 134, 137 Unified field theory, 27, 85, 97. See also Grand Unified Theory Universe: Big Bang origin of, 163, 168; expansion of, 163; “heat death” of, 168; microwave background radiation of, 162–63 Unruh, William, 169 Unruh effect, 169 Vacuum, 68, 95, 128, 163, 169; energy of, 99; fluctuations, 98, 163, 170; polarization, 98; dispersiveness of, 166 Vibrating systems, 55 Virtual particles, 98, 169 von Neumann, John, 70–71, 84, 146; and hidden variables theories, 81, 138, 141 Wave function, 55–56; of bosons, 69; collapse of, 72–73, 82, 152–53, 158, 161; of fermions, 69; phase differences of, 134, 155; probabilistic interpretation of, 58–59; and the quantum potential, 134–35 Wave mechanics, 52–57; causal interpretations of, 79–81, 86 Wave packet, 89, 108, 155 Wave-particle duality, 22–23, 24, 64 Waves: electromagnetic, 3–5, 12, 37; classical behavior of, 73–74. See also Vibrating systems Weak force, 96, 97, 119, 122, 126 Wheeler, John A., 76, 102–3, 152, 154, 164, 170 Wien, Wilhelm, 10 Wien’s Displacement Law, 10 Wien’s Law, 11, 12, 14, 21–22 Wigner, Eugene, 110, 161–62 Witten, Edward, 131, 164 Worldlines, 19, 103, 144, 145 Wormholes, 26, 170 Wu, C. S., 123 Young, Thomas, 21, 74 Yukawa, Hideki, 95–97, 99, 107, 124, 125, 126 X-rays, 32, 33, 34, 39, 43, 113 Zeeman Effect, 31, 47, 52; anomalous, 47, 52 Zeno’s paradox, 61 Zero-point energy, 61 Yang-Mills theory. See Gauge field theory About the Author KENT A. PEACOCK is professor of philosophy at the University of Lethbridge, in Alberta, Canada. Peacock received his PhD from the University of Toronto and has also taught at the University of Western Ontario. He has published in philosophy of science, metaphysics of time, and ecological philosophy, and he spends much of his time trying to understand why it is still not obvious that quantum mechanics should be true.
gravity - Did Einstein propose a perpetual motion machine to try to disprove quantum mechanics? In response to quantum mechanics, so the story goes, Einstein proposed a machine that, based on the uncertainty principle, was a perpetual motion machine. This showed that quantum mechanics was at odds with evidence that energy is conserved. Bohr later showed that the analysis was flawed; Einstein had failed to take gravity into account (ironically). Is this story true? If so, what was the machine Einstein proposed, how was it supposed to work, and what did Bohr reveal about it?…

quantum mechanics - Historically, how was it discovered that we need fields to describe matter? This question is asked from a historical perspective: how did physicists historically find out that one needs quantum fields to describe matter? Being more detailed: let us consider the electromagnetic field for a while. Classically this was already a field. Now, if I understood the history correctly, in the days of old quantum theory, when Planck proposed the solution to the blackbody radiation problem in terms of quantized energy levels, and when Einstein did the same to solve the photoelectric effect problem, they were essentially pr…

quantum mechanics - In the Schrödinger equation, can I have a Hamiltonian without a kinetic term? To find the stationary states of a Hamiltonian, we find its eigenvalues and eigenstates. Is there any condition that the Hamiltonian must have the form $$\hat{H}=\hat{T}(\hat{p})+\hat{V}(\hat{x}),$$ that is, the sum of a kinetic operator and a potential operator term? Can I have a Hamiltonian without a kinetic term, $$\hat{H}=\hat{x}^{2},$$ or, in other words, a Hamiltonian with just a potential term alone, $$\hat{H}=\hat{V}(\hat{x})?$$ Does quantum mechanics allow that?…

quantum mechanics - The problem of a relativistic path integral Many books have described the path integral for non-relativistic quantum mechanics, for example, how to get the Schrödinger equation from the path integral. But none of them treat the relativistic version. In fact, the relativistic version cannot be made fully consistent; it must be replaced by quantum field theory. But why? The answer I want is not that quantum mechanics will give us a negative energy or negative probability. We need an answer that explains why the non-relativistic Lagrangian $$L=\frac{p^2}{2m}$$ leads to a correct Schrödinger equation. Why, if we re…

quantum mechanics - Refraction and how light bends I have heard a particle-nature explanation of how light continues to travel at the same constant speed $c$ after it has passed into a denser medium. I have also come across how a photon is absorbed by the dielectric molecules and then re-emitted after a fleeting period of $10^{-15}$ seconds, and that this is how light is able to continue at constant speed. My questions are: How does light bend at the interface of the two media? Could you please give an explanation without referring to the wave nature of light? (I know the Fermat's princ…

reference frames - Quantum mechanical origin of pseudo forces I have been thinking about this for quite some time but could not come up with any satisfactory explanation. In a nutshell, how would one explain the pseudo forces felt by non-inertial observers, given that the fundamental laws of physics are quantum mechanical?
Since in quantum mechanics one always talks about potentials instead of forces, I cannot think of anything that I can relate to the acceleration. In other words, given an electron for example, can we say that in the frame of an electron there is exists a pseudo force? I think no because of cou...Read more energy conservation - Quantum perpetual motion Perpetual motion describes hypothetical machines that operate or produce useful work indefinitely and, more generally, hypothetical machines that produce more work or energy than they consume, whether they might operate indefinitely or not.(Source:Wikipedia)With this definition in mind, particularly the "operates indefinitely" (I don't care about producing work), won't quantum mechanics allow perpetual motion due to energy quantization?For example, an electron in hydrogen can be thought of as perpetual motion. It's indefinite(I think so); unlik...Read more quantum mechanics - What's the relationship between Density Functional Theory (DFT) and Kohn-Sham equations? It seems Kohn-Sham equations are approximate methods to solve many body Schrodinger's equation. They directly split a multi-electron Schrodinger equation into many single-electron Schrodinger equations with exchange-correlation energy to eliminate errors.Why there is a necessity to introduce Density Functional Theory before give Kohn-Sham equations? Or if there is no Density Functional Theory, can the Kohn-Sham equations still be used?...Read more quantum mechanics - How to integrate two-electron interaction in He? (variational method) A probe wavefunction in the variational method is $$\psi(r_1, r_2) =\frac{\alpha^6}{\pi^2}e^{-\alpha(r_1+r_2)}$$. In $\left<\psi \right|H\left|\psi\right>$ with $$H = \frac{p_1^2+p_2^2}{2m} - \frac{Ze^2}{r_1}-\frac{Ze^2}{r_2}+\frac{e^2}{|\vec{r_1}-\vec{r_2}|}$$the last term is to be integrated like that: $$\idotsint_{} \frac{\left|\psi\right|^2 e^2}{|\vec{r_1}-\vec{r_2}|}r_1^2\, r_2^2\,\sin{\theta_1}\sin{\theta_2}\, d\theta_1 d\theta_2\,d \phi_1d\phi_2\,dr_1dr_2, $$which is quite challenging for me. Does anyone know how to integrate it or...Read more quantum mechanics - Struggling with an integral I'm struggling with the following integral:$$\int \int (r_1^2 + r_2^2) \exp \left( -\frac{b (r_1 + r_2)}{a} \right) \, \mathrm{d}V_1 \, \mathrm{d}V_2 $$I tried to expand near $r_1 = 0 ;\; r_2 = 0$ and to move to s spherical coordinates, but can't get through. I thought there might be some trick I have forgotten to evaluate integrals like this.It is related to a diamagnetic susceptibility of the helium atoms....Read more quantum mechanics - How to choose proper measurement operator? Let's assume I have two states inside the Bloch sphere, at radial vectors $r_1$ and $r_2$ respectively $(r_1<r_2<1)$. Their angular location is same. These are like:\begin{equation} \rho = \begin{pmatrix}\frac{1+r_1 \cos\theta}{2} &\frac{r_1 \exp(-i\phi)\sin\theta}{2} \\\frac{r_1 \exp(i\phi)\sin\theta}{2} &\frac{1-r_1 \cos\theta}{2} \nonumber\end{pmatrix}\end{equation}and another state as\begin{equation} \rho' = \begin{pmatrix}\frac{1+r_2 \cos\theta}{2} &\frac{r_2 \exp(-i\phi)\sin\theta}{2} \\\frac{r_2 \exp(i\phi)\sin\theta}{2...Read more quantum mechanics - Minimizing the energy of a Slater determinant: why are the Lagrange multiplier elements of a Hermitian matrix? 
If I want to minimize the energy of a Slater determinant subject to the constraint that the spin orbitals are orthonormal (as in the Hartree-Fock approximation), I can use Lagrange's method of undetermined multiplier, i.e.$$L[\{\chi_{a}\}] = E_{0}[\{\chi_{a}\}]-\sum_{a=1}^{N}\sum_{b=1}^{N}\varepsilon_{ba}([a|b]-\delta_{ab})$$where $\{\chi_{a}\}$ are the spin orbitals, $E_{0}$ is the ground state energy, $[a|b]$ is the overlap integral between spin-orbitals $\chi_{a}$ and $\chi_{b}$ and $\varepsilon_{ba}$ is a Langrange multiplier. By setting t...Read more
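For the two-electron repulsion term in the helium variational questions above, a quick numerical cross-check is often reassuring before attempting the analytic angular expansion. The sketch below is a minimal Python example, not the analytic route: the exponent alpha and the sample size are arbitrary assumptions. It samples both electrons from a 1s-type density proportional to $e^{-2\alpha r}$ and estimates $\langle 1/r_{12}\rangle$ by Monte Carlo; in atomic units the analytic answer is $5\alpha/8$, which the estimate should approach.

import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0          # orbital exponent (assumed; the trial value in the question)
n = 200_000          # number of Monte Carlo samples (assumed)

def sample_positions(n):
    # Radial density of a 1s-type orbital: p(r) ~ r^2 exp(-2*alpha*r), i.e. Gamma(3, 1/(2*alpha))
    r = rng.gamma(shape=3.0, scale=1.0 / (2.0 * alpha), size=n)
    cos_t = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.column_stack((r * sin_t * np.cos(phi),
                            r * sin_t * np.sin(phi),
                            r * cos_t))

r1 = sample_positions(n)
r2 = sample_positions(n)
r12 = np.linalg.norm(r1 - r2, axis=1)
print("Monte Carlo <1/r12>:", np.mean(1.0 / r12))   # ~0.625 for alpha = 1
print("analytic 5*alpha/8 :", 5.0 * alpha / 8.0)

The exact evaluation usually proceeds instead by expanding $1/r_{12}$ in Legendre polynomials and integrating over the larger of the two radii; the Monte Carlo estimate is only a sanity check on that result.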
Semi-Analytic Evaluation of 1, 2 and 3-Electron Coulomb Integrals with Gaussian Expansion of Distance Operators $W = R_{C1}^{-n}R_{D1}^{-m}$, $R_{C1}^{-n}r_{12}^{-m}$, $r_{12}^{-n}r_{13}^{-m}$, by Sandor Kristyan. The equations derived help to evaluate semi-analytically (mostly for k = 1, 2 or 3) the important Coulomb integrals $\int \rho(\mathbf{r}_1)\cdots\rho(\mathbf{r}_k)\, W(\mathbf{r}_1,\dots,\mathbf{r}_k)\, d\mathbf{r}_1\cdots d\mathbf{r}_k$, where the one-electron density $\rho(\mathbf{r}_1)$ is a linear combination (LC) of Gaussian functions of the position vector variable $\mathbf{r}_1$. It is capable of describing the electron clouds in molecules, solids or any media/ensemble of materials; the weight W is the distance operator indicated in the title, where R stands for nucleus–electron and r for electron–electron distances. The n = m = 0 case is trivial; the (n, m) = (1, 0) and (0, 1) cases, for which analytical expressions are well known, are widely used in the practice of computational chemistry (CC) and physics, and analytical expressions are also known for the cases n, m = 0, 1, 2. The remaining cases – mainly those with arbitrary real (integer, non-integer, positive or negative) n and m – need evaluation. We base this on the Gaussian expansion of $|r|^{-u}$, of which only the u = 1 case is the physical Coulomb potential, but the u ≠ 1 cases are useful for (certain series-based) corrections to (the different) approximate solutions of the Schrödinger equation, for example in its wave-function corrections or correlation calculations. Solving the related linear equation system (LES), the expansion $|r|^{-u} \approx \sum_{k=0}^{L}\sum_{i=1}^{M} C_{ik}\, r^{2k} e^{-A_{ik} r^{2}}$ is analyzed for $|r| = r_{12}$ or $R_{C1}$ with least-squares fit (LSF) and modified Taylor expansion. These evaluated analytic expressions for Coulomb integrals (up to a Gaussian-function integrand and the Gaussian expansion of $|r|^{-u}$) are useful for manipulating higher moments of inter-electronic distances via W, and even for approximating the Hamiltonian.
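The abstract's central device – expanding $|r|^{-u}$ in Gaussians by solving a linear system – can be illustrated with a much cruder version of the idea. The sketch below is a minimal Python example, not the paper's actual fitting scheme; the radial grid, the even-tempered exponents and the fit range are all arbitrary assumptions. It fits $1/r$ as a linear combination of Gaussians $\sum_i c_i e^{-a_i r^2}$ by ordinary linear least squares.

import numpy as np

# Even-tempered Gaussian exponents (assumption: a geometric progression)
exponents = np.geomspace(1e-2, 1e4, 16)

# Radial grid on which 1/r is fitted (assumption: 0.1 <= r <= 10)
r = np.linspace(0.1, 10.0, 400)
target = 1.0 / r

# Design matrix A[j, i] = exp(-a_i * r_j^2); solve A c ~ 1/r in the least-squares sense
A = np.exp(-np.outer(r**2, exponents))
coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)

fit = A @ coeffs
print("max relative error on the grid:", np.max(np.abs(fit - target) / target))

The fit degrades outside the chosen range, which is why the abstract's more careful treatment constrains the expansion with a modified Taylor expansion rather than a bare least-squares fit.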
8.S: The Hydrogen Atom (Summary) The Schrödinger equation for one-electron atoms and ions such as H, \(He^+\), \(Li^{2+}\), etc. is constructed using a Coulombic potential energy operator and the three-dimensional kinetic energy operator written in spherical coordinates. Because the radial and angular motions are separable, solutions to the Schrödinger equation consist of products \(R(r) Y(\theta, \varphi)\) of radial functions \(R(r)\) and angular functions \(Y(\theta, \varphi)\) that are called atomic orbitals. Three quantum numbers, \(n\), \(l\), and \(m_l\), are associated with the orbitals. Numerous visualization methods are available to enhance our understanding of the orbital shapes and sizes represented by the modulus squared of the wavefunctions. The orbital energy eigenvalues depend only on the \(n\) quantum number and match the energies found using the Bohr model of the hydrogen atom. Because all orbitals with the same principal quantum number have the same energy in one-electron systems, each orbital energy level is \(n^2\)-degenerate. For example, the n = 3 level contains 9 orbitals (one 3s, three 3p’s and five 3d’s). Atomic spectra measured in magnetic fields have more spectral lines than those measured in field-free environments. This Zeeman effect is caused by the interaction of the imposed magnetic field with the magnetic dipole moment of the electrons, which removes the \(m_l\) quantum number degeneracy. In addition to the orbital wavefunctions obtained by solving the Schrödinger equation, electrons in atoms possess a quality called spin that has associated wavefunctions \(\sigma\), quantum numbers \(s\) and \(m_s\), spin angular momentum S and spectroscopic selection rules. Interaction with a magnetic field removes the degeneracy of the two spin states, which are labeled \(\alpha\) and \(\beta\), and produces additional fine structure in atomic spectra. While spin does not appear during the solution of the hydrogen atom presented in this text, spin is presented as a postulate because it is necessary to explain experimental observations about atoms. Single-electron wavefunctions that incorporate both the orbital (spatial) and spin wavefunctions are called spin-orbitals. The occupancy of spin-orbitals is called the electron configuration of an atom. The lowest energy configuration is called the ground state configuration and all other configurations are called excited state configurations. To fully understand atomic spectroscopy, it is necessary to specify the total electronic state of an atom, rather than simply specifying the orbital configuration. An electronic state, or term, is characterized by a specific energy, total angular momentum and coupling of the orbital and spin angular momenta, and can be represented by a term symbol of the form \(^{2S+1}L_J\), where S is the total spin angular momentum quantum number, L is the total orbital angular momentum quantum number and J is the total angular momentum quantum number obtained by combining L and S. One term may include several degenerate electron configurations. The degeneracy of a term is determined by the number of projections of the total angular momentum vector on the z-axis. The degeneracy of a term can be split by interaction with a magnetic field. Overview of key concepts and equations for the hydrogen atom: Potential energy • Hamiltonian • Wavefunctions • Quantum Numbers • Energies • Spectroscopic Selection Rules • Angular Momentum Properties
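The two quantitative statements in this summary – that the orbital energies depend only on \(n\) and that each level is \(n^2\)-fold degenerate (so the n = 3 level holds 9 orbitals) – are easy to verify explicitly. A minimal Python sketch, using the textbook value 13.6057 eV for the hydrogen ionization energy (an approximate constant, typed in by hand):

RYDBERG_EV = 13.6057   # hydrogen ionization energy in eV (approximate)

def energy(n, Z=1):
    # One-electron atom levels: E_n = -Z^2 * 13.6057 eV / n^2, independent of l and m_l
    return -Z**2 * RYDBERG_EV / n**2

def orbital_degeneracy(n):
    # Count orbitals with principal quantum number n: sum over l of (2l + 1) = n^2
    return sum(2 * l + 1 for l in range(n))

for n in (1, 2, 3):
    print(f"n={n}: E = {energy(n):.3f} eV, orbitals = {orbital_degeneracy(n)}, "
          f"spin-orbitals = {2 * orbital_degeneracy(n)}")
# n=3 gives 9 orbitals (one 3s, three 3p, five 3d), matching the text; including spin doubles the count.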
Researchers estimate the probability of black hole production in electron collisions (Hang Qi & Roberto Onofrio // Physics Letters B, 2019). Researchers from the United States theoretically estimated the cross-section and the probability of the creation of miniature black holes in the collision of two electrons. It turned out that, despite the enhancement of the gravitational interaction at an energy of one hundred gigaelectronvolts, black holes are produced with an insignificant probability of about 10⁻¹³ and a tiny cross-section of the order of 10⁻⁴⁵ square centimetres. This is a trillion times less than a naive geometric estimate. Moreover, as the collision energy increases, the cross-section for the production of black holes remains constant.

The hoop conjecture claims that a collision of two high-energy particles can give rise to a miniature black hole – for this, it is necessary that at some point the particles come closer than the Schwarzschild radius calculated for the collision energy in the centre-of-mass frame. To find the cross-section of such a process, one projects the event horizon onto a plane perpendicular to the collision axis and calculates the area of the projection. In this sense, the birth of a black hole is no different from a simple example with two billiard balls. However, for black hole production one must take into account that the gravitational radius is proportional to the collision energy and to the gravitational constant – therefore, as these parameters increase, the "target" becomes larger, and with it the probability of producing a black hole grows. Of course, with the standard value of the gravitational constant, the probability of such a process is negligible. For example, for a stationary electron (mass about 9.1 × 10⁻²⁸ grams), the Schwarzschild radius is approximately 1.3 × 10⁻⁵⁵ centimetres.

Nevertheless, some theories suggest that at energies of the order of 250 gigaelectronvolts the effective value of the gravitational constant increases sharply. This is because at such energies the Higgs boson joins the graviton, which can also be considered a carrier particle of the gravitational interaction. In fact, for both the graviton and the Higgs boson, the coupling constant to a particle (fermion) is proportional to the mass of the particle. Therefore, virtual Higgs boson exchange describes an interaction similar to virtual graviton exchange. In the general case, this interaction is described by the theory of scalar-tensor gravity, built in 1961 by Carl Brans and Robert Dicke. However, there is an important difference between the graviton and the Higgs boson: while the graviton has practically no mass, the boson "weighs" about 126 gigaelectronvolts. This means that the long-range Newtonian potential turns into a short-range Yukawa potential, which decays exponentially at distances of more than 10⁻¹⁵ centimetres (the Compton wavelength of the Higgs boson). At the same time, at shorter distances the potential generated by the scalar gravity is more than 30 orders of magnitude greater than the familiar Newtonian potential. The gravitational radius of black holes also increases by the same amount. The cross-section for the production of black holes, calculated using the adjusted radius, is quite noticeable: for example, for the collision of two electrons with an energy of 100 gigaelectronvolts each, the cross-section of this process is approximately 3.2 × 10⁻³³ square centimetres.
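The numbers quoted above for the standard value of G are straightforward to reproduce. The sketch below is a minimal Python example (constants typed in by hand to a few digits; the article's "enhanced gravity" rescaling is not included): it computes the Schwarzschild radius of an electron at rest and the naive geometric cross-section πr_s² for an assumed head-on 100 GeV + 100 GeV collision.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
m_e = 9.109e-31      # electron mass, kg
GeV = 1.602e-10      # joules per GeV

def schwarzschild_radius(energy_joules):
    # r_s = 2 G E / c^4, with E the mass-energy in the centre-of-mass frame
    return 2.0 * G * energy_joules / c**4

r_electron = schwarzschild_radius(m_e * c**2)
print("electron at rest: r_s =", r_electron * 100, "cm")    # ~1.3e-55 cm, as quoted above

E = 2 * 100 * GeV    # two electrons of 100 GeV each (assumed head-on)
r_s = schwarzschild_radius(E)
sigma_geom = math.pi * r_s**2
print("200 GeV total: r_s =", r_s, "m; geometric cross-section =", sigma_geom * 1e4, "cm^2")

Multiplying r_s by the roughly 30-orders-of-magnitude enhancement described in the article is what turns this otherwise hopeless cross-section into the quoted 10⁻³³ cm² ballpark.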
However, in practice such a naive estimate can turn out to be an overestimate, since it does not take into account the interaction of the particles. Physicists Hang Qi and Roberto Onofrio corrected this naive estimate by considering the interaction of two electrons within the Standard Model. Given that the main contribution to the interaction of two electrons comes from Coulomb repulsion, the physicists considered a very simplified collision model. First, they neglected the spin of the particles, that is, they replaced the usual gauge electrodynamics with scalar electrodynamics. Second, they neglected the magnetic field: although such a field is generated by the moving charged particles, it is many times smaller than the static electric field. Third, they neglected relativistic effects; in other words, they considered the interaction of two electrons at rest, but attributed to them the "right" energy, γ times the rest energy, where γ is the Lorentz factor. Finally, they neglected the change in the black hole metric associated with the electromagnetic interaction, since for energies above 30 gigaelectronvolts such changes are negligible.

To model the particle interactions, the scientists relied on two fundamentally different approaches. In the first approach, they numerically integrated the two-dimensional Schrödinger equation for a particle with the reduced mass placed in the effective (Coulomb + Newton + Yukawa) potential; using the cylindrical symmetry of the collision, they "squeezed" the three-dimensional problem down to two dimensions. Solving this equation, they found the wave functions of the particles and evaluated the probability that the particles come closer than the Schwarzschild radius. The second approach was based on extended Gaussian dynamics – a semiclassical method going back to Paul Ehrenfest's work of 1927. This method works with the averaged momentum and coordinates of the particles, together with an effective potential expanded in the averaged powers of the coordinate. Generally speaking, these averaged variables obey an infinite Heisenberg hierarchy of equations, but it can be "reduced" to four equations by assuming that the wave functions of the particles are Gaussian. By solving these equations, the probability of black hole formation can be calculated analytically, although approximately.

As expected, the scientists found that when the electrons approach to within less than the Schwarzschild radius, the probability of black hole production jumps sharply from zero to unity. In addition, the scalar potential affects the scattering of the particles in other cases as well, "softening" their repulsion. However, the main result is the cross-section for the production of black holes, which turned out to be almost a trillion times smaller than the naive estimate. This difference arises because processes in which the particles pass at a sufficiently close distance make up only a small fraction of all collisions at a given energy. For example, at an energy of 100 gigaelectronvolts a black hole is born with a probability of only 7 × 10⁻¹⁴, and the cross-section of the corresponding process is 5 × 10⁻⁴⁵ square centimetres. Moreover, with increasing energy this cross-section remains constant, although against the background of other processes the production of black holes stands out more and more clearly.
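The "effective (Coulomb + Newton + Yukawa) potential" mentioned above can at least be written down schematically. The sketch below is a minimal Python illustration of that structure only: the Yukawa range and especially the enhancement prefactor are placeholders, not the values used in the paper, and no Schrödinger integration is attempted here.

import numpy as np

HBARC = 197.327e6 * 1.602e-19 * 1e-15   # hbar*c in J*m (197.327 MeV*fm)
ALPHA = 1.0 / 137.036                   # fine-structure constant
G = 6.674e-11                           # m^3 kg^-1 s^-2
M_E = 9.109e-31                         # electron mass, kg
LAMBDA_YUKAWA = 1.6e-18                 # metres; roughly the Higgs Compton wavelength (assumed)
ENHANCEMENT = 1.0                       # placeholder; the paper's scalar-gravity boost is vastly larger

def effective_potential(r):
    """Schematic Coulomb + Newton + Yukawa potential between two electrons (energy in joules)."""
    coulomb = ALPHA * HBARC / r                                   # electrostatic repulsion
    newton = -G * M_E**2 / r                                      # ordinary gravity, utterly negligible
    yukawa = -ENHANCEMENT * G * M_E**2 * np.exp(-r / LAMBDA_YUKAWA) / r   # short-range scalar attraction
    return coulomb + newton + yukawa

r = np.geomspace(1e-20, 1e-15, 5)
print(effective_potential(r))

In the actual calculation this radial potential enters the reduced-mass Schrödinger equation, and the production probability is read off from how much of the wavefunction penetrates to separations below the Schwarzschild radius.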
[Figure: Dependence of the black hole production probability on the Lorentz factor, for different values of the prefactor of the Yukawa potential (left panel) and of the Compton wavelength of the scalar particle (right panel). Hang Qi & Roberto Onofrio // Physics Letters B, 2019]
[Figure: Distortion of the trajectories of the colliding particles: black lines show trajectories computed without the Yukawa potential, red lines with it. Hang Qi & Roberto Onofrio // Physics Letters B, 2019]
It is worth noting that even those black holes that could hypothetically be synthesized at colliders are not dangerous: instead of absorbing the surrounding matter, they evaporate almost instantly through Hawking radiation. This is because the radiated power only increases as the hole shrinks. In addition, Hawking radiation does not allow such small black holes to trigger the decay of a false vacuum – another process that could theoretically destroy the Earth (and then the rest of the Universe).
Source: Physics Letters B
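The claim that such microscopic black holes "evaporate almost instantly" can be made quantitative with the standard Hawking formulas, T = ħc³/(8πGMk_B) for the temperature and t ≈ 5120πG²M³/(ħc⁴) as an order-of-magnitude evaporation time. A minimal Python sketch (constants typed in by hand; taking the hole's mass-energy to be the 200 GeV of a 100 GeV + 100 GeV collision is just an illustrative assumption):

import math

G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
GeV = 1.602e-10   # joules per GeV

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

def evaporation_time(mass_kg):
    # Order-of-magnitude estimate; the radiated power grows as the hole shrinks (P ~ 1/M^2)
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

m = 200 * GeV / c**2          # ~3.6e-25 kg, the mass-energy of the assumed collision
print("Hawking temperature ~", hawking_temperature(m), "K")
print("evaporation time    ~", evaporation_time(m), "s")

The resulting lifetime comes out absurdly small (far below any physically meaningful timescale), which is the quantitative content of "evaporate almost instantly".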
Pauli exclusion principle
From Wikipedia, the free encyclopedia
[Image: Wolfgang Pauli]
The Pauli exclusion principle is the quantum mechanical principle that states that two identical fermions (particles with half-integer spin) cannot occupy the same quantum state simultaneously. In the case of electrons, it can be stated as follows: it is impossible for two electrons of a poly-electron atom to have the same values of the four quantum numbers: n, the principal quantum number; ℓ, the angular momentum quantum number; mℓ, the magnetic quantum number; and ms, the spin quantum number. For example, if two electrons reside in the same orbital, so that their n, ℓ, and mℓ values are the same, then their ms must differ, and thus the electrons must have opposite half-integer spin projections of +1/2 and −1/2. This principle was formulated by Austrian physicist Wolfgang Pauli in 1925 for electrons, and later extended to all fermions with his spin-statistics theorem of 1940.
A more rigorous statement is that the total wave function for two identical fermions is antisymmetric with respect to exchange of the particles. This means that the wave function changes its sign if the space and spin coordinates of any two particles are interchanged. Particles with an integer spin, or bosons, are not subject to the Pauli exclusion principle: any number of identical bosons can occupy the same quantum state, as with, for instance, photons produced by a laser or a Bose–Einstein condensate.
The Pauli exclusion principle governs the behavior of all fermions (particles with "half-integer spin"), while bosons (particles with "integer spin") are not subject to it. Fermions include elementary particles such as quarks (the constituent particles of protons and neutrons), electrons and neutrinos. In addition, protons and neutrons (subatomic particles composed of three quarks) and some atoms are fermions, and are therefore subject to the Pauli exclusion principle as well. Atoms can have different overall "spin", which determines whether they are fermions or bosons; for example helium-3 has spin 1/2 and is therefore a fermion, in contrast to helium-4, which has spin 0 and is a boson.[1]:123–125 As such, the Pauli exclusion principle underpins many properties of everyday matter, from its large-scale stability to the chemical behavior of atoms.
"Half-integer spin" means that the intrinsic angular momentum value of fermions is ℏ (the reduced Planck constant) times a half-integer (1/2, 3/2, 5/2, etc.). In the theory of quantum mechanics fermions are described by antisymmetric states. In contrast, particles with integer spin (called bosons) have symmetric wave functions; unlike fermions, they may share the same quantum states. Bosons include the photon, the Cooper pairs which are responsible for superconductivity, and the W and Z bosons. (Fermions take their name from the Fermi–Dirac statistical distribution that they obey, and bosons from their Bose–Einstein distribution.)
In the early 20th century it became evident that atoms and molecules with even numbers of electrons are more chemically stable than those with odd numbers of electrons. In the 1916 article "The Atom and the Molecule" by Gilbert N.
Lewis, for example, the third of his six postulates of chemical behavior states that the atom tends to hold an even number of electrons in any given shell, and especially to hold eight electrons, which are normally arranged symmetrically at the eight corners of a cube (see: cubical atom).[2] In 1919 chemist Irving Langmuir suggested that the periodic table could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells around the nucleus.[3] In 1922, Niels Bohr updated his model of the atom by assuming that certain numbers of electrons (for example 2, 8 and 18) corresponded to stable "closed shells".[4]:203 Pauli looked for an explanation for these numbers, which were at first only empirical. At the same time he was trying to explain experimental results of the Zeeman effect in atomic spectroscopy and in ferromagnetism. He found an essential clue in a 1924 paper by Edmund C. Stoner, which pointed out that for a given value of the principal quantum number (n), the number of energy levels of a single electron in the alkali metal spectra in an external magnetic field, where all degenerate energy levels are separated, is equal to the number of electrons in the closed shell of the noble gases for the same value of n. This led Pauli to realize that the complicated numbers of electrons in closed shells can be reduced to the simple rule of one electron per state, if the electron states are defined using four quantum numbers. For this purpose he introduced a new two-valued quantum number, identified by Samuel Goudsmit and George Uhlenbeck as electron spin.[5][6]
Connection to quantum state symmetry
The Pauli principle is equivalent to the statement that the total wave function of two identical fermions is antisymmetric under exchange of the particles, ψ(x1, x2) = −ψ(x2, x1), where x denotes the combined space and spin coordinates. If both fermions occupied the same single-particle state, exchanging them would leave the wave function unchanged, while antisymmetry requires it to change sign; the wave function must therefore vanish identically, so such a configuration cannot occur. Equivalently, if the two-particle state is expanded with amplitudes A(x, y) for finding one particle at x and the other at y, antisymmetry demands A(x, y) = −A(y, x), and in particular the diagonal elements A(x, x) are zero.
Pauli principle in advanced quantum theory
According to the spin-statistics theorem, particles with integer spin occupy symmetric quantum states, and particles with half-integer spin occupy antisymmetric states; furthermore, only integer or half-integer values of spin are allowed by the principles of quantum mechanics. In relativistic quantum field theory, the Pauli principle follows from applying a rotation operator in imaginary time to particles of half-integer spin. In one dimension, bosons, as well as fermions, can obey the exclusion principle. A one-dimensional Bose gas with delta-function repulsive interactions of infinite strength is equivalent to a gas of free fermions. The reason for this is that, in one dimension, exchange of particles requires that they pass through each other; for infinitely strong repulsion this cannot happen. This model is described by a quantum nonlinear Schrödinger equation. In momentum space the exclusion principle is valid also for finite repulsion in a Bose gas with delta-function interactions,[7] as well as for interacting spins and the Hubbard model in one dimension, and for other models solvable by the Bethe ansatz. The ground state in models solvable by the Bethe ansatz is a Fermi sphere.
Atoms and the Pauli principle
The Pauli exclusion principle helps explain a wide variety of physical phenomena. One particularly important consequence of the principle is the elaborate electron shell structure of atoms and the way atoms share electrons, explaining the variety of chemical elements and their chemical combinations.
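The antisymmetry condition discussed above can be checked numerically. The following sketch (Python; the orbital choices and function names are ours, for illustration only) builds a two-fermion spatial amplitude as a 2x2 Slater determinant and shows that it changes sign under exchange and vanishes identically when both particles are put into the same orbital.

# psi(x1, x2) = [phi_a(x1)*phi_b(x2) - phi_a(x2)*phi_b(x1)] / sqrt(2)
import numpy as np

def slater2(phi_a, phi_b, x1, x2):
    """Antisymmetrized two-particle amplitude for single-particle orbitals phi_a, phi_b."""
    return (phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)) / np.sqrt(2.0)

# Two illustrative 1D orbitals (harmonic-oscillator ground and first excited states)
phi0 = lambda x: np.exp(-x**2 / 2.0)
phi1 = lambda x: x * np.exp(-x**2 / 2.0)

x1, x2 = 0.3, -1.2
print(slater2(phi0, phi1, x1, x2))   # generally non-zero
print(slater2(phi0, phi1, x2, x1))   # same magnitude, opposite sign (antisymmetry)
print(slater2(phi0, phi0, x1, x2))   # identical orbitals -> exactly zero (exclusion)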
An electrically neutral atom contains bound electrons equal in number to the protons in the nucleus. Electrons, being fermions, cannot occupy the same quantum state as other electrons, so electrons have to "stack" within an atom, i.e. have different spins while at the same electron orbital as described below. An example is the neutral helium atom, which has two bound electrons, both of which can occupy the lowest-energy (1s) states by acquiring opposite spin; as spin is part of the quantum state of the electron, the two electrons are in different quantum states and do not violate the Pauli principle. However, the spin can take only two different values (eigenvalues). In a lithium atom, with three bound electrons, the third electron cannot reside in a 1s state, and must occupy one of the higher-energy 2s states instead. Similarly, successively larger elements must have shells of successively higher energy. The chemical properties of an element largely depend on the number of electrons in the outermost shell; atoms with different numbers of occupied electron shells but the same number of electrons in the outermost shell have similar properties, which gives rise to the periodic table of the elements.[8]:214–218 Solid state properties and the Pauli principle[edit] In conductors and semiconductors, there are very large numbers of molecular orbitals which effectively form a continuous band structure of energy levels. In strong conductors (metals) electrons are so degenerate that they cannot even contribute much to the thermal capacity of a metal.[9]:133–147 Many mechanical, electrical, magnetic, optical and chemical properties of solids are the direct consequence of Pauli exclusion. Stability of matter[edit] The stability of the electrons in an atom itself is unrelated to the exclusion principle, but is described by the quantum theory of the atom. The underlying idea is that close approach of an electron to the nucleus of the atom necessarily increases its kinetic energy, an application of the uncertainty principle of Heisenberg.[10] However, stability of large systems with many electrons and many nucleons is a different matter, and requires the Pauli exclusion principle.[11] The consequence of the Pauli principle here is that electrons of the same spin are kept apart by a repulsive exchange interaction, which is a short-range effect, acting simultaneously with the long-range electrostatic or Coulombic force. This effect is partly responsible for the everyday observation in the macroscopic world that two solid objects cannot be in the same place at the same time. Astrophysics and the Pauli principle[edit] Dyson and Lenard did not consider the extreme magnetic or gravitational forces which occur in some astronomical objects. In 1995 Elliott Lieb and coworkers showed that the Pauli principle still leads to stability in intense magnetic fields such as in neutron stars, although at a much higher density than in ordinary matter.[15] It is a consequence of general relativity that, in sufficiently intense gravitational fields, matter collapses to form a black hole. Astronomy provides a spectacular demonstration of the effect of the Pauli principle, in the form of white dwarf and neutron stars. In both bodies, atomic structure is disrupted by large gravitational forces, but the stars are held in hydrostatic equilibrium by degeneracy pressure, also known as Fermi pressure. This exotic form of matter is known as degenerate matter. 
The immense gravitational force of a star's mass is normally held in equilibrium by thermal pressure caused by heat produced in thermonuclear fusion in the star's core. In white dwarfs, which do not undergo nuclear fusion, an opposing force to gravity is provided by electron degeneracy pressure. In neutron stars, subject to even stronger gravitational forces, electrons have merged with protons to form neutrons. Neutrons are capable of producing an even higher degeneracy pressure, neutron degeneracy pressure, albeit over a shorter range. This can stabilize neutron stars from further collapse, but at a smaller size and higher density than a white dwarf. Neutron stars are the most "rigid" objects known; their Young modulus (or, more accurately, bulk modulus) is 20 orders of magnitude larger than that of diamond. However, even this enormous rigidity can be overcome by the gravitational field of a massive star or by the pressure of a supernova, leading to the formation of a black hole.[16]:286–287
References
1. Krane, Kenneth S. (1987). Introductory Nuclear Physics. Wiley. ISBN 978-0-471-80553-3.
2. Lewis, Gilbert N. (1916). "The Atom and the Molecule". Journal of the American Chemical Society 38: 762–785.
3. Langmuir, Irving (1919). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society 41 (6): 868–934. doi:10.1021/ja02227a002. Retrieved 2008-09-01.
4. Shaviv, Giora (2010). The Life of Stars: The Controversial Inception and Emergence of the Theory of Stellar Structure. Springer. ISBN 978-3642020872.
6. Pauli, W. (1925). "Über den Zusammenhang des Abschlusses der Elektronengruppen im Atom mit der Komplexstruktur der Spektren". Zeitschrift für Physik 31 (1): 765–783.
11. This realization is attributed by Lieb and by Sewell, G. L. (2002), Quantum Mechanics and Its Emergent Macrophysics, Princeton University Press, ISBN 0-691-05832-6, to Dyson, F. J. and Lenard, A.: "Stability of Matter, Parts I and II", J. Math. Phys. 8, 423–434 (1967); J. Math. Phys. 9, 698–711 (1968).
12. As described by Dyson, F. J. (J. Math. Phys. 8, 1538–1545 (1967)), Ehrenfest made this suggestion in his address on the occasion of the award of the Lorentz Medal to Pauli.
15. Lieb, E. H.; Loss, M.; Solovej, J. P. (1995). "Stability of Matter in Magnetic Fields". Physical Review Letters 75 (6): 985–989. arXiv:cond-mat/9506047. Bibcode:1995PhRvL..75..985L. doi:10.1103/PhysRevLett.75.985.
16. Bojowald, Martin (2012). The Universe: A View from Classical and Quantum Gravity. John Wiley & Sons. ISBN 978-3-527-66769-7.
• Dill, Dan (2006). "Chapter 3.5, Many-electron atoms: Fermi holes and Fermi heaps". Notes on General Chemistry (2nd ed.). W. H. Freeman. ISBN 1-4292-0068-5.
Sensors (ISSN 1424-8220), Molecular Diversity Preservation International (MDPI). doi:10.3390/s120506049, sensors-12-06049
Review
Sensing with Superconducting Point Contacts
Argo Nurbawono 1 and Chun Zhang 1,2,*
1 Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore; E-Mail: argo
2 Department of Chemistry, National University of Singapore, 3 Science Drive 3, Singapore
* Author to whom correspondence should be addressed; E-Mail:
Sensors 2012, 12(5), 6049–6074; Received: 14 March 2012 / Revised: 6 April 2012 / Accepted: 20 April 2012 / Published: 10 May 2012
© 2012 by the authors; licensee MDPI, Basel, Switzerland.
Abstract: Superconducting point contacts have been used for measuring magnetic polarizations, identifying magnetic impurities, probing electronic structures, and even resolving the vibrational modes of small molecules. Because the energy scale of the subgap structures in the supercurrent is intrinsically small, being set by the size of the superconducting energy gap, superconductors provide ultrahigh sensitivities for high resolution spectroscopies. The so-called Andreev reflection process between a normal metal and a superconductor carries complex and rich information which, when fully exploited, can be used for powerful sensing. In this review, we discuss recent experimental and theoretical developments in supercurrent transport through superconducting point contacts and their relevance to sensing applications, and we highlight current issues and potentials. A true utilization of the method based on Andreev reflection analysis opens up possibilities for a new class of ultrasensitive sensors.
Keywords: point contact spectroscopy; superconductivity; Andreev reflections
Since the discovery of superconductivity over a hundred years ago [1], superconductors have been used for various sensing applications, among others. Superconducting quantum interference devices (SQUIDs), for example, are ubiquitous as ultrasensitive magnetic sensors, such as in magnetic resonance imaging (MRI) in medical applications, thanks to the Josephson effects [2]. Less common applications are point contact Andreev reflection (PCAR) spectroscopies, which are still largely confined to laboratory demonstrations and theoretical studies. This is due to the non-trivial Andreev physics involved in supercurrent transport through point contacts (PC), which requires more rigorous theoretical treatments in order to decipher the underlying physics and therefore to interpret the experimental results correctly. A PC can be fabricated with various methods, for example using a sharp, needle-like metallic probe with a chemically etched tip, which is then pressed onto another metallic surface using a combination of a piezoelectric actuator and a differential screw mechanism [3]. A combination of reactive ion etching (RIE) and electron beam machining is also common to produce nanobridges [4], which are basically nanoholes drilled through a thin insulator. Another common technique is the micro-controlled break junction (MCBJ) [5], which is basically a metallic nanocontact produced with electron beam machining that can be broken up to produce an atomic gap. This gap can be precisely adjusted using a piezoelectric actuator. The contact sizes range from a few nanometers down to a single atom, and therefore the transport through these PCs is mainly ballistic, in the Sharvin limit [6], where the constriction or contact size is much smaller than the elastic mean free path of the electrons. Over the past decade there have been two very significant landmarks in the applications of PCAR spectroscopies.
The first one is the measurement of magnetic polarization [3,7], which utilizes the fact that Andreev process is suppressed when a supercurrent flows from a superconductor to a magnetic normal metal. The degree of polarization can be precisely measured by fitting the entire differential conductance with an appropriate model based on a semiclassical theory, which would be discussed in detail later in this review. This method has spurred new experimental and theoretical developments in magnetic polarization measurements, partly because the PCAR method is easier and more flexible compared to the older methods such as spin-dependent tunneling planar junctions [8] and spin-resolved photoemissions spectroscopy [9]. The second significant landmark is the experimental determination of individual transmission quantum channels of a superconducting single-atom contact [1012], utilizing a microscopic Hamiltonian model and nonequilibrium Green's functions technique to fit the current-voltage curves. This was the first time that the details of quantum conduction channels have ever been resolved experimentally after it was first proposed more than fifty years ago by Landauer [13,14]. Since then, the microscopic Hamiltonian theory is becoming the mainstream in the subsequent development of superconducting quantum transport. Many experiments followed after this pioneering work discussing other various aspects such as using different contact materials from niobium [15,16], effects of diffusivity [17], ferromagnetic interface [18], hydrogen adsorption [19], or structural deformation effects [20], etc. There are also other more recent exciting experimental developments such as the work of Ji et al. [21] and Marchenkov et al. [22], and we would also briefly discuss them in the section on experimental surveys. In order to have a meaningful physical understanding of the PCAR physics, we shall also present a detailed discussions of the theoretical aspects in both semiclassical and quantum pictures. The theoretical discussions in this review shall be divided into two parts. The first part is the summary of the semiclassical treatment based on the famous Blonder–Tinkham–Klapwijk (BTK) theory [23] and its relevant extensions for the PCAR magnetic polarization measurements. The second part is the so-called quantum Hamiltonian theory where we would adopt nonequilibrium Green's function method which is regarded as the most rigorous quantum perturbative technique for dealing with nonequilibrium problems [24]. This formalism fits the atomic point contacts where the conduction consists of only a few quantum channels. We would derive the supercurrent based on the Bardeen–Cooper–Schrieffer (BCS) model of Hamiltonian [25], and highlight some applications of the theory such as to resolve individual quantum channels of a superconducting MCBJ [10], and to study quantum dots coupled to superconducting leads under external radiations [26]. Experimental Surveys Magnetic Polarization Measurements The technique of PCAR spectroscopy has been used for measuring the polarization of ferromagnetic materials [3,7,27], which is mainly driven by the need to search suitable materials for spintronic devices [28,29]. The PCAR method provides easier and more flexible measurements compared to the conventional spin tunneling using planar junctions [8] or spin resolved photoemissions spectroscopy [9]. 
Unlike the planar junction method, PCAR does not need application of large magnetic field of several Teslas, and there is no constraints in terms of thin film fabrications which impose severe limitations on the types of materials that can be tested. Also, PCAR offers better energy resolutions compared to the photoemission method which is typically limited to ∼1 meV resolutions. The PC and the sample are immersed in a liquid helium bath to keep the temperature below the transition temperature Tc. The positioning and adjustment of the PC employed standard piezoelectric actuators for achieving ideal ballistic contacts. Some cares must be taken to prevent excessive pressure on the tip as this may change electronic properties of the materials and hence the spin polarizations [30]. The current is usually obtained using standard AC lock-in techniques at few kHz frequency. The PCAR method is based on the fact that the current through the PC differs when the tip is superconducting compared to when it is in normal state. The PCAR method is based on the behaviour of the conductance at very low bias where the current is most dependent on the polarization P of the ferromagnet. At low bias electrons enter the gap through Andreev reflection (AR) mechanism, which produces a hole that travels in opposite direction for every electron that enters the gap. The net charge of 2e that moves as supercurrent results in the doubling of conductance, i.e., GNS/GNN = 2. This ratio is called the normalized conductance. When the normal metal is a ferromagnet with perfect polarization, i.e., P = 1, then the probability for the electron to make a pair with another electron with opposite spin is virtually zero, and therefore AR is completely suppressed at the interface as illustrated in Figure 1(a). This leads to zero conductance, i.e., GNS/GNN = 0. A simple linear interpolation between these two extremes gives, GNS/GNN = 2(1 − P), and based on this ballistic assumption, Upadhyay et al. [7] and Soulen et al. [3,30] independently made the first PCAR magnetic polarization measurements, though the idea for deducing spin polarization from conductance was already proposed by de Jong et al. [27]. The theoretical normalized conductance for different polarizations can be seen in Figure 1(b). They fit the entire normalized differential conductance curves for Co, Ni, and some compound ferromagnets as well as Cu. Of course this ballistic assumption is insufficient and the effects of some diffusivity, impurities and surface properties at the contact must be incorporated in order to make better fits to the experimental curves. Mazin et al. [31] and Strijkers et al. [32] proposed a straightforward extension to the BTK theory, which then became a more standard method for polarization measurements with PCARs. As the scattering suppresses AR at low bias and creates sharp peaks in the conductance at eV = ±Δ, a careful account of the diffusive transport is necessary to obtain more reliable estimate for the polarization measurements. Suppression of AR may be misinterpreted as overestimation of polarization if scattering is not properly accounted for. A different parameterization for the BTK coefficients was then proposed and used to determine the spin polarization measurements in half-metallic CrO2 [33]. The modified BTK versions by Mazin and Strijkers are fairly similar and a comparison for CrO2 system reveals only 0.02 difference in the polarization measurements, which is about the accuracy of the PCAR method [34]. 
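In the purely ballistic picture quoted above, the polarization follows directly from the zero-bias normalized conductance through GNS/GNN = 2(1 − P). The sketch below (Python) is a minimal illustration of that estimate; the function name and the example values are ours, and a real analysis fits the full modified-BTK conductance curve rather than a single point.

def polarization_from_zero_bias(g_normalized):
    """Ballistic estimate: G_NS/G_NN = 2*(1 - P)  =>  P = 1 - G/2."""
    if not 0.0 <= g_normalized <= 2.0:
        raise ValueError("normalized conductance must lie between 0 and 2")
    return 1.0 - g_normalized / 2.0

print(polarization_from_zero_bias(2.0))   # unpolarized metal -> P = 0
print(polarization_from_zero_bias(0.0))   # fully polarized half-metal -> P = 1
print(polarization_from_zero_bias(1.1))   # hypothetical measurement -> P = 0.45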
These details shall be discussed in the theoretical sections. The model also incorporates proximity effects, which can reduce the effective gap of the superconductor. Hundreds of related works on PCAR magnetic measurements have appeared since these main experimental and theoretical achievements. For instance, Pérez-Willard et al. [35] performed PCAR measurements on an Al/Co contact fabricated with the RIE method [4] and analyzed the dependence of the conductance on temperature and magnetic field. As predicted by the extended BTK model, temperature reduces the effective superconducting gap, and the data agree nicely with the theory except for temperatures close to Tc. Application of a magnetic field parallel to the insulating layer also modifies the Andreev spectra: the field reduces the height of the two maxima around the gap, and the transition to normal conductance at the threshold field is abrupt. Panguluri et al. [36] performed PCAR measurements on MnAs epitaxial films grown on [011] GaAs using Pb and Sn point contacts. They also performed a phonon spectrum analysis (d2I/dV2) of the contacts and concluded that smaller contact diameters are necessary to achieve truly ballistic transport; to obtain reliable PCAR measurements, contact sizes around 10 nm or smaller are generally preferable. PCAR can also be used to measure spin diffusion lengths. For example, Geresdi et al. and others [37,38] used PCAR to measure spin relaxation in Pt thin films grown on top of a ferromagnetic Co layer, whereby the temperature dependence was investigated and various sources of the spin relaxation in Pt were identified. The widespread use of the BTK theory extension for PCAR spectroscopy has been questioned by Xia et al. [39], who argued that realistic interface conditions must be considered if PCAR measurements are to be valid at all. From the theoretical work on giant magnetoresistance it is generally known that reflection processes at the interface between nonmagnetic and ferromagnetic materials are strongly spin dependent [40], yet the model used in PCAR experiments never introduced spin-dependent scattering at the interface. Xia et al. found that failing to take a spin-dependent scattering potential into account results in poor fits for Pb/Co systems. Grein et al. [41] recently proposed a spin-active scattering model of PCAR spectra, which includes spin filtering and spin mixing effects. They found that the shape of the interface potential has an important influence on spin mixing, which probably makes it necessary to reconsider the general validity of some PCAR measurements once again.
Individual Quantum Channel Measurements
The second important landmark in the applications of the PCAR method is the determination of the individual transmission coefficients of an atomic point contact (APC) [10,11], often called a quantum point contact (QPC). A typical APC consists of only a small number of eigenchannels, and each of them is characterized by a transmission coefficient τn. Each eigenchannel contributes to the conductance by G0τn, where G0 is the conductance quantum, G0 = 2e2/h. The total conductance of an APC is thus given by [13,14]
G = \frac{2e^2}{h} \sum_n \tau_n .
Since the transmission coefficient of each channel can take any value between zero and unity, the conductance of a single channel is generally less than G0, despite the fact that statistically the conductance of an APC tends to be quantized.
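The Landauer sum above is straightforward to evaluate once a set of transmission coefficients is assumed. The sketch below (Python) uses invented channel values for illustration; they are not the measured "PIN code" of any real contact.

# Total conductance G = G0 * sum_n tau_n of a few partially open channels
G0 = 2 * (1.602176634e-19) ** 2 / 6.62607015e-34   # conductance quantum 2e^2/h, in siemens

def landauer_conductance(transmissions):
    """Total conductance for a list of channel transmission coefficients in [0, 1]."""
    if any(t < 0.0 or t > 1.0 for t in transmissions):
        raise ValueError("each transmission coefficient must lie in [0, 1]")
    return G0 * sum(transmissions)

taus = [0.68, 0.25, 0.12]                           # hypothetical single-atom contact
G = landauer_conductance(taus)
print(f"G = {G:.3e} S = {G / G0:.2f} G0")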
The quantitative information on individual conductance channels has been inaccessible through normal conductance measurements, but for superconducting systems it can be extracted owing to the sensitivity of the so-called sub-gap structures (SGS) of the superconductor at low bias to small changes in each conductance channel. The SGS originate from multiple Andreev reflection (MAR) [42] between the two superconductors and the central normal (or vacuum) region, which we shall discuss in detail later in the theory section. This resolved the old question of whether a quantum of conductance in the measurements actually corresponds to a number of partially open channels rather than to just one channel. Scheer et al. [10] demonstrated this using a superconducting aluminium APC fabricated with the MCBJ method, and fitted the time-averaged current with a theoretical model based on the quantum Hamiltonian theory [43]. They found that a single Al atomic contact actually corresponds to three partially open eigenchannels, which corresponds exactly to the number of valence orbitals, as illustrated in Figure 2. This conclusion was further verified for Pb and Nb APCs [12]. The study is fundamental to our understanding of molecular electronics and mesoscopic transport in general. The total current can be analyzed as the sum of independent contributions from each channel,
I(V) = \sum_n I_n(V, \tau_n),
where each channel contribution is of the Landauer form \frac{2e}{h} \int T(E,V)\,[f_L(E) - f_R(E)]\,dE, and from this decomposition the individual τn can be deduced, the so-called "PIN code" of the eigenchannels. We shall later discuss the derivation of the transmission terms using the quantum Hamiltonian model. Excellent quantitative agreement with the experimental data provides strong justification for the validity of the subsequently developed theory of superconducting quantum transport.
Magnetic Impurity Measurements
PCAR spectroscopy has also been used to detect and identify magnetic impurities on superconducting surfaces. Yazdani et al. [44] used a gold scanning tunneling microscope (STM) tip to study excitations from magnetic adatoms of Mn and Gd on a superconducting Nb substrate. Atoms such as Cr, Mn and Gd have been found to reduce the transition temperature Tc of Nb films, and magnetic impurities in general reduce the superconducting order parameter and lead to quasiparticle excitations within the superconducting gap [45,46]. Excitations from the magnetic impurities were confirmed by Yazdani et al. by comparison with non-magnetic adatoms such as Ag, which showed almost featureless conductance across the entire bias range. Ji et al. [21] performed an improved experiment with both the STM tip and the substrate made from superconducting materials, Nb and Pb respectively. Unlike Yazdani's work, where a quantitative analysis for adatom identification had been hindered by poor energy resolution, Ji et al. made very significant improvements owing to the existence of MAR between the two superconductors, which provides high resolution SGS in the conductance, as illustrated in Figure 3. More symmetric SGS, resolved to within 0.1 meV, can clearly be seen in the conductance measurements. They claimed that the method can potentially be used to unambiguously detect magnetic adatoms on a superconducting surface, because these spectra are unique fingerprints of the spin states of the adatoms, as a result of complex interactions between the Andreev bound state (ABS) process and the electronic properties of the adatoms.
They also performed similar measurements on dimers of Mn and Cr. Ji et al. used a thin film superconducting Pb which is deposited on a clean Si(111) up to 20 monolayers thick. The superconducting gap of the Pb thin film was found to be 1.30 meV while the Nb STM tip was between 1.44 to 1.52 meV. The effective energy gap of the system turned out to be around 3.0 meV as can be seen in in Figure 3(b) for a clean Pb surface. Different number of peaks with varying intensities were observed for different adatoms. Ji et al. suggested that these correspond to each angular momentum channels, though this still requires further investigations. Electron transport process between the STM tip and the adatoms clearly involves only a few quantum channels and the interactions of the ABS with the spin impurities need to be modeled microscopically in order to fit and interpret the experimental data. Apart from the interface issues which are always tricky, first principle calculations of the adatoms combined with suitable model of the superconductors possibly enable unambiguous determination of magnetic adatoms. Vibrational Mode Measurements Excitations of vibrational modes by traversing electrons have been observed in metallic electrodes attached to nanostructures and molecules such as carbon nanotubes [47,48], hydrogen molecules [49], organic molecules [50,51], gold atomic chains [52], and fullerenes [53]. When a vibrational mode resonates with the bias energy, the conductance can either be enhanced or suppressed by the vibrations. The vibrational energy of the nth -mode is given by ħωn, and the bias at which this takes place is Vn = ħωn/e. Thus in such systems, vibrational modes can be detected directly from current measurements alone and to determine the actual modes one must combine it with standard first principle calculations in order to model the complete vibrating molecule. A recent application of PCAR is to study vibrational modes of a suspended Nb dimer conducted by Marchenkov et al. [22], as illustrated in Figure 4. The dimer was fabricated with the MCBJ technique, and from previous study based on density functional theory (DFT) calculations and conductance measurements, it was confirmed that the configurations at the tip before the break-up was a Nb dimer, where the symmetry and asymmetry of the dimer position across the gap corresponds to either high or low conductance respectively [54,55]. Though in this particular setup the dimer is made of the same atoms as the leads, the idea is still applicable for other types of molecules to be probed with similar technique. This would enable us to study vibrational modes of a truly isolated molecule, unlike the behaviours of ensembles such as in the conventional IR, UV or NMR spectroscopies [56,57]. The measurements were performed at various temperatures from well below Tc up to 12 K. Resonances for high conductance configurations (the dimer is symmetric between the leads) were analysed which appeared both inside and outside the SGS. Particularly for resonances outside SGS, the so-called over the gap structure (OGS), they observed more symmetric and persistent patterns through out different temperatures until they diminished as T > Tc. Unlike the usual SGS which originate from MAR, the OGS do not change positions with bias as the temperature varies. The OGS is not governed by MAR; rather Marchenkov et al. 
suggested that the OGS originated from the atomic scale structural and dynamical properties of the dimer which resonate with the Josephson current oscillations. The exact shapes, amplitudes and widths of these features correspond to different vibronic and electronic coupling regimes. The time dependent electromagnetic fields of the Josephson oscillations resonate with the vibrational eigenmodes of the Nb dimer. Further they compared the frequencies with ab initio calculations based on DFT and found nice agreements for three different modes of vibrations: longitudinal, transverse and wagging. The method offers a new physics to be used to study dynamical properties of small molecules in general. Theoretical Surveys At the heart of the supercurrent transport mechanism is the so-called Andreev reflection (AR) process which can take place when a superconductor is in contact with a normal metal [58]. In the superconductor the quasiparticles form pairs of opposite spins commonly known as the Cooper pairs [59]. For a normal electron to move into the superconductor, it needs to make a pair with another electron with the opposite spin. At bias higher than superconducting gap energy, denoted as Δ, the electron enters as quasielectron which relaxes into the Cooper pair over a charge relaxation distance. At bias eV < Δ, superconducting gap prevents direct transfer of single electron states and as a result a hole is reflected back at the interface in order to create a Cooper pair in the superconductor, resulting in the doubling of the conductance as discussed in Section 2.1. When two superconductors are separated by a normal region, a series of electron and hole reflection process take place, which is called multiple Andreev reflections (MAR) [42]. Illustrations can be made with a simple diagram in Figure 5 where a normal region is sandwiched in between two superconductors with identical energy gaps and a small bias eV < Δ is applied across the superconductors. The current is oscillating across the junction with a frequency proportional to the bias, ω = 2eV/ħ, known as the AC Josephson frequency, and the MAR process creates SGS in the IV curves. To illustrate the MAR process, we can use the following arguments: initially an electron from the interface between N and S on the left is accelerated by the external field toward the right, but unable to enter due to the energy gap. This would result in a reflection of a hole moving back to the left. The charge of 2e (one from the electron, the other from the hole moving in opposite direction) increase the supercurrent. The process is repeated until the particle gains sufficient energy to overcome the gap. Octavio et al. [42] explains, using the extension of BTK model [23], the SGS in the supercurrent behaviour when the bias is comparable or smaller than Δ. Many researchers have suggested that the SGS are basically current singularities that take place at bias V = 2Δ/en where n = 1, 2, 3, …. However the details of SGS also involve some subtle aspects that are still missing from the semi-empirical approaches, such as the delicate interface properties. An entirely first principle microscopic theory would be needed to quantitatively model the interface natures. A successful quantum theory that can do so would enable PCAR to be used as a reliable sensor with ultrahigh sensitivity, since the SGS provide submili-electronvolt energy resolutions. 
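The subharmonic gap structure described above appears at a simple set of bias values. A minimal sketch (Python) lists the expected positions V = 2Δ/(en) of the current singularities for the first few MAR orders; the gap value is taken, for illustration, to be close to the Pb thin-film gap quoted earlier.

# Expected SGS positions for multiple Andreev reflection: V_n = 2*Delta/(e*n)
Delta_meV = 1.3                      # illustrative superconducting gap in meV

for n in range(1, 6):
    V_n = 2.0 * Delta_meV / n        # result in mV, since e cancels when Delta is in meV
    print(f"n = {n}:  V = {V_n:.3f} mV")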
The BTK Theory
Now we shall summarize the derivation of the phenomenological treatment of transport through a normal-superconducting (NS) interface in the famous BTK theory [23]. First, let us discuss some elementary results of the Bogoliubov-de Gennes equation, from which the BTK theory is derived. Readers who are not familiar with superconductivity can consult some well known references [59].
The Bogoliubov-de Gennes Equation
The Bogoliubov-de Gennes equation [60] describes quasiparticles of electrons and holes in superconductors, analogous to the way the Schrödinger equation describes electrons and holes in normal solids. Using the standard two-state basis of electron-like and hole-like states, we can describe the wave function as
\psi(x,t) = \begin{pmatrix} f(x,t) \\ g(x,t) \end{pmatrix},
and the Bogoliubov-de Gennes equation reads
i\hbar \frac{\partial \psi(x,t)}{\partial t} = \begin{pmatrix} H(x) & \Delta(x) \\ \Delta(x) & -H(x) \end{pmatrix} \psi(x,t),
where
H(x) = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x) - E_F .
Δ(x) is the spatially dependent superconducting energy gap (or quasiparticle coupling) and EF is the Fermi energy. The mathematical structure of the equation implies time-reversed dynamics of the holes compared to that of the electron quasiparticles. For the simplest scenario, where Δ(x) = Δ and V(x) = 0, we can have an eigenfunction solution of the form
\psi(x,t) = \begin{pmatrix} u \\ v \end{pmatrix} \exp[i(kx - \omega t)],
which gives the eigenenergy
E^2 = \left(\frac{\hbar^2 k^2}{2m} - E_F\right)^2 + \Delta^2,
and a sketch of this energy can be seen in Figure 6 for a normal metal (Δ = 0) and a superconductor (Δ > 0). The positive solution of the energy refers to the electron quasiparticles and the negative one to hole quasiparticles. The superconducting energy gap appears whenever Δ > 0, and it is typically of the order of 1 meV for elemental (low Tc) superconductors, while EF is several eV in magnitude. Another useful quantity is the density of states (DOS), which can be derived from elementary solid state physics,
\rho(k)\,dk = \frac{V}{(2\pi)^3}\, 4\pi k^2 \, dk,
and a simple expression for the ratio of the superconducting DOS to the normal-state DOS can easily be derived. Assuming equal Fermi energies in N and S, (EF)N = (EF)S, and in the limit of an energy range small compared to the Fermi energy, we have
\frac{\rho_S(E)}{\rho_N(E)} \equiv \rho(E) = \frac{E}{\sqrt{E^2 - \Delta^2}}
for E > Δ and zero otherwise.
Deriving the Supercurrent in the BTK Theory
The original BTK theory solves the scattering conditions to obtain reflection and transmission probabilities at the interface between a normal metal and a superconductor using the simplest possible assumptions. First, BTK theory assumes equal Fermi energies in the normal metal and the superconductor. Second, the superconducting gap Δ(x) is assumed to be spatially independent. In reality, when a superconductor is in contact with a normal metal, there are proximity effects [60] due to the diffusion of some Cooper pairs into the metal, which reduces the effective gap at the superconductor interface. Proximity effects require a spatially dependent Δ over a certain length scale around the interface; however, in the BTK theory we shall neglect these effects and assume a sudden change of Δ. Third, we shall neglect interactions in both the superconductor and the metal, i.e., V(x) = 0 for regions deep inside the conductors, and in the vicinity of x → 0 we can model a simple interface scattering potential such as V(x) = Hδ(x), where H is the strength of the scattering potential.
Such a simple (but unrealistic) scattering potential allows for analytical spatial solutions of the wave function as follows:
\psi_N(x) = \begin{pmatrix}1\\0\end{pmatrix} e^{i(k_F + k_N)x} + a \begin{pmatrix}0\\1\end{pmatrix} e^{i(k_F - k_N)x} + b \begin{pmatrix}1\\0\end{pmatrix} e^{-i(k_F + k_N)x},
\psi_S(x) = c \begin{pmatrix}u\\v\end{pmatrix} e^{i(k_F + k_S)x} + d \begin{pmatrix}v\\u\end{pmatrix} e^{-i(k_F - k_S)x}.
The wave-numbers kN and kS are measured from the Fermi wave-number kF. Referring to Figure 6, the incident electron e has probability of unity, and it can experience Andreev reflection (a) or normal reflection (b) at the interface. The transmission can take the form of electron-like (c) or hole-like (d) quasiparticles in the superconductor. The boundary conditions at the interface give
\psi_N(0) = \psi_S(0) = \psi(0), \qquad \psi_S'(0) - \psi_N'(0) = \frac{2mH}{\hbar^2}\,\psi(0).
This allows for the solution of the coefficients and therefore the probabilities A = |a|2, B = |b|2, etc. The expressions for A and B are listed in Table 1, while the transmission probabilities C and D can be calculated from conservation of probability, C + D = 1 − A − B, but we do not need their explicit expressions in order to derive the current later. The dimensionless quantity
Z^2 = \frac{m H^2}{2\hbar^2 E_F}
is often called the barrier strength, representing the strength of the scattering potential Hδ(x). Now we consider energies less than the gap energy, i.e., |E| < Δ. The incident electrons cannot enter the superconductor as quasiparticles, therefore A + B = 1. If Z = 0, all electrons are Andreev reflected (A = 1, B = 0), while for Z > 0 some electrons are normally reflected (A < 1, B > 0). To make the connection with the normal state, we consider the normal-normal (NN) interface obtained by letting Δ → 0, or equivalently ρ → 1. The transmission, evaluated as 1 − (A + B), is then given by
T = \frac{1}{1 + Z^2},
which is the standard result for delta-potential scattering, and the Andreev reflection probability at the Fermi energy is
A = \left[\frac{1}{1 + 2Z^2}\right]^2,
which is roughly the square of the normal transmission. This reflects the fact that the AR process requires the simultaneous transmission of two independent electrons. Once the probabilities A and B are known, we are ready to calculate the current, which can be deduced either from the left (normal metal) or the right (superconductor) side of the interface. Let us consider the normal metal side: within an energy interval δE, there is a current contribution to the right from the incident electrons, a contribution from AR, which reflects holes to the left and therefore adds current to the right, and the normal reflection, which contributes current to the left. Summing up all of these we have
\delta I_\rightarrow(E) = e \mathcal{A}\, v(E)\, \rho(E)\, [1 + A(E) - B(E)]\, f(E)\, \delta E,
where e is the electronic charge, \mathcal{A} is the point contact cross-sectional area, v(E) is the electron velocity, ρ(E) is the DOS, and f(E) is the Fermi-Dirac distribution function. There is an equivalent current flowing to the left from the superconductor, but with a Fermi-Dirac distribution shifted by the applied bias,
\delta I_\leftarrow(E) = e \mathcal{A}\, v(E)\, \rho(E)\, [1 + A(E) - B(E)]\, f(E - eV)\, \delta E,
and the total current can be written as
I = e \mathcal{A} \int v(E)\, \rho(E)\, [1 + A(E) - B(E)]\, [f(E - eV) - f(E)]\, dE.
The integration is confined mainly to a small energy region around the Fermi level, since the term [f(E − eV) − f(E)] vanishes at large energies.
In practice, eV ∼ Δ ≪ EF, and thus the velocity and DOS can be taken as constants,
I = e \mathcal{A}\, v\, \rho \int [1 + A(E) - B(E)]\, [f(E - eV) - f(E)]\, dE.
The conductance, defined as G = dI/dV, can be derived for both the NN and the NS system, giving the ratio of the NS to the NN conductance as
\frac{G_{NS}}{G_{NN}} = (1 + Z^2) \int [1 + A(E) - B(E)]\, f'(E - eV)\, dE,
which is the main result of the celebrated BTK theory. Here f′(E) refers to the derivative of f(E) with respect to energy. To calculate the current through SNS systems, Octavio et al. combined two BTK formulations and used them to explain MAR effects in SNS junctions [42]. Interested readers can refer to the original paper for details. In order to extend the BTK theory to measure the spin polarization of ferromagnets, Mazin et al. [31] and Strijkers et al. [32] proposed that the current I is a superposition of a fully polarized current PI and a fully non-polarized current (1 − P)I. The non-polarized current can be calculated using the standard BTK theory, while the polarized current needs to be calculated with modified expressions for the reflectivities, denoted Ã and B̃. The modified coefficients are determined as follows. The fully polarized current consists of one electron spin species only, therefore there is no Andreev reflection, i.e., Ã = 0 and B̃ + C̃ + D̃ = 1. At small energies, |E| < Δ, there is no transmission, implying B̃ = 1 [32]. For |E| > Δ, B̃ can be determined by assuming that the ratio between normally reflected and transmitted electrons is independent of the polarization, in other words
\frac{B}{C + D} = \frac{\tilde{B}}{\tilde{C} + \tilde{D}},
which subsequently gives
\tilde{B} = \frac{B}{1 - A}.
Complete tabulations of Ã and B̃ can be found in the original paper by Strijkers et al. [32]. However, Mazin et al. proposed a slightly different approach that, for electrons with energies above the superconducting gap, describes Andreev-reflected holes as a spatially decaying evanescent wave with finite probability but carrying no current. This difference turns out to be a minor issue, as the two prescriptions differ only by a negligible amount when used to interpret the experiments [34]. The conductance ratio for the spin polarized system is hence given by
\frac{G_{NS}}{G_{NN}} = P\,(1 + Z^2) \int [1 + \tilde{A}(E) - \tilde{B}(E)]\, f'(E - eV)\, dE + (1 - P)\,(1 + Z^2) \int [1 + A(E) - B(E)]\, f'(E - eV)\, dE.
In the metallic limit of perfect contact there is perfect transparency (Z = 0), and the normalized conductance ratio at zero bias is simply 2(1 − P), as stated earlier in the section on the experimental surveys.
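As a concrete illustration of the modified BTK model described above, the sketch below (Python) evaluates the normalized conductance at zero temperature for a given barrier strength Z and polarization P. The below-gap and above-gap coefficients used here are the standard BTK expressions as commonly tabulated; they are not reproduced in the text above, so treat them as an assumption to be checked against Table 1 of the original papers. The zero-temperature limit replaces the thermal smearing integral by a simple evaluation at E = eV.

import numpy as np

def btk_coefficients(E, Delta, Z):
    """Standard (unpolarized) BTK Andreev (A) and normal (B) reflection probabilities."""
    E = abs(E)
    if E < Delta:                                   # below the gap
        A = Delta**2 / (E**2 + (Delta**2 - E**2) * (1 + 2 * Z**2)**2)
        B = 1.0 - A
    else:                                           # above the gap
        u2 = 0.5 * (1 + np.sqrt(E**2 - Delta**2) / E)
        v2 = 1.0 - u2
        gamma2 = (u2 + Z**2 * (u2 - v2))**2
        A = u2 * v2 / gamma2
        B = (u2 - v2)**2 * Z**2 * (1 + Z**2) / gamma2
    return A, B

def normalized_conductance(eV, Delta, Z, P):
    """Zero-temperature G_NS/G_NN as a P-weighted mixture of polarized and unpolarized currents."""
    A, B = btk_coefficients(eV, Delta, Z)
    g_unpol = (1 + Z**2) * (1 + A - B)
    # Fully polarized channel: no Andreev reflection; below the gap everything is reflected.
    Bp = 1.0 if abs(eV) < Delta else B / (1.0 - A)
    g_pol = (1 + Z**2) * (1 - Bp)
    return (1 - P) * g_unpol + P * g_pol

Delta = 1.0                                         # gap in meV (illustrative)
for V in [0.0, 0.5, 1.5, 3.0]:                      # bias in mV
    print(V, round(normalized_conductance(V, Delta, Z=0.3, P=0.4), 3))

At Z = 0 and P = 0 the routine reproduces the textbook limits: a doubled conductance below the gap and the normal-state value well above it, while a finite P suppresses the zero-bias value towards 2(1 − P).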
In order to have a truly first principle method which takes into account the real atomic structure of the device, the theory of superconductivity needs to be combined, for example, with density functional theory (DFT). Fortunately such formalisms are already under developments [71,72] and by combining this formalism with NEGF would enable a first principle calculation for superconducting transport. This is perhaps the future endeavor for the researchers in the field. Model Hamiltonian and Current Derivation In quantum Hamiltonian theory, a system with two metallic leads can be represented by two independent Hamiltonians, HL and HR together with a weak tunneling Hamiltonian between the leads, HT, that represents coupling by which electrons are transferred from one lead to another. To model experimental systems described in Sections 2.3 and 2.4 where quantum point contacts are used to probe magnetic impurities or molecules, we can add an intermediate centre region where electrons transit before they tunnel to the next lead. This can also be thought of a quantum dot represented by a Hamiltonian HC. For a vacuum region between the leads such as in Section 2.2 we do not need HC. The schematics for the system is shown in Figure 7. Expressions for the whole system's Hamiltonian can be written as the following, H ( t ) = H L + H T ( t ) + H C + H Rwhere [43], H L + H R = k , σ , α = L , R ɛ k α σ a k α σ a k α σ + k , α = L , R Δ k α a k α a k α + H . c H C = i , σ ɛ i σ c i σ c i σ + interaction terms H T ( t ) = k , i , σ , α = L , R t k α i e i ( ϕ α + 2 e V α t ) a k α σ c i σ + H . c . The leads are governed by the mean field BCS theory [59]. Momentum index k refers to the leads, and index i (or j) refers to the quantum dot which contains discrete energy levels ε. σ refers to the spin, Vα is the chemical shift due to bias potential across the junction, and φα is the superconducting phase of the leads. Operators a(†) annihilate (create) particle on their respective leads, while operators c(†) do the same for the quantum dot. The time dependent phase is the consequence of the AC Josephson effects in finite bias, and it is incorporated into the tunneling terms following a gauge transformation suggested by Rogovin et al. [73]. For superconducting systems governed by the BCS Hamiltonian, we can construct Green's functions as 2 × 2 Nambu (spinor) space [74] similar to previous construction for Bogoliubov de Gennes, and this is due to the anomalous terms in the potential which contain two operators with opposite spins and momentum. Nambu representation provides consistent and convenient form of Green's function required for the evaluation of equation of motion and perturbation theory. 
The spinor terms are defined as, α k = [ a k a k ] and α k = [ a k , a k ] For example we can calculate the (retarded) free propagator gr for the mean field BCS model as the following, g r ( k , t , t ) = i θ ( t t ) { α k ( t ) , α k ( t ) } = i θ ( t t ) [ { a k ( t ) , a k ( t ) } { a k ( t ) , a k ( t ) } { a k ( t ) , a k ( t ) } { a k ( t ) , a k ( t ) } ] Evaluations of this term gives [67,68], k g r ( k , t , t ) = i θ ( t t ) d ɛ ρ N β ( ɛ ) e i ɛ ( t t ) [ 1 Δ / ɛ Δ / ɛ 1 ]where ρN is normal density of states and β(ε) is a complex term related to the BCS DOS defined as, β ( ɛ ) = | ɛ | ɛ 2 Δ 2 θ ( | ɛ | Δ ) + ɛ i Δ 2 ɛ 2 θ ( Δ | ɛ | ) Another useful free propagator is the lesser propagator given by, k g < ( k , t , t ) = i d ɛ ρ N f ( ɛ ) Re [ β ( ɛ ) ] e i ɛ ( t t ) [ 1 Δ / ɛ Δ / ɛ 1 ] Time-dependent supercurrent across the junction can be derived from the expectation value of the time derivative of the number operator in any one leads, say the left one for convenience, I ( t ) = e N ˙ L = i e [ N L ( t ) , H ( t ) ] = 2 e Re i , k Tr { G i , L k < ( t , t ) t L i ( t ) σ Z } The term G i , L k < ( t , t ) is called lesser Green's function, which is defined as, G j , L , k < ( t , t 1 ) = i [ a L k ( t 1 ) c j ( t ) a L k ( t 1 ) c j ( t ) a L k ( t 1 ) c j ( t ) a L k ( t 1 ) c j ( t ) ]and the term tLi(t) is tunneling matrix given by, t L j ( t ) = [ t L j e i ( ϕ L + 2 e V L t ) 0 0 t L j e i ( ϕ L + 2 e V L t ) ] The term az is the Pauli matrix, σ Z = [ 1 0 0 1 ] The next step is to express the current in terms of the free propagator of the leads and Green's function of the quantum dot. This can be done through NEGF procedure where the corresponding time-ordered Green's function for G i , L k < is evaluated with NEGF time contour integral, followed by Langreth's analytical continuation. This gives the expression for G i , L k < as the following, G j , L k < ( t , t ) = i d t ( G j i r ( t , t ) t L i ( t ) g L k < ( t t ) + G j i < ( t , t ) t L i ( t ) g L k a ( t t ) )where the quantum dot's Green's functions are given by, G i j r ( t , t 1 ) = i θ ( t t 1 ) [ { c i ( t ) , c j ( t 1 ) } { c i ( t ) , c j ( t 1 ) } { c i ( t ) , c j ( t 1 ) } { c i ( t ) , c j ( t 1 ) } ] G i j < ( t , t 1 ) = i [ c j ( t 1 ) c i ( t ) c j ( t 1 ) c i ( t ) c j ( t 1 ) c i ( t ) c j ( t 1 ) c i ( t ) ] We can then substitute these into G< and write out the current equation. For simplicity in the current example we can include only one localized level in the quantum dot, i.e., transport is only through a single eigenchannel. Using the expressions for the BCS free propagators in the previous chapter and after rearranging the terms we would obtain, I ( t ) = 2 e Im t d t 1 d ɛ 2 π e i ɛ ( t t 1 ) Tr { [ Re ( β L ( ɛ ) ) f L ( ɛ ) G r ( t , t 1 ) + β L ( ɛ ) G < ( t , t 1 ) ] Γ L L ( ɛ ) σ z }and the term Σ̃L/R(ε) is a product term from the rearrangements defined as, L / R ( ɛ ) = [ e i e V L / R ( t 1 t ) Δ L / R ɛ e i ( ϕ L / R + e V L / R ( t 1 + t ) ) Δ L / R ɛ e i ( ϕ L / R + e V L / R ( t 1 + t ) ) e i e V L / R ( t 1 t ) ] The term ΓL is the line width matrix function, a product of interlevel tunneling matrices and the normal density of states ρN, Γ L ; i j ( t , t 1 ) = 2 π t L i ( t ) t L j ( t 1 ) ρ L Nwhich would be a constant in the case of single level quantum dot. Now in order to solve G i j r / < we need to be more specific with the actual form of the interactions in Equation (27) of the quantum dot. 
For illustrations, we can use the simplest case where the quantum dot is non-interacting, which enables exact evaluations for G i j r / <. This corresponds to larger quantum dots where charge screening is sufficiently strong to make the interactions to be accounted only as an overall self-consistent potential. In such simple cases we can use the Dyson and Keldysh equations by first computing the corresponding selfenergies. The selfenergies can be calculated easily from the equation of motions, which take the same form as the resonant tunneling model [66,67], L / Rij r / < ( t , t 1 ) = t L / R i ( t ) ( k g L / R k r / < ( t , t 1 ) ) t L / R j ( t 1 )and using the BCS free propagators stated above we can easily get their explicit forms. Time Averaged Current and Fourier Transformations The Josephson current through SNS QPC oscillates at very high frequency, typically in the terahertz range, which makes the time resolved quantities not so easily compared with the experiments. A more convenient way would be to work with the time averaged quantities derived from the Fourier transformation of the correct intrinsic frequencies of the systems. All dynamic quantities can be expanded as harmonics of the fundamental frequency ω = 2 eV, i.e., I ( t ) = n I n e in ω t The time average current is derived simply from the zeroth order term I0. Due to the two-time correlations in the Green's function, we require a transformation that can account them in a consistent manner, and this is done through a so-called double Fourier transform of the Green's functions, G m n ( ɛ ) = 1 2 π T / 2 T / 2 d t 1 e i ( ɛ + n ω ) t 1 T / 2 T / 2 d t e i ( ɛ + m ω ) t G ( ɛ , t , t 1 ) The retarded Green's function is calculated with the Dyson equation in Fourier transformed form, hence the matrices here are in Fourier space and Nambu space, and for the case of multilevel system it would be the tensor product of all, i.e., [m, n] ⊗ [i, j] ⊗ [2 × 2] and the retarded function is obtained by straightforward inversion of the whole matrix. The lesser function is calculated with the Keldysh equation and the entire composite matrices are substituted, i.e., G r ( ɛ ) = [ g r ( ɛ ) 1 ( L r ( ɛ ) + R r ( ɛ ) ) ] 1 G < ( ɛ ) = [ G r ( ɛ ) ( L < ( ɛ ) + R r ( ɛ ) ) G a ( ɛ ) ] The advanced function is obtained from the retarded function by Ga = [Gr], and the time-average current can then be expressed as the zeroth order component of the Fourier transform, I 0 = e π Im d ɛ Tr { [ f L ( ɛ ) Re ( β ( ɛ ) ) G 00 r ( ɛ ) + 1 2 β ( ɛ ) G 00 < ( ɛ ) ] Γ L ( ɛ ) σ z } The sample plot for the time averaged current and differential conductance (dI/dV) for single level quantum dot in SNS QPC can be seen in Figure 8. Notice the rich SGS at small bias due to MAR compared to fairly featureless behaviours at higher bias eV > 2Δ. The quantum Hamiltonian theory enables us to incorporate more physics into the quantum dot. For example to describe magnetic interactions of the impurities, one may consider a model for HC of the following, H C = i , σ ɛ i σ c i σ c i σ + i j U i , j n i n jor other suitable forms of interactions. With this the underlying physics when MAR oscillates across a magnetic impurity can be studied, and general interactions can also be computed with first principle method. For such interacting systems the Green's function may be calculated perturbatively or with other methods. Some examples on such works are by Avishai et al. [75] and Pala et al. [76]. 
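The statement above that the Josephson oscillation lies in the terahertz range for typical biases is easy to verify. A small sketch (Python) evaluates ω = 2eV/ħ and the corresponding ordinary frequency for a bias comparable to an elemental superconducting gap; the example bias value is illustrative.

# AC Josephson frequency omega = 2*e*V/hbar for a given DC bias
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
HBAR     = 1.054571817e-34   # reduced Planck constant, J*s

def josephson_frequency(bias_volts):
    """Return (angular frequency in rad/s, ordinary frequency in Hz)."""
    omega = 2.0 * E_CHARGE * bias_volts / HBAR
    return omega, omega / (2.0 * math.pi)

omega, f = josephson_frequency(1.0e-3)   # 1 mV bias, roughly an elemental gap
print(f"omega = {omega:.3e} rad/s, f = {f:.3e} Hz")   # about 0.5 THz; ~1 THz at 2 mV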
For a vacuum region between the superconducting leads we do not include $H_C$, and the resulting model is slightly simpler. The model Hamiltonian used in this case is similar to Equation (25), but without the quantum dot,
$$
H(t) = H_L + H_R + H_T(t),
$$
where
$$
H_T(t) = \sum_{\sigma}\left[\,t\,e^{i(\phi_0+2eVt)}\,a^{\dagger}_{L\sigma}a_{R\sigma} + t\,e^{-i(\phi_0+2eVt)}\,a^{\dagger}_{R\sigma}a_{L\sigma}\right].
$$
The tunneling Hamiltonian directly couples the left and right leads. For a single-eigenchannel system the hopping term $t$ is just a constant, and the phase is the difference between the left and right leads, i.e., $\phi_0 = \phi_L - \phi_R$ and $eV = \mu_L - \mu_R$. The equation for the current can then be re-derived using the same procedure as explained in the previous sections. Excellent quantitative agreement with the experimental data provides strong justification for the validity of the microscopic model underlying the quantum Hamiltonian theory.

Shapiro Effects and External Radiation

Another interesting application of the quantum Hamiltonian theory is the study of interactions with external electromagnetic radiation. The frequency range of interest in this case is the microwave region, owing to the intrinsic energy scale of typical superconducting energy gaps. The interplay between the AC Josephson effect in superconducting junctions under finite bias and the external radiation exhibits the phenomenon known as the Shapiro effect in the supercurrent behaviour [77]. Cuevas et al. [78] proposed that the effects of external radiation of frequency $\omega_r$ can, to some extent, be modeled as an effective time-dependent voltage, $V_{ac}\cos\omega_r t$, acting on top of the existing AC Josephson frequency. The total effective bias can be written as $V(t) = V + V_{ac}\cos\omega_r t$, and the time-dependent phase in the tunneling Hamiltonian becomes
$$
\phi(t) = \phi_0 + \omega t + \alpha\cos\omega_r t,
$$
where $\alpha$ is a measure of the coupling strength to the external radiation. The Fourier series expansion of the current then takes the form
$$
I(t) = \sum_{m,n} I_{nm}\,\exp\!\left[i\left(n\phi_0 + n\omega t + m\omega_r t\right)\right].
$$
For a superconducting QPC system with a featureless barrier, i.e., a vacuum region between two superconducting leads, Cuevas et al. managed to compute the supercurrent numerically with the use of Bessel basis functions. They found that the Shapiro effects take place at biases $V = (m/n)\,\hbar\omega_r/2e$, where $m$ and $n$ are integers. The effects of the external radiation are essentially current singularities distinct from the fundamental SGS of the QPC, since each singularity occurs within an infinitesimally narrow bias interval and appears as a prominent spike. Chauvin et al. [79] have confirmed this experimentally, with very good agreement with the model except in the very low bias region.

For a superconducting QPC with a quantum dot at the centre, the localized energy levels of the quantum dot exhibit further intriguing physics upon exposure to external radiation, in at least two ways. First, in the semiclassical limit the external field oscillates the entire set of localized energy levels in unison. Second, absorption and emission of photons also stimulate interlevel transitions as the electrons tunnel through the quantum dot; both affect the MAR process inside the quantum dot and hence the supercurrent behaviour. However, in order to perform a time-averaged analysis, one needs to carry out a multi-frequency Fourier transformation of the dynamical quantities, because the phase factor now depends on two frequencies. This is non-trivial, particularly when the frequencies are non-commensurate, i.e., when their ratio is irrational.
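As a small numerical aside, the Shapiro resonance condition quoted above, V = (m/n)ħω_r/2e, is easy to enumerate; the short sketch below (illustrative only, and not the full Bessel-function calculation of Cuevas et al.) lists the low-order bias positions in units of ħω_r/2e.

```python
# Bias positions V = (m/n) * hbar*omega_r / (2e) at which Shapiro-type
# singularities are expected, in units where hbar*omega_r/(2e) = 1.
from fractions import Fraction

def shapiro_biases(max_m=4, max_n=4):
    vals = {Fraction(m, n) for m in range(1, max_m + 1) for n in range(1, max_n + 1)}
    return sorted(vals)

for v in shapiro_biases():
    print(f"V = {v} x hbar*omega_r/(2e)")
```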
To simplify the multi-frequency problem somewhat, one may consider replacing one of the superconducting leads with a normal lead (an SNN system) and using the gauge in which the bias potential at the superconducting lead is zero, thereby eliminating the time-dependent term arising from the AC Josephson effect [26]. The external radiation can be modeled semiclassically by adopting the usual dipole approximation [80],
$$
H_C(t) = \sum_{i,\sigma}\left[\varepsilon_i + A\cos(\omega t)\right]c^{\dagger}_{i\sigma}c_{i\sigma}
+ \sum_{i\neq j,\sigma} B\cos(\omega t)\,c^{\dagger}_{i\sigma}c_{j\sigma}.
$$
In this case, the Green's function of the quantum dot may be computed with the use of the Floquet basis [81], which was found to enable flexible modeling of quantum transitions in a multilevel quantum dot [26]. One can study the effects of localized-level oscillations by setting $B = 0$; it was found that a series of resonances appears due to the oscillations, and the energy spacing between these resonances is equal to the radiation energy, as can be seen in Figure 9. On the other hand, the effects of interlevel transitions can be studied by setting $A = 0$; the transitions were found to produce a splitting of the primary DC resonance when the radiation frequency matches the Rabi frequency. Furthermore, the split peaks were separated by an energy proportional to the interlevel hopping constant $B$. This provides the possibility of experimentally inferring the interlevel coupling strength from simple current measurements. In addition, the details of the quantum dot can greatly affect the transport behaviour, such as the symmetry of the quantum dot with respect to the leads [82], the relative energy difference between the localized level and the superconducting gap, electronic interactions, etc. [83]. If these additional factors are not carefully taken into account, physical deductions based on an incomplete model could lead to false conclusions.

The intrinsically small energy gap exploited in superconducting PCAR spectroscopy makes it a promising platform for ultrasensitive sensors, making use of the AR process, which carries rich physics at the contacts. The AR process in NS systems can be used to probe the spin polarization of ferromagnetic materials conveniently and with high precision compared to conventional methods. Theoretical developments in this area are mainly based on the BTK theory, which began earlier and has become a relatively mature theory for use in spin polarization measurements. However, some problems remain that relate to various delicate details of the surface properties at the contacts, which have so far been treated phenomenologically. Atomic contacts such as STM tips and MCBJs have discrete eigenchannels, and the quantum Hamiltonian theory combined with NEGF enables rigorous descriptions of the complex transport properties of MAR. The method also has promising potential to be extended into a fully first-principles method by combining the existing first-principles superconductivity theory [71,72] with NEGF, which is a possible future research direction for anyone working in this field.

References and Notes

1. Onnes, H.K. The superconductivity of Mercury. Comm. Phys. Lab. Univ. Leiden 1911, No. 122 and No. 124.
2. Josephson, B.D. Possible new effects in superconducting tunnelling. Phys. Lett. 1962, 1, 251–253.
3. Soulen, R.J.; Byers, J.M.; Osofsky, M.S.; Nadgorny, B.; Ambrose, T.; Cheng, S.F.; Broussard, P.R.; Tanaka, C.T.; Nowak, J.; Moodera, J.S.; Barry, A.; Coey, J.M.D. Measuring the Spin Polarization of a Metal with a Superconducting Point Contact. Science 1998, 282, 85–88.
4. Ralls, K.S.; Buhrman, R.A.; Tibero, R.C. Fabrication of thin-film metal nanobridges. Appl. Phys. Lett. 1989, 55, 2459–2461.
5. Muller, C.J.; van Jongh, L.J. Conductance and supercurrent discontinuities in atomic-scale metallic constrictions of variable width. Phys. Rev. Lett. 1992, 69, 140–143.
6. Sharvin, Y.V. A possible method for studying Fermi surfaces. Sov. Phys. JETP 1965, 21, 655–656.
7. Upadhyay, S.K.; Palanisami, A.; Louie, R.N.; Buhrman, R.A. Probing ferromagnets with Andreev reflection. Phys. Rev. Lett. 1998, 81, 3247–3250.
8. Tedrow, P.M.; Meservey, R. Spin-polarized electron tunneling. Phys. Rep. 1994, 238, 173–243.
9. Johnson, P.D. Core Level Spectroscopies for Magnetic Phenomena; NATO Advanced Study Institute, Series B: Physics; Plenum Press: New York, NY, USA, 1995; Vol. 345.
10. Scheer, E.; Joyez, P.; Esteve, D.; Urbina, C.; Devoret, M.H. Conduction channel transmissions of atomic-size aluminum contacts. Phys. Rev. Lett. 1997, 78, 3535–3538.
11. Cuevas, J.C.; Yeyati, A.L.; Martín-Rodero, A. Microscopic origin of conducting channels in metallic atomic-size contacts. Phys. Rev. Lett. 1998, 80, 1066–1069.
12. Scheer, E.; Agraït, N.; Cuevas, J.C.; Yeyati, A.L.; Ludoph, B.; Martín-Rodero, A.; Bollinger, G.R.; van Ruitenbeek, J.M.; Urbina, C. The signature of chemical valence in the electrical conduction through a single-atom contact. Nature 1998, 394, 154–157.
13. Landauer, R. Spatial variation of currents and fields due to localized scatterers in metallic conduction. IBM J. Res. Dev. 1957, 1, 223–231.
14. van Houten, H.; Beenakker, C. Quantum point contacts. Phys. Today 1996, 49, 22–27.
15. Naveh, Y.; Patel, V.; Averin, D.V.; Likharev, K.K.; Lukens, J.E. Universal distributions of transparencies in highly conductive Nb/AlOx/Nb junctions. Phys. Rev. Lett. 2000, 85, 5404–5407.
16. Ludoph, B.; van der Post, N. Multiple Andreev reflection in single-atom niobium junctions. Phys. Rev. B 2000, 61, 8561–8569.
17. Pierre, F.; Anthore, A.; Pothier, H.; Urbina, C.; Esteve, D. Multiple Andreev reflections revealed by the energy distribution of quasiparticles. Phys. Rev. Lett. 2001, 86, 1078–1081.
18. Chtchelkatchev, N.M. Transitions between π and 0 states in superconductor-ferromagnet-superconductor junctions. JETP Lett. 2004, 80, 743–747.
19. Makk, P.; Csonka, S.; Halbritter, A. Effect of hydrogen molecules on the electronic transport through atomic-sized metallic junctions in the superconducting state. Phys. Rev. B 2008, 78, 045414.
20. Böhler, T.; Edtbauer, A.; Scheer, E. Point-contact spectroscopy on aluminium atomic size contacts: Longitudinal and transverse vibronic excitations. New J. Phys. 2009, 11, 013036.
21. Ji, S.-H.; Zhang, T.; Fu, Y.-S.; Chen, X.; Ma, X.-C.; Li, J.; Duan, W.-H.; Jia, J.-F.; Xue, Q.-K. High-resolution scanning tunneling spectroscopy of magnetic impurity induced bound states in the superconducting gap of Pb thin films. Phys. Rev. Lett. 2008, 100, 226801.
22. Marchenkov, A.; Dai, Z.; Donehoo, B.; Barnett, R.N.; Landman, U. Alternating current Josephson effect and resonant superconducting transport through vibrating Nb nanowires. Nature Nanotechnology 2007, 2, 481–485.
23. Blonder, G.E.; Tinkham, M.; Klapwijk, T.M. Transition from metallic to tunneling regimes in superconducting microconstrictions: Excess current, charge imbalance, and supercurrent conversion. Phys. Rev. B 1982, 25, 4515–4532.
24. Haug, H.; Jauho, A.-P. Quantum Kinetics in Transport and Optics of Semiconductors; Springer: New York, NY, USA, 2008.
25. Bardeen, J.; Cooper, L.N.; Schrieffer, J.R. Theory of superconductivity. Phys. Rev. 1957, 108, 1175–1204.
26. Nurbawono, A.; Feng, Y.P.; Zhang, C. Electron tunneling through a hybrid superconducting-normal mesoscopic junction under microwave radiation. Phys. Rev. B 2010, 82, 014535.
27. de Jong, M.J.M.; Beenakker, C.W.J. Andreev reflection in ferromagnet-superconductor junctions. Phys. Rev. Lett. 1995, 74, 1657–1660.
28. Prinz, G.A. Spin-polarized transport. Phys. Today 1995, 48, 58–63.
29. Prinz, G.A. Magnetoelectronics. Science 1998, 282, 1660–1663.
30. Woods, G.T.; Soulen, R.J.; Mazin, I.; Nadgorny, B.; Osofsky, M.S.; Sanders, J.; Srikanth, H.; Egelhoff, W.F.; Datla, R. Analysis of point-contact Andreev reflection spectra in spin polarization measurements. Phys. Rev. B 2004, 70, 054416.
31. Mazin, I.I.; Golubov, A.A.; Nadgorny, B. Probing spin polarization with Andreev reflection: A theoretical basis. J. Appl. Phys. 2001, 89, 7576–7578.
32. Strijkers, G.J.; Ji, Y.; Yang, F.Y.; Chien, C.L.; Byers, J.M. Andreev reflections at metal/superconductor point contacts: Measurement and analysis. Phys. Rev. B 2001, 63, 104510.
33. Ji, Y.; Strijkers, G.J.; Yang, F.Y.; Chien, C.L.; Byers, J.M.; Anguelouch, A.; Xiao, G.; Gupta, A. Determination of the spin polarization of half-metallic CrO2 by point contact Andreev reflection. Phys. Rev. Lett. 2001, 86, 5585.
34. Ji, Y.; Strijkers, G.J.; Yang, F.Y.; Chien, C.L. Comparison of two models for spin polarization measurements by Andreev reflection. Phys. Rev. B 2001, 64, 224425.
35. Pérez-Willard, F.; Cuevas, J.C.; Sürgers, C.; Pfundstein, P.; Kopu, J.; Eschrig, M.; Löhneysen, H.V. Determining the current polarization in Al/Co nanostructured point contacts. Phys. Rev. B 2004, 69, 140502.
36. Panguluri, R.P.; Tsoi, G.; Nadgorny, B.; Chun, S.H.; Samarth, N.; Mazin, I.I. Point contact spin spectroscopy of ferromagnetic MnAs epitaxial films. Phys. Rev. B 2003, 68, 201307(R).
37. Geresdi, A.; Halbritter, A.; Tanczikó, F.; Mihály, G. Direct measurement of the spin diffusion length by Andreev spectroscopy. Appl. Phys. Lett. 2011, 98, 212507.
38. Rajanikanth, A.; Kasai, S.; Ohshima, N.; Hono, K. Spin polarization of currents in Co/Pt multilayer and Co–Pt alloy thin films. Appl. Phys. Lett. 2010, 97, 022505.
39. Xia, K.; Kelly, P.J.; Bauer, G.E.W.; Turek, I. Spin-dependent transparency of ferromagnet/superconductor interfaces. Phys. Rev. Lett. 2002, 89, 166603.
40. Gijs, M.A.M.; Bauer, G.E.W. Perpendicular giant magnetoresistance of magnetic multilayers. Advances in Physics 1997, 46, 285–445.
41. Grein, R.; Löfwander, T.; Metalidis, G.; Eschrig, M. Theory of superconductor-ferromagnet point-contact spectra: The case of strong spin polarization. Phys. Rev. B 2010, 81, 094508.
42. Octavio, M.; Tinkham, M.; Blonder, G.E.; Klapwijk, T.M. Subharmonic energy-gap structure in superconducting constrictions. Phys. Rev. B 1983, 27, 6739–6746.
43. Cuevas, J.C.; Martín-Rodero, A.; Yeyati, A.L. Hamiltonian approach to the transport properties of superconducting quantum point contacts. Phys. Rev. B 1996, 54, 7366–7379.
44. Yazdani, A.; Jones, B.A.; Lutz, C.P.; Crommie, M.F.; Eigler, D.M. Probing the local effects of magnetic impurities on superconductivity. Science 1997, 275, 1767–1770.
45. Roy, A.; Buchanan, D.S.; Holmgren, D.J.; Ginsberg, D.M. Localized magnetic moments on chromium and manganese dopant atoms in niobium and vanadium. Phys. Rev. B 1985, 31, 3003–3014.
46. Scholten, P.D.; Moulton, W.G. Effect of ion-implanted Gd on the superconducting properties of thin Nb films. Phys. Rev. B 1977, 15, 1318–1323.
47. Sazonova, V.; Yaish, Y.; Üstünel, H.; Roundy, D.; Arias, T.A.; McEuen, P.L. A tunable carbon nanotube electromechanical oscillator. Nature 2004, 431, 284–287.
48. LeRoy, B.J.; Lemay, S.G.; Kong, J.; Dekker, C. Electrical generation and absorption of phonons in carbon nanotubes. Nature 2004, 432, 371–374.
49. Smit, R.H.M.; Noat, Y.; Untiedt, C.; Lang, N.D.; van Hemert, M.C.; van Ruitenbeek, J.M. Measurement of the conductance of a hydrogen molecule. Nature 2002, 419, 906–909.
50. Stipe, B.C.; Rezaei, M.A.; Ho, W. Single molecule vibrational spectroscopy and microscopy. Science 1998, 280, 1732–1735.
51. Zhitenev, N.B.; Meng, H.; Bao, Z. Conductance of small molecular junctions. Phys. Rev. Lett. 2002, 88, 226801.
52. Agraït, N.; Untiedt, C.; Rubio-Bollinger, G.; Vieira, S. Onset of energy dissipation in ballistic atomic wires. Phys. Rev. Lett. 2002, 88, 216803.
53. Park, H.; Park, J.; Lim, A.K.L.; Anderson, E.H.; Alivisatos, A.P.; McEuen, P.L. Nanomechanical oscillations in a single-C60 transistor. Nature 2000, 407, 57–60.
54. Marchenkov, A.; Dai, Z.; Zhang, C.; Barnett, R.N.; Landman, U. Atomic dimer shuttling and two-level conductance fluctuations in Nb nanowires. Phys. Rev. Lett. 2007, 98, 046802.
55. Dai, Z.; Marchenkov, A. Subgap structure in resistively shunted superconducting atomic point contacts. Appl. Phys. Lett. 2006, 88, 203120.
56. Harwood, L.M.; Moody, C.J. Experimental Organic Chemistry: Principles and Practice; Wiley-Blackwell: Hoboken, NJ, USA, 1989.
57. Keeler, J. Understanding NMR Spectroscopy; Wiley: Chichester, UK; Hoboken, NJ, USA, 2010.
58. Andreev, A.F. Thermal conductivity of the intermediate state of superconductors. Sov. Phys. JETP 1964, 19, 1228–1231.
59. Tinkham, M. Introduction to Superconductivity; McGraw Hill: New York, NY, USA, 1996.
60. de Gennes, P.G. Superconductivity of Metals and Alloys; Benjamin: New York, NY, USA, 1966.
61. Bardeen, J. Tunneling from a many-particle point of view. Phys. Rev. Lett. 1961, 6, 57–59.
62. Mahan, G.D. Many-Particle Physics; Plenum: New York, NY, USA, 1981.
63. Fetter, A.L.; Walecka, J.D. Quantum Theory of Many-Particle Systems; Dover: New York, NY, USA, 2003.
64. Meir, Y.; Wingreen, N.S. Landauer formula for the current through an interacting electron region. Phys. Rev. Lett. 1992, 68, 2512–2515.
65. Wingreen, N.S.; Jauho, A.-P.; Meir, Y. Time-dependent transport through a mesoscopic structure. Phys. Rev. B 1993, 48, 8487–8490.
66. Jauho, A.-P.; Wingreen, N.S.; Meir, Y. Time-dependent transport in interacting and noninteracting resonant-tunneling systems. Phys. Rev. B 1994, 50, 5528–5544.
67. Sun, Q.-F.; Guo, H.; Wang, J. Hamiltonian approach to the ac Josephson effect in superconducting-normal hybrid systems. Phys. Rev. B 2002, 65, 075315.
68. Sun, Q.-F.; Wang, J.; Lin, T.-H. Photon assisted Andreev tunneling through a mesoscopic hybrid system. Phys. Rev. B 1999, 59, 13126–13138.
69. Yeyati, A.L.; Cuevas, J.C.; López-Dávalos, A.; Martín-Rodero, A. Resonant tunneling through a small quantum dot coupled to superconducting leads. Phys. Rev. B 1997, 55, R6137–R6140.
70. Dolcini, F.; Dell'Anna, L. Multiple Andreev reflections in a quantum dot coupled to superconducting leads: Effect of spin-orbit coupling. Phys. Rev. B 2008, 78, 024518.
71. Lüders, M.; Marques, M.A.L.; Lathiotakis, N.N.; Floris, A.; Profeta, G.; Fast, L.; Continenza, A.; Massidda, S.; Gross, E.K.U. Ab initio theory of superconductivity. I. Density functional formalism and approximate functionals. Phys. Rev. B 2005, 72, 024545.
72. Marques, M.A.L.; Lüders, M.; Lathiotakis, N.N.; Profeta, G.; Floris, A.; Fast, L.; Continenza, A.; Gross, E.K.U.; Massidda, S. Ab initio theory of superconductivity. II. Application to elemental metals. Phys. Rev. B 2005, 72, 024546.
73. Rogovin, D.; Scalapino, D.J. Fluctuation phenomena in tunnel junctions. Ann. Phys. 1974, 86, 1–90.
74. Nambu, Y. Quasi-particles and gauge invariance in the theory of superconductivity. Phys. Rev. 1960, 117, 648–663.
75. Avishai, Y.; Golub, A.; Zaikin, A.D. Quantum dot between two superconductors. Europhys. Lett. 2001, 54, 640–646.
76. Pala, M.; Governale, M.; König, J. Non-equilibrium Josephson and Andreev current through interacting quantum dots. New J. Phys. 2007, 9, doi:10.1088/1367-2630/9/8/278.
77. Shapiro, S. Josephson currents in superconducting tunneling: The effect of microwaves and other observations. Phys. Rev. Lett. 1963, 11, 80–82.
78. Cuevas, J.C.; Heurich, J.; Martín-Rodero, A.; Levy Yeyati, A.; Schön, G. Subharmonic Shapiro steps and assisted tunneling in superconducting point contacts. Phys. Rev. Lett. 2002, 88, 157001.
79. Chauvin, M.; vom Stein, P.; Pothier, H.; Joyez, P.; Huber, M.E.; Esteve, D.; Urbina, C. Superconducting atomic contacts under microwave irradiation. Phys. Rev. Lett. 2006, 97, 067006.
80. Scully, M.O.; Zubairy, M.S. Quantum Optics; Cambridge University Press: Cambridge, UK, 1997.
81. Shirley, J.H. Solution of the Schrödinger equation with a Hamiltonian periodic in time. Phys. Rev. 1965, 138, B979–B987.
82. Nurbawono, A.; Feng, Y.P.; Zhao, E.; Zhang, C. Differential conductance anomaly in superconducting quantum point contacts. Phys. Rev. B 2009, 80, 15.
83. Nurbawono, A.; Feng, Y.P.; Zhang, C. The roles of potential symmetry in transport properties of superconducting quantum point contacts. J. Comput. Theor. Nanosci. 2010, 7, 2448–2452.

Figures and Table

Figure 1. (a) Typical I-V curves in PCAR measurements. In the normal state (T > Tc) the current shows the typical ohmic response. After the PC becomes superconducting (T < Tc), non-magnetic systems (P = 0) show excess current due to the Andreev reflection (AR) process, while ferromagnetic systems (P = 1) show suppression of the AR process, leading to suppression of the current; (b) Normalized conductance for various polarizations in the clean metallic limit (Z = 0). The bias is in units of the superconducting energy gap.

Figure 2. Measured I-V curves for two different Al atomic point contacts having different sets of {τn}: a = {0.747, 0.168, 0.036} and b = {0.519, 0.253, 0.106}. Each τn is associated with a valence orbital of Al. The current and voltages are in reduced units; the current is normalized with respect to the total conductance measured by the slope of the I-V at high voltages eV > 5Δ. The effectively exact fitting of the experimental data shows the reliability of the theoretical model based on the quantum Hamiltonian [11]. Adapted figure reproduced with kind permission from the authors [10]. Other details can be found in the original paper.

Figure 3. Detecting single-atom magnetic impurities of Mn and Cr on a Pb surface with a Nb STM tip. (a) Schematic view of the setup; (b) differential conductance (dI/dV) for a clean Pb surface; (c) for a Cr atom, where six peaks are detected; and (d) for a Mn atom, where four peaks are detected. The method proposes the use of SGS to identify atomic-size magnetic impurities on surfaces. Figures were reproduced and adapted with kind permission from the authors [21].

Figure 4. Schematic view of the atomic configurations for measuring vibrational modes of an Nb dimer fabricated with the MCBJ technique [22]. The Nb leads were adjusted with piezoelectric movements. The dimer was found to have four modes of vibration: longitudinal (along the dimer), transverse (up and down), and wagging (torsional) about its centre of mass. These modes affect the MAR tunneling process between the leads and were detected as current singularities inside and outside the superconducting gaps.

Figure 5. Multiple Andreev reflection (MAR) process in a symmetric superconductor-normal-superconductor (SNS) system with the normal region sufficiently thin to provide ballistic trajectories. The dark particles (electrons) are the antiparticles of the white particles (holes), and the reflection process repeats until they attain sufficient energy to overcome the superconducting gap Δ. The horizontal axes on the superconductor sides represent the density of states.

Figure 6. Band diagram for the N (left) S (right) interface in the BTK model. The superconducting energy gap in reality is much smaller than the Fermi energy (Δ ≪ EF). Label e is the incident electron, a is Andreev reflection, b is normal reflection, c is electron-like transmission, and d is hole-like transmission. Figures are adapted from reference [23].

Figure 7. A resonant tunneling system consisting of two superconducting leads and a quantum dot. The system is represented by the subsystem Hamiltonians, H = HL + HT + HC + HR.

Figure 8. Plot of the time-averaged I-V and dI/dV curves for SNS QPC systems with a single-level quantum dot (εd = 0).
Other parameters are ΓL = ΓR = 0.5Δ and kBT = 0.1Δ. The rich subgap structure, mainly at low bias (eV < Δ), can potentially be used to identify the quantum dot's electronic structure and magnetic properties.

Figure 9. Effects of single-mode external radiation on SNN transport in the weak-coupling limit. (a) Time-averaged current for a single-level quantum dot in an SNN system, showing the effects of single-level oscillations under external radiation. The external radiation creates current resonances at intervals ħω and preserves the main DC resonance at eV = 4Δ; (b) Time-averaged current for a symmetric two-level quantum dot in an SNN system, showing the effects of interlevel transitions due to the external radiation. In this case the external radiation can only affect the transport when its frequency equals the energy difference between the localized levels, i.e., at the Rabi frequency ħω = (ε1 − ε2). The main DC resonance at 4Δ splits into two, and the separation between the split peaks is equal to 2B. This simple relationship provides a way to measure the interlevel coupling strength directly from a simple current measurement [26].

Table 1. Coefficients A (Andreev reflection) and B (normal reflection), with ρ = E/√(E² − Δ²).

  E < Δ:  A = Δ²/[E² + (Δ² − E²)(1 + 2Z²)²],   B = 1 − A
  E > Δ:  A = (ρ² − 1)/[ρ + (1 + 2Z²)]²,        B = 4Z²(1 + Z²)/[ρ + (1 + 2Z²)]²
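The table lends itself to a direct numerical check. The sketch below is illustrative only: it assumes ρ = E/√(E² − Δ²) (consistent with the standard BTK result, since the definition is not restated here) and sets Δ = 1; the printed quantity 1 + A − B is the normalized subgap conductance for a clean contact (Z = 0).

```python
import numpy as np

def btk_A_B(E, Z, delta=1.0):
    """Andreev (A) and normal (B) reflection probabilities from Table 1."""
    E = np.asarray(E, dtype=float)
    A = np.empty_like(E)
    B = np.empty_like(E)
    sub = np.abs(E) < delta                      # subgap case, E < Delta
    A[sub] = delta**2 / (E[sub]**2 + (delta**2 - E[sub]**2) * (1 + 2*Z**2)**2)
    B[sub] = 1.0 - A[sub]
    above = ~sub                                  # E > Delta
    rho = np.abs(E[above]) / np.sqrt(E[above]**2 - delta**2)   # assumed definition of rho
    denom = (rho + (1 + 2*Z**2))**2
    A[above] = (rho**2 - 1.0) / denom
    B[above] = 4*Z**2 * (1 + Z**2) / denom
    return A, B

# Normalized conductance for a clean contact (Z = 0): roughly 2 below the gap, -> 1 far above.
E = np.array([0.2, 0.8, 1.5, 3.0])
A, B = btk_A_B(E, Z=0.0)
print(np.round(1.0 + A - B, 3))
```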
Is there anything in the physics that enforces the wave function to be $C^2$? Are weak solutions to the Schroedinger equation physical? I am reading the beginning chapters of Griffiths and he doesn't mention anything.

Related: physics.stackexchange.com/q/1067/2451 – Qmechanic Jan 17 '12 at 23:57

Thanks, but I don't think a good answer was given there. – user19192 Jan 18 '12 at 0:05

3 Answers

Some of this was discussed elsewhere. See « significance of unbounded operators » http://physics.stackexchange.com/a/19569/6432 . It is not true that the wave function has to be continuous, it just has to be measurable (i.e., a limit of step functions almost everywhere). Naturally you might wonder what sense Schroedinger's equation makes if you apply it to a step function... but the answer is easier than worrying about distributional weak solutions. The point is that you can solve the time-dependent Schroedinger equation with the exponential $$e^{itH},$$ which is a family of unitary operators, and which is better behaved than the $H$ you have to use in Schroedinger's equation. The $H$ you have to use, for example $$-{\partial ^2\over\partial x^2} + \mathrm{other\ stuff},$$ is unbounded, and non-differentiable functions are not in its domain. But plugging it into the power series for the exponential converges in norm anyway, and so the resulting operator, being bounded and even unitary on a dense domain of the Hilbert space, can be extended painlessly to the entire space, even to step functions. So it makes more sense to say that the solution to Schroedinger's equation with a given initial condition $\psi_o$ is $$\psi_t (x) = e^{itH}\cdot \psi_o (x),$$ and there is no need to bring in distributional weak solutions. These considerations are called the Stone–von Neumann theorem.

But such functions are not very important, and indeed it is possible to do all of Quantum Mechanics with smooth functions, especially if you take the attitude that, for example, a square well potential would also be unphysical and is really just a simplified approximation of a physical potential which smoothed off those square corners but had a formula that was unmanageable. See Anthony Sudbery, Quantum Mechanics and the Particles of Nature, which, since it is written by a mathematician, is careful about unimportant issues like this.

That family of operators I wrote down is called the time-evolution operators, and they are an example of a one-parameter unitary group, the parameter being time. It is easy to see that if $\psi_o$, the initial condition, the state of the quantum system at time $t=0$, is nice and smooth, then all the future states will be nice and smooth too. Furthermore, all the usual quantum observables have eigenstates which are nice and smooth, so if you perform a future measurement, you will get a function which is nice and smooth, and its future time evolution will remain that way, until the next measurement, etc., until Doomsday. That said, for all practical purposes you may assume all wave functions are smooth and that the only reason you study discontinuous ones is as convenient approximations.

The comment one sometimes hears is that a wave function that was not in the domain of the Hamiltonian would « have infinite energy », but this is nonsense. In Quantum Mechanics, you are not allowed to talk about a quantum system as having a definite value of an observable unless it is in an eigenstate of that observable. What you can ask is: what would be the expectation of that observable?
If the wave function $\psi$ is discontinuous and not in the domain of the Hamiltonian, it cannot be an eigenstate, but if its energy is measured, the answer will always be finite. Yet the expectation of its energy does not exist, or you could say the expectation « is infinite ». Not the energy, its expectation. There is nothing very unphysical about this because the expectation itself is not very directly physical: you cannot measure the expectation unless you make infinitely many measurements, and your estimated answer, even for this discontinuous function, will always be a finite expectation. It's just that those estimates are wildly inaccurate; the expectation really is infinite (like the Cauchy distribution in statistics). But even for such a « bad » wavefunction, all the axioms of Quantum Mechanics apply: the probability that the energy, if measured, will be 7 erg is calculated the usual way. But these bad wave functions never arise in elementary systems or exercises, so most people think they are « unphysical ». And, as I said, if the initial condition is a « good » wave function, the system will never evolve outside of that. This, I think, is connected with the fact that in QM, all systems have a finite number of degrees of freedom: this would no longer be true for quantum systems with infinitely many degrees of freedom such as are studied in Statistical Mechanics.

Right, there's nothing wrong about step functions, delta-functions (the derivatives of the former), and others, and that's why physicists freely work with them and never mention artificial mathematical constraints. Still, some discontinuities may make the kinetic energy infinite, so they don't exist in the finite-energy spectrum. I would add that the most natural space of functions to consider is $L^2$, all square-integrable functions. They may be Fourier-transformed or converted to other (discrete...) bases. A subset also has a finite (expectation value of) energy. – Luboš Motl Jan 18 '12 at 7:22

The time-independent Schroedinger equation for the position-space wavefunction has the form $$\left(\frac{-\hbar^2}{2m}\nabla^2 +(V-E) \right)\Psi=0,$$ where $E$ is the energy of that particular eigenstate, and $V$ in general depends on the position. All physical wavefunctions must be in some superposition of states that satisfy this equation. At least in nonrelativistic QM, the wavefunction is not allowed to have infinite energy. If the second derivative of the wavefunction does not exist or is infinite, it implies either that $V$ has some property that "cancels out" the discontinuity (as in the infinite square well), or that the wavefunction is continuous and differentiable everywhere. Generally, $\Psi$ must always be continuous, and any spatial derivative of $\Psi$ must exist unless $V$ is infinite at that point.

Here we want to show that there is an easy mathematical bootstrap argument why solutions to the time-independent 1D Schrödinger equation $$-\frac{\hbar^2}{2m} \psi^{\prime\prime}(x) + V(x) \psi(x) ~=~ E \psi(x) \qquad\qquad (1)$$ tend to be rather nice. First rewrite eq. (1) in integral form $$ \psi(x)~=~ \frac{2m}{\hbar^2} \int^{x}\mathrm{d}y \int^{y}\mathrm{d}z\ (V(z)-E)\psi(z) .\qquad\qquad (2)$$ There are various cases.

1. Case $V \in {\cal L}^2_{\rm loc}(\mathbb{R})$ is a locally square integrable function. Assume the wavefunction $\psi \in {\cal L}^2_{\rm loc}(\mathbb{R})$ as well.
Then the product $(V-E)\psi\in {\cal L}^1_{\rm loc}(\mathbb{R})$ due to the Cauchy–Schwarz inequality. Then the integral $y\mapsto \int^{y}\mathrm{d}z\ (V(z)-E)\psi(z)$ is continuous, and hence the wavefunction $\psi$ on the lhs of eq. (2) is continuously differentiable, $\psi\in C^{1}(\mathbb{R}).$

2. Case $V \in C^{p}(\mathbb{R})$ for a non-negative integer $p\in\mathbb{N}_0$. A similar bootstrap argument shows that $\psi\in C^{p+2}(\mathbb{R}).$

The above two cases do not cover a couple of often-used, mathematically idealized potentials $V(x)$, e.g., 1. the infinite wall $V(x)=\infty$ in some region (the wavefunction must vanish, $\psi(x)=0$, in this region), or 2. a Dirac delta distribution $V(x)=V_0\delta(x)$. See also here.
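(Not part of the original thread.) A minimal numerical sketch of the point made in the first answer — that applying the unitary time evolution to a discontinuous, merely square-integrable initial state is perfectly well defined — using a free particle discretized on a grid; the grid, units, and parameters are my own choices.

```python
import numpy as np

# Discretized free-particle Hamiltonian H = -d^2/dx^2 (units with hbar = 2m = 1)
# applied to a step-function initial state; the evolution stays unitary throughout.
n, L = 400, 40.0
dx = L / n
x = np.linspace(-L/2, L/2, n, endpoint=False)

main = np.full(n, 2.0) / dx**2
off = np.full(n - 1, -1.0) / dx**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

psi0 = np.where(np.abs(x) < 5.0, 1.0, 0.0).astype(complex)   # discontinuous box state
psi0 /= np.linalg.norm(psi0)

# Exponential of the (Hermitian) Hamiltonian via its spectral decomposition
evals, evecs = np.linalg.eigh(H)
def evolve(psi, t):
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi))

for t in (0.0, 0.5, 2.0):
    print(t, round(float(np.linalg.norm(evolve(psi0, t))), 6))   # norm stays 1
```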
Inorganic Chemistry/Chemical Bonding/Molecular orbital theory

In chemistry, molecular orbital theory (MO theory) is a method for determining molecular structure in which electrons are not assigned to individual bonds between atoms, but are treated as moving under the influence of the nuclei in the whole molecule.[1] In this theory, each molecule has a set of molecular orbitals, in which it is assumed that the molecular orbital wave function ψj may be written as a simple weighted sum of the n constituent atomic orbitals χi, according to the following equation:[2]

$$\psi_j = \sum_{i=1}^{n} c_{ij}\,\chi_i$$

The cij coefficients may be determined numerically by substituting this equation into the Schrödinger equation and applying the variational principle. This method is called the linear combination of atomic orbitals (LCAO) approximation and is used in computational chemistry.

Molecular orbital (MO) theory uses a linear combination of atomic orbitals to form molecular orbitals which cover the whole molecule. These are often divided into bonding orbitals, anti-bonding orbitals, and non-bonding orbitals. A molecular orbital is merely a Schrödinger orbital which includes several, but often only two, nuclei. If this orbital is of the type in which the electron(s) in the orbital have a higher probability of being between nuclei than elsewhere, the orbital will be a bonding orbital and will tend to hold the nuclei together. If the electrons tend to be present in a molecular orbital in which they spend more time elsewhere than between the nuclei, the orbital will function as an anti-bonding orbital and will actually weaken the bond. Electrons in non-bonding orbitals tend to be in deep orbitals (nearly atomic orbitals) associated almost entirely with one nucleus or the other, and thus they spend as much time between the nuclei as not. These electrons neither contribute to nor detract from bond strength.

MO theory provides a global, delocalized perspective on chemical bonding. For example, in the MO treatment of hypervalent molecules it is no longer necessary to invoke a major role for d-orbitals. In MO theory, any electron in a molecule may be found anywhere in the molecule, since quantum conditions allow electrons to travel under the influence of an arbitrarily large number of nuclei, so long as certain quantum rules are satisfied. Although in MO theory some molecular orbitals may hold electrons which are more localized between specific pairs of atoms, other orbitals may hold electrons which are spread more uniformly over the molecule. Thus, overall, bonding (and the electrons) are far more delocalized (spread out) in MO theory than is implied in VB theory. This makes MO theory more useful for the description of extended systems.

An example is the MO picture of benzene, composed of a hexagonal ring of 6 carbon atoms. In this molecule, 24 of the 30 total valence bonding electrons are located in 12 σ (sigma) bonding orbitals which are mostly located between pairs of atoms (C-C or C-H), similar to the valence bond picture. However, in benzene the remaining 6 bonding electrons are located in 3 π (pi) molecular bonding orbitals that are delocalized around the ring. Two of these electrons are in an MO which has equal contributions from all 6 atoms. The other two occupied π orbitals have vertical nodes at right angles to each other. As in the VB theory, all of these 6 delocalized pi electrons reside in a larger space which exists above and below the ring plane.
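To make the LCAO idea concrete for the benzene π system described above, here is a small illustrative sketch using a standard Hückel-type model (not taken from this page; the hopping is the usual resonance integral β, set to −1 in these units, and the Coulomb integral α is set to 0).

```python
import numpy as np

# Hueckel model for the six benzene p_z orbitals: H_ii = alpha = 0,
# H_ij = beta = -1 for bonded neighbours around the ring.
n = 6
H = np.zeros((n, n))
for i in range(n):
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = -1.0

energies, coeffs = np.linalg.eigh(H)
print(np.round(energies, 3))       # -2, -1, -1, 1, 1, 2 (in units of |beta|)
# The lowest pi MO is fully delocalized: equal LCAO weight on all six carbons.
print(np.round(coeffs[:, 0], 3))   # ~ 0.408 (up to an overall sign) on every atom
```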
All carbon-carbon bonds in benzene are chemically equivalent. In MO theory this is a direct consequence of the fact that the 3 molecular pi orbitals form a combination which evenly spreads the extra 6 electrons over the 6 carbon atoms.[9]

In molecules such as methane, the 8 valence electrons are in 4 MOs that are spread out over all 5 atoms. However, it is possible to transform this picture, without altering the total wavefunction and energy, into one with 8 electrons in 4 localized orbitals that are similar to the normal bonding picture of four two-electron covalent bonds. This is what has been done above for the σ (sigma) bonds of benzene, but it is not possible for the π (pi) orbitals. The delocalised picture is more appropriate for ionisation and spectroscopic properties. Upon ionization, a single electron is taken from the whole molecule, and the resulting ion does not have one bond different from the other three. Similarly, for electronic excitations, the electron that is excited is found over the whole molecule and not in one bond.

References

3. Coulson, Charles A. (1952). Valence. Oxford at the Clarendon Press.
4. Spectroscopy, Molecular Orbitals, and Chemical Bonding - Robert Mulliken's 1966 Nobel Lecture.
5. Lennard-Jones Paper of 1929 - Foundations of Molecular Orbital Theory.
6. Hückel, E. (1934). Trans. Faraday Soc. 30, 59.
7. Coulson, C.A. (1938). Proc. Camb. Phil. Soc. 34, 204.
8. Hall, G.G.; Lennard-Jones, Sir John (1950). Proc. Roy. Soc. A202, 155.
9. Introduction to Molecular Orbital Theory - Imperial College London.
Friday, April 29, 2011 Fun with an Argon Atom Photon-recoil bilocation experiment at Heidelberg A recent experiment on Argon atoms by Jeri Tomkovic and five collaborators at the University of Heidelberg has demonstrated once again the subtle and astonishing reality of the quantum world. Erwin Schrödinger, who devised the Schrödinger equation that governs quantum behavior, also demonstrated the preposterousness of his own equation by showing that under certain special conditions quantum theory seemed to allow a cat (Schrödinger's Cat) to be alive and dead at the same time. Humans can't yet do this to cats, but clever physicists are discovering how to put larger and larger systems into a "quantum superposition" in which a single entity can comfortably dwell in two distinct (and seemingly contradictory) states of existence. The Heidelberg experiment with Argon atoms (explained popularly here, in the physics arXiv here and published in Nature here) dramatically demonstrates two important features of quantum reality: 1) if it is experimentally impossible to tell whether a process went one way or the other, then, in reality, IT WENT BOTH WAYS AT ONCE (like a Schrödinger Cat); 2) quantum systems behave like waves when not looked at--and like particles when you look. The Heidelberg physicists looked at laser-excited Argon atoms which shed their excitation by emitting a single photon of light. The photon goes off in a random direction and the Argon atom recoils in the opposite direction. Ordinary physics so far. But Tomkovic and pals modified this experiment by placing a gold mirror behind the excited Argon atom. Now (if the mirror is close enough to the atom) it is impossible for anyone to tell whether the emitted photon was emitted directly or bounced off the mirror. According to the rules of quantum mechanics then, the Argon atom must be imagined to recoil IN BOTH DIRECTIONS AT ONCE--both towards and away from the mirror. But this paradoxical situation is present only if we don't look. Like Schrödinger's Cat, who will be either alive or dead (if we look) but not both, the bilocal Argon atom (if we look) will always be found to be recoiling in only one direction--towards the mirror (M) or away from the mirror (A) but never both at the same time. To prove that the Argon atom was really in the bilocal superposition state we have to devise an experiment that involves both motions (M and A) at once. (Same to verify the Cat--we need to devise a measurement that looks at both LIVE and DEAD cat at the same time.) To measure both recoil states at once, the Heidelberg guys set up a laser standing wave by shining a laser directly into a mirror and scattered the bilocal Argon atom off the peaks and troughs of this optical standing wave. Just as a wave of light is diffracted off the regular peaks and troughs of a matter-made CD disk, so a wave of matter (Argon atoms) can be diffracted from a regular pattern of light (a laser shining into a mirror). When an Argon atom encounters the regular lattice of laser light, it is split (because of its wave nature) into a transmitted (T) and a diffracted (D) wave. The intensity of the laser is adjusted so that the relative proportion of these two waves is approximately 50/50. In its encounter with the laser lattice, each state (M and A) of the bilocated Argon atom is split into two parts (T and D), so now THE SAME ARGON ATOM is traveling in four directions at once (MT, MD, AT, AD). Furthermore (as long as we don't look) these four distinct parts act like waves. 
This means they can constructively and destructively interfere depending on their "phase difference". The two waves MT and AD are mixed and the result sent to particle detector #1. The two waves AT and MD are mixed and sent to particle detector #2. For each atom only one count is recorded--one particle in, one particle out. But the PATTERN OF PARTICLES in each detector will depend on the details of the four-fold experience each wavelet has encountered on its way to a particle detector. This hidden wave-like experience is altered by moving the laser mirror L which shifts the position of the peaks of the optical diffraction grating. In quantum theory, the amplitude of a matter wave is related to the probability that it will trigger a count in a particle detector. Even though the unlooked-at Argon atom is split into four partial waves, the looked-at Argon particle can only trigger one detector. The outcome of the Heidelberg experiment consists of counting the number of atoms detected in counters #1 and #2 as a function of the laser mirror position L. The results of this experiment show that, while it was unobserved, a single Argon atom was 1) in two places at once because of the mirror's ambiguisation of photon recoil, then 2) four places at once after encountering the laser diffraction grating, 3) then at last, only one place at a time when it is finally observed by either atom counter #1 or atom counter #2. The term "Schrödinger Cat state" has come to mean ANY MACROSCOPIC SYSTEM that can be placed in a quantum superposition. Does an Argon atom qualify as a Schrödinger Cat? Argon is made up of 40 nucleons, each consisting of 3 quarks. Furthermore each Argon atom is surrounded by 18 electrons for a total of 138 elementary particles--each "doing its own thing" while the atom as a whole exists in four separate places at the same time. Now a cat surely has more parts than a single Argon atom, but the Heidelberg experiment demonstrates that, with a little ingenuity, a quite complicated system can be coaxed into quantum superposition. Today's physics students are lucky. When I was learning quantum physics in the 60s, much of the quantum weirdness existed only as mere theoretical formalism. Now in 2011, many of these theoretical possibilities have become solid experimental fact. This marvelous Heidelberg quadralocated Argon atom joins the growing list of barely believable experimental hints from Nature Herself about how She routinely cooks up the bizarre quantum realities that underlie the commonplace facts of ordinary life. kcb000 said... The arXiv link to the paper is broken ( HTTP 403 ). It can however be found here: kcb000 said... As you were kind enough to send me rummaging through arViv, I also found this paper: It's about using humans as photon detectors to observe entanglement macroscopically . Perhaps you could comment on it here or in a future post? nick herbert said... If you click on the tag "Gisin" you'll find a post on an earlier version of this experiment called "How To Quantum Entangle Human Beings." Gisin is quite clear that his experiment DOES NOT entangle people but it's a small step in that direction.
Durham e-Theses

Quantum field theories with fermions in the Schrödinger representation

Nolland, David John (2000) Quantum field theories with fermions in the Schrödinger representation. Unspecified thesis, Durham University.

This thesis is concerned with the Schrödinger representation of quantum field theory. We describe techniques for solving the Schrödinger equation which supplement the standard techniques of field theory. Our aim is to develop these to the point where they can readily be used to address problems of current interest. To this end, we study realistic models such as gauge theories coupled to dynamical fermions. For maximal generality we consider particles of all physical spins, in various dimensions, and eventually, curved spacetimes. We begin by considering Gaussian fields, and proceed to a detailed study of the Schwinger model, which is, amongst other things, a useful model for (3+1)-dimensional gauge theory. One of the most important developments of recent years is a conjecture by Maldacena which relates supergravity and string/M-theory on anti-de Sitter spacetimes to conformal field theories on their boundaries. This correspondence has a natural interpretation in the Schrödinger representation, so we solve the Schrödinger equation for fields of arbitrary spin in anti-de Sitter spacetimes, and use this to investigate the conjectured correspondence. Our main result is to calculate the Weyl anomalies arising from supergravity fields, which, summed over the supermultiplets of type IIB supergravity compactified on AdS_5 × S^5, correctly match the anomaly calculated in the conjecturally dual N = 4 SU(N) super-Yang-Mills theory. This is one of the few existing pieces of evidence for Maldacena's conjecture beyond leading order in N.

Item Type: Thesis (Unspecified)
Thesis Date: 2000
Copyright: Copyright of this thesis is held by the author
Deposited On: 01 Aug 2012 11:49
The Many Worlds of Hugh Everett After his now celebrated theory of multiple universes met scorn, Hugh Everett abandoned the world of academic physics. He turned to top secret military research and led a tragic private life Hugh Everett Editor's Note: This story was originally printed in the December 2007 issue of Scientific American and is being reposted from our archive in light of a new documentary on PBS, Parallel Worlds, Parallel Lives. Hugh Everett III was a brilliant mathematician, an iconoclastic quantum theorist and, later, a successful defense contractor with access to the nation’s most sensitive military secrets. He introduced a new conception of reality to physics and influenced the course of world history at a time when nuclear Armageddon loomed large. To science-fiction aficionados, he remains a folk hero: the man who invented a quantum theory of multiple universes. To his children, he was someone else again: an emotionally unavailable father; “a lump of furniture sitting at the dining room table,” cigarette in hand. He was also a chain-smoking alcoholic who died prematurely. At least that is how his history played out in our fork of the universe. If the many-worlds theory that Everett developed when he was a student at Princeton University in the mid-1950s is correct, his life took many other turns in an unfathomable number of branching universes. Everett’s revolutionary analysis broke apart a theoretical logjam in interpreting the how of quantum mechanics. Although the many-worlds idea is by no means universally accepted even today, his methods in devising the theory presaged the concept of quantum decoherence— a modern explanation of why the probabilistic weirdness of quantum mechanics resolves itself into the concrete world of our experience. Everett’s work is well known in physics and philosophical circles, but the tale of its discovery and of the rest of his life is known by relatively few. Archival research by Russian historian Eugene Shikhovtsev, myself and others and interviews I conducted with the late scientist’s colleagues and friends, as well as with his rock-musician son, unveil the story of a radiant intelligence extinguished all too soon by personal demons. Ridiculous Things The core of the idea was to interpret what the equations of quantum mechanics represent in the real world by having the mathematics of the theory itself show the way instead of by appending interpretational hypotheses to the math. In this way, the young man challenged the physics establishment of the day to reconsider its foundational notion of what constitutes physical reality. In pursuing this endeavor, Everett boldly tackled the notorious measurement problem in quantum mechanics, which had bedeviled physicists since the 1920s. In a nutshell, the problem arises from a contradiction between how elementary particles (such as electrons and photons) interact at the microscopic, quantum level of reality and what happens when the particles are measured from the macroscopic, classical level. In the quantum world, an elementary particle, or a collection of such particles, can exist in a superposition of two or more possible states of being. An electron, for example, can be in a superposition of different locations, velocities and orientations of its spin. Yet anytime scientists measure one of these properties with precision, they see a definite result—just one of the elements of the superposition, not a combination of them. Nor do we ever see macroscopic objects in superpositions. 
The measurement problem boils down to this question: How and why does the unique world of our experience emerge from the multiplicities of alternatives available in the superposed quantum world? Physicists use mathematical entities called wave functions to represent quantum states. A wave function can be thought of as a list of all the possible configurations of a superposed quantum system, along with numbers that give the probability of each configuration’s being the one, seemingly selected at random, that we will detect if we measure the system. The wave function treats each element of the superposition as equally real, if not necessarily equally probable from our point of view. The Schrödinger equation delineates how a quantum system’s wave function will change through time, an evolution that it predicts will be smooth and deterministic (that is, with no randomness). But that elegant mathematics seems to contradict what happens when humans observe a quantum system, such as an electron, with a scientific instrument (which itself may be regarded as a quantum-mechanical system). For at the moment of measurement, the wave function describing the superposition of alternatives appears to collapse into one member of the superposition, thereby interrupting the smooth evolution of the wave function and introducing discontinuity. A single measurement outcome emerges, banishing all the other possibilities from classically described reality. Which alternative is produced at the moment of measurement appears to be arbitrary; its selection does not evolve logically from the information- packed wave function of the electron before measurement. Nor does the mathematics of collapse emerge from the seamless flow of the Schrödinger equation. In fact, collapse has to be added as a postulate, as an additional process that seems to violate the equation. Many of the founders of quantum mechanics, notably Bohr, Werner Heisenberg and John von Neumann, agreed on an interpretation of quantum mechanics—known as the Copenhagen interpretation— to deal with the measurement problem. This model of reality postulates that the mechanics of the quantum world reduce to, and only find meaning in terms of, classically observable phenomena—not the reverse. This approach privileges the external observer, placing that observer in a classical realm that is distinct from the quantum realm of the object observed. Though unable to explain the nature of the boundary between the quantum and classical realms, the Copenhagenists nonetheless used quantum mechanics with great technical success. Entire generations of physicists were taught that the equations of quantum mechanics work only in one part of reality, the microscopic, while ceasing to be relevant in another, the macroscopic. It is all that most physicists ever need. Universal Wave Function In stark contrast, Everett addressed the measurement problem by merging the microscopic and macroscopic worlds. He made the observer an integral part of the system observed, introducing a universal wave function that links observers and objects as parts of a single quantum system. He described the macroscopic world quantum mechanically and thought of large objects as existing in quantum superpositions as well. Breaking with Bohr and Heisenberg, he dispensed with the need for the discontinuity of a wave-function collapse. Everett’s radical new idea was to ask, What if the continuous evolution of a wave function is not interrupted by acts of measurement? 
What if the Schrödinger equation always applies and applies to everything—objects and observers alike? What if no elements of superpositions are ever banished from reality? What would such a world appear like to us? Everett saw that under those assumptions, the wave function of an observer would, in effect, bifurcate at each interaction of the observer with a superposed object. The universal wave function would contain branches for every alternative making up the object’s superposition. Each branch has its own copy of the observer, a copy that perceived one of those alternatives as the outcome. According to a fundamental mathematical property of the Schrödinger equation, once formed, the branches do not influence one another. Thus, each branch embarks on a different future, independently of the others. Consider a person measuring a particle that is in a superposition of two states, such as an electron in a superposition of location A and location B. In one branch, the person perceives that the electron is at A. In a nearly identical branch, a copy of the person perceives that the same electron is at B. Each copy of the person perceives herself or himself as being one of a kind and sees chance as cooking up one reality from a menu of physical possibilities, even though, in the full reality, every alternative on the menu happens. Explaining how we would perceive such a universe requires putting an observer into the picture. But the branching process happens regardless of whether a human being is present. In general, at each interaction between physical systems the total wave function of the combined systems would tend to bifurcate in this way. Today’s understanding of how the branches become independent and each turn out looking like the classical reality we are accustomed to is known as decoherence theory. It is an accepted part of standard modern quantum theory, although not everyone agrees with the Everettian interpretation that all the branches represent realities that exist. Everett was not the first physicist to criticize the Copenhagen collapse postulate as inadequate. But he broke new ground by deriving a mathematically consistent theory of a universal wave function from the equations of quantum mechanics itself. The existence of multiple universes emerged as a consequence of his theory, not a predicate. In a footnote in his thesis, Everett wrote: “From the viewpoint of the theory, all elements of a superposition (all ‘branches’) are ‘actual,’ none any more ‘real’ than the rest.” The draft containing all these ideas provoked a remarkable behind-the-scenes struggle, uncovered about five years ago in archival research by Olival Freire, Jr., a historian of science at the Federal University of Bahia in Brazil. In the spring of 1956 Everett’s academic adviser at Princeton, John Archibald Wheeler, took the draft dissertation to Copenhagen to convince the Royal Danish Academy of Sciences and Letters to publish it. He wrote to Everett that he had “three long and strong discussions about it” with Bohr and Petersen. Wheeler also shared his student’s work with several other physicists at Bohr’s Institute for Theoretical Physics, including Alexander W. Stern. Wheeler’s letter to Everett reported: “Your beautiful wave function formalism of course remains unshaken; but all of us feel that the real issue is the words that are to be attached to the quantities of the formalism.” For one thing, Wheeler was troubled by Everett’s use of “splitting” humans and cannonballs as scientific metaphors. 
His letter revealed the Copenhagen-ists’ discomfort over the meaning of Everett’s work. Stern dismissed Everett’s theory as “theology,” and Wheeler himself was reluctant to challenge Bohr. In a long, politic letter to Stern, he explicated and excused Everett’s theory as an extension, not a refutation, of the prevailing interpretation of quantum mechanics: I think I may say that this very fine and able and independently thinking young man has gradually come to accept the present approach to the measurement problem as correct and self-consistent, despite a few traces that remain in the present thesis draft of a past dubious attitude. So, to avoid any possible misunderstanding, let me say that Everett’s thesis is not meant to question the present approach to the measurement problem, but to accept it and generalize it. [Emphasis in original.] Everett would have completely disagreed with Wheeler’s description of his opinion of the Copenhagen interpretation. For example, a year later, when responding to criticisms from Bryce S. DeWitt, editor of the journal Reviews of Modern Physics, he wrote: The Copenhagen Interpretation is hopelessly incomplete because of its a priori reliance on classical physics ... as well as a philosophic monstrosity with a “reality” concept for the macroscopic world and denial of the same for the microcosm. While Wheeler was off in Europe arguing his case, Everett was in danger of losing his student draft deferment. To avoid going to boot camp, he decided to take a research job at the Pentagon. He moved to the Washington, D.C., area and never came back to theoretical physics. During the next year, however, he communicated long-distance with Wheeler as he reluctantly whittled down his thesis to a quarter of its original length. In April 1957 Everett’s thesis committee accepted the abridged version—without the “splits.” Three months later Reviews of Modern Physics published the shortened version, entitled “‘Relative State’ Formulation of Quantum Mechanics.” In the same issue, a companion paper by Wheeler lauded his student’s discovery. When the paper appeared in print, it slipped into instant obscurity. Wheeler gradually distanced himself from association with Everett’s theory, but he kept in touch with the theorist, encouraging him, in vain, to do more work in quantum mechanics. In an interview last year, Wheeler, then 95, commented that “[Everett] was disappointed, perhaps bitter, at the nonreaction to his theory. How I wish that I had kept up the sessions with Everett. The questions that he brought up were important.” Nuclear Military Strategies Princeton awarded Everett his doctorate nearly a year after he had begun his first project for the Pentagon: calculating potential mortality rates from radioactive fallout in a nuclear war. He soon headed the mathematics division in the Pentagon’s nearly invisible but extremely influential Weapons Systems Evaluation Group (WSEG). Everett advised high-level officials in the Eisenhower and Kennedy administrations on the best methods for selecting hydrogen bomb targets and structuring the nuclear triad of bombers, submarines and missiles for optimal punch in a nuclear strike. In 1960 he helped write WSEG No. 50, a catalytic report that remains classified to this day. According to Everett’s friend and WSEG colleague George E. Pugh, as well as historians, WSEG No. 50 rationalized and promoted military strategies that were operative for decades, including the concept of Mutually Assured Destruction. 
WSEG provided nuclear warfare policymakers with enough scary information about the global effects of radioactive fallout that many became convinced of the merit of waging a perpetual standoff—as opposed to, as some powerful people were advocating, launching preemptive first strikes on the Soviet Union, China and other communist countries. One final chapter in the struggle over Everett’s theory also played out in this period. In the spring of 1959 Bohr granted Everett an interview in Copenhagen. They met several times during a six-week period but to little effect: Bohr did not shift his position, and Everett did not reenter quantum physics research. The excursion was not a complete failure, though. One afternoon, while drinking beer at the Hotel Østerport, Everett wrote out on hotel stationery an important refinement of the other mathematical tour de force for which he is renowned, the generalized Lagrange multiplier method, also known as the Everett algorithm. The method simplifies searches for optimum solutions to complex logistical problems—ranging from the deployment of nuclear weapons to just-in-time industrial production schedules to the routing of buses for maximizing the desegregation of school districts. In 1964 Everett, Pugh and several other WSEG colleagues founded a private defense company, Lambda Corporation. Among other activities, it designed mathematical models of anti-ballistic missile systems and computerized nuclear war games that, according to Pugh, were used by the military for years. Everett became enamored of inventing applications for Bayes’ theorem, a mathematical method of correlating the probabilities of future events with past experience. In 1971 Everett built a prototype Bayesian machine, a computer program that learns from experience and simplifies decision making by deducing probable outcomes, much like the human faculty of common sense. Under contract to the Pentagon, Lambda used the Bayesian method to invent techniques for tracking trajectories of incoming ballistic missiles. In 1973 Everett left Lambda and started a data-processing company, DBS, with Lambda colleague Donald Reisler. DBS researched weapons applications but specialized in analyzing the socioeconomic effects of government affirmative action programs. When they first met, Reis-ler recalls, Everett “sheepishly” asked whether he had ever read his 1957 paper. “I thought for an instant and replied, ‘Oh, my God, you are that Everett, the crazy one who wrote that insane paper,’” Reisler says. “I had read it in graduate school and chuckled, rejected it out of hand.” The two became close friends but agreed not to talk about multiple universes again. Three-Martini Lunches Despite all these successes, Everett’s life was blighted in many ways. He had a reputation for drinking, and friends say the problem seemed only to grow with time. According to Reisler, his partner usually enjoyed a three-martini lunch, sleeping it off in his office—although he still managed to be productive. Yet his hedonism did not reflect a relaxed, playful attitude toward life. “He was not a sympathetic person,” Reisler says. “He brought a cold, brutal logic to the study of things. Civil-rights entitlements made no sense to him.” John Y. Barry, a former colleague of Everett’s at WSEG, also questioned his ethics. In the mid-1970s Barry convinced his employers at J. P. Morgan to hire Everett to develop a Bayesian method of predicting movement in the stock market. 
By several accounts, Everett succeeded— and then refused to turn the product over to J. P. Morgan. “He used us,” Barry recalls. “[He was] a brilliant, innovative, slippery, untrustworthy, probably alcoholic individual.” Everett was egocentric. “Hugh liked to espouse a form of extreme solipsism,” says Elaine Tsiang, a former employee at DBS. “Although he took pains to distance his [many-worlds] theory from any theory of mind or consciousness, obviously we all owed our existence relative to the world he had brought into being.” And he barely knew his children, Elizabeth and Mark. As Everett pursued his entrepreneurial career, the world of physics was starting to take a hard look at his once ignored theory. DeWitt swung around 180 degrees and became its most devoted champion. In 1967 he wrote an article presenting the Wheeler-DeWitt equation: a universal wave function that a theory of quantum gravity should satisfy. He credited Everett for having demonstrated the need for such an approach. DeWitt and his graduate student Neill Graham then edited a book of physics papers, The Many-Worlds Interpretation of Quantum Mechanics, which featured the unamputated version of Everett’s dissertation. The epigram “many worlds” stuck fast, popularized in the science-fiction magazine Analog in 1976. Not everybody agrees, however, that the Copenhagen interpretation needs to give way. Cornell University physicist N. David Mermin maintains that the Everett interpretation treats the wave function as part of the objectively real world, whereas he sees it as merely a mathematical tool. “A wave function is a human construction,” Mer-min says. “Its purpose is to enable us to make sense of our macroscopic observations. My point of view is exactly the opposite of the many-worlds interpretation. Quantum mechanics is a device for enabling us to make our observations coherent, and to say that we are inside of quantum mechanics and that quantum mechanics must apply to our perceptions is inconsistent.” But many working physicists say that Everett’s theory should be taken seriously. “When I heard about Everett’s interpretation in the late 1970s,” says Stephen Shenker, a theoretical physicist at Stanford University, “I thought it was kind of crazy. Now most of the people I know that think about string theory and quantum cosmology think about something along an Everett-style interpretation. And because of recent developments in quantum computation, these questions are no longer academic.” One of the pioneers of decoherence, Wojciech H. Zurek, a fellow at Los Alamos National Laboratory, comments that “Everett’s accomplishment was to insist that quantum theory should be universal, that there should not be a division of the universe into something which is a priori classical and something which is a priori quantum. He gave us all a ticket to use quantum theory the way we use it now to describe measurement as a whole.” String theorist Juan Maldacena of the Institute for Advanced Study in Princeton, N.J., reflects a common attitude among his colleagues: “When I think about the Everett theory quantum mechanically, it is the most reasonable thing to believe. In everyday life, I do not believe it.” In 1977 DeWitt and Wheeler invited Everett, who hated public speaking, to make a presentation on his interpretation at the University of Texas at Austin. He wore a rumpled black suit and chain-smoked throughout the seminar. 
David Deutsch, now at the University of Oxford and a founder of the field of quantum computation (itself inspired by Everett’s theory), was there. “Everett was before his time,” Deutsch says in summing up Everett’s contribution. “He represents the refusal to relinquish objective explanation. A great deal of harm was done to progress in both physics and philosophy by the abdication of the original purpose of those fields: to explain the world. We got irretrievably bogged down in formalisms, and things were regarded as progress which are not explanatory, and the vacuum was filled by mysticism and religion and every kind of rubbish. Everett is important because he stood out against it.” After the Texas visit, Wheeler tried to hook Everett up with the Institute for Theoretical Physics in Santa Barbara, Calif. Everett reportedly was interested, but nothing came of the plan. Totality of Experience Everett died in bed on July 19, 1982. He was just 51. His son, Mark, then a teenager, remembers finding his father’s lifeless body that morning. Feeling the cold body, Mark realized he had no memory of ever touching his dad before. “I did not know how to feel about the fact that my father just died,” he told me. “I didn’t really have any relationship with him.” Not long afterward, Mark moved to Los Angeles. He became a successful songwriter and the lead singer for a popular rock band, Eels. Many of his songs express the sadness he experienced as the son of a depressed, alcoholic, emotionally detached man. It was not until years after his father’s death that Mark learned of Everett’s career and accomplishments. Mark’s sister, Elizabeth, made the first of many suicide attempts in June 1982, only a month before Everett died. Mark discovered her unconscious on the bathroom floor and got her to the hospital just in time. When he returned home later that night, he recalled, his father “looked up from his newspaper and said, ‘I didn’t know she was that sad.’” In 1996 Elizabeth killed herself with an overdose of sleeping pills, leaving a note in her purse saying she was going to join her father in another universe. In a 2005 song, “Things the Grandchildren Should Know,” Mark wrote: “I never really understood/ what it must have been like for him/living inside his head.” His solipsistically inclined father would have understood that dilemma. “Once we have granted that any physical theory is essentially only a model for the world of experience,” Everett concluded in the unedited version of his dissertation, “we must renounce all hope of finding anything like the correct theory ... simply because the totality of experience is never accessible to us.” Rights & Permissions Share this Article: Email this Article
Tuesday, March 29, 2011

What's "Liquidator" in Japanese??? 清算 maybe? Vereffenaars in Dutch, Liquidateurs in French, Liquidatoren in German, ликвида́торы in Russian!!

1 milli Sievert = 100 milli rem.
Average individual background radiation dose: 2 mSv/year
Chest CT scan: 6–18 mSv (for a 20-minute scan)
Criterion for relocation after the Chernobyl disaster: 350 mSv/lifetime
10 Sv and you are dead.
New York–Tokyo flights for airline crew: 9 mSv/year (at 11 km altitude I measured 3 µSv/h with my geiger counter; say NY–Tokyo is a 10-hour flight, then 3 mSv per year means they fly 100 days a year at 10 hours each)
Smoking 1.5 packs/day: 13–60 mSv/year
If you sleep next to another person for 8 hours every night you will receive about 0.002 mSv — the exposure comes from the naturally radioactive potassium in the other person's body.

[radioactivity exposure map of Europe]

The liquidators included:
* The approximately 40 firefighters who were among the first to deal with the catastrophe
* A 300-person brigade of Civil Defense from Kiev who buried the contaminated soil
* Medical personnel
* Various workers and military who performed deactivation and clean-up of the area
* Construction workers who constructed the sarcophagus over the exploded reactor No. 4
* Internal Troops who ensured secure access to the complex
* Transport workers

The 20th anniversary of the catastrophe was marked by a series of events and developments. The liquidators held a rally in Kiev to complain about deteriorating compensation and medical support. Similar rallies were held in many other cities of the former Soviet Union. On April 25, 2006, a monument to Hero of the Soviet Union Leonid Telyatnikov, who was among the very first liquidators, was inaugurated in the Baikove Cemetery in Kiev. On the occasion of the 20th anniversary the charity Children of Chernobyl delivered their 32nd shipment of medical supplies, worth $1.7 million, to Kiev.

The 4,200 liquidators who currently reside in Estonia may hope for the introduction of an Estonian relief law after their representatives met the President of Estonia on April 26, 2006. It turns out that under Estonian law, the state may provide help and relief only to citizens who are "legal descendants" of citizens of the 1918–1940 Republic of Estonia. At the same time, Russia, Belarus and Ukraine do not provide any relief to liquidators residing abroad. A number of liquidators residing in Khabarovsk who were in military service were denied a certain compensation for loss of health on the grounds that they were not salaried workers but were acting under military orders. They have to appeal to the European Court of Human Rights.

[Photo, from right to left: reactors 1, 2, 3, 4 — no. 1 sits a little forward; no. 3, the plutonium (MOX) reactor, has a smoke cloud; no. 2 still has its roof.]

The Fukushima I nuclear accidents (Fukushima Dai-ichi genshiryoku hatsudensho jiko) are a series of ongoing equipment failures and releases of radioactive materials at the Fukushima I Nuclear Power Plant, following the 2011 Tōhoku earthquake and tsunami at 14:46 JST on 11 March 2011. The plant comprises six separate boiling water reactors maintained by the Tokyo Electric Power Company (TEPCO). Reactors 4, 5 and 6 had been shut down prior to the earthquake for planned maintenance. The remaining reactors were shut down automatically after the earthquake, but the subsequent 14-metre (46 ft) tsunami flooded the plant, knocking out the emergency generators needed to run the pumps which cool and control the reactors.
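(As an aside on the dose figures listed at the top of this post, here is a minimal Python sketch of the arithmetic. The dose rate, flight time and comparison values are the post's own numbers; the script, its variable names and the 3,000-hour cross-check are an illustration added here, not taken from any of the quoted sources.)

# Quick arithmetic on the dose figures quoted at the top of this post.
# The input numbers come from the post; only the unit conversions are added here.

MSV_PER_REM = 10.0  # 1 rem = 10 mSv, i.e. 1 mSv = 100 mrem

def rem_to_msv(rem: float) -> float:
    """Convert a dose in rem to millisievert."""
    return rem * MSV_PER_REM

# Aircrew estimate: ~3 microsievert per hour measured at 11 km cruise altitude.
dose_rate_usv_per_hour = 3.0
hours_per_flight = 10            # rough NY-Tokyo flight time
flights_per_year = 100           # the post's "100 days at 10 hours"
annual_dose_msv = dose_rate_usv_per_hour * hours_per_flight * flights_per_year / 1000.0

print(f"100 mrem = {rem_to_msv(0.1):.1f} mSv")  # matches the conversion quoted above
print(f"Crew flying {hours_per_flight * flights_per_year} h/year at 3 uSv/h: ~{annual_dose_msv:.1f} mSv/year")
# The post's headline figure of 9 mSv/year for NY-Tokyo crew would correspond to
# roughly 9 mSv / 3 uSv/h = 3000 flight hours per year at this dose rate.
print("Compare: ~2 mSv/year background, 6-18 mSv for a chest CT, 350 mSv/lifetime Chernobyl relocation criterion")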
The flooding and earthquake damage prevented assistance being brought from elsewhere. On 25 March, Japan's nuclear regulator announced a likely breach and radiation leak in the containment vessel of the unit 3 reactor, the only one at the plant using MOX fuel. World wide measurements of radioactive fallout released from the reactors were reported by New Scientist to be "nearing Chernobyl levels". It reported that the preparatory commission of the Comprehensive Test Ban Treaty Organization had measured levels of iodine-131 at 73% and caesium-137 at 60% the levels released from the Chernobyl disaster. Food grown in the area was banned. Tokyo officials declared its tap water unsafe for infants for a short time. Plutonium (Pu) has been detected in the soil at two sites in the plant. None the less, the overall levels of Pu were about the same as background Pu levels resulting from atmospheric nuclear bomb testing in the past. The IAEA announced on 27 March that workers hospitalized as a precaution on 25 March had been exposed to between 2 and 6 Sv of radiation at their ankles when standing in water in unit 3. The international reaction to the accidents was also concerned. The Japanese government and TEPCO have been criticized for poor communication with the public. On 20 March, the Chief Cabinet Secretary Yukio Edano announced that the plant would be closed once the crisis was over Bookmark and Share posted by u2r2h at Tuesday, March 29, 2011 0 comments Monday, March 28, 2011 Internet Porn Saga - Middle Men 2009 comedy-drama film Middle Men is a 2009 comedy-drama film and directed by George Gallo and written by Gallo and Andy Weiss. It stars Luke Wilson, Giovanni Ribisi, Gabriel Macht and James Caan. The film is based on the experiences of producer Christopher Mallick. See No Evil Seth Lubove, Forbes Magazine, 09.17.01 Much of the raunchy porn on the Internet wouldn't exist were it not for the help of a handful of legitimate companies operating quietly in the background. There is still oodles of money to be made in dot-com land. Check out the anonymous San Fernando Valley, Calif. headquarters of Cybernet Ventures, inside a former plumbing supply warehouse. With its mostly bare walls, Dilbert cubicles and geeky guys in white shirts and ties, Cybernet could pass for an insurance office. But it's a Web venture, and it's probably making a pile of money. As the owner of the Adult Check Internet access system, Cybernet acts as the digital gatekeeper to one of the largest collections of porn anywhere. Adult Check's 4 million customers--in the U.S. and as many as 120 foreign countries--pay $20 every three months for the privilege of ogling 400,000 sex sites, including everything from "Nude Amazon Women" to "Boy O Boy." Once they're hooked, Adult Check's affiliated Web sites will usually try to trade up customers to a "Gold" membership of $20 a month, providing access to even more sex-drenched content. Though the company won't confirm sales, it's likely that Cybernet is handling $320 million annually in membership revenues. A big chunk of that gets passed along to the content providers--the participating porn sites. The business has been so successful that its press-shy, thirty-something founder and co-owner, Laith Alsarraf, has removed himself from day-to-day affairs of the company. He spends much of his money and time on Pooch Heaven, a nonprofit pet project that rescues stray dogs and places them on his 820-acre ranch outside of Los Angeles. 
"Business is good," smiles Timothy Umbreit, Cybernet's vice president and chief operating officer. The company recently expanded to an 11,000-square-foot call center in nearby Burbank. "Everyone is stealing our film,"Zadeh seethes in his modern, 16,000-square-foot home above Beverly Hills, surrounded by Polaroids of nudes and a small stable of belly-baring models who do odd jobs around the house. Though Zadeh faces an uphill battle convincing a judge that Adult Check can somehow police millions of digitized photo images, he has succeeded in shining an unwelcome spotlight on the handful of otherwise legitimate companies that are responsible for processing much of the porn on the Internet. While there's debate about the size of the Internet porn industry, with most estimates pegging it at around $1 billion, there's no question that porn--unlike the rest of Internet commerce--had a clear profit potential from the start. Much of the Web porn trade wouldn't exist were it not for the four firms Zadeh complains about:Adult Check, CyberAge and two credit card processing outfits, Credit Card Billing (CCBill) and Internet Billing Co. (A third firm, Paycom in Marina del Rey, Calif., is another large processor of credit card chits for purchases of dirty pictures.) Though he isn't suing them, Zadeh even accuses giant search engines Yahoo and Google of being a part of the cabal, since they take advertising from the bogus celebrity sites. Whenever there's an Internet porn transaction, there's a good chance that at least one of these outfits had a hand in it. Like Adult Check, Glendale, Calif.-based CyberAge serves as a Napster-like portal to some 250,000 sex sites. Members pay fees starting at $20 a year to gain access to CyberAge sites with such creative names as "Lustville" and "Bangkok Bombshells."In return for getting customers to fork over their credit card numbers, the owners of the porno sites get headhunting fees ranging from $11 to $46 a pop, or as much as $100,000 a month, according to CyberAge. CyberAge, formerly AgeCheck, was started by three Armenian carpenters who worked for Adult Check's cofounder. It didn't take them long to figure out that hanging wallboard isn't as lucrative as peddling skin. Adult Check subsequently sued and forced the company to change its name. The three middlemen firms are fixtures at porn conventions, where they hustle for business surrounded by seminude actors and sex-toy vendors. "CCBill is constantly coming up with new and innovative ideas to help their customers increase revenues," gushes Jeffrey Miller, a former assistant vice president at Merrill Lynch who quit his day job after getting into the Internet porn business with his wife in 1996. Like Norm Zadeh, Miller also rails against copyright infringement in the Internet porn industry and accuses Adult Check and CyberAge of dragging their feet on copyright issues. But for someone whose business consists of Internet sites such as "Midget Sex Show--The little and nastiest nymphos anywhere!"he's probably getting what he deserves. Bookmark and Share posted by u2r2h at Monday, March 28, 2011 0 comments Saturday, March 26, 2011 Eyewitness to Rossi's secret Nickel catalyst Italian invention promises cheap cold fusion energy A major innovation in the energy field that reverses to date data, while a significant investment to build an industrial level for this scientific discovery will be announced in the coming days, at least according to estimates, will rock the boat and cause international concern. 
The reason is the Greek company Defkalion Green Technologies SA, which has managed to turn a breakthrough by Italian scientists into an industrial product and will shortly begin mass production in Xanthi of units — to be available worldwide — that can be installed in any home or workplace and produce heat and electricity at low cost. Not only the company's executives but also market observers and people who know the details of the case claim that, without exaggeration, it may tilt the world's energy landscape and make Greece the leader of a new energy revolution, with everything that would entail for the Greek economy.

The product

The starting point is the claimed invention of a controlled hydrogen–nickel fusion reactor (cold fusion in a controlled environment) by two Italian scientists from the University of Bologna, Sergio Focardi and Andrea Rossi, which was officially presented in Italy on 14 January. Based on this discovery, energy can reportedly be produced at a very low, almost negligible cost compared with today's figures, because the reactor can produce more energy than it consumes in operation. The Greek company Defkalion has acquired the rights to the industrial (non-military) use of the Italians' invention for the whole world except the United States. Relying on Greek scientists, the company has arranged to translate the technical scientific results into a device that, after initial activation, produces heat without the need for further supply and without creating pollutants or radiation harmful to humans. The thermal energy can be used to produce hot water and for space heating, and a share of around 40% can be converted into electricity through a common converter (inverter). The third stage, which has already begun, is the large-scale industrial production of this device in Greece.

Low production cost

The data on the cost of producing energy with the device are really impressive. Christos Stremmenos, the Greek professor at the University of Bologna and former ambassador who was actively involved in the research work and participates in Defkalion, calculates that the cost of electricity is less than 0.01 euro per kilowatt-hour, an overwhelmingly smaller amount compared with the average production cost of the Public Power Corporation (PPC/DEI), which reaches about 0.10 euro per kWh. The device would thus generate electricity roughly 90% more cheaply, and the same saving has reportedly already been obtained for heating, hot water, domestic use and so on. The annual cost of hydrogen and nickel — the "fuel" — for a device with a capacity of 20 kW is about 1,300 euros, while the energy produced would be worth more than 14,000 euros at current prices. (For orientation: a 20 kW device running around the clock produces on the order of 175,000 kWh a year, which at 0.08–0.10 euro per kWh is indeed worth roughly 14,000–17,500 euros.) According to company officials, the device itself will cost no more than 3,000 to 3,500 euros, with another 500–900 euros for the converter needed to produce electrical energy. As with domestic PV systems, the excess electricity would be fed into the grid and bought by DEI, providing income to the producer. The company will seek — a request will be submitted to RAE — to have this method of electricity production included in the renewable-energy programme and therefore enjoy a high rate of absorption of the power produced. Besides households, the method can be applied to energy production in agriculture, livestock farming, crafts, businesses, public buildings and, under certain conditions, in industry.

Industrial production

Putting the scientific invention to use in the manufacture of this device is not a distant goal.
Instead, it has already purchased an old factory in Xanthi, which, according to executives Defkalion will be ready next October to provide products to market. As stated by the company told the newspaper Investor's World "brought the issue to the public, Mr Simeon Tsalikoglou the productive capacity of the plant will be 300,000 units a year, which for the Greek and Balkan markets. The investment will reach 200 million. of which almost half relating to the payment of royalties. It is estimated that by 2012 will create 215 direct permanent jobs and about 2,500 people will work and then the sales network - support in Greece. Especially to be noted that there will be a specialized center for Research and Development (25 jobs skilled research personnel) within the unit, which will support scientists and researchers involved in the process of "transformation" of scientific discovery in the Italian industrial product. Moreover, ongoing construction of a major laboratory facility of this technology, power 1 MW, which will be used (and blatantly) to the needs of the plant Xanthi. The continuity of the project envisages the creation of an even works on Greek soil, Volos, which produces larger-scale, 1 to 2 megawatts. Also setting up factories abroad and expanding foreign markets for retail sale. Regarding the financing of investments, executives Defkalion claim to be secured by investment funds, and noted characteristics that, as the project takes shape, so multiply the bids for participation in the financing plan. Benefits for the Greek economy Beyond the implicit importance of establishing the plant in Xanthi and total investment this effort, at a time when the country is "looking to light" new productive activities, the challenge is, of course, if implemented according to the designs of those who set expected to have significant benefits for the Greek economy. The total energy produced by these devices will result in savings worth hundreds of millions of euros, the time seems to epelafnei a new energy crisis when the country is bleeding precious funds to import fuel. They may also develop broader development and economic links with the rest of the economy, tourism, agriculture, crafts, gained a competitive advantage in their production. The exchange benefits the country would also be great as the exclusive use of the invention by a Greek company is expected to generate gains. Some believe that only the area expected to get the issue is likely to change for the positive climate and the general impressions of the country's investment environment. It should not be omitted finally able to handle the large nickel deposits in the country. Geopolitical upheavals The estimates appear to be those least enthusiastic supporters of the project refer to huge economic and geopolitical upheavals that can result, over time, after-market devices such energy green energy cost low causing no emissions or radiation. According to these estimates, the gradual replacement of conventional energy systems based on oil, natural gas, coal or nuclear power a new generation of energy products of this technology, along with the development of conventional renewables (wind, solar, geothermal), raises serious conditions overturn existing balances in the global energy scene. The scientific basis of the unit The cold fusion process that is based on scientific unit Hyperion, is a "hot" field of physics that has caused heated debates in recent years. 
This is a reaction occurs at room temperature, in which two smaller nuclei together to form a larger simultaneously releasing large amounts of energy. In the mid 80's Fleishmann and Pons claimed to have achieved cold fusion reaction, but their experiment could not be repeated. Since any theory and assumption that cold fusion was dismissed, thus creating a stream of scientific caution on the issue. Despite this skepticism a small portion of scientists continued to investigate the matter. The latest subversive news came from Italy and those which have started setting up the unit Hyperion. Two scientists, engineer Andrea Rossi and Prof. Sergio Focardi, University of Bologna, shown publicly on 14 January of this year's time, a cold fusion device capable of producing 12.400 Wh heat entering only 400 W power. The Italian scientists showed that a new production of green energy with common materials (nickel and hydrogen catalyst), with low cost and without emitting air pollutants or radioactive waste. The results already confirmed by independent third parties and readings are positive criticism from first physical size as the professor of the University of Uppsala Sven Kullander, president of the National Academy of Sciences Energy Committee, the Hanno Essén, professor at the Swedish Royal Institute of Technology, the Dr Edmud Storms, etc. There are many who argue that the specific effect of Engineering and Natural Rossi Focardi creates the need to ... rewrite some chapters of physics. Andrea Rossi on 14 January 2011 - power for 1 cent per kilowatt hour Edmund Storms on the Rossi device: "There will be a stampede." James: "This is a major step then, would you not agree?" Dr. Storms: "Oh yes, It's a major step. It doesn't change the reality, the reality had already been established, but it has moved the debate from the laboratory into an industrial environment, and it's put the phenomenon on the map now. People, skeptics can no longer ignore what's going on, it's such a high level, and apparently quite reproducible, that there's no doubt that it has the potential to really be a serious competitor for a primary energy." James: "So we've arrived, so to speak." Dr. Storms: "We've arrived. It's interesting we've arrived in a different car than we thought we were. Cold fusion started out using deuterium and palladium, and then Rossi found that it worked quite well in nickel and light hydrogen." James: Regarding that, since I saw the 60 mins interview, and saw what the Israeli's did over there in their lab, what did … the Italians do that's different? Were they financed well? What made them be ahead of everybody else regarding this issue?" Dr. Storms: "Well, you really need a patent, you need to protect your intellectual property. You want to be able to gain some economic benefit from the discovery. So far, they have not gotten a patent, and that's always been difficult in the cold fusion field because the patent examiners simply don't believe that it's real. So, until they get a patent, they're not revealing how they do it. Now, they've been upfront about what they can do and what they promise to do, and so far, they've fulfilled these promises. Once they get their patent, then they promise to reveal how they go about doing this." "Some said this is LENR, not cold fusion. What's the difference?" Dr. Storms: "Well there is no difference. It's purely a matter of semantics. 
There is a phenomenon, and that phenomenon allows a nuclear reaction to be initiated in a chemical environment, and it's a very special chemical environment, it's one that we don't understand yet, we don't have total control over it, so that it's difficult to reproduce, although not impossible, it's been replicated hundreds of times, so it's real. But it's a process whereby the Coulomb barrier is reduced in magnitude, in a solid, by some kind of … oh what would I call it … chemical mechanism. It's not chemistry, but it involves atoms and electrons, which of course apply to chemistry. And so, what do you call it? Well it was called cold fusion by Steve Jones, and that stuck. And then later people said, you know, that's not very accurate because you get transmutations, and it may not be fusion directly, so let's make it describe a bigger area, so we'll call it low-energy nuclear reactions [LENR]. I like the chemically-assisted nuclear reactions [CANR] description myself, but nevertheless, it's all the same thing. It's hard to believe that nature has only one technique for doing something so extraordinary." James: "As far as patents go for this subject matter, are you briefed on that all the time, are other scientists made aware of what's happening with that, or do you hear about it later?" None of the patents do that, so technically, their not valid, and that 's a big problem, until somebody makes something that works, and then describes how they made it work and that's where Rossi comes in, because he in fact does have something that works and once he shows how it works, he will have a valid patent." Dr. Storms: "The Swedish newspapers, the Italian newspapers, the Greek newspapers, they showed an interest. The American newspapers showed none at all. It's been on a number of blogs and talked about in a number of chat rooms, but no, it hasn't reached a level of any serious importance to the American press." James: "Why do you think that is now?" Dr. Storms: "Mainly because, it is institutionally the belief that cold fusion is not real, or if it is real, it's so trivial, it'd make no difference to anybody. That's institutional. It's the myth that's in, we'll call it, the intellectual structure of the United States, and a number of other countries. There a few countries where that's not true, and Italy is one of them. The government there believes that it's real, and they're doing everything they can to develop it. The government in China believes it's real an they're doing everything they can to develop it." James: "So what is the problem? Regardless whether it was an American issue or an Italian issue, that should be all over the press here, and it's not. It absolutely amazes me that this needs to be happening right now, what I'm doing. The press should've had this totally covered. Well, what's next for you? Are you going to be following what the Italians are doing, are you going to go to Italy and be working on it, and try to do what they've done and replicate it where you are?" Dr. Storms: "Well, first of all, I haven't been invited. Rossi is determining who's going to watch this – he's promised a demonstration in Florida that's coming up in October. And there will be some people from the US government there watching, and hopefully they will be convinced that it's real and that will change the attitudes." James: "So they still – after this entire time – can't wrap their head around it!" Yes, people are trying to replicate what he did. 
But in the absence of this secret addition, it's all guesswork [refering to the secret ingredient Rossi is using as a catalyst]. And that's been pretty much true of all the work in the field. We do not have a good theory, we don't have a path to follow, and so people do a lot of random searches, and when somebody – I'll use the analogy prospecting for gold – when somebody finds a nugget, everybody runs to the spot where that guy found the nugget and everybody starts to dig there. Maybe some other nuggets will be found, maybe not. That's what has made it easy for the skeptics to blow it off, and it's made it easy for the government to pretend that it doesn't exist." Dr. Storms: That's where the factory is that Rossi owns. Rossi has business interests in the United States, he has a number of companies. He has a company in Florida and that's where the cells are being manufactured. James: "So they've [Rossi and co.] already started the process then?" Dr. Storms: "Oh, yeah. The [recent demonstration] in Bologna was a single cell unit and it put out 10Kilowatts and it's put out even more energy in other circumstances. He's going to build a hundred cell unit in Florida, he claims, to try to run a Megawatt. That's pretty difficult to ignore." James: "What do you think they're going to be able to do of a practical use? What are they going to use it for initially?" Dr. Storms: "Well, they're planning to use this as a source of energy in a factory in Greece, and they're making arrangements in Greece for this to be incorporated into an industrial application, an industrial factory. It has to be done in industry at this level because we don't know if it's safe, we don't know it's characteristics, we just don't know enough about it to put it into individual homes. This is what he says, and it's quite rational. It has to be explored, its characteristics have to be understood in an industrial environment, so they're going to do that in Greece. Of course, he's taking orders, and I'm sure there'll be people from all over the world, where regulations are not so quite severe, and minds are more open than they are here, and they'll buy units, and put them in their factories, and suddenly the cost of energy to those companies will go down significantly, and all of a sudden people will panic, and then there'll be a stampede to buy these things." James: "The irony of the timing of all this now, seeing what's going on in the Middle East right now, everything's going up at the gas tank, people looking at other energy things, do you find this unusual, the timing of this? This could have happened five years ago, and right now, with the complete and total collapse of many economies around the world, suddenly these guys in Italy come up with something. Did that surprise you?" Dr. Storms: "Well, life always surprises me. It always has these synergistic relationships happening all the time. No, it didn't surprise me. It's quite, what would I call it, simple justice. The system absolutely needs this, and suddenly it's available. I guess it took both happening at the same time to change minds. Dr. Storms: "I don't think that's possible." James: "… because I don't think you should have been cut out of it. I mean, you're one of the guys that stood tall before anybody!" Dr. Storms: "Well I appreciate that, but I'm not being cut out of it, and in fact, I don't feel that I've been cut out of it. I'm funded. 
We're working to try to understand the mechanism and so we're hoping to have a seat at the table when the final decisions are made. But Rossi is clearly in charge of his own discovery, and I wouldn't find that unusual." James: "OK, well, listen, I'm glad that you're back, I'm glad that you've told us this, I'm glad that we've covered it here. I want to thank you very much Dr. Storms for always being there for for me and helping me out, and making this a public issue, so thank you very much, much appreciated. We'll be talking to you very soon. You may be surprised – we may hit Florida anyway!" Dr. Storms: "Well James, I appreciate your efforts too, it's efforts like yours that make it possible for people to find out what's going on." Introduction to quantum ring theory by Wladimir Guglinski 1. The principal aim of Quantum Ring Theory The Large Hadron Collider (LHC) started to work in March 2010.  Most of the people believe that the main aim of the experiments made in the LHC is to confirm the superstring theory, the existence of the Higgs boson, and the Suppersymmetry (Susy). But the principal aim of the LHC experiments is actually another one:  the aim is to confirm the fundamental principles of Quantum Mechanics. Because if LHC confirms superstrings, Higgs boson, and Susy, this means that all the principles of Quantum Mechanics are correct, since those three theories were developed from the concepts of Quantum Mechanics. All the current theories, as Quantum Field Theory, Nuclear Physics, Particle Physics, Standard Model, etc., all they keep the fundamental principles of Quantum Mechanics. Quantum Field Theory is the successor of Quantum Mechanics.  It was developed so that to  eliminate some inconsistencies of QM, in order to refine the theory.  But QFT keeps all the foundations of QM. So, the confirmation of those three theories in the LHC will have for the theorists the following meaning: all the principles of QM are correct.  And this is actually the principal  objective why they built the LHC. From the data collected along March and December 2010, the particles predicted in the Supersymmetry (Susy) would ougth to be already found. But the LHC did find NOTHING of Susy predictions. Some physicits already started to think that the LHC experiments will show the need of looking for a New Physics. But the most physicists keep yet their hope to find evidences for the superstring, the Higgs boson, and Susy, when the LHC will work with its maximum power in 2014. But the fact that they did not find evidences that confirm Susy along 2010 suggests that Supersymmetry actually does not exist.  And so, probably in 2014 they will find nothing again. Therefore, probably in 2014 there will be a general consensus in the community of physicits:  the need of looking for a New Physics. Nevertheless, having a consensus about the need of a New Physics, a fundamental question arises: What sort of New Physics will it be? Only two sort of New Physics are possible: 1- A New Physics that keeps all the fundamental principles of Quantum Mechanics, as the physicists did along the development of Quantum Field Theory. 2- A New Physics that rejects some principles of Quantum Mechanics, replacing them by new ones.  Then such New Theory will be a rival of Quantum Field Theory, since it will be a candidate to be a new successor of Quantum Mechanics, with some of its principles replaced by new ones. Then let's analyse the two sort of New Physics. 
1- Keeping the principles of QM – Along 100 years, thousand of theorists developed several theories based on the foundations of Quantum Mechanics, which fundamental principles the scientific community now try to confirm in the LHC. Then a question arises:  if they did not succeed to find a correct theory along 100 years under that way of keeping the fundamental principles of Quantum Mechanics, is it reasonable to hope that they will succeed to find it in the next years, by continuing to keep those fundamental principles of QM not confirmed in the LHC? 2-  Rejecting and replacing some principles of QM -  OK, we realized that it makes no sense to keep all the principles of QM, since after 100 years of attempts, the LHC disproved such an effort, showing that the structure of our universe is not like predicted in the prevailing theories. Then there is need to look for a New Physics with new principles. But then three questions arise: - what sort of new principles must be adopted? - what are the principles of Quantum Mechanics that need to be rejected? - what are the new principles which would have to replace those ones rejected in QM? This is the question. The quantum theorists have no idea about such matter. The principal aim of Quantum Ring Theory is just to show what fundamental principles of Quantum Mechanics must be rejected, and what are the new principles that must replace them. Quantum Ring Theory exhibits coherent arguments (sometimes they are irrefutable, like in the case of the Bohr successes, which require a new hydrogen atom different of that proposed in Quantum Mechanics), by showing what new principles must be incorporated in Quantum Mechanics, and "why" it's indispensable to incorporate them. 2. The successes of Bohr What is the meaning of the successes of Bohr ? The successes of the Bohr theory are an indisputable proof on that something is wrong with the foundations of Quantum Mechanics. The most physicists have not knowledge of the meaning of the Bohr successes, because the most of them prefer to ignore its meaning, since it is a very unpleasant subject, inasmuch it points out a serious flaw of QM. The first physicist to understand the true meaning of Bohr successes was Schrödinger.  In an article intitled On a Remarkable Property of the Quantum-Orbits of a Single Electron, while commenting on a factor calculated from a Bohr orbit which gave an impressive result, he wrote: "It is difficult to believe that this result is merely an accidental mathematical consequence of the quantum conditions, and has no deeper physical meaning" Why did Schrödinger say such a thing? And why was so hard to him to believe that the impressive result of Bohr theory could be a single coincidence ? And what would he want to say with "deeper physical meaning"? Let's talk about. 3. The mystery of the centripetal acceleration In the Bohr hydrogen atom, the electron emits photons when it jumps from a energy level to another one. In his calculation, Bohr considered that in the instant when the photons are emitted the electron is submitted to a centripetal acceleration. The results of the Bohr theory are very impressive. Actually, from the mathematical probability it's IMPOSSIBLE that the successes of his theory may be simply accidental. The conclusion is obvious:  as the centripetal acceleration plays a role in the Bohr calculus, it's unequivocal that the centripetal acceleration has some connection with the mechanism which emits photons in the atom. 
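To make concrete which numbers are meant by "the Bohr calculus," here is a minimal sketch of the standard Bohr-model arithmetic, using only textbook constants and the usual textbook formulas. The script is an illustration added here, not something taken from Guglinski's text.

# Standard Bohr-model quantities for hydrogen, including the centripetal
# acceleration that the argument above refers to. Textbook formulas only.
import math

e    = 1.602176634e-19      # elementary charge, C
m_e  = 9.1093837015e-31     # electron mass, kg
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J*s
h    = 2 * math.pi * hbar
c    = 2.99792458e8         # speed of light, m/s
k    = 1 / (4 * math.pi * eps0)

def bohr(n: int):
    """Radius, speed, total energy and centripetal acceleration of level n."""
    r = n**2 * hbar**2 / (k * m_e * e**2)   # orbit radius, m
    v = k * e**2 / (n * hbar)               # orbital speed, m/s
    E = -0.5 * m_e * v**2                   # total energy, J (equals minus the kinetic energy)
    a = v**2 / r                            # centripetal acceleration, m/s^2
    return r, v, E, a

r1, v1, E1, a1 = bohr(1)
r2, v2, E2, a2 = bohr(2)
print(f"n=1: r = {r1:.3e} m, E = {E1/e:.2f} eV, a_centripetal = {a1:.2e} m/s^2")
print(f"n=2: r = {r2:.3e} m, E = {E2/e:.2f} eV, a_centripetal = {a2:.2e} m/s^2")

# Photon emitted in the 2 -> 1 jump (Lyman-alpha), the kind of transition
# whose wavelength Bohr's model reproduces so accurately.
dE  = E2 - E1
lam = h * c / dE
print(f"2 -> 1 photon: {dE/e:.2f} eV, wavelength {lam*1e9:.1f} nm")

For the ground state this gives the familiar binding energy of 13.6 eV, an orbital radius of about 0.53 Å, and a centripetal acceleration of roughly 9 x 10^22 m/s^2 — the quantity whose very existence, as the text goes on to argue, is what Quantum Mechanics cannot accommodate.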
Then here is the mystery: In the Quantum Mechanics the electron into the electrosphere cannot be submitted to a centripetal acceleration. It's easy to understand why, since the electron cannot have trajectory within the electrosphere.  Indeed, magine an electron that moves between two energy levels, in the hydrogen atom.  Well, it is submitted to the attraction with the proton.  If the electron should be moving between two points in the electrosphere, it would have to be accelerated (or slowed), and it would have to emit a continuous spectrum, according to Maxwell theory.  But the experiments show that it does not happen. So we have the following situation: 1- For Quantum Mechanics to be 100% correct, it's indispensable that there is no centripetal acceleration on the electron. 2- Therefore, for QM to be 100% correct, the Bohr theory must be 100% wrong.  Otherwise, if Bohr theory is not 100% wrong, then Quantum Mechanics cannot be 100% correct. 3- The Bohr model cannot be 100% wrong, because it is impossible, according to the mathematical probability 4- From the item 3 above, one concludes that centripetal acceleration on the electron really exists within the atom 5- First conclusion: Quantum Mechanics cannot be 100% correct. To be correct, QM requires that Bohr theory must be 100% incorrect.  In another words: to be correct, QM requires that a centripetal acceleration on the electron cannot exist.  And since Bohr theory shows that it exists, then something is missing in Quantum Mechanics. 6- Second conclusion: A satisfactory model of hydrogen atom must consider the existence of a centripetal acceleration on the electron.  A theory on the hydrogem atom (unable to explain the existence of the centripetal acceleration on the electron) cannot be 100% correct. The situation nowadays As Quantum Mechanics is incompatible with the Bohr model, the physicists concluded that Bohr successes can be only consequence of a fantastic coincidence. And it couldn't be on another way. If they should admit that Bohr successes are not accidental, the physicists would have to admit that Quantum Mechanics cannot be 100% correct, since it does not admit the electron to be submitted to a centripetal acceleration. But from the mathematical probability it's IMPOSSIBLE that Bohr successes can be a mere coincidence, as claim the community of physicists. That's what bothered so much Schrödinger. He noted that some mystery was hidden in the Bohr theory. The successes of his theory might not be result of mere coincidence. But as always happens in the history of science, is often more convenient to avoid some thorny issues. And physicists prefer to ignore the mystery surrounding the theory of Bohr and quantum mechanics. Fundamental premise to be filled by a new theory on hydrogen model From the above, we realize that there is an indispensable premise to be filled: Any theory on the hydrogen atom, in which the electron is NOT submitted to a centripetal acceleration, CANNOT be 100% correct Suppose that the hydrogen model of Quantum Ring Theory is wrong. However, any other theory which proposes a new hydrogen model, to be acceptable, must fill this fundamental premise:  it must consider the existence of the centripetal acceleration on the electron, and to explain its existence. Let's then see how Quantum Ring Theory explains the existence of the centripetal acceleration on the electron, when the atom emits photons. 4. How is the photon emitted in Quantum Ring Theory?In QRT the photon is emitted through a resonance process. 
In Quantum Mechanics the photon is also emitted from a resonance process. Therefore, QRT and QM agree on this point:  the photons are emitted by a resonance process. In this point Quantum Mechanics is correct. And in this point Bohr's theory is wrong, because in his theory the photon is NOT emitted by a resonance process. Some obvious questions: 1- Since Bohr theory is not correct (because it does not consider that photons are emitted by resonance), and since in his theory the centripetal acceleration plays some role in the instant of the photon emission, then how can his theory get so many fantastic successes ? You will understand it ahead, after get touch with the mechanism of photons emission proposed in QRT. 2- Since in Quantum Ring Theory the photon is emitted by resonance (as happens in Quantum Mechanics), and this one is the CORRECT mechanism of photons emission,  then how can the centripetal acceleration play some role in the photons emission? The centripetal acceleration does NOT play ANY role in the photons emission.  This is the error of the Bohr theory. But there is, YES, a centripetal acceleration on the electron, in the instant when the photon is emitted.  In spite of such centripetal acceleration does not play any role in the process of photons emissions, nevertheless the centripetal acceleration really exists on the electron, in the instant when the atom emits photons.  According to Quantum Mechanics, this is impossible.  This is the error of Quantum Mechanics. We will see ahead how Quantum Ring Theory solves such mystery of the existence of the centripetal acceleration on the electron, a mystery that Quantum Mechanics cannot explain. 5. Quantum Ring Theory: why was it developed? In the book Quantum Physics by Eisberg and Resnick, they state that from the basis of the current Nuclear Physics there is no way to explain some ordinary nuclear phenomena: So, as from the current foundations of Nuclear Physics some ordinary nuclear phenomena cannot be explained, it is reasonable to expect that such current foundations of Nuclear Physics cannot explain cold fusion too.   After all, since it is missing something fundamental in the current Nuclear Physics, probably it is just such lack of the theory that is missing for explaining cold fusion. So, was QRT developed for the explanation of cold fusion? No. When I started to develop my theory in 1990, I did not know cold fusion. I decided to look for new theoretical models because I realized at that time that the current theories cannot explain several ordinary phenomena. So, QRT was developed for the explanation of ordinary phenomena not explained by current Atomic and Nuclear Physics. I discovered the existence of cold fusion only in the end of 1998, by reading a paper by Mike Carrell in Frontier Perspectives.  And when in 1999 I started to read about cold fusion in the Infinite Energy Magazine, I started to realize that from those models of mine, (developed for the explanation of ordinary phenomena not explained by current Nuclear Physics) cold fusion occurrence should be possible. One of the fundamental backgrounds of Quantum Ring Theory is the helical trajectory (zitterbewegung) of elementary particles. The zitterbewegung was discovered by Schrödinger, from his interpretation of the Dirac equation of the electron. According to Quantum Ring Theory, the light is composed of photons constituted by a particle and its antiparticle, moving with helical trajectory.  
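To make the geometry concrete, here is a generic parametrization of a helical path. It is an illustration added here, with R, ω and v_z as arbitrary symbols, not an equation taken from Quantum Ring Theory itself:

\[
\vec{r}(t) = \bigl(R\cos\omega t,\ R\sin\omega t,\ v_z t\bigr),
\qquad
\ddot{\vec{r}}(t) = -\omega^{2}R\,\bigl(\cos\omega t,\ \sin\omega t,\ 0\bigr),
\qquad
\lvert \vec{a}_{\perp}\rvert = \omega^{2}R = \frac{v_{\perp}^{2}}{R},
\quad v_{\perp} = \omega R .
\]

Any particle on such a path is continuously accelerated toward the axis of the helix with magnitude ω²R, which is the sense in which a helical trajectory automatically carries a centripetal acceleration; in QRT the photon's particle and antiparticle are described as moving on paths of this kind.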
These particle and antiparticle are formed by the constituents of the ether. The helical motion of the light was confirmed by an experiment published in Phys. Rev. Letters in July 2010, under the title "Unveiling a Truncated Optical Lattice Associated with a Triangular Aperture Using Light's Orbital Angular Momentum". So, as an experiment confirmed the prediction of Quantum Ring Theory that photons have helical trajectory, there is a good reason to believe in the existence of the helical trajectory of the elementary particles in general. But there are several other questions related to the helical trajectory.  For instance, from a hydrogen model composed by one electron that moves with helical trajectory in the electrosphere of the proton we can explain the existence of hydrinos, which is the subject of research by Randell Mills in BlackLight Power Inc. Also, a question not explained by Quantum Mechanics: the motion of the electron  between energy levels in the electrosphere of atoms. An experiment made by the Nobel Laureate Hans Dehmelt showed that the electron moves between two points in the electrosphere of atoms.  But according to Quantum Mechanics the electron cannot move between two points in the electrosphere, and so the quantum theorists were obliged to deny Dehmelt experiment:  they claim that the atom is "dressed" in Dehmelt experiment, a sort of ad hoc solution adopted by quantum theorists every time when the theory is discredited by some experiment (as happened with cold fusion along 20 years).  In Quantum Ring Theory, thanks to its helical trajectory, the electron moves between the levels in the electrosphere, which is agree to the results of the Dehmelt experiment. The helical trajectory is related to cold fusion too, because in QRT the neutron is composed by proton+electron.  Into the structure of the neutron the electron loses its helical trajectory, and the energy of the zitterbewegung is responsible for the excess energy that occurs in many cold fusion experiments, as for instance in the Conte-Pieralice experiment:  in their experiment the cathode was melt, a result not expected by them, since there was not (apparently) energy available for the electron to do it. Quantum Ring Theory is rival of Quantum Field Theory.  The both theories were developed so that to eliminate some inconsistencies of Quantum Mechanics. So, they both are candidate to be a successor of QM.  The difference between QRT and QFT lies in their foundations:  while QFT keeps the foundations of QM, unlike QRT replaces some of them. 6. Helical trajectory within the hydrogen atom As Quantum Mechanics, to be 100% correct, requires that Bohr theory must be 100% wrong, obviously the physicists have a paradox in their hand, because from the mathematical probability it's impossible to consider Bohr's successes as merely accidental. The successes of Bohr theory are explained by considering the hydrogen model proposed in Quantum Ring Theory, which shows that the centripetal acceleration on the electron exists, but it is not like Bohr imagined. The centripetal acceleration appears on the electron due to its helical trajectory. The centripetal acceleration on the electron does NOT play a role in the mechanism of the photon emission, like Bohr wrongly supposed. However, the centripetal acceleration exists, like Quantum Ring Theory shows, and Quantum Mechanics cannot explain its existence.  
Nowadays, when the solution proposed in QRT for the paradox of Bohr successes is exhibited for a quantum theorist, he tries to hide from everybody that Quantum Ring Theory has solved the mystery, because the community of academic physicists try to hide from the people such disagreeable fact:  that Quantum Mechanics has several unacceptable paradoxes. The solution of the paradox, explained by the hydrogen atom of Quantum Ring Theory, is shown in the two figures ahead. Look at the figure 2. Obviously there are many other mechanisms in such hydrogen model proposed in Quantum Ring Theory.  For instance, there is a dilation in the space of the electrosphere of proton where the electron moves with helical trajectory.  Such density of the space is produced by the quantity of Dirac strings produced by the proton. Thanks to such dilation, which changes the mass of the electron (with regard to the proton), the electron can move with constant speed into the electrosphere, when it jumps from two levels of energy (this explains the result obtained in the Dehmelt experiment), because the electron is submitted to two contrary and equal forces:  the force of attraction with the proton, and a force of repulsion due to the dilation of the space.  So, its null the resultant of forces on the electron, when it moves radially within the electrosphere. Also, there is a mechanism which explains why the electron jumps from no consecutive levels. But let's see why the centripetal acceleration do exists, as shown in the figure 2. The atom emits photons when the distance REM (radius of emission) gets resonance with the density of Dirac strings into the elecrosphere (when it moves in radial direction into the electrosphere, the electron crosses such strings). Looking at the figure 2, we see that the electron is submitted to a centripetal acceleration due to its helical trajectory.  Such centripetal acceleration does not play any role in the resonance between the electron and the Dirac strings (of the proton).  But due to a coincidence, the radius of the electron's helical trajectory is the same radius considered by Bohr in his calculus.  Other coincidence is the fact that the force on the electron due to its helical trajectory is equal the force of attraction proton-electron, considered by Bohr.  The radius of Bohr is also equal the radius of the electron's orbit in the helical trajectory. Thanks to this series of coincidences, the Bohr theory is able to yield those fantastic successes, in spite of his model is wrong. 7. The accordion effect Let's now speak a little about the new nuclear model proposed in QRT, named Hexagonal Floors Model. Look at figure 3. In the center of the nuclei there is a nucleon 2He4.  It produces gravitational Dirac strings (they are a flux of gravitons). They form the principal field of the nuclei. Each string captures protons, or neutrons, or deuterions.  The figure ahead shows a nucleus 8O16, with its central 2He4 which forms the Dirac strings f1 , f2 , f3 , f4,  f5, f6 , and each one of those strings capture one deuterium.  In the oxygen nuclei there is one complete hexagonal floor. The existence of the Dirac strings is corroborated by an experiment published in the end of 2009: See link Of course the real hexagonal floor existing in Nature is not flat as shown in the figure.  Due to repulsions between the deuterium nucleons, there is an oscillatory motion of them with regard to the central 2He4. In the figure 3, there are another Dirac strings: in blue collor.  
They form a Coulombic field, which is induced by the rotation of the principal field. It is named the secondary field. Its blue strings are not gravitational; they are formed by a flux of electric particles of the ether.

Figure 4 shows a nucleus with seven complete hexagonal floors. The distance between the hexagonal floors, indicated by Dd, undergoes contraction and expansion, a phenomenon named the Accordion Effect, because the hexagonal floors behave like the bellows of an accordion.

The Accordion Effect explains a property of some nuclei, like U238. It has 92 protons (an even number) and 146 neutrons (also even). The U238 nucleus changes its shape from a horizontal ellipsoid to a vertical ellipsoid, and the change in diameter is 30%. There is no way to explain such a large change with the models of current Nuclear Physics, because with an even number of protons (92) and an even number of neutrons (146) the nucleus is symmetric, and its expansion-contraction would have to occur in all radial directions. So the contraction-expansion of U238 ought to be spherical according to Nuclear Physics, and not elliptical as the experiments show. If the distribution of protons and neutrons were as predicted in current Nuclear Physics, the growth of the radius of 92U238 would be in all directions, and not in one preferred direction, as really happens. Looking at the nuclear model proposed in QRT, with several hexagonal floors, we realize that its contraction-expansion must occur in a preferred direction, as happens with U238. The distribution of nucleons in the nuclear model of QRT is AXIAL (it has a preferred direction, a consequence of the fact that protons and neutrons are captured by the Dirac strings produced by the central 2He4). The distribution of nucleons in the nuclear models of Nuclear Physics is RADIAL (that is why those models are unable to explain the dilation of 92U238).

It is reasonable to suppose that this accordion effect can influence cold fusion reactions, through a resonance between the contraction-expansion of the distance Dd of some nuclei (such as the Pd in the lattice) and the oscillations of the deuterium nuclei (of the heavy water used in the electrolytic cell) due to the zero-point energy, when the oscillation of the Pd nuclei is aligned with the oscillation of the deuterium nucleons. Such alignment can be obtained, for instance, with a magnetic field applied externally to the vessel. A suitable laser can also help the resonance, as occurred in the Letts-Cravens experiment, since the frequency of the laser can come into resonance with the oscillations of the Pd nuclei and deuterium nucleons.

As said, the secondary field of a nucleus is responsible for the Coulombic repulsion. So the fusion of two nuclei requires enough energy to pierce the two secondary fields of the two nuclei. This is what occurs in hot fusion. But in cold fusion there is probably no need to pierce the secondary field. That is because the secondary field has a gap, as shown in figure 5. This "hole" is a consequence of the way the secondary field is formed: it is induced by the principal field, and the Dirac strings cannot form without leaving a hole.
This "hole" in the secondary fields of nuclei explains a paradox in Nuclear Physics: an alfa particle 2He4 can leave the 92U238 with energy 4MeV, in spite of the energy required for trespass the Coulombic barrier is 8MeV (100% stronger than 4MeV).  They call it tunneling effect.  The physicists wrongly believe that Gamow had explained the paradox of 92U328.  However the Gamow explanation introduces another unacceptable paradox, as shown in Quantum Ring Theory.  And so he solved a paradox by introducing another unacceptable paradox. Besides, the tunneling effect is able to make a particle to trespass a barrier which energy is 30% , 40 % or at maximum 50% stronger then the energy of the particle.  But a barrier 100% stronger cannot be crossed, as happens in the 92U238.  The paradox of 2He4 emission by the 92U238 must to be explained by considering the hole in the secondary field of the 92U238 nuclei, as considered in Quantum Ring Theory. Look at the figure 3 again.  As the Dirac strings are formed by gravitons, then such strings would have to attract one each other, and they could not have the distribution shown in the figure 3.  Instead of being distributed symmetrically about the central 2He4, the strings should have to be concentrated all them in one side. In QRT it's proposed that the ether is filled by electric particles e(+) and e(-), magnetic particles m(+) and m(-), and gravitational particles g(+) and g(-).  But there are also REPULSIVE gravitational particles G(+) and G(-), and they are responsible for the symmetrical distribution of the Dirac strings in the electrosphere of the proton, since they avoid that two gravitational Dirac strings attract one each other. So, according to Quantum Ring Theory, into the hydrogen atom (and any atom) there are repulsive gravitons, and they cause the expansion of the proton's electrosphere. Such repulsive gravity exists within the structure of the photon too.  It explains why the particle and the antiparticle into the photon's structure do not annihilate one each other. The figure bellow shows the particle and antiparticle surrounded by repulsive gravitational particles of the ether.  The particle and antiparticle move with helical trajectory, in contrary directions. 8. The evolution of Physics As one may realize, these strange conditions into the hydrogen atom, as the helical trajectory of the electron, as the expansion of the proton and electron electrospheres, as the changing of the electron's inertia with regard to the proton, and as the repulsive gravity, all together these strange conditions are responsible for an exotic behavior of the atom.  That's why the physicists did never understand its paradoxical behavior. In October 2008 the Brazillian publishing house Editora Bodigaya published my book entitled "A Evolução da Mecânica Quântica – o duelo Schrödinger versus Heisenberg".  The Telesio Galilei Academy of Science intended to publish it in Europe, in 2009, having a partnership with a publishing house in London.  It would be published with the title "The Missed U-Tur – the duel Schrödinger versus Heisenberg".  Unfortunatelly the publishing house faced troubles due to the financial crisis in 2008, and closed its doors, so  the book was not published in Europe. The book reports to the lay man the duel between Schrödinger and Heisenberg, concerning the helical trajectory (a duel which the academicians hide from everybody, and even the most physicists do not know about such duel).  
Ahead is an excerpt from the book, where I speak about the meaning of the Schrödinger equation, connected to the true mechanisms of the atom's working, discovered by me in 2004:

After discovering the mechanisms and laws that rule the behavior of electrons that move in helical trajectories in the hydrogen atom, I started to think about finding the mathematical equation that describes this model. This wasn't an easy task, because it involved complex phenomena, such as the contraction of the ether, the zoom-effect, and having two potentials that attract the electron, one being the proton and the other being the center of the helical trajectory. Finally, I realized that the equation describing this complex motion of the electron in the hydrogen atom had been found already; it is the Schrödinger equation. As a starting point, Schrödinger gained inspiration from the equation Bohr obtained for the energy levels of the hydrogen atom. Without knowing the true mechanism, shown here, by which the electron interacts with the proton in the hydrogen atom, he succeeded in finding the equation that describes the electron's behavior. Schrödinger started from some properties inherent to the helical trajectory, such as the wave-particle duality expressed by the de Broglie equation λ = h/p, and, by speculating on the form of the differential equation that the wave should have, he arrived at the final form of his famous equation, which gives the energy levels of an atom when it emits photons. The Schrödinger equation involves the use of imaginary numbers. An imaginary number is the square root of a negative number and, therefore, the square of an imaginary number is a negative number. This use of imaginary numbers in Schrödinger's equation reflects the complexity of the mechanisms to which the electron is subject in the hydrogen atom: the contraction of the space, the zoom-effect, the loss of inertia when the electron moves away from the proton, and so on. Actually, one has to be amazed by the path followed to achieve this scientific discovery, since Schrödinger discovered his equation in the 1920s and the true meaning of his equation, which connects it to physical reality, was discovered only in 2004.

Another book of mine that the Telesio Galilei Academy of Science intended to publish in Europe in 2009 was "The Evolution of Physics - the duel Newton versus Descartes". This book shows that the evolution of Physics has occurred by switching between two methods of investigation: one used by Descartes and the other used by Newton. Sometimes the Newtonian method failed and was replaced by the method of Descartes. For instance, the method used by Bohr when he discovered his hydrogen model was the method of Descartes. Before Bohr, the physicist Voigt tried to use the Newtonian method to discover a model of the atom, and the Newtonian method he used failed. So the Newtonian method was not efficient for the discovery of the laws discovered later by Bohr. However, Bohr's success was partial: after an initial success of the Cartesian method used by Bohr, the method failed, because Bohr's model was not correct. He discovered some correct laws, but not all of them. Then Schrödinger applied the Newtonian method again and made some corrections to the Bohr model. However, Schrödinger also failed, because he did not discover all the correct laws inside the atom, and therefore the Newtonian method failed again with Schrödinger.
Finally, the Cartesian method was used again in the development of Quantum Ring Theory, and the true laws were finally discovered. So the evolution of Physics occurs by the alternation of the Cartesian and Newtonian methods, and we are now in a stage that requires the Newtonian method to be applied again to the discoveries proposed in Quantum Ring Theory, which were found thanks to the use of the Cartesian method. This new stage is required because the Newtonian method, used by the theorists throughout the 20th century, has failed. The Newtonian method used in the 20th century was successful for the development of technology, but it has failed for the discovery of the true laws of Nature. Magnetic motors and cold fusion are evidence of the failure of Newton's method.

Here is an excerpt from the book:

Let us see the stages followed in the development of the model of the atom.

Stage 1 – First Voigt failed when applying the Newtonian method.

Stage 2 – After that, Bohr succeeded partially by applying the Cartesian method. In this way, he discovered some fundamental mechanisms of the atom, like the emission of photons when the electron changes from one orbit to another. The Cartesian method failed but opened a new path for the application of the Newtonian method starting from Bohr's discoveries.

Stage 3 – Schrödinger applied the Newtonian method again. He obtained a partial success, and the method of Newton failed again, as shown by the successes of Bohr and the Dehmelt experiment.

What comes later can be anticipated by the reader. For achieving total success, two stages are still needed to conclude the process.

Stage 4 – Re-apply the Cartesian method in order to discover what is missing in Quantum Mechanics, in a process similar to that used by Bohr.

Stage 5 – Re-apply the Newtonian method after stage 4, applying the mathematical formalism to confirm the discoveries made in stage 4.

At the present moment we are in stage 4. What is missing in Quantum Mechanics is shown in the new model of the atom proposed in Quantum Ring Theory. Stage 5 is still missing.

The hydrogen model of QRT explains several paradoxes of QM. For instance, according to the Schrödinger equation, when the electron is in the fundamental state within the hydrogen atom, its motion is purely radial. This means that the electron passes directly through the nucleus (the proton) and the oscillation occurs in any direction in space. Obviously such a motion is very strange, since the electron passes through the proton's body, which is absurd. But look at figure 6 to see what happens when we consider the model where the electron moves with a helical trajectory about the proton in the fundamental state:

• The real motion in the atom: the electron (green) moves about the proton with a helical trajectory, traveling the path 1-2-3-4-5-6-7-8-9-10-11-12-13-14-15-16-17.

• The interpretation of the motion according to the Schrödinger equation: such a motion looks as if the electron were traveling the path (1-2) => (16-17), apparently passing through the proton (the trajectory in red).

9. Cold fusion requires a New Physics

The book "Quantum Ring Theory - Foundations for Cold Fusion" was published in 2006. After its publication the cold fusion researchers neglected the theory, because of the strategy explained ahead.
Since 1989, when it was announced by Fleischmann and Pons, the cold fusion researchers have hoped over the years to get funds from the governments of several countries in order to continue developing cold fusion technology. But a cold fusion reaction is not possible according to the foundations of Quantum Mechanics, and so a cold fusion technology would require a New Physics for its development. It would be very hard to get funds for developing a technology that is impossible (according to the current theories) and that would require a New Physics. So the cold fusion researchers replaced the name "cold fusion" with LENR (Low Energy Nuclear Reactions), and started to claim that:

a) it actually was not cold fusion

b) there was no need of a New Physics to explain it

Well, the cold fusion technology developed by Eng. Andrea A. Rossi and Professor Sergio Focardi is ready, and the cold fusion reactors are ready to be sold. They do not need funds from governments to develop the technology. Therefore there is no need to avoid the correct name for the phenomenon, which is cold fusion, because two facts show that it is indeed fusion: the transmutation of elements, and the emission of neutrons above the neutron background. And since it occurs at room pressure and low temperature, of course it is cold fusion.

Besides, the experiments at the LHC are already showing that something is wrong with the current theories (from the data collected in 2010, Supersymmetry should have been confirmed, but the experiments have found nothing). Some physicists are already saying that the LHC will show the need for a New Physics. And so, if a New Physics is needed for the LHC, why not a New Physics to explain cold fusion too?

Quantum Ring Theory is a New Physics, and probably the principles proposed in the theory are the ones that will bring the explanation for cold fusion. Perhaps the foundations proposed in Quantum Ring Theory are wrong, and then another New Physics must be found. Nevertheless, whatever New Physics is found (in the hypothesis that QRT is wrong), it will have to satisfy some conditions already shown in QRT. For instance, the new hydrogen model of the New Physics must be able to explain the successes of Bohr (and the existence of the centripetal acceleration on the electron). A New Physics cannot live with paradoxes again, as happened throughout the 20th century.

Some experiments suggest that Quantum Ring Theory is correct. We can mention, for instance, the Dehmelt experiment, the Borghi and Conte-Pieralice experiments, and the recent experiment which confirmed the helical trajectory of the photon, published in July 2010. There are many other experiments suggesting that QRT is right. Of course much stronger evidence is needed to confirm the theory. But what will decide whether the theory is correct or wrong is investigation and the submission of its models to experiments.

by Wladimir Guglinski

18 hours NO SWINDLE

On the morning of February 10, the inventor and engineer Andrea Rossi initiated a new controlled experiment in Bologna, Italy, with the heat-producing 'energy catalyzer' that could possibly be based on cold fusion. With him was the physicist and researcher Giuseppe Levi from the University of Bologna, who also supervised the public demonstration in January. Together they ran the unit for 18 hours.
"It was extremely interesting. It is clear that this was an internal test that I needed to understand what parameters must be under control during a longer test, but frankly, I wanted to see the device work for hours," Levi told Ny Teknik "It was pretty impressive in some respects. First, the repeatability. This is the third time I've seen the device, and again it produces energy." "I weighed container before and after charging, and including the gas we let out to empty the tube of air, the consumption of hydrogen was 0.4 grams. That's nothing!" "Minimum power was 15 kilowatts, and that's a conservative value. I calculated it several times. At night we did a measurement and the device then worked very stable and produced 20 kilowatts." "Now that I have seen the device work for so many hours, in my view all chemical energy sources are excluded," said Giuseppe Levi. He explained that this time he chose to heat the water without boiling it, to avoid errors. Initially, the temperature of the inflowing water was seven degrees Celsius and for a while the outlet temperature was 40 degrees Celsius. A flow rate of about one liter per second, equates to a peak power of 130 kilowatts. The power output was later stabilized at 15 to 20 kilowatts. Levi explained that they did not have a peristaltic pump with sufficient flow, so instead the device was attached directly to the water tap. Therefore the flow was not constant, but by regularly noting the time and reading the input volume on a counter, he controlled the flow. At night the counter information was recorded with a camera. Levi is now planning more tests and a thorough analysis, before and after operation, of the nickel powder that the energy catalyst is loaded with. Giuseppe Levi has worked with Sergio Focardi, emeritus professor at the University of Bologna and Rossi's scientific adviser since four years. NYT: What would you tell those who doubt your independence? A one megawatt installation with Rossi's 'energy catalyzer', consisting of one hundred connected 10 kilowatt devices, is supposed to be inaugurated in October in Athens, Greece, run by the newly formed Greek company Defkalion Green Technologies Cold Fusion: Here's the Greek company building 1 MW Av: Mats Lewan Publicerad 7 februari 2011 11:02 6 kommentarer The Greek company that will build a heating plant of one megawatt with the Italian 'energy catalyzer' spoke about its activities and about the invention on Greek television this weekend. The Greek company is called Defkalion Green Technologies. According to Andrea Rossi – the inventor of the 'energy catalyzer' possibly based on cold fusion – it is a newly formed consortium. Partners include companies involved in energy distribution. On Saturday, the news about Defkalion Green Technologies was broadcasted by NET, national television in Greece (this recording of the show, now in high quality, has been uploaded by the blogger talefta.blogspot.com where there's also a summary in English). Last week, Ny Teknik made an interview with a representative of the company. "We represent a group with financial and industrial background having invested into Dr. Rossi's technology, and having obtained the exclusive rights to manufacture, license and distribute globally," Symeon Tsalikoglou, spokesman for Defkalion Green Technologies, wrote in an email to Ny Teknik. He also made clear that Defkalion will not sell or distribute energy. "Defkalion, as previously stated, will provide the market with devices producing heat, based on Dr. 
"Defkalion, as previously stated, will provide the market with devices producing heat, based on Dr. Rossi's invention. This heat can be utilized by CHP or micro-CHP installations to produce electricity and consumable heat for different applications."

"We are neither a power plant nor a utility company of any kind. As such, the one megawatt installation in Greece referred to by Dr. Rossi will consist of a combination of third-party technologies (such as CHP), whereby the Defkalion Energy Catalyzer will be used as the only source of power. This very installation of one megawatt in Greece will only cover the energy needs of Defkalion's factory," Symeon Tsalikoglou wrote.

Ny Teknik: How do you expect this new energy technology to develop in the future?

"This technology will not change the world energy usage overnight. Essentially we are introducing a new and impressive energy source that offers cheap, clean and renewable power to the end user. We are pursuing socially responsible pricing and market penetration. We are also taking into consideration the global energy trends and fitting well within the expected gradual energy transition," Symeon Tsalikoglou wrote. (If the 'energy catalyzer' is based on a nuclear reaction between nickel and hydrogen, it should not be considered renewable. Ny Teknik's note.)

According to Andrea Rossi, Defkalion will have the exclusive commercial rights to the 'energy catalyzer' in Greece, while two U.S. companies will have the corresponding rights in the United States. In the rest of the world Defkalion has the commercial rights, but not exclusively. "They focus on the commercial part and I on the technical development," Andrea Rossi explained. He also made clear that his company Leonardo Corporation does not seek funding at the moment.

Leonardo Corporation -- 116 South River Road -- Bedford, N.H. 03110 - USA. Phone +1 603 668 7000, Phone +1 603 674 1309, Fax +1 603 647 4325. E-mail: [email protected]. Manufacturing of electric generators (GENSETS) fueled by vegetable oils (Diesel combustion).

"We are fully funded by our customers," he said. Nor does Defkalion Green Technologies seem to seek funding. Defkalion Green Technologies states that it will hold a press conference within two months where further questions will be answered.

The reason why the European operations ended up in Greece depends, according to Professor Sergio Focardi (Rossi's scientific adviser for the past four years), on contacts with Professor Christos Stremmenos, who previously collaborated with Professor Focardi. "The bond between us and Greece is a professor of Greek origin whose name is Christos Stremmenos and who has been Ambassador of Greece in Italy. He knows people in the Greek government and has also worked in this field. We have known each other for years, and when he heard about these things, he involved Greece," Sergio Focardi told Ny Teknik.

In the news bulletin on Greek television on Saturday, Christos Stremmenos appeared. "We are headed away from the energy medieval ages. So it seems. It will depend on further research and the study of applications so that all levels where green energy is needed can be covered," Christos Stremmenos said.

Posted by u2r2h on Saturday, March 26, 2011
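As flagged above, the heat-output figures reported for the 18-hour run (water entering at 7 degrees Celsius and leaving at about 40 degrees Celsius at roughly one liter per second, then a sustained 15-20 kW) can be checked with a simple sensible-heat energy balance. The sketch below is only that arithmetic; the specific heat of water and the heating value of hydrogen used here are standard constants, not figures taken from the articles.

CP_WATER  = 4186.0    # J/(kg*K), specific heat of liquid water
RHO_WATER = 1.0       # kg per liter
H2_HHV    = 1.418e8   # J/kg, higher heating value of hydrogen (standard figure)

def water_heating_power_kw(flow_l_per_s, t_in_c, t_out_c):
    """Heat absorbed by the cooling water, in kilowatts: P = m_dot * cp * dT."""
    return flow_l_per_s * RHO_WATER * CP_WATER * (t_out_c - t_in_c) / 1e3

peak_kw = water_heating_power_kw(1.0, 7.0, 40.0)
print(f"Peak power, 7 -> 40 degC at 1 L/s: {peak_kw:.0f} kW")  # ~138 kW, close to the ~130 kW quoted

# Energy over the 18-hour run at the sustained level, versus the chemical
# energy of the 0.4 g of hydrogen Levi reports was consumed.
for p_kw in (15.0, 20.0):
    energy_mj = p_kw * 1e3 * 18 * 3600 / 1e6
    print(f"{p_kw:.0f} kW for 18 h = {energy_mj:.0f} MJ")       # 972 to 1296 MJ
print(f"0.4 g H2 burned completely: {0.4e-3 * H2_HHV / 1e3:.0f} kJ")  # ~57 kJ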
Atmospheric entry

Mars Exploration Rover (MER) aeroshell, artistic rendition

Atmospheric entry is the movement of an object into and through the gases of a planet's atmosphere from outer space. There are two main types of atmospheric entry: uncontrolled entry, such as in the entry of celestial objects, space debris or bolides, and controlled entry, such as the entry (or reentry) of technology capable of being navigated or following a predetermined course. Atmospheric drag and aerodynamic heating can cause atmospheric breakup capable of completely disintegrating smaller objects. These forces may cause objects with lower compressive strength to explode.

For Earth, atmospheric entry occurs above the Kármán Line at an altitude of more than 100 km above the surface, while Venus atmospheric entry occurs at 250 km and Mars atmospheric entry at about 80 km. Uncontrolled, objects accelerate through the atmosphere at extreme velocities under the influence of Earth's gravity. Most controlled objects enter at hypersonic speeds due to their suborbital (e.g. ICBM reentry vehicles), orbital (e.g. the Space Shuttle), or unbounded (e.g. meteors) trajectories. Various advanced technologies have been developed to enable atmospheric reentry and flight at extreme velocities. An alternative low velocity method of controlled atmospheric entry is buoyancy,[1] which is suitable for planetary entry where thick atmospheres, strong gravity or both factors complicate high-velocity hyperbolic entry, such as the atmospheres of Venus, Titan and the gas giants.[2]

Apollo Command Module flying at a high angle of attack for lifting entry, artistic rendition.

The concept of the ablative heat shield was described as early as 1920 by Robert Goddard: "In the case of meteors, which enter the atmosphere with speeds as high as 30 miles per second (48 km/s), the interior of the meteors remains cold, and the erosion is due, to a large extent, to chipping or cracking of the suddenly heated surface. For this reason, if the outer surface of the apparatus were to consist of layers of a very infusible hard substance with layers of a poor heat conductor between, the surface would not be eroded to any considerable extent, especially as the velocity of the apparatus would not be nearly so great as that of the average meteor."[3]

Practical development of reentry systems began as the range and reentry velocity of ballistic missiles increased. For early short-range missiles, like the V-2, stabilization and aerodynamic stress were important issues (many V-2s broke apart during reentry), but heating was not a serious problem. Medium-range missiles like the Soviet R-5, with a 1200 km range, required ceramic composite heat shielding on separable reentry vehicles (it was no longer possible for the entire rocket structure to survive reentry). The first ICBMs, with ranges of 8000 to 12,000 km, were only possible with the development of modern ablative heat shields and blunt-shaped vehicles. In the USA, this technology was pioneered by H. Julian Allen at Ames Research Center.[4]

Terminology, definitions and jargon

Over the decades since the 1950s, a rich technical jargon has grown around the engineering of vehicles designed to enter planetary atmospheres. It is recommended that the reader review the jargon glossary before continuing with this article on atmospheric reentry.
Blunt body entry vehicles

Various reentry shapes (NASA) using shadowgraphs to show high-velocity flow

These four shadowgraph images represent early reentry-vehicle concepts. A shadowgraph is a process that makes visible the disturbances that occur in a fluid flow at high velocity, in which light passing through a flowing fluid is refracted by the density gradients in the fluid, resulting in bright and dark areas on a screen placed behind the fluid.

In the United States, H. Julian Allen and A. J. Eggers, Jr. of the National Advisory Committee for Aeronautics (NACA) made the counterintuitive discovery in 1951[5] that a blunt shape (high drag) made the most effective heat shield. From simple engineering principles, Allen and Eggers showed that the heat load experienced by an entry vehicle was inversely proportional to the drag coefficient, i.e. the greater the drag, the less the heat load. If the reentry vehicle is made blunt, air cannot "get out of the way" quickly enough, and acts as an air cushion to push the shock wave and heated shock layer forward (away from the vehicle). Since most of the hot gases are no longer in direct contact with the vehicle, the heat energy would stay in the shocked gas and simply move around the vehicle to later dissipate into the atmosphere.

Entry vehicle shapes

There are several basic shapes used in designing entry vehicles:

Sphere or spherical section

The simplest axisymmetric shape is the sphere or spherical section.[citation needed] This can either be a complete sphere or a spherical section forebody with a converging conical afterbody. The aerodynamics of a sphere or spherical section are easy to model analytically using Newtonian impact theory. Likewise, the spherical section's heat flux can be accurately modeled with the Fay-Riddell equation.[7] The static stability of a spherical section is assured if the vehicle's center of mass is upstream from the center of curvature (dynamic stability is more problematic). Pure spheres have no lift. However, by flying at an angle of attack, a spherical section has modest aerodynamic lift, thus providing some cross-range capability and widening its entry corridor.

In the late 1950s and early 1960s, high-speed computers were not yet available and computational fluid dynamics was still embryonic. Because the spherical section was amenable to closed-form analysis, that geometry became the default for conservative design. Consequently, manned capsules of that era were based upon the spherical section. Pure spherical entry vehicles were used in the early Soviet Vostok and Voskhod and in Soviet Mars and Venera descent vehicles. The Apollo Command Module used a spherical section forebody heatshield with a converging conical afterbody. It flew a lifting entry with a hypersonic trim angle of attack of −27° (0° is blunt-end first) to yield an average L/D (lift-to-drag ratio) of 0.368.[8] This angle of attack was achieved by precisely offsetting the vehicle's center of mass from its axis of symmetry. Other examples of the spherical section geometry in manned capsules are Soyuz/Zond, Gemini and Mercury. Even these small amounts of lift allow trajectories that have very significant effects on peak g-force (reducing g-force from 8-9 g for a purely ballistic (slowed only by drag) trajectory to 4-5 g) as well as greatly reducing the peak reentry heat.[citation needed]

Galileo Probe during final assembly

The sphere-cone is a spherical section with a frustum or blunted cone attached.
The sphere-cone's dynamic stability is typically better than that of a spherical section. With a sufficiently small half-angle and properly placed center of mass, a sphere-cone can provide aerodynamic stability from Keplerian entry to surface impact. (The "half-angle" is the angle between the cone's axis of rotational symmetry and its outer surface, and thus half the angle made by the cone's surface edges.)

Re-entry system of the LGM-30 Minuteman ICBM

The original American sphere-cone aeroshell was the Mk-2 RV (reentry vehicle), which was developed in 1955 by the General Electric Corp. The Mk-2's design was derived from blunt-body theory and used a radiatively cooled thermal protection system (TPS) based upon a metallic heat shield (the different TPS types are described later in this article). The Mk-2 had significant defects as a weapon delivery system, i.e., it loitered too long in the upper atmosphere due to its lower ballistic coefficient and also trailed a stream of vaporized metal, making it very visible to radar. These defects made the Mk-2 overly susceptible to anti-ballistic missile (ABM) systems. Consequently, an alternative sphere-cone RV to the Mk-2 was developed by General Electric.[citation needed]

Mk-6 RV, Cold War weapon and ancestor to most of NASA's entry vehicles

This new RV was the Mk-6, which used a non-metallic ablative TPS (nylon phenolic). This new TPS was so effective as a reentry heat shield that significantly reduced bluntness was possible.[citation needed] However, the Mk-6 was a huge RV, with an entry mass of 3360 kg, a length of 3.1 meters and a half-angle of 12.5°. Subsequent advances in nuclear weapon and ablative TPS design allowed RVs to become significantly smaller, with a further reduced bluntness ratio compared to the Mk-6. Since the 1960s, the sphere-cone has become the preferred geometry for modern ICBM RVs, with typical half-angles between 10° and 11°.[citation needed]

"Discoverer" type reconnaissance satellite film Recovery Vehicle (RV)

Reconnaissance satellite RVs (recovery vehicles) also used a sphere-cone shape and were the first American example of a non-munition entry vehicle (Discoverer-I, launched on 28 February 1959). The sphere-cone was later used for space exploration missions to other celestial bodies or for return from open space; e.g., the Stardust probe. Unlike with military RVs, the advantage of the blunt body's lower TPS mass remained with space exploration entry vehicles like the Galileo Probe, with a half angle of 45°, or the Viking aeroshell, with a half angle of 70°. Space exploration sphere-cone entry vehicles have landed on the surface or entered the atmospheres of Mars, Venus, Jupiter and Titan.

The biconic is a sphere-cone with an additional frustum attached. The biconic offers a significantly improved L/D ratio. A biconic designed for Mars aerocapture typically has an L/D of approximately 1.0, compared to an L/D of 0.368 for the Apollo-CM. The higher L/D makes a biconic shape better suited for transporting people to Mars due to the lower peak deceleration. Arguably, the most significant biconic ever flown was the Advanced Maneuverable Reentry Vehicle (AMaRV). Four AMaRVs were made by the McDonnell-Douglas Corp. and represented a significant leap in RV sophistication. Three of the AMaRVs were launched by Minuteman-1 ICBMs on 20 December 1979, 8 October 1980 and 4 October 1981.
AMaRV had an entry mass of approximately 470 kg, a nose radius of 2.34 cm, a forward frustum half-angle of 10.4°, an inter-frustum radius of 14.6 cm, an aft frustum half-angle of 6°, and an axial length of 2.079 meters. No accurate diagram or picture of AMaRV has ever appeared in the open literature. However, a schematic sketch of an AMaRV-like vehicle along with trajectory plots showing hairpin turns has been published.[9]

The DC-X, shown during its first flight, was a prototype single stage to orbit vehicle, and used a biconic shape similar to AMaRV.

Opportunity rover's heat shield lying inverted on the surface of Mars.

AMaRV's attitude was controlled through a split body flap (also called a "split-windward flap") along with two yaw flaps mounted on the vehicle's sides. Hydraulic actuation was used for controlling the flaps. AMaRV was guided by a fully autonomous navigation system designed for evading anti-ballistic missile (ABM) interception. The McDonnell Douglas DC-X (also a biconic) was essentially a scaled up version of AMaRV. AMaRV and the DC-X also served as the basis for an unsuccessful proposal for what eventually became the Lockheed Martin X-33. Amongst aerospace engineers, AMaRV has achieved legendary status alongside such marvels as the SR-71 Blackbird and the Saturn V Rocket.[citation needed]

Non-axisymmetric shapes

Non-axisymmetric shapes have been used for manned entry vehicles. One example is the winged orbit vehicle that uses a delta wing for maneuvering during descent much like a conventional glider. This approach has been used by the American Space Shuttle and the Soviet Buran. The lifting body is another entry vehicle geometry and was used with the X-23 PRIME (Precision Recovery Including Maneuvering Entry) vehicle.[citation needed]

The FIRST (Fabrication of Inflatable Re-entry Structures for Test) system was an Aerojet proposal for an inflated-spar Rogallo wing made up from Inconel wire cloth impregnated with silicone rubber and silicon carbide dust. FIRST was proposed in both one-man and six-man versions, used for emergency escape and reentry of stranded space station crews, and was based on an earlier unmanned test program that resulted in a partially successful reentry flight from space (the launcher nose cone fairing hung up on the material, dragging it too low and fast for the TPS, but otherwise it appears the concept would have worked; even with the fairing dragging it, the test article flew stably on reentry until burn-through).[citation needed] The proposed MOOSE system would have used a one-man inflatable ballistic capsule as an emergency astronaut entry vehicle. This concept was carried further by the Douglas Paracone project. While these concepts were unusual, the inflated shape on reentry was in fact axisymmetric.[citation needed]

Shock layer gas physics

An approximate rule-of-thumb used by heat shield designers for estimating peak shock layer temperature is to assume the air temperature in kelvins to be equal to the entry speed in meters per second[citation needed] (a mathematical coincidence). For example, a spacecraft entering the atmosphere at 7.8 km/s would experience a peak shock layer temperature of 7800 K. This is unexpected, since the kinetic energy increases with the square of the velocity, and can only occur because the specific heat of the gas increases greatly with temperature (unlike the nearly constant specific heat assumed for solids under ordinary conditions).
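The "unexpected" character of this rule of thumb can be made concrete by comparing it with what a constant-specific-heat (calorically perfect) gas would predict for the stagnation temperature, T0 = T_inf + v^2/(2*cp). The minimal sketch below does that comparison; the values assumed for cp and the free-stream temperature are typical textbook figures, not numbers from this article.

CP_AIR = 1005.0   # J/(kg*K), constant-cp value for air
T_INF  = 220.0    # K, rough free-stream temperature at entry altitudes

def rule_of_thumb_K(v_m_per_s):
    """Designers' shortcut quoted above: temperature in kelvins ~ speed in m/s."""
    return v_m_per_s

def constant_cp_stagnation_K(v_m_per_s):
    """Naive calorically-perfect-gas stagnation temperature."""
    return T_INF + v_m_per_s**2 / (2.0 * CP_AIR)

for v in (7800.0, 11000.0):   # LEO return and lunar return
    print(f"v = {v/1000:.1f} km/s: rule of thumb ~{rule_of_thumb_K(v):.0f} K, "
          f"constant-cp estimate ~{constant_cp_stagnation_K(v):.0f} K")

# The constant-cp figure (~30,000 K at 7.8 km/s) vastly overshoots the ~7800 K
# rule of thumb: much of the kinetic energy goes into dissociation and
# ionization, so the effective specific heat grows with temperature.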
At typical reentry temperatures, the air in the shock layer is both ionized and dissociated. This chemical dissociation necessitates various physical models to describe the shock layer's thermal and chemical properties. There are four basic physical models of a gas that are important to aeronautical engineers who design heat shields:

Perfect gas model

Almost all aeronautical engineers are taught the perfect (ideal) gas model during their undergraduate education. Most of the important perfect gas equations along with their corresponding tables and graphs are shown in NACA Report 1135.[10] Excerpts from NACA Report 1135 often appear in the appendices of thermodynamics textbooks and are familiar to most aeronautical engineers who design supersonic aircraft. The perfect gas theory is elegant and extremely useful for designing aircraft, but assumes the gas is chemically inert. From the standpoint of aircraft design, air can be assumed to be inert for temperatures less than 550 K at one atmosphere pressure. The perfect gas theory begins to break down at 550 K and is not usable at temperatures greater than 2000 K. For temperatures greater than 2000 K, a heat shield designer must use a real gas model.

Real (equilibrium) gas model

An entry vehicle's pitching moment can be significantly influenced by real-gas effects. Both the Apollo-CM and the Space Shuttle were designed using incorrect pitching moments determined through inaccurate real-gas modeling. The Apollo-CM's trim angle of attack was higher than originally estimated, resulting in a narrower lunar return entry corridor. The actual aerodynamic center of the Columbia was upstream from the calculated value due to real-gas effects. On Columbia's maiden flight (STS-1), astronauts John W. Young and Robert Crippen had some anxious moments during reentry when there was concern about losing control of the vehicle.[11]

An equilibrium real-gas model assumes that a gas is chemically reactive, but also assumes all chemical reactions have had time to complete and all components of the gas have the same temperature (this is called thermodynamic equilibrium). When air is processed by a shock wave, it is superheated by compression and chemically dissociates through many different reactions. Direct friction upon the reentry object is not the main cause of shock-layer heating. It is caused mainly by isentropic heating of the air molecules within the compression wave. Friction based entropy increases of the molecules within the wave also account for some heating.[original research?]

The distance from the shock wave to the stagnation point on the entry vehicle's leading edge is called shock wave standoff. An approximate rule of thumb for shock wave standoff distance is 0.14 times the nose radius. One can estimate the time of travel for a gas molecule from the shock wave to the stagnation point by assuming a free stream velocity of 7.8 km/s and a nose radius of 1 meter, i.e., time of travel is about 18 microseconds. This is roughly the time required for shock-wave-initiated chemical dissociation to approach chemical equilibrium in a shock layer for a 7.8 km/s entry into air during peak heat flux. Consequently, as air approaches the entry vehicle's stagnation point, the air effectively reaches chemical equilibrium, thus enabling an equilibrium model to be usable. For this case, most of the shock layer between the shock wave and leading edge of an entry vehicle is chemically reacting and not in a state of equilibrium.
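The standoff-distance rule of thumb and the 18-microsecond transit-time estimate quoted above are easy to reproduce; the short sketch below just restates that arithmetic.

STANDOFF_FACTOR = 0.14   # standoff distance ~ 0.14 * nose radius (rule of thumb above)

def standoff_m(nose_radius_m):
    return STANDOFF_FACTOR * nose_radius_m

def transit_time_us(nose_radius_m, v_m_per_s):
    """Rough time for a gas molecule to travel from the shock to the stagnation point."""
    return standoff_m(nose_radius_m) / v_m_per_s * 1e6

print(f"standoff     = {standoff_m(1.0):.2f} m")                  # 0.14 m for a 1 m nose
print(f"transit time = {transit_time_us(1.0, 7800.0):.0f} us")    # ~18 microseconds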
The Fay-Riddell equation,[7] which is of extreme importance towards modeling heat flux, owes its validity to the stagnation point being in chemical equilibrium. The time required for the shock layer gas to reach equilibrium is strongly dependent upon the shock layer's pressure. For example, in the case of the Galileo Probe's entry into Jupiter's atmosphere, the shock layer was mostly in equilibrium during peak heat flux due to the very high pressures experienced (this is counterintuitive given the free stream velocity was 39 km/s during peak heat flux).

Determining the thermodynamic state of the stagnation point is more difficult under an equilibrium gas model than a perfect gas model. Under a perfect gas model, the ratio of specific heats (also called "isentropic exponent", adiabatic index, "gamma" or "kappa") is assumed to be constant along with the gas constant. For a real gas, the ratio of specific heats can wildly oscillate as a function of temperature. Under a perfect gas model there is an elegant set of equations for determining thermodynamic state along a constant entropy stream line called the isentropic chain. For a real gas, the isentropic chain is unusable and a Mollier diagram would be used instead for manual calculation. However, graphical solution with a Mollier diagram is now considered obsolete with modern heat shield designers using computer programs based upon a digital lookup table (another form of Mollier diagram) or a chemistry based thermodynamics program.

The chemical composition of a gas in equilibrium with fixed pressure and temperature can be determined through the Gibbs free energy method. Gibbs free energy is simply the total enthalpy of the gas minus its total entropy times temperature. A chemical equilibrium program normally does not require chemical formulas or reaction-rate equations. The program works by preserving the original elemental abundances specified for the gas and varying the different molecular combinations of the elements through numerical iteration until the lowest possible Gibbs free energy is calculated (a Newton-Raphson method is the usual numerical scheme). The data base for a Gibbs free energy program comes from spectroscopic data used in defining partition functions. Among the best equilibrium codes in existence is the program Chemical Equilibrium with Applications (CEA) which was written by Bonnie J. McBride and Sanford Gordon at NASA Lewis (now renamed "NASA Glenn Research Center"). Other names for CEA are the "Gordon and McBride Code" and the "Lewis Code". CEA is quite accurate up to 10,000 K for planetary atmospheric gases, but unusable beyond 20,000 K (double ionization is not modeled). CEA can be downloaded from the Internet along with full documentation and will compile on Linux under the G77 Fortran compiler.

Real (non-equilibrium) gas model

A non-equilibrium real gas model is the most accurate model of a shock layer's gas physics, but is more difficult to solve than an equilibrium model. The simplest non-equilibrium model is the Lighthill-Freeman model.[12][13] The Lighthill-Freeman model initially assumes a gas made up of a single diatomic species susceptible to only one chemical formula and its reverse; e.g., N2 → N + N and N + N → N2 (dissociation and recombination). Because of its simplicity, the Lighthill-Freeman model is a useful pedagogical tool, but is unfortunately too simple for modeling non-equilibrium air.
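As a rough illustration of what the Lighthill-Freeman model looks like in practice, the sketch below integrates the Freeman rate equation for a single dissociating diatomic species (A2 dissociating to 2A and recombining), in the form it is usually written in hypersonics texts. All constants are illustrative placeholders rather than fitted nitrogen data, so the output only shows the qualitative relaxation of the dissociation fraction toward its equilibrium value.

import math

THETA_D = 113000.0   # K, characteristic dissociation temperature (illustrative)
RHO_D   = 1.3e5      # kg/m^3, characteristic density (illustrative)
C_RATE  = 1.0e24     # rate constant (illustrative)
ETA     = -2.5       # temperature exponent (illustrative)

def dalpha_dt(alpha, rho, T):
    """Net dissociation rate: forward dissociation minus recombination."""
    forward = (1.0 - alpha) * math.exp(-THETA_D / T)
    reverse = (rho / RHO_D) * alpha**2
    return C_RATE * rho * T**ETA * (forward - reverse)

# Crude forward-Euler march at fixed density and temperature (a frozen flow
# field), just to show alpha relaxing toward equilibrium behind a shock.
alpha, rho, T, dt, steps = 0.0, 1.0e-2, 6000.0, 1.0e-8, 5000
for _ in range(steps):
    alpha += dalpha_dt(alpha, rho, T) * dt

print(f"alpha after {steps * dt * 1e6:.0f} microseconds: {alpha:.3f}")
# approaches the equilibrium value (about 0.25 for these illustrative inputs)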
Air is typically assumed to have a mole fraction composition of 0.7812 molecular nitrogen, 0.2095 molecular oxygen and 0.0093 argon. The simplest real gas model for air is the five species model, which is based upon N2, O2, NO, N and O. The five species model assumes no ionization and ignores trace species like carbon dioxide. When running a Gibbs free energy equilibrium program, the iterative process from the originally specified molecular composition to the final calculated equilibrium composition is essentially random and not time accurate. With a non-equilibrium program, the computation process is time accurate and follows a solution path dictated by chemical and reaction rate formulas. The five species model has 17 chemical formulas (34 when counting reverse formulas). The Lighthill-Freeman model is based upon a single ordinary differential equation and one algebraic equation. The five species model is based upon 5 ordinary differential equations and 17 algebraic equations. Because the 5 ordinary differential equations are loosely coupled, the system is numerically "stiff" and difficult to solve. The five species model is only usable for entry from low Earth orbit, where entry velocity is approximately 7.8 km/s. For lunar return entry of 11 km/s, the shock layer contains a significant amount of ionized nitrogen and oxygen. The five species model is no longer accurate and a twelve species model must be used instead. High speed Mars entry, which involves a carbon dioxide, nitrogen and argon atmosphere, is even more complex, requiring a 19 species model.

An important aspect of modeling non-equilibrium real gas effects is radiative heat flux. If a vehicle is entering an atmosphere at very high speed (hyperbolic trajectory, lunar return) and has a large nose radius, then radiative heat flux can dominate TPS heating. Radiative heat flux during entry into an air or carbon dioxide atmosphere typically comes from unsymmetric diatomic molecules; e.g., cyanogen (CN), carbon monoxide, nitric oxide (NO), single ionized molecular nitrogen, et cetera. These molecules are formed by the shock wave dissociating ambient atmospheric gas, followed by recombination within the shock layer into new molecular species. The newly formed diatomic molecules initially have a very high vibrational temperature that efficiently transforms the vibrational energy into radiant energy, i.e., radiative heat flux. The whole process takes place in less than a millisecond, which makes modeling a challenge. The experimental measurement of radiative heat flux (typically done with shock tubes) along with theoretical calculation through the unsteady Schrödinger equation are among the more esoteric aspects of aerospace engineering. Most of the aerospace research work related to understanding radiative heat flux was done in the 1960s, but largely discontinued after conclusion of the Apollo Program. Radiative heat flux in air was just sufficiently understood to ensure Apollo's success. However, radiative heat flux in carbon dioxide (Mars entry) is still barely understood and will require major research.

Frozen gas model
The frozen gas model describes a special case of a gas that is not in equilibrium. The name "frozen gas" can be misleading: a frozen gas is not "frozen" like ice is frozen water; rather, it is "frozen" in time (all chemical reactions are assumed to have stopped). Chemical reactions are normally driven by collisions between molecules. If gas pressure is slowly reduced such that chemical reactions can continue, then the gas can remain in equilibrium. However, it is possible for gas pressure to be so suddenly reduced that almost all chemical reactions stop. For that situation the gas is considered frozen. The distinction between equilibrium and frozen is important because it is possible for a gas such as air to have significantly different properties (speed-of-sound, viscosity, et cetera) for the same thermodynamic state; e.g., pressure and temperature.

Frozen gas can be a significant issue in the wake behind an entry vehicle. During reentry, free stream air is compressed to high temperature and pressure by the entry vehicle's shock wave. Non-equilibrium air in the shock layer is then transported past the entry vehicle's leading side into a region of rapidly expanding flow that causes freezing. The frozen air can then be entrained into a trailing vortex behind the entry vehicle. Correctly modeling the flow in the wake of an entry vehicle is very difficult. Thermal protection system (TPS) heating in the vehicle's afterbody is usually not very high, but the geometry and unsteadiness of the vehicle's wake can significantly influence aerodynamics (pitching moment) and particularly dynamic stability.

Thermal protection systems

A thermal protection system or TPS is the barrier that protects a spacecraft during the searing heat of atmospheric reentry. A secondary goal may be to protect the spacecraft from the heat and cold of space while on orbit. Multiple approaches for the thermal protection of spacecraft are in use, among them ablative heat shields, passive cooling and active cooling of spacecraft surfaces.

Ablative heat shield (after use) on Apollo 12 capsule

The ablative heat shield functions by lifting the hot shock layer gas away from the heat shield's outer wall (creating a cooler boundary layer). The boundary layer comes from blowing of gaseous reaction products from the heat shield material and provides protection against all forms of heat flux. The overall process of reducing the heat flux experienced by the heat shield's outer wall by way of a boundary layer is called blockage. Ablation occurs at two levels in an ablative TPS: the outer surface of the TPS material chars, melts, and sublimes, while the bulk of the TPS material undergoes pyrolysis and expels product gases. The gas produced by pyrolysis is what drives blowing and causes blockage of convective and catalytic heat flux. Pyrolysis can be measured in real time using thermogravimetric analysis, so that the ablative performance can be evaluated.[14] Ablation can also provide blockage against radiative heat flux by introducing carbon into the shock layer, thus making it optically opaque. Radiative heat flux blockage was the primary thermal protection mechanism of the Galileo Probe TPS material (carbon phenolic). Carbon phenolic was originally developed as a rocket nozzle throat material (used in the Space Shuttle Solid Rocket Booster) and for re-entry vehicle nose tips. Early research on ablation technology in the USA was centered at NASA's Ames Research Center located at Moffett Field, California. Ames Research Center was ideal, since it had numerous wind tunnels capable of generating varying wind velocities.
Initial experiments typically mounted a mock-up of the ablative material to be analyzed within a hypersonic wind tunnel.[15]

Mars Pathfinder during final assembly showing the aeroshell, cruise ring and solid rocket motor

The thermal conductivity of a particular TPS material is usually proportional to the material's density.[16] Carbon phenolic is a very effective ablative material, but it also has a high density, which is undesirable. If the heat flux experienced by an entry vehicle is insufficient to cause pyrolysis, then the TPS material's conductivity could allow heat flux conduction into the TPS bondline material, thus leading to TPS failure. Consequently, for entry trajectories causing lower heat flux, carbon phenolic is sometimes inappropriate, and lower-density TPS materials such as the following examples can be better design choices:

SLA in SLA-561V stands for super light-weight ablator. SLA-561V is a proprietary ablative made by Lockheed Martin that has been used as the primary TPS material on all of the 70° sphere-cone entry vehicles sent by NASA to Mars other than the Mars Science Laboratory (MSL). SLA-561V begins significant ablation at a heat flux of approximately 110 W/cm², but will fail for heat fluxes greater than 300 W/cm². The MSL aeroshell TPS is currently designed to withstand a peak heat flux of 234 W/cm². The peak heat flux experienced by the Viking-1 aeroshell which landed on Mars was 21 W/cm². For Viking-1, the TPS acted as a charred thermal insulator and never experienced significant ablation. Viking-1 was the first Mars lander and was based upon a very conservative design. The Viking aeroshell had a base diameter of 3.54 meters (the largest used on Mars until Mars Science Laboratory). SLA-561V is applied by packing the ablative material into a honeycomb core that is pre-bonded to the aeroshell's structure, thus enabling construction of a large heat shield.[17]

NASA's Stardust sample return capsule successfully landed at the USAF Utah Range.

Phenolic impregnated carbon ablator (PICA), a carbon fiber preform impregnated with phenolic resin,[18] is a modern TPS material with the advantages of low density (much lighter than carbon phenolic) coupled with efficient ablative capability at high heat flux. It is a good choice for ablative applications such as the high-peak-heating conditions found on sample-return missions or lunar-return missions. PICA's thermal conductivity is lower than that of other high-heat-flux ablative materials, such as conventional carbon phenolics.[citation needed] PICA was patented by NASA Ames Research Center in the 1990s and was the primary TPS material for the Stardust aeroshell.[19] The Stardust sample-return capsule was the fastest man-made object ever to reenter Earth's atmosphere (12.4 km/s or 28,000 mph at 135 km altitude). This was faster than the Apollo mission capsules and 70% faster than the Shuttle.[20] PICA was critical for the viability of the Stardust mission, which returned to Earth in 2006. Stardust's heat shield (0.81 m base diameter) was manufactured from a single monolithic piece sized to withstand a nominal peak heating rate of 1.2 kW/cm² (1200 W/cm²).
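The heat-flux figures quoted above can be collected into a tiny design-envelope check. The numeric values below are the ones given in the text; the threshold logic itself is only an illustrative sketch, not an actual TPS sizing procedure.

SLA_561V_ABLATION_ONSET = 110.0    # W/cm^2, significant ablation begins (from the text)
SLA_561V_FAILURE        = 300.0    # W/cm^2, material fails above this (from the text)

mission_peak_flux = {              # W/cm^2, peak heating values quoted above
    "Viking-1":                21.0,
    "Mars Science Laboratory": 234.0,
    "Stardust (PICA)":         1200.0,
}

def sla561v_assessment(q_peak):
    if q_peak > SLA_561V_FAILURE:
        return "outside the SLA-561V envelope; a denser ablator such as PICA is needed"
    if q_peak >= SLA_561V_ABLATION_ONSET:
        return "SLA-561V ablates significantly"
    return "SLA-561V acts mainly as a charring insulator"

for mission, q in mission_peak_flux.items():
    print(f"{mission}: {q:.0f} W/cm^2 -> {sla561v_assessment(q)}")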
A PICA heat shield has also been used for the Mars Science Laboratory entry into the Martian atmosphere.[21] An improved and easier to manufacture version called PICA-X was developed by SpaceX in 2006-2010[21] for the Dragon space capsule.[22] The first re-entry test of a PICA-X heatshield was on the Dragon C1 mission on 8 December 2010.[23] The PICA-X heat shield was designed, developed and fully qualified by a small team of only a dozen engineers and technicians in less than four years.[21] PICA-X is ten times less expensive to manufacture than the NASA PICA heat shield material.[24] The Dragon 1 spacecraft initially used PICA-X version 1 and was later equipped with version 2. The Dragon V2 spacecraft uses PICA-X version 3. SpaceX has indicated that each new version of PICA-X primarily improves upon heat shielding capacity rather than the manufacturing cost.

Deep Space 2 impactor aeroshell, a classic 45° sphere-cone with spherical section afterbody enabling aerodynamic stability from atmospheric entry to surface impact

Silicone-impregnated reusable ceramic ablator (SIRCA) was also developed at NASA Ames Research Center and was used on the Backshell Interface Plate (BIP) of the Mars Pathfinder and Mars Exploration Rover (MER) aeroshells. The BIP was at the attachment points between the aeroshell's backshell (also called the afterbody or aft cover) and the cruise ring (also called the cruise stage). SIRCA was also the primary TPS material for the unsuccessful Deep Space 2 (DS/2) Mars impactor probes with their 0.35 m base diameter aeroshells. SIRCA is a monolithic, insulating material that can provide thermal protection through ablation. It is the only TPS material that can be machined to custom shapes and then applied directly to the spacecraft. There is no post-processing, heat treating, or additional coatings required (unlike Space Shuttle tiles). Since SIRCA can be machined to precise shapes, it can be applied as tiles, leading edge sections, full nose caps, or in any number of custom shapes or sizes. As of 1996, SIRCA had been demonstrated in backshell interface applications, but not yet as a forebody TPS material.[25]

AVCOAT is a NASA-specified ablative heat shield, a glass-filled epoxy-novolac system.[26] NASA originally used it for the Apollo capsule and then utilized the material for its next-generation beyond low Earth-orbit Orion spacecraft.[27] The Avcoat to be used on Orion has been reformulated to meet environmental legislation that has been passed since the end of Apollo.[28][29]

Thermal soak

Astronaut Andrew S. W. Thomas takes a close look at TPS tiles underneath Space Shuttle Atlantis.

Rigid black LI-900 tiles were used on the Space Shuttle.

Thermal soak is a part of almost all TPS schemes. For example, an ablative heat shield loses most of its thermal protection effectiveness when the outer wall temperature drops below the minimum necessary for pyrolysis. From that time to the end of the heat pulse, heat from the shock layer convects into the heat shield's outer wall and would eventually conduct to the payload.[citation needed] This outcome is prevented by ejecting the heat shield (with its heat soak) prior to the heat conducting to the inner wall. Typical Space Shuttle TPS tiles (LI-900) have remarkable thermal protection properties. An LI-900 tile exposed to a temperature of 1000 K on one side will remain merely warm to the touch on the other side. However, they are relatively brittle and break easily, and cannot survive in-flight rain.
Passively cooled

In some early ballistic missile RVs (e.g., the Mk-2) and the sub-orbital Mercury spacecraft, radiatively cooled TPS was used to initially absorb heat flux during the heat pulse and then, after the heat pulse, radiate and convect the stored heat back into the atmosphere. However, the earlier version of this technique required a considerable quantity of metal TPS (e.g., titanium, beryllium, copper, etc.). Modern designers prefer to avoid this added mass by using ablative and thermal-soak TPS instead.

[Image: The Mercury capsule design (shown with escape tower) originally used a radiatively cooled TPS, but was later converted to an ablative TPS.]

Radiatively cooled TPS can still be found on modern entry vehicles, but reinforced carbon-carbon (RCC, also called carbon-carbon) is normally used instead of metal. RCC is the TPS material on the Space Shuttle's nose cone and wing leading edges. RCC was also proposed as the leading-edge material for the X-33. Carbon is the most refractory material known, with a one-atmosphere sublimation temperature of 3825 °C for graphite. This high temperature made carbon an obvious choice as a radiatively cooled TPS material. Disadvantages of RCC are that it is currently very expensive to manufacture and that it lacks impact resistance.[30]

Some high-velocity aircraft, such as the SR-71 Blackbird and Concorde, deal with heating similar to that experienced by spacecraft, but at much lower intensity and for hours at a time. Studies of the SR-71's titanium skin revealed that the metal structure was restored to its original strength through annealing due to aerodynamic heating. In the case of Concorde, the aluminum nose was permitted to reach a maximum operating temperature of 127 °C (typically about 180 °C warmer than the sub-zero ambient air); the metallurgical implications (loss of temper) that would be associated with a higher peak temperature were the most significant factors determining the top speed of the aircraft.

A radiatively cooled TPS for an entry vehicle is often called a hot-metal TPS. Early TPS designs for the Space Shuttle called for a hot-metal TPS based upon a nickel superalloy (René 41) and titanium shingles.[31] This earlier Shuttle TPS concept was rejected because it was believed a silica-tile-based TPS would offer lower development and manufacturing costs.[citation needed] A nickel-superalloy shingle TPS was again proposed for the unsuccessful X-33 single-stage-to-orbit (SSTO) prototype.[32]

Recently, newer radiatively cooled TPS materials have been developed that could be superior to RCC. Referred to by the name of their prototype vehicle, the Slender Hypervelocity Aerothermodynamic Research Probe (SHARP), these TPS materials have been based upon substances such as zirconium diboride and hafnium diboride. SHARP TPS materials offer suggested performance improvements allowing sustained Mach 7 flight at sea level, Mach 11 flight at 100,000 ft (30,000 m) altitudes, and significant improvements for vehicles designed for continuous hypersonic flight. SHARP TPS materials enable sharp leading edges and nose cones that greatly reduce drag for air-breathing, combined-cycle-propelled space planes and lifting bodies. SHARP materials have exhibited effective TPS characteristics from zero to more than 2,000 °C, with melting points over 3,500 °C. They are structurally stronger than RCC and thus do not require structural reinforcement with materials such as Inconel.
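The benefit of such high-temperature materials can be gauged with a simple radiative-equilibrium estimate: the surface heats up until it re-radiates the incoming convective flux, so the allowable heat flux grows steeply with the material's service temperature. A minimal sketch; the emissivity and the heat-flux values are illustrative assumptions, not data for any particular material.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_equilibrium_temp(q_wcm2, emissivity=0.85):
    """Wall temperature at which re-radiation balances the incoming heat flux.

    q_wcm2     : incident convective heat flux in W/cm^2
    emissivity : surface emissivity (assumed value)
    """
    q = q_wcm2 * 1.0e4                      # W/cm^2 -> W/m^2
    return (q / (emissivity * SIGMA)) ** 0.25

for q in (5, 25, 100):                      # illustrative heat-flux levels, W/cm^2
    print(f"{q:>4} W/cm^2 -> ~{radiative_equilibrium_temp(q):.0f} K")
```

Because the equilibrium temperature scales as the fourth root of the flux, a material that tolerates roughly twice the wall temperature can passively shed on the order of sixteen times the heat flux, which is why refractory materials such as RCC and the diborides are attractive for hot structures.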
SHARP materials are extremely efficient at re-radiating absorbed heat, thus eliminating the need for additional TPS behind and between SHARP materials and conventional vehicle structure. NASA initially funded (and later discontinued) a multi-phase R&D program through the University of Montana in 2001 to test SHARP materials on test vehicles.[33][34]

Actively cooled

Various advanced reusable spacecraft and hypersonic aircraft designs have been proposed to employ heat shields made from temperature-resistant metal alloys that incorporate a refrigerant or cryogenic fuel circulating through them. Such a TPS concept was proposed for the X-30 National Aerospace Plane (NASP). The NASP was supposed to have been a scramjet-powered hypersonic aircraft, but it failed in development.

In the early 1960s various TPS systems were proposed that would use water or another cooling liquid sprayed into the shock layer, or passed through channels in the heat shield. Advantages included the possibility of more all-metal designs, which would be cheaper to develop, be more rugged, and eliminate the need for classified technology. The disadvantages are increased weight and complexity, and lower reliability. The concept has never been flown, but a similar technology (the plug nozzle[35]) did undergo extensive ground testing.

Feathered reentry

In 2004, aircraft designer Burt Rutan demonstrated the feasibility of a shape-changing airfoil for reentry with the suborbital SpaceShipOne. The wings on this craft rotate upward into the feather configuration, which provides a shuttlecock effect. Thus SpaceShipOne achieves much more aerodynamic drag on reentry while not experiencing significant thermal loads. The configuration increases drag, as the craft is now less streamlined, and results in more atmospheric gas particles hitting the spacecraft at higher altitudes than would otherwise be the case. The aircraft thus slows down more in the higher atmospheric layers, which is the key to efficient reentry. Secondly, the aircraft will automatically orient itself in this state to a high-drag attitude.[36]

However, the velocity attained by SpaceShipOne prior to reentry is much lower than that of an orbital spacecraft, and engineers, including Rutan, recognize that a feathered reentry technique is not suitable for return from orbit. On 4 May 2011, the first test of the feathering mechanism on SpaceShipTwo was made during a glide flight after release from White Knight Two.

The feathered reentry was first described by Dean Chapman of NACA in 1958.[37] In the section of his report on composite entry, Chapman described a solution to the problem using a high-drag device: "It may be desirable to combine lifting and nonlifting entry in order to achieve some advantages... For landing maneuverability it obviously is advantageous to employ a lifting vehicle. The total heat absorbed by a lifting vehicle, however, is much higher than for a nonlifting vehicle... Nonlifting vehicles can more easily be constructed... by employing, for example, a large, light drag device... The larger the device, the smaller is the heating rate. Nonlifting vehicles with shuttlecock stability are advantageous also from the viewpoint of minimum control requirements during entry. ... an evident composite type of entry, which combines some of the desirable features of lifting and nonlifting trajectories, would be to enter first without lift but with a... drag device; then, when the velocity is reduced to a certain value... the device is jettisoned or retracted, leaving a lifting vehicle
for the remainder of the descent."

Inflatable heat shield reentry

[Image: NASA engineers check IRVE.]

Deceleration for atmospheric reentry, especially for higher-speed Mars-return missions, benefits from maximizing "the drag area of the entry system. The larger the diameter of the aeroshell, the bigger the payload can be."[38] An inflatable aeroshell provides one alternative for enlarging the drag area with a low-mass design.

Such an inflatable shield/aerobrake was designed for the penetrators of the Mars 96 mission. Since that mission failed due to a launcher malfunction, NPO Lavochkin and DASA/ESA designed a follow-on mission for Earth orbit. The Inflatable Reentry and Descent Technology (IRDT) demonstrator was launched on Soyuz-Fregat on 8 February 2000. The inflatable shield was designed as a cone with two stages of inflation. Although the second stage of the shield failed to inflate, the demonstrator survived the orbital reentry and was recovered.[39][40] The subsequent missions flown on the Volna rocket were not successful due to launcher failures.[41]

NASA launched an inflatable heat shield experimental spacecraft on 17 August 2009 with the successful first test flight of the Inflatable Re-entry Vehicle Experiment (IRVE). The heat shield had been vacuum-packed into a 15-inch (380 mm) diameter payload shroud and launched on a Black Brant 9 sounding rocket from NASA's Wallops Flight Facility on Wallops Island, Virginia. "Nitrogen inflated the 10-foot (3.0 m) diameter heat shield, made of several layers of silicone-coated [Kevlar] fabric, to a mushroom shape in space several minutes after liftoff."[38] The rocket apogee was at an altitude of 131 miles (211 km), where it began its descent to supersonic speed. Less than a minute later the shield was released from its cover to inflate at an altitude of 124 miles (200 km). The inflation of the shield took less than 90 seconds.[38]

Entry vehicle design considerations

There are four critical parameters considered when designing a vehicle for atmospheric entry:
1. Peak heat flux
2. Heat load
3. Peak deceleration
4. Peak dynamic pressure

Peak heat flux and dynamic pressure select the TPS material. Heat load selects the thickness of the TPS material stack. Peak deceleration is of major importance for manned missions. The upper limit for manned return to Earth from Low Earth Orbit (LEO) or lunar return is 10 Gs.[42] For Martian atmospheric entry after long exposure to zero gravity, the upper limit is 4 Gs.[42] Peak dynamic pressure can also influence the selection of the outermost TPS material if spallation is an issue.

Starting from the principle of conservative design, the engineer typically considers two worst-case trajectories, the undershoot and overshoot trajectories. The overshoot trajectory is typically defined as the shallowest allowable entry angle prior to atmospheric skip-off. The overshoot trajectory has the highest heat load and sets the TPS thickness. The undershoot trajectory is defined by the steepest allowable trajectory. For manned missions the steepest entry angle is limited by the peak deceleration. The undershoot trajectory also has the highest peak heat flux and dynamic pressure. Consequently, the undershoot trajectory is the basis for selecting the TPS material. There is no "one size fits all" TPS material. A TPS material that is ideal for high heat flux may be too conductive (too dense) for a long-duration heat load.
A low-density TPS material might lack the tensile strength to resist spallation if the dynamic pressure is too high. A TPS material can perform well for a specific peak heat flux, but fail catastrophically for the same peak heat flux if the wall pressure is significantly increased (this happened with NASA's R-4 test spacecraft).[42] Older TPS materials tend to be more labor-intensive and expensive to manufacture compared to modern materials. However, modern TPS materials often lack the flight history of the older materials (an important consideration for a risk-averse designer).

Based upon Allen and Eggers' discovery, maximum aeroshell bluntness (maximum drag) yields minimum TPS mass. Maximum bluntness (minimum ballistic coefficient) also yields a minimal terminal velocity at maximum altitude (very important for Mars EDL, but detrimental for military RVs). However, there is an upper limit to bluntness imposed by aerodynamic stability considerations based upon shock wave detachment. A shock wave will remain attached to the tip of a sharp cone if the cone's half-angle is below a critical value. This critical half-angle can be estimated using perfect gas theory (this specific aerodynamic instability occurs below hypersonic speeds). For a nitrogen atmosphere (Earth or Titan), the maximum allowed half-angle is approximately 60°. For a carbon dioxide atmosphere (Mars or Venus), the maximum allowed half-angle is approximately 70°. After shock wave detachment, an entry vehicle must carry significantly more shock-layer gas around the leading-edge stagnation point (the subsonic cap). Consequently, the aerodynamic center moves upstream, causing aerodynamic instability. It is incorrect to reapply an aeroshell design intended for Titan entry (the Huygens probe in a nitrogen atmosphere) to Mars entry (Beagle-2 in a carbon dioxide atmosphere). Prior to being abandoned, the Soviet Mars lander program achieved one successful landing (Mars 3), on the second of three entry attempts (the others were Mars 2 and Mars 6). The Soviet Mars landers were based upon a 60° half-angle aeroshell design.

A 45° half-angle sphere-cone is typically used for atmospheric probes (surface landing not intended) even though TPS mass is not minimized. The rationale for a 45° half-angle is to have either aerodynamic stability from entry to impact (the heat shield is not jettisoned) or a short, sharp heat pulse followed by prompt heat shield jettison. A 45° sphere-cone design was used with the DS/2 Mars impactor and the Pioneer Venus probes.

Notable atmospheric entry accidents

[Image: Re-entry window diagram. A: friction with air; B: in-air flight; C: expulsion at a lower angle; D: perpendicular to the entry point; E: excess friction, 6.9° to 90°; F: repulsion of 5.5° or less; G: explosion due to friction; H: plane tangential to the entry point.]

Not all atmospheric re-entries have been successful, and some have resulted in significant disasters.

• Friendship 7 — Instrument readings showed that the heat shield and landing bag were not locked. The decision was made to leave the retrorocket pack in position during reentry. Lone astronaut John Glenn survived. The instrument readings were later found to have been erroneous.
• Voskhod 2 — The service module failed to detach for some time, but the crew survived.
• Soyuz 1 — The attitude control system failed while still in orbit, and later the parachutes became entangled during the emergency landing sequence (an entry, descent and landing (EDL) failure). Lone cosmonaut Vladimir Mikhailovich Komarov died.
• Soyuz 5 — The service module failed to detach, but the crew survived.
• Soyuz 11 — Early depressurization led to the death of all three crew members.
• Mars Polar Lander — Failed during EDL. The failure was believed to be the consequence of a software error. The precise cause is unknown for lack of real-time telemetry.
• Space Shuttle Columbia — The failure of an RCC panel on a wing leading edge led to breakup of the orbiter at hypersonic speed, resulting in the death of all seven crew members.

[Image: Genesis entry vehicle after the crash.]

• Genesis — The parachute failed to deploy due to a G-switch having been installed backwards (a similar error delayed parachute deployment for the Galileo Probe). Consequently, the Genesis entry vehicle crashed into the desert floor. The payload was damaged, but most scientific data were recoverable.
• Soyuz TMA-11 (April 19, 2008) — The Soyuz propulsion module failed to separate properly; a fallback ballistic reentry was executed that subjected the crew to forces about eight times that of gravity.[43] The crew survived.

Uncontrolled and unprotected reentries

Of satellites that reenter, approximately 10-40% of the mass of the object is likely to reach the surface of the Earth.[44] On average, about one catalogued object reenters per day.[45]

In 1978, Cosmos 954 reentered uncontrolled and crashed near Great Slave Lake in the Northwest Territories of Canada. Cosmos 954 was nuclear powered and left radioactive debris near its impact site.[47]

In 1979, Skylab reentered uncontrolled, spreading debris across the Australian Outback, damaging several buildings and killing a cow.[48][49] The re-entry was a major media event, largely due to the Cosmos 954 incident, but it was not viewed as much of a potential disaster since it did not carry nuclear fuel. The city of Esperance, Western Australia, issued a fine for littering to the United States, which was finally paid 30 years later (not by NASA, but by privately collected funds from radio listeners).[50] NASA had originally hoped to use a Space Shuttle mission to either extend Skylab's life or enable a controlled reentry, but delays in the program combined with unexpectedly high solar activity made this impossible.[51][52]

On February 7, 1991, Salyut 7 underwent an uncontrolled reentry together with Kosmos 1686, reentering over Argentina and scattering much of its debris over the town of Capitán Bermúdez.[53][54][55]

Deorbit disposal

In 1971, the world's first space station, Salyut 1, was deliberately de-orbited into the Pacific Ocean following the Soyuz 11 accident. Its successor, Salyut 6, was de-orbited in a controlled manner as well.

On June 4, 2000, the Compton Gamma Ray Observatory was deliberately de-orbited after one of its gyroscopes failed. The debris that did not burn up fell harmlessly into the Pacific Ocean. The observatory was still operational, but the failure of another gyroscope would have made de-orbiting much more difficult and dangerous. With some controversy, NASA decided in the interest of public safety that a controlled crash was preferable to letting the craft come down at random.

In 2001, the Russian Mir space station was deliberately de-orbited and broke apart in the fashion expected by the command center during atmospheric re-entry. Mir entered the Earth's atmosphere on March 23, 2001, near Nadi, Fiji, and fell into the South Pacific Ocean.

On February 21, 2008, a disabled US spy satellite, USA 193, was successfully hit at an altitude of approximately 246 kilometers (153 mi) by an SM-3 missile fired from the U.S.
Navy cruiser Lake Erie off the coast of Hawaii. The satellite was inoperative, having failed to reach its intended orbit when it was launched in 2006. Due to its rapidly deteriorating orbit, it was destined for uncontrolled reentry within a month. The United States Department of Defense expressed concern that the 1,000-pound (450 kg) fuel tank containing highly toxic hydrazine might survive reentry and reach the Earth's surface intact. Several governments, including those of Russia, China, and Belarus, protested the action as a thinly veiled demonstration of US anti-satellite capabilities.[56] China had previously caused an international incident when it tested an anti-satellite missile in 2007.

On September 7, 2011, NASA announced the impending uncontrolled re-entry of the Upper Atmosphere Research Satellite (UARS) and noted that there was a small risk to the public.[57] The decommissioned satellite reentered the atmosphere on September 24, 2011, and some pieces are presumed to have crashed into the South Pacific Ocean across a debris field 500 miles (800 km) long.[58]

Successful atmospheric re-entries from orbital velocities

Manned orbital re-entry, by country/governmental entity
Manned orbital re-entry, by commercial entity: none to date
Unmanned orbital re-entry, by country/governmental entity
Unmanned orbital re-entry, by commercial entity

Selected atmospheric re-entries: Phobos-Grunt (2012), ROSAT (2011), UARS (2011), Mir (2001), Skylab (1979)

Further reading

• Launius, Roger D.; Jenkins, Dennis R. (October 10, 2012). Coming Home: Reentry and Recovery from Space. NASA. ISBN 9780160910647. OCLC 802182873. Retrieved August 21, 2014.
• Martin, John J. (1966). Atmospheric Entry - An Introduction to Its Science and Engineering. Old Tappan, NJ: Prentice-Hall.
• Regan, Frank J. (1984). Re-Entry Vehicle Dynamics (AIAA Education Series). New York: American Institute of Aeronautics and Astronautics, Inc. ISBN 0-915928-78-7.
• Etkin, Bernard (1972). Dynamics of Atmospheric Flight. New York: John Wiley & Sons, Inc. ISBN 0-471-24620-4.
• Vincenti, Walter G.; Kruger, Jr., Charles H. (1986). Introduction to Physical Gas Dynamics. Malabar, Florida: Robert E. Krieger Publishing Co. ISBN 0-88275-309-6.
• Hansen, C. Frederick (1976). Molecular Physics of Equilibrium Gases, A Handbook for Engineers. NASA. NASA SP-3096.
• Hayes, Wallace D.; Probstein, Ronald F. (1959). Hypersonic Flow Theory. New York and London: Academic Press. A revised version of this classic text has been reissued as an inexpensive paperback: Hayes, Wallace D. (1966, reissued in 2004). Hypersonic Inviscid Flow. Mineola, New York: Dover Publications. ISBN 0-486-43281-5.
• Anderson, Jr., John D. (1989). Hypersonic and High Temperature Gas Dynamics. New York: McGraw-Hill, Inc. ISBN 0-07-001671-2.

Notes and references

3. ^ Goddard, Robert H. (Mar 1920). "Report Concerning Further Developments". The Smithsonian Institution Archives. Archived from the original on 26 June 2009. Retrieved 2009-06-29. In the case of meteors, which enter the atmosphere with speeds as high as 30 miles per second, the interior of the meteors remains cold, and the erosion is due, to a large extent, to chipping or cracking of the suddenly heated surface.
For this reason, if the outer surface of the apparatus were to consist of layers of a very infusible hard substance with layers of a poor heat conductor between, the surface would not be eroded to any considerable extent, especially as the velocity of the apparatus would not be nearly so great as that of the average meteor.  4. ^ Boris Chertok, "Rockets and People", NASA History Series, 2006 5. ^ Hansen, James R. (Jun 1987). "Chapter 12: Hypersonics and the Transition to Space". Engineer in Charge: A History of the Langley Aeronautical Laboratory, 1917-1958. The NASA History Series. sp-4305. United States Government Printing. ISBN 978-0-318-23455-7.  6. ^ Allen, H. Julian; Eggers, Jr., A. J. (1958). "A Study of the Motion and Aerodynamic Heating of Ballistic Missiles Entering the Earth's Atmosphere at High Supersonic Speeds". NACA Annual Report (NASA Technical Reports) 44.2 (NACA-TR-1381): 1125–1140. [dead link] 7. ^ a b Fay, J. A.; Riddell, F. R. (February 1958). "Theory of Stagnation Point Heat Transfer in Dissociated Air" (PDF Reprint). Journal of the Aeronautical Sciences 25 (2): 73–85. doi:10.2514/8.7517. Retrieved 2009-06-29.  9. ^ Regan, Frank J. and Anadakrishnan, Satya M., "Dynamics of Atmospheric Re-Entry," AIAA Education Series, American Institute of Aeronautics and Astronautics, Inc., New York, ISBN 1-56347-048-9, (1993). 10. ^ "Equations, tables, and charts for compressible flow". NACA Annual Report (NASA Technical Reports) 39 (NACA-TR-1135): 611–681. 1953.  11. ^ Kenneth Iliff and Mary Shafer, Space Shuttle Hypersonic Aerodynamic and Aerothermodynamic Flight Research and the Comparison to Ground Test Results, Page 5-6 12. ^ Lighthill, M.J. (Jan 1957). "Dynamics of a Dissociating Gas. Part I. Equilibrium Flow". Journal of Fluid Mechanics 2 (1): 1–32. Bibcode:1957JFM.....2....1L. doi:10.1017/S0022112057000713.  13. ^ Freeman, N.C. (Aug 1958). "Non-equilibrium Flow of an Ideal Dissociating Gas". Journal of Fluid Mechanics 4 (4): 407–425. Bibcode:1958JFM.....4..407F. doi:10.1017/S0022112058000549.  15. ^ Hogan, C. Michael, Parker, John and Winkler, Ernest, of NASA Ames Research Center, "An Analytical Method for Obtaining the Thermogravimetric Kinetics of Char-forming Ablative Materials from Thermogravimetric Measurements", AIAA/ASME Seventh Structures and Materials Conference, April, 1966 16. ^ Di Benedetto, A.T.; Nicolais, L.; Watanabe, R. (1992). Composite materials : proceedings of Symposium A4 on Composite Materials of the International Conference on Advanced Materials--ICAM 91, Strasbourg, France, 27-29 May, 1991. Amsterdam: North-Holland. p. 111. ISBN 0444893563.  17. ^ Tran, Huy; Michael Tauber; William Henline; Duoc Tran; Alan Cartledge; Frank Hui; Norm Zimmerman (1996). Ames Research Center Shear Tests of SLA-561V Heat Shield Material for Mars-Pathfinder (Technical report). NASA Ames Research Center. NASA Technical Memorandum 110402.  18. ^ Lachaud, Jean; N. Mansour, Nagi (June 2010). "A pyrolysis and ablation toolbox based on OpenFOAM". 5th OpenFOAM Workshop. Gothenburg, Sweden. p. 1.  19. ^ Tran, Huy K, et al., "Qualification of the forebody heatshield of the Stardust's Sample Return Capsule," AIAA, Thermophysics Conference, 32nd, Atlanta, GA; 23–25 June 1997. 20. ^ Stardust - Cool Facts 21. ^ a b c Chambers, Andrew; Dan Rasky (2010-11-14). "NASA + SpaceX Work Together". NASA. Retrieved 2011-02-16. 
SpaceX undertook the design and manufacture of the reentry heat shield; it brought speed and efficiency that allowed the heat shield to be designed, developed, and qualified in less than four years.'  22. ^ SpaceX Manufactured Heat Shield Material - February 23, 2009 23. ^ Dragon could visit space station next,, 2010-12-08, accessed 2010-12-09. 24. ^ Chaikin, Andrew (January 2012). "1 visionary + 3 launchers + 1,500 employees = ? : Is SpaceX changing the rocket equation?". Air & Space Smithsonian. Retrieved 2012-11-13. SpaceX’s material, called PICA-X, is 1/10th as expensive than the original [NASA PICA material and is better], ... a single PICA-X heat shield could withstand hundreds of returns from low Earth orbit; it can also handle the much higher energy reentries from the moon or Mars.  25. ^ Tran, Huy K., et al., "Silicone impregnated reusable ceramic ablators for Mars follow-on missions," AIAA-1996-1819, Thermophysics Conference, 31st, New Orleans, LA, June 17–20, 1996. 26. ^ Flight-Test Analysis Of Apollo Heat-Shield Material Using The Pacemaker Vehicle System NASA Technical Note D-4713, pp. 8, 1968-08, accessed 2010-12-26. "Avcoat 5026-39/HC-G is an epoxy novolac resin with special additives in a fiberglass honeycomb matrix. In fabrication, the empty honeycomb is bonded to the primary structure and the resin is gunned into each cell individually. ... The overall density of the material is 32 lb/ft3 (512 kg/m3). The char of the material is composed mainly of silica and carbon. It is necessary to know the amounts of each in the char because in the ablation analysis the silica is considered to be inert, but the carbon is considered to enter into exothermic reactions with oxygen. ... At 2160O R (12000 K), 54 percent by weight of the virgin material has volatilized and 46 percent has remained as char. ... In the virgin material, 25 percent by weight is silica, and since the silica is considered to be inert the char-layer composition becomes 6.7 lb/ft3 (107.4 kg/m3) of carbon and 8 lb/ft3 (128.1 kg/m3) of silica." 27. ^ NASA Selects Material for Orion Spacecraft Heat Shield, 2009-04-07, accessed 2011-01-02. 28. ^ NASA's Orion heat shield decision expected this month 2009-10-03, accessed 2011-01-02 29. ^ Company Watch (Apr 12, 2009 ) 30. ^ [1] Columbia Accident Investigation Board report. 31. ^ [2] Shuttle Evolutionary History. 32. ^ [3] X-33 Heat Shield Development report. 33. ^ 34. ^ sharp structure homepage w left 35. ^ - J2T-200K & J2T-250K 36. ^ SpaceShipOne 37. ^ Chapman, Dean R. (May 1958). "An approximate analytical method for studying reentry into planetary atmospheres". NACA Technical Note 4276: 38. Archived from the original on 2011-04-07.  38. ^ a b c NASA Launches New Technology: An Inflatable Heat Shield, NASA Mission News, 2009-08-17, accessed 2011-01-02. 39. ^ Inflatable Re-Entry Technologies: Flight Demonstration and Future Prospects 40. ^ Inflatable Reentry and Descent Technology (IRDT) Factsheet, ESA, September, 2005 41. ^ IRDT demonstration missions 42. ^ a b c Pavlosky, James E., St. Leger, Leslie G., "Apollo Experience Report - Thermal Protection Subsystem," NASA TN D-7564, (1974). 43. ^ William Harwood (2008). "Whitson describes rough Soyuz entry and landing". Spaceflight Now. Retrieved July 12, 2008.  44. ^ Spacecraft Reentry FAQ: How much material from a satellite will survive reentry? 45. ^ NASA - Frequently Asked Questions: Orbital Debris 46. ^ Center for Orbital and Reentry Debris Studies - Spacecraft Reentry 47. 
^ Settlement of Claim between Canada and the Union of Soviet Socialist Republics for Damage Caused by "Cosmos 954" (Released on April 2, 1981) 48. ^ Hanslmeier, Arnold (2002). The sun and space weather. Dordrecht ; Boston: Kluwer Academic Publishers. p. 269. ISBN 9781402056048.  49. ^ Mitnik, Donald (2009). Death of a Trillion Dreams. (October 19, 2009). p. 113. ISBN 978-0557156016.  50. ^ Littering fine paid[dead link] 51. ^ Lamprecht, Jan (1998). Hollow planets : a feasibility study of possible hollow worlds. Austin, TX: World Wide Pub. p. 326. ISBN 9780620219631.  52. ^ Elkins-Tanton, Linda (2006). The Sun, Mercury, and Venus. New York: Chelsea House. p. 56. ISBN 9780816051939.  53. ^, Spacecraft Reentry FAQ:[dead link] 54. ^ Astronautix, Salyut 7. 55. ^ NYT, Salyut 7, Soviet Station in Space, Falls to Earth After 9-Year Orbit 56. ^ Gray, Andrew (2008-02-21). "U.S. has high confidence it hit satellite fuel tank". Reuters. Archived from the original on 25 February 2008. Retrieved 2008-02-23.  57. ^ David, Leonard (7 September 2011). "Huge Defunct Satellite to Plunge to Earth Soon, NASA Says". Retrieved 10 September 2011.  58. ^ "Final Update: NASA's UARS Re-enters Earth's Atmosphere". Retrieved 2011-09-27.  External links[edit]
The list below covers the core theories along with many of the concepts they employ.

Classical mechanics
• Major subtopics: Newton's laws of motion, Lagrangian mechanics, Hamiltonian mechanics, Kinematics, Statics, Dynamics, Chaos theory, Acoustics, Fluid dynamics, Continuum mechanics
• Concepts: Density, Dimension, Gravity, Space, Time, Motion, Length, Position, Velocity, Acceleration, Galilean invariance, Mass, Momentum, Impulse, Force, Energy, Angular velocity, Angular momentum, Moment of inertia, Torque, Conservation law, Harmonic oscillator, Wave, Work, Power, Lagrangian, Hamiltonian, Tait-Bryan angles, Euler angles

Electromagnetism
• Major subtopics: Electrostatics, Electrodynamics, Electricity, Magnetism, Magnetostatics, Maxwell's equations, Optics
• Concepts: Capacitance, Electric charge, Current, Electrical conductivity, Electric field, Electric permittivity, Electric potential, Electrical resistance, Electromagnetic field, Electromagnetic induction, Electromagnetic radiation, Gaussian surface, Magnetic field, Magnetic flux, Magnetic monopole, Magnetic permeability

Thermodynamics and Statistical mechanics
• Major subtopics: Heat engine, Kinetic theory
• Concepts: Boltzmann's constant, Conjugate variables, Enthalpy, Entropy, Equation of state, Equipartition theorem, Thermodynamic free energy, Heat, Ideal gas law, Internal energy, Laws of thermodynamics, Maxwell relations, Irreversible process, Ising model, Mechanical action, Partition function, Pressure, Reversible process, Spontaneous process, State function, Statistical ensemble, Temperature, Thermodynamic equilibrium, Thermodynamic potential, Thermodynamic processes, Thermodynamic state, Thermodynamic system, Viscosity, Volume, Work, Granular material

Quantum mechanics
• Major subtopics: Path integral formulation, Scattering theory, Schrödinger equation, Quantum field theory, Quantum statistical mechanics
• Concepts: Adiabatic approximation, Blackbody radiation, Correspondence principle, Free particle, Hamiltonian, Hilbert space, Identical particles, Matrix Mechanics, Planck's constant, Observer effect, Operators, Quanta, Quantization, Quantum entanglement, Quantum harmonic oscillator, Quantum number, Quantum tunneling, Schrödinger's cat, Dirac equation, Spin, Wavefunction, Wave mechanics, Wave-particle duality, Zero-point energy, Pauli Exclusion Principle, Heisenberg Uncertainty Principle

Relativity
• Major subtopics: Special relativity, General relativity, Einstein field equations
• Concepts: Covariance, Einstein manifold, Equivalence principle, Four-momentum, Four-vector, General principle of relativity, Geodesic motion, Gravity, Gravitoelectromagnetism, Inertial frame of reference, Invariance, Length contraction, Lorentzian manifold, Lorentz transformation, Mass-energy equivalence, Metric, Minkowski diagram, Minkowski space, Principle of Relativity, Proper length, Proper time, Reference frame, Rest energy, Rest mass, Relativity of simultaneity, Spacetime, Special principle of relativity, Speed of light, Stress-energy tensor, Time dilation, Twin paradox, World line
Could someone experienced in the field tell me what minimal math knowledge one must obtain in order to grasp an introductory Quantum Mechanics book or course? I do have some math knowledge but, I must say, it is currently rather poor. I did basic introductory courses in Calculus, Linear Algebra and Probability Theory. Perhaps you could suggest some books I have to go through before I can start with QM? Thanks.

It's easier to learn something if you have a need for it, so you might use your interest in QM to inspire yourself to learn the math. – Mike Dunlavey Dec 15 '11 at 1:39
Related Math.SE question: math.stackexchange.com/q/758502/11127 – Qmechanic Apr 20 at 7:16

3 Answers

Accepted answer: It depends on the book you've chosen to read. But usually some basics in Calculus, Linear Algebra, Differential Equations and Probability Theory are enough. For example, if you start with Griffiths' Introduction to Quantum Mechanics, the author kindly provides you with a review of Linear Algebra in the Appendix, as well as some basic tips on probability theory at the beginning of the first chapter. In order to solve the Schrödinger equation (which is a (partial) differential equation) you, of course, need to know the basics of differential equations. Also, some special functions (like Legendre polynomials, spherical harmonics, etc.) will pop up in due course. But, again, in an introductory book such as Griffiths', these things are explained in detail, so there should be no problem for you if you're a careful reader. This book is one of the best to start with.

+1 for the book recommendation. This was the one I was taught with and it provided an excellent starting point. – qubyte Dec 15 '11 at 16:56

Second answer: You don't need any probability: the probability used in QM is so basic that you pick it up just from common sense. You need linear algebra, but sometimes it is reviewed in the book itself or in an appendix. QM seems to use functional analysis, i.e. infinite-dimensional linear algebra, but the truth is that you will do just fine if you understand the basic finite-dimensional linear algebra from the usual linear algebra course and then pretend it is all true for Hilbert spaces, too. It would be nice if you had taken a course in ODEs, but the truth is, most ODE courses these days don't cover the only topic you need in QM, which is the Frobenius theory for equations with a regular singular point, so most QM teachers re-do the special case of that theory needed for the hydrogen atom anyway, sadly but wisely assuming that their students never learned it. An ordinary Calculus II course covers ODE basics like separation of variables and such. Review it. I suggest using Dirac's book on QM! It uses very little math and a lot of physical insight. The earlier edition of David Park is more standard and easy enough, and can be understood with one linear algebra course plus Calc I, Calc II and Calc III.

Dirac's book is readable with no prior knowledge, +1, and it is still the best, but it has no path integral, and the treatment of the Dirac equation (ironically) is too old-fashioned. I would recommend learning matrix mechanics, which is reviewed quickly on Wikipedia. The prerequisite is Fourier transforms. Sakurai and Gottfried are good, as is Mandelstam/Yourgrau for path integrals. – Ron Maimon Dec 6 '11 at 22:37

There is a story about Dirac.
When it was proved that parity was violated, someone asked him what he thought about that. He replied, "I never said anything about it in my book." The things you mention that are left out of his book are things it is a good idea to omit. Path integrals are ballyhooed but are just a math trick and give no physical insight; in fact, they are misleading. Same for matrix mechanics. Those are precisely why I still recommend Dirac for beginners... I would not even be surprised if his treatment of QED in the second edition proved more durable than Feynman's. – joseph f. johnson Dec 7 '11 at 0:38

Matrix mechanics is good because it gives you intuition for matrix elements; for example, you immediately understand that an operator with constant frequency is a raising/lowering operator. You also understand the semiclassical interpretation of off-diagonal matrix elements: they are just stunted Fourier transforms of classical motions. You also understand why the dipole matrix element gives the transition rate without quantizing the photon field, just semiclassically. These are all important intuitions, which have been lost because Schrödinger beat Heisenberg in mass appeal. – Ron Maimon Dec 7 '11 at 5:20

Your comment about path integrals is silly. The path integral gives a unification of Heisenberg and Schrödinger in one formalism that is automatically relativistic. It gives analytic continuation to imaginary time, which gives results like CPT, relativistic regulators, stochastic renormalization, second-order transitions, Faddeev-Popov ghosts, supersymmetry, and thousands of other things that would be practically impossible without it. The particle-path path integral is the source of the S-matrix formulation and string theory, of unitarity methods, and everything modern. – Ron Maimon Dec 7 '11 at 5:29

@RonMaimon I have had to teach stochastic processes and integrals to normal, untalented folks. IMHO, stochastic processes count as probability theory, one of the trickiest parts, and path integrals are no help for beginners here either. It is still better for the beginning student not to take a course in probability and to let what they learn about the physics of QM be their introduction to stochastic processes... I mean, besides what they already learned about stochastic processes from playing Snakes and Ladders. This is part of my theme: learn the physics first, and the mathematical tricks later. – joseph f. johnson Dec 15 '11 at 17:52

Third answer: There is a nice book with an extremely long title: Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles. It does the basics pretty well. Griffiths would be the next logical step. After that there is Shankar.
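As a small, concrete taste of the math level the answers above describe (separating variables in the Schrödinger equation and handling the resulting ordinary differential equation), here is a sketch of the textbook infinite square well; the 1 nm well width is an arbitrary choice for illustration.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # joules per electronvolt

def box_energy(n, width):
    """Energy of level n in a 1D infinite square well.

    E_n = n^2 pi^2 hbar^2 / (2 m L^2), the standard separation-of-variables result.
    """
    return (n * math.pi * HBAR) ** 2 / (2.0 * M_E * width ** 2)

L = 1e-9  # 1 nm wide well, chosen only for illustration
for n in (1, 2, 3):
    print(f"E_{n} = {box_energy(n, L) / EV:.3f} eV")
```

Everything in an introductory course is a variation on this pattern: solve a boundary-value problem, quantize the allowed energies, and interpret |ψ|² as a probability density.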
Pilot wave

[Image: Couder experiments,[1][2] "materializing" the pilot wave model.]

In theoretical physics, the pilot wave theory was the first known example of a hidden variable theory, presented by Louis de Broglie in 1927. Its more modern version, the Bohm interpretation, remains a controversial attempt to interpret quantum mechanics as a deterministic theory, avoiding troublesome notions such as instantaneous wave function collapse and the paradox of Schrödinger's cat.

The pilot wave theory

The pilot wave theory is one of several interpretations of quantum mechanics. It uses the same mathematics as other interpretations of quantum mechanics; consequently, it is also supported by the current experimental evidence to the same extent as the other interpretations. The pilot wave theory is a hidden variable theory. Consequently:

• the theory has realism (meaning that its concepts exist independently of the observer);
• the theory has determinism.

A collection of particles has an associated matter wave, which evolves according to the Schrödinger equation. Each particle follows a deterministic trajectory, which is guided by the wave function; collectively, the density of the particles conforms to the magnitude of the wave function. The wave function is not influenced by the particle and can also exist as an empty wave function.[3]

The theory brings to light the nonlocality that is implicit in the non-relativistic formulation of quantum mechanics and uses it to satisfy Bell's theorem. Interestingly, these nonlocal effects are compatible with the no-communication theorem, which prevents us from using them for faster-than-light communication.

The pilot wave theory shows that it is possible to have a realistic and deterministic hidden variable theory which reproduces the experimental results of ordinary quantum mechanics. The price which has to be paid for this is manifest nonlocality.

Mathematical foundations

To derive the de Broglie–Bohm pilot wave for an electron, the quantum Lagrangian

L(t) = \frac{1}{2}mv^2 - (V + Q),

where Q is the potential associated with the quantum force (the particle being pushed by the wave function), is integrated along precisely one path (the one the electron actually follows). This leads to the following formula for the Bohm propagator:

K^Q(X_1, t_1; X_0, t_0) = \frac{1}{J(t)^{1/2}} \exp\left[\frac{i}{\hbar}\int_{t_0}^{t_1}L(t)\,dt\right].

This propagator allows one to track the electron precisely over time under the influence of the quantum potential Q.

Derivation of the Schrödinger equation

Pilot wave theory is based on Hamilton–Jacobi dynamics[4] rather than Lagrangian or Hamiltonian dynamics. Using the Hamilton–Jacobi equation

H\left(\mathbf{q},{\partial S \over \partial \mathbf{q}},t\right) + {\partial S \over \partial t}\left(\mathbf{q},t\right) = 0

it is possible to derive the Schrödinger equation:

Consider a classical particle whose position is not known with certainty. We must deal with it statistically, so only the probability density \rho(x,t) is known. Probability must be conserved, i.e. \int\rho\,d^3x = 1 for each t. Therefore it must satisfy the continuity equation

\partial \rho / \partial t = - \nabla \cdot (\rho v) \quad (1)

where v(x,t) is the velocity of the particle.
In the Hamilton–Jacobi formulation of classical mechanics, velocity is given by

v(x,t) = \frac{\nabla S(x,t)}{m}

where S(x,t) is a solution of the Hamilton–Jacobi equation

- \frac{\partial S}{\partial t} = \frac{\left(\nabla S\right)^2}{2m} + V \quad (2)

We can combine (1) and (2) into a single complex equation by introducing the complex function \psi = \sqrt{\rho}\,e^{\frac{iS}{\hbar}}; then the two equations are equivalent to

i \hbar \frac{\partial \psi}{\partial t} = \left( - \frac{\hbar^2}{2m} \nabla^2 + V - Q \right)\psi \quad \text{with} \quad Q = - \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}

This is the time-dependent Schrödinger equation with an extra potential, the quantum potential Q, i.e. the potential of the quantum force, which is proportional (in approximation) to the curvature of the amplitude of the wave function.

Mathematical formulation for a single particle

The matter wave of de Broglie is described by the time-dependent Schrödinger equation

i \hbar \frac{\partial \psi}{\partial t} = \left( - \frac{\hbar^2}{2m} \nabla^2 + V \right)\psi

The complex wave function can be represented as:

\psi = \sqrt{\rho} \; \exp \left( \frac{i \, S}{\hbar} \right)

By plugging this into the Schrödinger equation, one can derive two new equations for the real variables. The first is the continuity equation for the probability density \rho:[5]

\partial \rho / \partial t + \nabla \cdot ( \rho v) = 0 \; ,

where the velocity field is defined by the guidance equation

\vec{v} (\vec{r},t) = \frac{\nabla S(\vec{r},t)}{m}\; .

According to pilot wave theory, the point particle and the matter wave are both real and distinct physical entities (unlike standard quantum mechanics, where particles and waves are considered to be the same entities, connected by wave–particle duality). The pilot wave guides the motion of the point particles as described by the guidance equation.

Ordinary quantum mechanics and pilot wave theory are based on the same partial differential equation. The main difference is that in ordinary quantum mechanics, the Schrödinger equation is connected to reality by the Born postulate, which states that the probability density of the particle's position is given by \rho = |\psi|^2. Pilot wave theory considers the guidance equation to be the fundamental law, and sees the Born rule as a derived concept.

The second equation is a modified Hamilton–Jacobi equation for the action S:

- \frac{\partial S}{\partial t} = \frac{\left(\nabla S\right)^2}{2m} + V + Q

where Q is the quantum potential defined by

Q = - \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}

By neglecting Q, our equation is reduced to the Hamilton–Jacobi equation of a classical point particle. (Strictly speaking, this is only a semiclassical limit[clarification needed], because the superposition principle still holds and one needs a decoherence mechanism to get rid of it. Interaction with the environment can provide this mechanism.) So, the quantum potential is responsible for all the mysterious effects of quantum mechanics.

One can also combine the modified Hamilton–Jacobi equation with the guidance equation to derive a quasi-Newtonian equation of motion

m \, \frac{d}{dt} \, \vec{v} = - \nabla( V + Q ) \; ,

where the hydrodynamic time derivative is defined as

\frac{d}{dt} = \frac{ \partial }{ \partial t } + \vec{v} \cdot \nabla \; .
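The guidance equation above can be integrated numerically once ψ is known. A minimal sketch for a freely spreading one-dimensional Gaussian packet (natural units ħ = m = 1, width and starting positions chosen only for illustration), using the fact that v = ∇S/m = (ħ/m) Im(∂ψ/∂x / ψ):

```python
import numpy as np

hbar = m = 1.0        # natural units, for illustration only
d = 1.0               # initial width parameter of the packet

def psi(x, t):
    """Freely spreading Gaussian packet (zero mean momentum), standard closed form."""
    c = 1.0 + 1j * hbar * t / (m * d**2)
    return (np.pi * d**2) ** -0.25 * c ** -0.5 * np.exp(-x**2 / (2 * d**2 * c))

def guidance_velocity(x, t, dx=1e-6):
    """v = (hbar/m) * Im(psi'/psi), i.e. grad(S)/m, derivative by central difference."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return hbar / m * np.imag(dpsi / psi(x, t))

x = np.array([0.5, 1.0, 2.0])   # a few hand-picked starting positions
t, dt = 0.0, 0.01
for _ in range(500):            # integrate the guidance equation to t = 5 (Euler)
    x = x + guidance_velocity(x, t) * dt
    t += dt
print(x)                        # each particle is carried outward as the packet spreads
```

For this particular packet the exact trajectories scale as x(0)·sqrt(1 + (ħt/(m d²))²), which gives a quick check on the crude Euler integration; an ensemble of starting points drawn from |ψ|² would keep matching |ψ|² at later times, which is the equivariance property the text describes.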
Mathematical formulation for multiple particles

The Schrödinger equation for the many-body wave function \psi(\vec{r}_1, \vec{r}_2, \cdots, t) is given by

i \hbar \frac{\partial \psi}{\partial t} =\left( -\frac{\hbar^2}{2} \sum_{i=1}^{N} \frac{\nabla_i^2}{m_i} + V(\bold{r}_1,\bold{r}_2,\cdots\bold{r}_N) \right) \psi

The complex wave function can be represented as:

\psi = \sqrt{\rho} \; \exp \left( \frac{i \, S}{\hbar} \right)

The pilot wave guides the motion of the particles. The guidance equation for the jth particle is:

\vec{v}_j = \frac{\nabla_j S}{m_j}\; .

The velocity of the jth particle explicitly depends on the positions of the other particles. This means that the theory is nonlocal.

Empty wave function

Lucien Hardy[6] and John Stewart Bell[3] have emphasized that in the de Broglie–Bohm picture of quantum mechanics there can exist empty waves, represented by wave functions propagating in space and time but not carrying energy or momentum,[7] and not associated with a particle. The same concept was called ghost waves (or "Gespensterfelder", ghost fields) by Albert Einstein.[7] The empty wave function notion has been discussed controversially.[8][9][10] In contrast, the many-worlds interpretation of quantum mechanics does not call for empty wave functions.[3]

History

In his 1926 paper,[11] Max Born suggested that the wave function of Schrödinger's wave equation represents the probability density of finding a particle. From this idea, de Broglie developed the pilot wave theory, and worked out a function for the guiding wave.[12] Initially, de Broglie proposed a double-solution approach, in which the quantum object consists of a physical wave (u-wave) in real space which has a spherical singular region that gives rise to particle-like behaviour; in this initial form of his theory he did not have to postulate the existence of a quantum particle.[13] He later formulated it as a theory in which a particle is accompanied by a pilot wave. He presented the pilot wave theory at the 1927 Solvay Conference.[14] However, Wolfgang Pauli raised an objection to it at the conference, saying that it did not deal properly with the case of inelastic scattering. De Broglie was not able to find a response to this objection, and he and Born abandoned the pilot-wave approach.

Later, in 1932, John von Neumann published a paper claiming to prove that all hidden variable theories were impossible.[15] (A result found to be flawed by Grete Hermann three years later, though this went unnoticed by the physics community for over fifty years.) However, in 1952, David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot wave theory. Bohm developed pilot wave theory into what is now called the de Broglie–Bohm theory.[5][16]

The de Broglie–Bohm theory itself might have gone unnoticed by most physicists if it had not been championed by John Bell, who also countered the objections to it. In 1987, John Bell[17] rediscovered Grete Hermann's work, and thus showed the physics community that Pauli's and von Neumann's objections really only showed that the pilot wave theory did not have locality.

The de Broglie–Bohm theory is now considered by some to be a valid challenge to the prevailing orthodoxy of the Copenhagen interpretation, but it remains controversial.

Yves Couder and co-workers recently discovered a macroscopic pilot wave system in the form of walking droplets.
This system exhibits behaviour of a pilot wave, heretofore considered to be reserved to microscopic phenomena.[1] 1. ^ a b Couder, Y.; Boudaoud, A.; Protière, S.; Moukhtar, J.; Fort, E. (2010). "Walking droplets: a form of wave–particle duality at macroscopic level?". Europhysics News 41 (1): 14–18. Bibcode:2010ENews..41...14C. doi:10.1051/epn/2010101.  2. ^ "How Does The Universe Work?". Through the Wormhole. 13 July 2011. Season 2, Episode 6, 15min 23s.  3. ^ a b c Bell, J. S. (1992). "Six possible worlds of quantum mechanics". Foundations of Physics 22 (10): 1201–1215. Bibcode:1992FoPh...22.1201B. doi:10.1007/BF01889711.  4. ^ Towler, M. (10 February 2009). "De Broglie-Bohm pilot-wave theory and the foundations of quantum mechanics". University of Cambridge. Retrieved 2014-07-03.  5. ^ a b Bohm, D. (1952). "A suggested Interpretation of the Quantum Theory in Terms of Hidden Variables, I". Physical Review 85 (2): 166–179. Bibcode:1952PhRv...85..166B. doi:10.1103/PhysRev.85.166.  6. ^ Hardy, L. (1992). "On the existence of empty waves in quantum theory". Physics Letters A 167 (1): 11–16. Bibcode:1992PhLA..167...11H. doi:10.1016/0375-9601(92)90618-V.  7. ^ a b Selleri, F.; Van der Merwe, A. (1990). Quantum paradoxes and physical reality. Kluwer Academic Publishers. pp. 85–86. ISBN 0-7923-0253-2.  8. ^ Zukowski, M. (1993). ""On the existence of empty waves in quantum theory": a comment". Physics Letters A 175 (3–4): 257–258. Bibcode:1993PhLA..175..257Z. doi:10.1016/0375-9601(93)90837-P.  9. ^ Zeh, H. D. (1999). "Why Bohm's Quantum Theory?". Foundations of Physics Letters 12: 197–200. arXiv:quant-ph/9812059. doi:10.1023/A:1021669308832.  10. ^ Vaidman, L. (2005). The Reality in Bohmian Quantum Mechanics or Can You Kill with an Empty Wave Bullet? 35 (2). pp. 299–312. arXiv:quant-ph/0312227. Bibcode:2005FoPh...35..299V. doi:10.1007/s10701-004-1945-2.  11. ^ Born, M. (1926). "Quantenmechanik der Stoßvorgänge". Zeitschrift für Physik 38 (11–12): 803–827. Bibcode:1926ZPhy...38..803B. doi:10.1007/BF01397184.  12. ^ de Broglie, L. (1927). "La mécanique ondulatoire et la structure atomique de la matière et du rayonnement". Journal de Physique et le Radium 8 (5): 225–241. doi:10.1051/jphysrad:0192700805022500.  13. ^ a b Dewdney, C.; Horton, G.; Lam, M. M.; Malik, Z.; Schmidt, M. (1992). "Wave-particle dualism and the interpretation of quantum mechanics". Foundations of Physics 22 (10): 1217–1265. Bibcode:1992FoPh...22.1217D. doi:10.1007/BF01889712.  14. ^ Institut International de Physique Solvay (1928). Electrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique tenu à Bruxelles du 24 au 29 Octobre 1927. Gauthier-Villars.  16. ^ Bohm, D. (1952). "A suggested Interpretation of the Quantum Theory in Terms of Hidden Variables, II". Physical Review 85 (2): 180–193. Bibcode:1952PhRv...85..180B. doi:10.1103/PhysRev.85.180.  17. ^ Bell, J. S. (1987). Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press. ISBN 978-0521334952.  External links[edit]
Statistical physics

Statistical physics is a branch of physics that uses methods of probability theory and statistics, and particularly the mathematical tools for dealing with large populations and approximations, in solving physical problems. It can describe a wide variety of fields with an inherently stochastic nature. Its applications include many problems in the fields of physics, biology, chemistry, neurology, and even some social sciences, such as sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.[1]

In particular, statistical mechanics develops the phenomenological results of thermodynamics from a probabilistic examination of the underlying microscopic systems. Historically, one of the first topics in physics where statistical methods were applied was the field of mechanics, which is concerned with the motion of particles or objects when subjected to a force.

Statistical mechanics

Main article: Statistical mechanics

Statistical mechanics provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk properties of materials that can be observed in everyday life, thereby explaining thermodynamics as a natural result of statistics, classical mechanics, and quantum mechanics at the microscopic level. Because of this history, statistical physics is often considered synonymous with statistical mechanics or statistical thermodynamics.

One of the most important equations in statistical mechanics (analogous to F = ma in mechanics, or the Schrödinger equation in quantum mechanics) is the definition of the partition function Z, which is essentially a weighted sum of all possible states q available to a system:

Z = \sum_q \mathrm{e}^{-\frac{E(q)}{k_B T}}

where k_B is the Boltzmann constant, T is the temperature and E(q) is the energy of state q. Furthermore, the probability of a given state, q, occurring is given by

P(q) = \frac{\mathrm{e}^{-\frac{E(q)}{k_B T}}}{Z}

A statistical approach can work well in classical systems when the number of degrees of freedom (and so the number of variables) is so large that an exact solution is not possible, or not really useful. Statistical mechanics can also describe work in non-linear dynamics, chaos theory, thermal physics, fluid dynamics (particularly at high Knudsen numbers), and plasma physics.

References

1. ^ Huang, Kerson. Introduction to Statistical Physics (2nd ed.). CRC Press. p. 15. ISBN 978-1-4200-7902-9.

Further resources

• Thermal and Statistical Physics (lecture notes, Web draft 2001) by Mallett M., Blumler P.
• Basics of Statistical Physics: Second Edition by Harald J. W. Müller-Kirsten (University of Kaiserslautern, Germany)
• Statistical Physics by Kadanoff L.P.
• Statistical Physics - Statics, Dynamics and Renormalization by Kadanoff L.P.
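Returning to the two formulas above, here is a minimal numerical sketch of the partition function and the Boltzmann probabilities for a hypothetical system with a handful of discrete energy levels; the level spacing and temperature are arbitrary illustrative choices.

```python
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K
EV = 1.602176634e-19        # joules per electronvolt

def boltzmann_probabilities(energies, T):
    """Partition function Z = sum_q exp(-E(q)/kT) and P(q) = exp(-E(q)/kT) / Z."""
    weights = np.exp(-np.asarray(energies) / (K_B * T))
    Z = weights.sum()
    return Z, weights / Z

# Hypothetical three-level system with 0.01 eV spacing, at room temperature
Z, P = boltzmann_probabilities([0.0, 0.01 * EV, 0.02 * EV], T=300.0)
print(Z, P)   # probabilities sum to 1; lower-energy states are more likely
```

The same weighted sum is what turns microscopic energy levels into macroscopic observables: averages such as the internal energy follow directly from Z and P(q).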
Chameleon-Like Behavior of Neutrino Confirmed

Posted by Soulskill from the yes,-neutrinos-eat-bugs dept.

by Steve Max (1235710) on Monday May 31, 2010 @06:03PM (#32411522)
You'd need a pretty complex theory to get non-mass oscillations to match all the data we got over the past 12 years, which is very compatible with a three-state, mass-driven oscillation scenario. Besides, you'd have to explain more than what the current "new standard model" (the SM with added neutrino masses) does if you want your theory to be accepted. If two theories explain the same data equally well, the simplest is more likely.

• by dumuzi (1497471) on Monday May 31, 2010 @06:17PM (#32411680)
I agree. In QCD quarks and gluons can undergo colour changes [wikipedia.org]; this would be "chameleon-like behavior". Neutrinos, on the other hand, change flavour [wikipedia.org]; this would be "Willy Wonka like behavior".

• Re: What if... (Score:3, Informative) by Black Parrot (19622) on Monday May 31, 2010 @06:22PM
"If two theories explain the same data equally well, the simplest is more likely." Make that "more preferred". In general we don't know anything about likelihood. The thing about Occam's Razor is that it filters out "special pleading" type arguments. If you want your pet in the show, you've got to provide motivation for including it.

• by pz (113803) on Monday May 31, 2010 @06:23PM (#32411738)
How could something have mass and so weakly interact with normal matter? Neutrinos are thought to have a very small mass. So exceedingly small that they barely interact with anything (they also have no charge, so they are even less likely to interact). But zero mass, and really, really, really small but not zero mass, are two different things.

• by BitterOak (537666) on Monday May 31, 2010 @06:49PM (#32411982)
The fact that they barely interact with anything has nothing to do with the fact that they are nearly massless. Photons are massless and they interact with anything that carries an electric charge. Electrons are much lighter than muons, but they are just as likely to interact with something. The only force that gets weaker as the mass goes down is gravity, which is by far the weakest of the fundamental forces.

by BitterOak (537666) on Monday May 31, 2010 @08:31PM (#32412774)
"That's the way I've always understood the mass/oscillation connection too. But then I thought... wait... don't photons oscillate too? They're just coherent oscillations of the EM field; oscillating back and forth between electric and transverse magnetic in free space. If there's something different about neutrino oscillation which makes it necessary for the neutrino to travel at sublight, what is it specifically?"
The situation you describe with the EM field is an example of wave-particle duality. Light can behave like both a wave and a particle, but it doesn't make sense to analyze it both ways at the same time. As a wave, it does manifest itself as oscillating electric and magnetic fields, and as a particle, it manifests itself as a photon, which doesn't change into a different type of particle. (There's no such thing as an "electric photon" and a "magnetic photon".) Neutrinos, too, are described quantum mechanically by wavefunctions, and these wavefunctions have frequencies associated with them, related to the energy of the particle.
But these frequencies have nothing to do with the oscillation frequencies described here, in which a neutrino of one flavor (e.g. mu) can change into a different flavor (e.g. tau). Quantum mechanically speaking, we say that the mass eigenstates of the neutrino (states of definite mass) don't coincide with the weak eigenstates (states of definite flavor: e, mu, or tau). Without mass there would be no distinct mass eigenstates at all, and so mixing of the weak eigenstates would not occur as the neutrino propagates through free space.

by Steve Max:
Light doesn't oscillate in this way. A photon is a photon, and remains a photon. Electric and magnetic fields oscillate, but the particle "photon" doesn't. Neutrinos start as one particle (say, a muon-neutrino) and are detected as a completely different particle (say, a tau-neutrino). The explanation is that what we call "electron-neutrino", "muon-neutrino" and "tau-neutrino" aren't states with a definite mass; they are mixtures of three neutrino states with definite, different masses (at most one of those masses can be zero). Then, from pure quantum mechanics (and nothing more esoteric than that: the plain Schrödinger equation), you see that if those three definite-mass states have slightly different masses, there is a probability of creating an electron neutrino and detecting it as a tau neutrino, and every other combination. Those probabilities follow a simple expansion, based on only five parameters (two mass differences and three angles), and depend on the energy of the neutrino and the distance travelled in a very specific way [see the short numerical sketch at the end of this thread]. We can test that dependency and use very different experiments to measure the five parameters, and everything fits very well. Right now (especially after MINOS saw the energy dependence of the oscillation probability), nobody questions neutrino oscillations. This OPERA result only confirms what we already knew.

by khayman80:
Thanks. I just found some [uci.edu] equations [ucl.ac.be] that appear to reinforce what you said. Since the oscillation frequency is proportional to the difference of the squared masses of the mass eigenstates, perhaps it's more accurate to say that neutrino flavor oscillation implies the existence of several mass eigenstates which aren't identical to the flavor eigenstates. Since two mass eigenstates would need different eigenvalues in order to be distinguishable, this means at least one mass eigenvalue has to be nonzero. There's probably some sort of "superselection rule" which prevents particles from oscillating between massless and massive eigenstates, so both mass eigenstates have to be non-zero. Cool.

by Anonymous Coward:
Photons are massless and chargeless, right?

by Young Master Ploppy:
I'm not a "real" physicist, but I did study this at undergraduate level, so here goes. Heisenberg's uncertainty principle ( http://en.wikipedia.org/wiki/Uncertainty_principle [wikipedia.org] ) states that there must always be a minimum uncertainty in certain pairs of related variables, e.g. position and momentum: the more accurately you know the position of something, the less accurately you know how it's moving. Another related pair is energy and time: the more accurately you know the energy of something, the less accurately you know when the measurement was taken.
(Disclaimer: this makes perfect sense when expressed mathematically; it only sounds like handwavery when you translate it into English, as words are ambiguous and mean different things to different people.) Anyway, this uncertainty means that there is a small but non-zero probability of a higher-energy event occurring in the history of a lower-energy particle (often mis-stated as "particles can borrow energy for a short time"; check the wiki page for a more accurate statement). It sounds nuts, I know, but it has many real-world implications that have no explanation in non-quantum physics. Particles can "tunnel" through barriers that they shouldn't be able to cross, for instance, which is how semiconductor devices exploit it. By implication, there is a small probability of the neutrino acting as if it had a higher energy, and *this* is how neutrino flipping occurs without violating conservation of energy.

Re: What if... by Steve Max:
No. All flavour eigenstates MUST be massive: they are superpositions of the three mass eigenstates, one of which can have zero mass. Calling the three mass eigenstates n1, n2 and n3, and the three flavour eigenstates ne, nm and nt, we'd have

ne = U_e1 n1 + U_e2 n2 + U_e3 n3
nm = U_m1 n1 + U_m2 n2 + U_m3 n3
nt = U_t1 n1 + U_t2 n2 + U_t3 n3

with U a unitary mixing matrix. So, if any of n1, n2 or n3 has a non-zero mass (and at least two of them MUST have non-zero masses, since we know two different and non-zero mass differences), all three flavour eigenstates have non-zero masses. Also, remember that the limit on the neutrino mass is at about 1 eV, while it's hard to have neutrinos travelling with energies under 10^6 eV. In other words, the gamma factor is huge and they're always ultrarelativistic, travelling practically at "c". Another point is that the mass differences are really, really small, of the order of 0.01 eV. This is ridiculously small; so small that the uncertainty principle makes it possible for one state to "tunnel" to the other. I really can't go any deeper than that without resorting to quantum field theory. I can only say that standard QM is not compatible with relativity: Schrödinger's equation comes from the classical Hamiltonian, for example. To take special relativity into account, you need a different set of equations (Dirac's), which use the relativistic Hamiltonian. In this particular case the result is the same using Dirac, Schrödinger or the full QFT, but the three-line Schrödinger solution becomes a full-page Dirac calculation, or ten pages of QFT. In this particular case, unfortunately, the best I can do is say "trust me, it works; you'll see it when you get more background".

by Steve Max:
The time-dependent Schrödinger equation doesn't apply to massless particles. It was never intended to; it isn't relativistic. Try to apply a simple boost and you'll see it's not Poincaré invariant. The main point is that you get the same probabilities if you use a relativistic theory, but you need a lot of work to get there. Oscillations work and happen in QFT, which is Poincaré invariant and assumes special relativity. I can't find any references in a quick search, but I did all the (quite painful) calculations a long time ago to make sure it works. It's one of those cases where the added complexity of relativistic quantum field theory doesn't change the results from a simple Schrödinger solution.
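To make the energy and baseline dependence discussed in this thread concrete, here is a minimal Python sketch (not part of the original thread) of the standard two-flavour vacuum oscillation formula P = sin^2(2θ) · sin^2(1.267 Δm² L / E), with Δm² in eV², L in km and E in GeV. The parameter values are illustrative assumptions only, chosen to be of the order of the atmospheric mass splitting and the CERN-to-Gran-Sasso baseline relevant for OPERA; the full three-flavour treatment with the five parameters mentioned above follows the same pattern.

```python
import numpy as np

def two_flavour_oscillation_probability(delta_m2_ev2, theta_rad, L_km, E_GeV):
    """Vacuum two-flavour appearance probability, e.g. P(nu_mu -> nu_tau).

    Standard approximation:
        P = sin^2(2*theta) * sin^2(1.267 * delta_m2 * L / E)
    with delta_m2 in eV^2, L in km and E in GeV.
    """
    return (np.sin(2.0 * theta_rad) ** 2
            * np.sin(1.267 * delta_m2_ev2 * L_km / E_GeV) ** 2)

# Illustrative parameters only (roughly "atmospheric" values)
delta_m2 = 2.4e-3          # eV^2; note sqrt(2.4e-3) ~ 0.05 eV, the tiny scale quoted above
theta = np.radians(45.0)   # near-maximal mixing angle
E = 17.0                   # GeV, of the order of the CNGS beam energy
L = 730.0                  # km, roughly the CERN-to-Gran-Sasso baseline

print(two_flavour_oscillation_probability(delta_m2, theta, L, E))
```

Setting delta_m2 to zero makes the probability vanish for every baseline and energy, which is the point made above: without a mass-squared difference between the eigenstates there is no flavour oscillation.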