book_volume | chapter_number | chapter_title | section_number | section_title | section_text |
---|---|---|---|---|---|
1 | 38 | The Relation of Wave and Particle Viewpoints | 5 | Energy levels | We have talked about the atom in its lowest possible energy condition, but it turns out that the electron can do other things. It can jiggle and wiggle in a more energetic manner, and so there are many different possible motions for the atom. According to quantum mechanics, in a stationary condition there can only be definite energies for an atom. We make a diagram (Fig. 38–9) in which we plot the energy vertically, and we make a horizontal line for each allowed value of the energy. When the electron is free, i.e., when its energy is positive, it can have any energy; it can be moving at any speed. But bound energies are not arbitrary. The atom must have one or another out of a set of allowed values, such as those in Fig. 38–9. Now let us call the allowed values of the energy $E_0$, $E_1$, $E_2$, $E_3$. If an atom is initially in one of these “excited states,” $E_1$, $E_2$, etc., it does not remain in that state forever. Sooner or later it drops to a lower state and radiates energy in the form of light. The frequency of the light that is emitted is determined by conservation of energy plus the quantum-mechanical understanding that the frequency of the light is related to the energy of the light by (38.1). Therefore the frequency of the light which is liberated in a transition from energy $E_3$ to energy $E_1$ (for example) is \begin{equation} \label{Eq:I:38:14} \omega_{31}=(E_3-E_1)/\hbar. \end{equation} This, then, is a characteristic frequency of the atom and defines a spectral emission line. Another possible transition would be from $E_3$ to $E_0$. That would have a different frequency \begin{equation} \label{Eq:I:38:15} \omega_{30}=(E_3-E_0)/\hbar. \end{equation} Another possibility is that if the atom were excited to the state $E_1$ it could drop to the ground state $E_0$, emitting a photon of frequency \begin{equation} \label{Eq:I:38:16} \omega_{10}=(E_1-E_0)/\hbar. 
\end{equation} The reason we bring up three transitions is to point out an interesting relationship. It is easy to see from (38.14), (38.15), and (38.16) that \begin{equation} \label{Eq:I:38:17} \omega_{30}=\omega_{31}+\omega_{10}. \end{equation} In general, if we find two spectral lines, we shall expect to find another line at the sum of the frequencies (or the difference in the frequencies), and that all the lines can be understood by finding a series of levels such that every line corresponds to the difference in energy of some pair of levels. This remarkable coincidence in spectral frequencies was noted before quantum mechanics was discovered, and it is called the Ritz combination principle. This is again a mystery from the point of view of classical mechanics. Let us not belabor the point that classical mechanics is a failure in the atomic domain; we seem to have demonstrated that pretty well. We have already talked about quantum mechanics as being represented by amplitudes which behave like waves, with certain frequencies and wave numbers. Let us observe how it comes about from the point of view of amplitudes that the atom has definite energy states. This is something we cannot understand from what has been said so far, but we are all familiar with the fact that confined waves have definite frequencies. For instance, if sound is confined to an organ pipe, or anything like that, then there is more than one way that the sound can vibrate, but for each such way there is a definite frequency. Thus an object in which the waves are confined has certain resonance frequencies. It is therefore a property of waves in a confined space—a subject which we will discuss in detail with formulas later on—that they exist only at definite frequencies. And since the general relation exists between frequencies of the amplitude and energy, we are not surprised to find definite energies associated with electrons bound in atoms. |
|
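The sum rule (38.17) can be sketched numerically. The level energies below are made-up illustrative values, not real atomic data; only the combination structure matters:

```python
# Check of the Ritz combination principle, Eq. (38.17):
# since every emission frequency is a difference of two level
# energies, omega_30 must equal omega_31 + omega_10.
# The level energies below are made-up illustrative values (joules).

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

E0 = -2.18e-18   # hypothetical ground state
E1 = -0.545e-18  # hypothetical excited state
E3 = -0.136e-18  # hypothetical higher excited state

def omega(e_upper, e_lower):
    """Angular frequency of the emitted photon, Eq. (38.14)."""
    return (e_upper - e_lower) / HBAR

w31 = omega(E3, E1)
w10 = omega(E1, E0)
w30 = omega(E3, E0)

# Two observed lines predict a third at the sum of their frequencies:
assert abs(w30 - (w31 + w10)) < 1e-9 * w30
```

Whatever numbers are chosen for the levels, the combination holds identically, which is exactly why the Ritz principle was discoverable from spectra alone.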
1 | 38 | The Relation of Wave and Particle Viewpoints | 6 | Philosophical implications | Let us consider briefly some philosophical implications of quantum mechanics. As always, there are two aspects of the problem: one is the philosophical implication for physics, and the other is the extrapolation of philosophical matters to other fields. When philosophical ideas associated with science are dragged into another field, they are usually completely distorted. Therefore we shall confine our remarks as much as possible to physics itself. First of all, the most interesting aspect is the idea of the uncertainty principle; making an observation affects the phenomenon. It has always been known that making observations affects a phenomenon, but the point is that the effect cannot be disregarded or minimized or decreased arbitrarily by rearranging the apparatus. When we look for a certain phenomenon we cannot help but disturb it in a certain minimum way, and the disturbance is necessary for the consistency of the viewpoint. The observer was sometimes important in prequantum physics, but only in a rather trivial sense. The problem has been raised: if a tree falls in a forest and there is nobody there to hear it, does it make a noise? A real tree falling in a real forest makes a sound, of course, even if nobody is there. Even if no one is present to hear it, there are other traces left. The sound will shake some leaves, and if we were careful enough we might find somewhere that some thorn had rubbed against a leaf and made a tiny scratch that could not be explained unless we assumed the leaf were vibrating. So in a certain sense we would have to admit that there is a sound made. We might ask: was there a sensation of sound? No, sensations have to do, presumably, with consciousness. And whether ants are conscious and whether there were ants in the forest, or whether the tree was conscious, we do not know. Let us leave the problem in that form. 
Another thing that people have emphasized since quantum mechanics was developed is the idea that we should not speak about those things which we cannot measure. (Actually relativity theory also said this.) Unless a thing can be defined by measurement, it has no place in a theory. And since an accurate value of the momentum of a localized particle cannot be defined by measurement it therefore has no place in the theory. The idea that this is what was the matter with classical theory is a false position. It is a careless analysis of the situation. Just because we cannot measure position and momentum precisely does not a priori mean that we cannot talk about them. It only means that we need not talk about them. The situation in the sciences is this: A concept or an idea which cannot be measured or cannot be referred directly to experiment may or may not be useful. It need not exist in a theory. In other words, suppose we compare the classical theory of the world with the quantum theory of the world, and suppose that it is true experimentally that we can measure position and momentum only imprecisely. The question is whether the ideas of the exact position of a particle and the exact momentum of a particle are valid or not. The classical theory admits the ideas; the quantum theory does not. This does not in itself mean that classical physics is wrong. When the new quantum mechanics was discovered, the classical people—which included everybody except Heisenberg, Schrödinger, and Born—said: “Look, your theory is not any good because you cannot answer certain questions like: what is the exact position of a particle?, which hole does it go through?, and some others.” Heisenberg’s answer was: “I do not need to answer such questions because you cannot ask such a question experimentally.” It is that we do not have to. 
Consider two theories (a) and (b); (a) contains an idea that cannot be checked directly but which is used in the analysis, and the other, (b), does not contain the idea. If they disagree in their predictions, one could not claim that (b) is false because it cannot explain this idea that is in (a), because that idea is one of the things that cannot be checked directly. It is always good to know which ideas cannot be checked directly, but it is not necessary to remove them all. It is not true that we can pursue science completely by using only those concepts which are directly subject to experiment. In quantum mechanics itself there is a wave function amplitude, there is a potential, and there are many constructs that we cannot measure directly. The basis of a science is its ability to predict. To predict means to tell what will happen in an experiment that has never been done. How can we do that? By assuming that we know what is there, independent of the experiment. We must extrapolate the experiments to a region where they have not been done. We must take our concepts and extend them to places where they have not yet been checked. If we do not do that, we have no prediction. So it was perfectly sensible for the classical physicists to go happily along and suppose that the position—which obviously means something for a baseball—meant something also for an electron. It was not stupidity. It was a sensible procedure. Today we say that the law of relativity is supposed to be true at all energies, but someday somebody may come along and say how stupid we were. We do not know where we are “stupid” until we “stick our neck out,” and so the whole idea is to put our neck out. And the only way to find out that we are wrong is to find out what our predictions are. It is absolutely necessary to make constructs. We have already made a few remarks about the indeterminacy of quantum mechanics. 
That is, that we are unable now to predict what will happen in physics in a given physical circumstance which is arranged as carefully as possible. If we have an atom that is in an excited state and so is going to emit a photon, we cannot say when it will emit the photon. It has a certain amplitude to emit the photon at any time, and we can predict only a probability for emission; we cannot predict the future exactly. This has given rise to all kinds of nonsense and questions on the meaning of freedom of will, and of the idea that the world is uncertain. Of course we must emphasize that classical physics is also indeterminate, in a sense. It is usually thought that this indeterminacy, that we cannot predict the future, is an important quantum-mechanical thing, and this is said to explain the behavior of the mind, feelings of free will, etc. But if the world were classical—if the laws of mechanics were classical—it is not quite obvious that the mind would not feel more or less the same. It is true classically that if we knew the position and the velocity of every particle in the world, or in a box of gas, we could predict exactly what would happen. And therefore the classical world is deterministic. Suppose, however, that we have a finite accuracy and do not know exactly where just one atom is, say to one part in a billion. Then as it goes along it hits another atom, and because we did not know the position better than to one part in a billion, we find an even larger error in the position after the collision. And that is amplified, of course, in the next collision, so that if we start with only a tiny error it rapidly magnifies to a very great uncertainty. To give an example: if water falls over a dam, it splashes. If we stand nearby, every now and then a drop will land on our nose. This appears to be completely random, yet such a behavior would be predicted by purely classical laws. 
The exact position of all the drops depends upon the precise wigglings of the water before it goes over the dam. How? The tiniest irregularities are magnified in falling, so that we get complete randomness. Obviously, we cannot really predict the position of the drops unless we know the motion of the water absolutely exactly. Speaking more precisely, given an arbitrary accuracy, no matter how precise, one can find a time long enough that we cannot make predictions valid for that long a time. Now the point is that this length of time is not very large. It is not that the time is millions of years if the accuracy is one part in a billion. The time goes, in fact, only logarithmically with the error, and it turns out that in only a very, very tiny time we lose all our information. If the accuracy is taken to be one part in billions and billions and billions—no matter how many billions we wish, provided we do stop somewhere—then we can find a time less than the time it took to state the accuracy—after which we can no longer predict what is going to happen! It is therefore not fair to say that from the apparent freedom and indeterminacy of the human mind, we should have realized that classical “deterministic” physics could not ever hope to understand it, and to welcome quantum mechanics as a release from a “completely mechanistic” universe. For already in classical mechanics there was indeterminability from a practical point of view. |
|
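The point that classical predictability is lost in a time that grows only logarithmically with the accuracy can be sketched with exact arithmetic. The growth factor of 10 per collision is an arbitrary illustrative choice:

```python
from fractions import Fraction

# Sketch of the argument above: if each collision multiplies a small
# position error by a fixed factor (10 per collision is an arbitrary
# illustrative choice), then the number of collisions over which
# prediction remains possible grows only logarithmically with the
# initial accuracy.
GROWTH = 10

def collisions_until_unpredictable(digits_of_accuracy):
    """Collisions before an initial error of one part in
    10**digits_of_accuracy grows to order 1 (exact arithmetic)."""
    error = Fraction(1, GROWTH ** digits_of_accuracy)
    collisions = 0
    while error < 1:
        error *= GROWTH
        collisions += 1
    return collisions

# A billion-fold improvement in accuracy (9 more digits) buys only
# 9 more collisions' worth of predictability:
assert collisions_until_unpredictable(18) - collisions_until_unpredictable(9) == 9
```

Stating the accuracy takes more symbols than the extra predictability is worth, which is the sense in which the classical world was already indeterminate in practice.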
1 | 39 | The Kinetic Theory of Gases | 1 | Properties of matter | With this chapter we begin a new subject which will occupy us for some time. It is the first part of the analysis of the properties of matter from the physical point of view, in which, recognizing that matter is made out of a great many atoms, or elementary parts, which interact electrically and obey the laws of mechanics, we try to understand why various aggregates of atoms behave the way they do. It is obvious that this is a difficult subject, and we emphasize at the beginning that it is in fact an extremely difficult subject, and that we have to deal with it differently than we have dealt with the other subjects so far. In the case of mechanics and in the case of light, we were able to begin with a precise statement of some laws, like Newton’s laws, or the formula for the field produced by an accelerating charge, from which a whole host of phenomena could be essentially understood, and which would produce a basis for our understanding of mechanics and of light from that time on. That is, we may learn more later, but we do not learn different physics, we only learn better methods of mathematical analysis to deal with the situation. We cannot use this approach effectively in studying the properties of matter. We can discuss matter only in a most elementary way; it is much too complicated a subject to analyze directly from its specific basic laws, which are none other than the laws of mechanics and electricity. But these are a bit too far away from the properties we wish to study; it takes too many steps to get from Newton’s laws to the properties of matter, and these steps are, in themselves, fairly complicated. We will now start to take some of these steps, but while many of our analyses will be quite accurate, they will eventually get less and less accurate. We will have only a rough understanding of the properties of matter. 
One of the reasons that we have to perform the analysis so imperfectly is that the mathematics of it requires a deep understanding of the theory of probability; we are not going to want to know where every atom is actually moving, but rather, how many move here and there on the average, and what the odds are for different effects. So this subject involves a knowledge of the theory of probability, and our mathematics is not yet quite ready and we do not want to strain it too hard. Secondly, and more important from a physical standpoint, the actual behavior of the atoms is not according to classical mechanics, but according to quantum mechanics, and a correct understanding of the subject cannot be attained until we understand quantum mechanics. Here, unlike the case of billiard balls and automobiles, the difference between the classical mechanical laws and the quantum-mechanical laws is very important and very significant, so that many things that we will deduce by classical physics will be fundamentally incorrect. Therefore there will be certain things to be partially unlearned; however, we shall indicate in every case when a result is incorrect, so that we will know just where the “edges” are. One of the reasons for discussing quantum mechanics in the preceding chapters was to give an idea as to why, more or less, classical mechanics is incorrect in the various directions. Why do we deal with the subject now at all? Why not wait half a year, or a year, until we know the mathematics of probability better, and we learn a little quantum mechanics, and then we can do it in a more fundamental way? The answer is that it is a difficult subject, and the best way to learn is to do it slowly! The first thing to do is to get some idea, more or less, of what ought to happen in different circumstances, and then, later, when we know the laws better, we will formulate them better. 
Anyone who wants to analyze the properties of matter in a real problem might want to start by writing down the fundamental equations and then try to solve them mathematically. Although there are people who try to use such an approach, these people are the failures in this field; the real successes come to those who start from a physical point of view, people who have a rough idea where they are going and then begin by making the right kind of approximations, knowing what is big and what is small in a given complicated situation. These problems are so complicated that even an elementary understanding, although inaccurate and incomplete, is worthwhile having, and so the subject will be one that we shall go over again and again, each time with more and more accuracy, as we go through our course in physics. Another reason for beginning the subject right now is that we have already used many of these ideas in, for example, chemistry, and we have even heard of some of them in high school. It is interesting to know the physical basis for these things. As an interesting example, we all know that equal volumes of gases, at the same pressure and temperature, contain the same number of molecules. The law of combining volumes, that when two gases combine in a chemical reaction the volumes needed always stand in simple integral proportions, was understood ultimately by Avogadro to mean that equal volumes have equal numbers of molecules. Now why do they have equal numbers of molecules? Can we deduce from Newton’s laws that the number of molecules should be equal? We shall address ourselves to that specific matter in this chapter. In succeeding chapters, we shall discuss various other phenomena involving pressures, volumes, temperature, and heat. We shall also find that the subject can be attacked from a nonatomic point of view, and that there are many interrelationships of the properties of substances. For instance, when we compress something, it heats; if we heat it, it expands. 
There is a relationship between these two facts which can be deduced independently of the machinery underneath. This subject is called thermodynamics. The deepest understanding of thermodynamics comes, of course, from understanding the actual machinery underneath, and that is what we shall do: we shall take the atomic viewpoint from the beginning and use it to understand the various properties of matter and the laws of thermodynamics. Let us, then, discuss the properties of gases from the standpoint of Newton’s laws of mechanics. |
|
1 | 39 | The Kinetic Theory of Gases | 2 | The pressure of a gas | First, we know that a gas exerts a pressure, and we must clearly understand what this is due to. If our ears were a few times more sensitive, we would hear a perpetual rushing noise. Evolution has not developed the ear to that point, because it would be useless if it were so much more sensitive—we would hear a perpetual racket. The reason is that the eardrum is in contact with the air, and air is a lot of molecules in perpetual motion and these bang against the eardrums. In banging against the eardrums they make an irregular tattoo—boom, boom, boom—which we do not hear because the atoms are so small, and the sensitivity of the ear is not quite enough to notice it. The result of this perpetual bombardment is to push the drum away, but of course there is an equal perpetual bombardment of atoms on the other side of the eardrum, so the net force on it is zero. If we were to take the air away from one side, or change the relative amounts of air on the two sides, the eardrum would then be pushed one way or the other, because the amount of bombardment on one side would be greater than on the other. We sometimes feel this uncomfortable effect when we go up too fast in an elevator or an airplane, especially if we also have a bad cold (when we have a cold, inflammation closes the tube which connects the air on the inside of the eardrum with the outside air through the throat, so that the two pressures cannot readily equalize). In considering how to analyze the situation quantitatively, we imagine that we have a volume of gas in a box, at one end of which is a piston which can be moved (Fig. 39–1). We would like to find out what force on the piston results from the fact that there are atoms in this box. The volume of the box is $V$, and as the atoms move around inside the box with various velocities they bang against the piston. Suppose there is nothing, a vacuum, on the outside of the piston. What of it? 
If the piston were left alone, and nobody held onto it, each time it got banged it would pick up a little momentum and it would gradually get pushed out of the box. So in order to keep it from being pushed out of the box, we have to hold it with a force $F$. The problem is, how much force? One way of expressing the force is to talk about the force per unit area: if $A$ is the area of the piston, then the force on the piston will be written as a number times the area. We define the pressure, then, as equal to the force that we have to apply on a piston, divided by the area of the piston:\begin{equation} \label{Eq:I:39:1} P = F/A. \end{equation} To make sure we understand the idea (we have to derive it for another purpose anyway), the differential work $dW$ done on the gas in compressing it by moving the piston in a differential amount $-dx$ would be the force times the distance that we compress it, which, according to (39.1), would be the pressure times the area, times the distance, which is equal to minus the pressure times the change in the volume: \begin{equation} \label{Eq:I:39:2} dW = F(-dx) = -PA\,dx = -P\,dV. \end{equation} (The area $A$ times the distance $dx$ is the volume change.) The minus sign is there because, as we compress it, we decrease the volume; if we think about it we can see that if a gas is compressed, work is done on it. How much force do we have to apply to balance the banging of the molecules? The piston receives from each collision a certain amount of momentum. A certain amount of momentum per second will pour into the piston, and it will start to move. To keep it from moving, we must pour back into it the same amount of momentum per second from our force. Of course, the force is the amount of momentum per second that we must pour in. There is another way to put it: if we let go of the piston it will pick up speed because of the bombardments; with each collision we get a little more speed, and the speed thus accelerates. 
The rate at which the piston picks up speed, or accelerates, is proportional to the force on it. So we see that the force, which we already have said is the pressure times the area, is equal to the momentum per second delivered to the piston by the colliding molecules. To calculate the momentum per second is easy—we can do it in two parts: first, we find the momentum delivered to the piston by one particular atom in a collision with the piston, then we have to multiply by the number of collisions per second that the atoms have with the wall. The force will be the product of these two factors. Now let us see what the two factors are: In the first place, we shall suppose that the piston is a perfect “reflector” for the atoms. If it is not, the whole theory is wrong, and the piston will start to heat up and things will change, but eventually, when equilibrium has set in, the net result is that the collisions are effectively perfectly elastic. On the average, every particle that comes in leaves with the same energy. So we shall imagine that the gas is in a steady condition, and we lose no energy to the piston because the piston is standing still. In those circumstances, if a particle comes in with a certain speed, it comes out with the same speed and, we will say, with the same mass. If $\FLPv$ is the velocity of an atom, and $v_x$ is the $x$-component of $\FLPv$, then $mv_x$ is the $x$-component of momentum “in”; but we also have an equal component of momentum “out,” and so the total momentum delivered to the piston by the particle, in one collision, is $2mv_x$, because it is “reflected.” Now, we need the number of collisions made by the atoms in a second, or in a certain amount of time $dt$; then we divide by $dt$. How many atoms are hitting? Let us suppose that there are $N$ atoms in the volume $V$, or $n = N/V$ in each unit volume. 
To find how many atoms hit the piston, we note that, given a certain amount of time $t$, if a particle has a certain velocity toward the piston it will hit during the time $t$, provided it is close enough. If it is too far away, it goes only part way toward the piston in the time $t$, but does not reach the piston. Therefore it is clear that only those molecules which are within a distance $v_xt$ from the piston are going to hit the piston in the time $t$. Thus the number of collisions in a time $t$ is equal to the number of atoms which are in the region within a distance $v_xt$, and since the area of the piston is $A$, the volume occupied by the atoms which are going to hit the piston is $v_xtA$. But the number of atoms that are going to hit the piston is that volume times the number of atoms per unit volume, $nv_xtA$. Of course we do not want the number that hit in a time $t$, we want the number that hit per second, so we divide by the time $t$, to get $nv_xA$. (This time $t$ could be made very short; if we feel we want to be more elegant, we call it $dt$, then differentiate, but it is the same thing.) So we find that the force is \begin{equation} \label{Eq:I:39:3} F = nv_xA\cdot 2mv_x. \end{equation} See, the force is proportional to the area, if we keep the particle density fixed as we change the area! The pressure is then \begin{equation} \label{Eq:I:39:4} P = 2nmv_x^2. \end{equation} Now we notice a little trouble with this analysis: First, all the molecules do not have the same velocity, and they do not move in the same direction. So, all the $v_x^2$’s are different! So what we must do, of course, is to take an average of the $v_x^2$’s, since each one makes its own contribution. What we want is the square of $v_x$, averaged over all the molecules: \begin{equation} \label{Eq:I:39:5} P = nm\avg{v_x^2}. \end{equation} Did we forget to include the factor $2$? No; of all the atoms, only half are headed toward the piston. 
The other half are headed the other way, so the number of atoms per unit volume that are hitting the piston is only $n/2$. Now as the atoms bounce around, it is clear that there is nothing special about the “$x$-direction”; the atoms may also be moving up and down, back and forth, in and out. Therefore it is going to be true that $\avg{v_x^2}$, the average motion of the atoms in one direction, and the average in the other two directions, are all going to be equal: \begin{equation} \label{Eq:I:39:6} \avg{v_x^2} = \avg{v_y^2} = \avg{v_z^2}. \end{equation} It is only a matter of rather tricky mathematics to notice, therefore, that they are each equal to one-third of their sum, which is of course the square of the magnitude of the velocity: \begin{equation} \label{Eq:I:39:7} \avg{v_x^2} = \tfrac{1}{3}\avg{v_x^2 + v_y^2 + v_z^2} = \avg{v^2}/3. \end{equation} This has the advantage that we do not have to worry about any particular direction, and so we write our pressure formula again in this form: \begin{equation} \label{Eq:I:39:8} P = (\tfrac{2}{3})n\avg{mv^2/2}. \end{equation} The reason we wrote the last factor as $\avg{mv^2/2}$ is that this is the kinetic energy of the center-of-mass motion of the molecule. We find, therefore, that \begin{equation} \label{Eq:I:39:9} PV = N(\tfrac{2}{3})\avg{mv^2/2}. \end{equation} With this equation we can calculate how much the pressure is, if we know the speeds. As a very simple example let us take helium gas, or any other gas, like mercury vapor, or potassium vapor of high enough temperature, or argon, in which all the molecules are single atoms, for which we may suppose that there is no internal motion in the atom. If we had a complex molecule, there might be some internal motion, mutual vibrations, or something. We suppose that we may disregard that; this is actually a serious matter that we will have to come back to, but it turns out to be all right. 
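The isotropy step (39.6)–(39.7) and the pressure formula (39.9) can be checked with a Monte Carlo sketch. The gas parameters below are arbitrary illustrative numbers, roughly those of helium:

```python
import random

# Monte Carlo sketch of Eqs. (39.6)-(39.9): for molecules moving in
# random directions, <vx^2> = <v^2>/3, and PV = (2/3) N <mv^2/2>.
# The gas parameters below are arbitrary illustrative numbers.
random.seed(1)

N = 200_000      # number of molecules sampled
m = 6.6e-27      # mass of a helium atom, kg
sigma = 1100.0   # spread of each velocity component, m/s

vx2 = vy2 = vz2 = 0.0
for _ in range(N):
    vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
    vx2 += vx * vx; vy2 += vy * vy; vz2 += vz * vz

avg_vx2 = vx2 / N
avg_v2 = (vx2 + vy2 + vz2) / N

# Isotropy, Eq. (39.7): each direction carries one third of <v^2>
assert abs(avg_vx2 - avg_v2 / 3) < 0.02 * avg_v2

# Pressure from Eq. (39.9), P = (2/3)(N/V)<mv^2/2>, for a volume V
V = 1e-3  # m^3
P = (2.0 / 3.0) * (N / V) * (0.5 * m * avg_v2)
```

Nothing in the check depends on the Gaussian choice; any distribution of directions with no preferred axis gives the same one-third.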
We suppose that the internal motion of the atoms can be disregarded, and therefore, for this purpose, that the kinetic energy of the center-of-mass motion is all the energy there is. So for a monatomic gas, the kinetic energy is the total energy. In general, we are going to call $U$ the total energy (it is sometimes called the total internal energy—we may wonder why, since there is no external energy to a gas), i.e., all the energy of all the molecules in the gas, or the object, whatever it is. For a monatomic gas we will suppose that the total energy $U$ is equal to a number of atoms times the average kinetic energy of each, because we are disregarding any possibility of excitation or motion inside the atoms themselves. Then, in these circumstances, we would have \begin{equation} \label{Eq:I:39:10} PV = \tfrac{2}{3}U. \end{equation} Incidentally, we can stop here and find the answer to the following question: Suppose that we take a can of gas and compress the gas slowly, how much pressure do we need to squeeze the volume down? It is easy to find out, since the pressure is $\tfrac{2}{3}$ the energy divided by $V$. As we squeeze it down, we do work on the gas and we thereby increase the energy $U$. So we are going to have some kind of a differential equation: If we start out in a given circumstance with a certain energy and a certain volume, we then know the pressure. Now we start to squeeze, but the moment we do, the energy $U$ increases and the volume $V$ decreases, so the pressure goes up. So, we have to solve a differential equation, and we will solve it in a moment. We must first emphasize, however, that as we are compressing this gas, we are supposing that all the work goes into increasing the energy of the atoms inside. We may ask, “Isn’t that necessary? Where else could it go?” It turns out that it can go another place. 
There are what we call “heat leaks” through the walls: the hot (i.e., fast-moving) atoms that bombard the walls, heat the walls, and energy goes away. We shall suppose for the present that this is not the case. For somewhat wider generality, although we are still making some very special assumptions about our gas, we shall write, not $PV = \tfrac{2}{3}U$, but \begin{equation} \label{Eq:I:39:11} PV = (\gamma - 1)U. \end{equation} It is written $(\gamma - 1)$ times $U$ for conventional reasons, because we will deal with a few other cases later where the number in front of $U$ will not be $\tfrac{2}{3}$, but will be a different number. So, in order to do the thing in general, we call it $\gamma - 1$, because people have been calling it that for almost one hundred years. This $\gamma$, then, is $\tfrac{5}{3}$ for a monatomic gas like helium, because $\tfrac{5}{3} - 1$ is $\tfrac{2}{3}$. We have already noticed that when we compress a gas the work done is $-P\,dV$. A compression in which there is no heat energy added or removed is called an adiabatic compression, from the Greek a (not) $+$ dia (through) $+$ bainein (to go). (The word adiabatic is used in physics in several ways, and it is sometimes hard to see what is common about them.) That is, for an adiabatic compression all the work done goes into changing the internal energy. That is the key—that there are no other losses of energy—for then we have $P\,dV = -dU$. But since $U = PV/(\gamma - 1)$, we may write \begin{equation} \label{Eq:I:39:12} dU = (P\,dV + V\,dP)/(\gamma - 1). \end{equation} So we have $P\,dV = -(P\,dV + V\,dP)/(\gamma - 1)$, or, rearranging the terms, $\gamma P\,dV = -V\,dP$, or \begin{equation} \label{Eq:I:39:13} (\gamma\,dV/V) + (dP/P) = 0. \end{equation} Fortunately, assuming that $\gamma$ is constant, as it is for a monatomic gas, we can integrate this: it gives $\gamma\ln V + \ln P = \ln C$, where $\ln C$ is the constant of integration. 
If we take the exponential of both sides, we get the law \begin{equation} \label{Eq:I:39:14} PV^\gamma = C\text{ (a constant)}. \end{equation} In other words, under adiabatic conditions, where the temperature rises as we compress because no heat is being lost, the pressure times the volume to the $\tfrac{5}{3}$ power is a constant for a monatomic gas! Although we derived it theoretically, this is, in fact, the way monatomic gases behave experimentally.
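The adiabatic law can also be checked numerically: integrate the energy balance $P\,dV = -dU$ in small steps, using $U = PV/(\gamma - 1)$ to recover the pressure at each step, and watch $PV^\gamma$ hold steady. A small sketch in arbitrary units (the step size and stopping volume are chosen only for illustration):

```python
# Numerically integrate an adiabatic compression of a monatomic gas
# (gamma = 5/3) and check that P * V**gamma stays constant.
gamma = 5.0 / 3.0

P, V = 1.0, 1.0               # arbitrary starting pressure and volume
C0 = P * V**gamma             # the predicted invariant

dV = -1.0e-6                  # compress in small volume steps
while V > 0.5:
    U = P * V / (gamma - 1.0) # internal energy, as in Eq. (39.11)
    U += -P * dV              # work done on the gas: dU = -P dV
    V += dV
    P = (gamma - 1.0) * U / V # recover the pressure from U and V

print(P * V**gamma / C0)      # stays very close to 1
```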
39–3 Compressibility of radiation

We may give one other example of the kinetic theory of a gas, one which is not used in chemistry so much, but is used in astronomy. We have a large number of photons in a box in which the temperature is very high. (The box is, of course, the gas in a very hot star. The sun is not hot enough; there are still too many atoms, but at still higher temperatures in certain very hot stars, we may neglect the atoms and suppose that the only objects that we have in the box are photons.) Now then, a photon has a certain momentum $\FLPp$. (We always find that we are in terrible trouble when we do kinetic theory: $p$ is the pressure, but $p$ is the momentum; $v$ is the volume, but $v$ is the velocity; $T$ is the temperature, but $T$ is the kinetic energy or the time or the torque; one must keep one’s wits about one!) This $\FLPp$ is momentum, it is a vector. Going through the same analysis as before, it is the $x$-component of the vector $\FLPp$ which generates the “kick,” and twice the $x$-component of the vector $\FLPp$ is the momentum which is given in the kick. Thus $2p_x$ replaces $2mv_x$, and in evaluating the number of collisions, $v_x$ is still $v_x$, so when we get all the way through, we find that the pressure in Eq. (39.4) is, instead, \begin{equation} \label{Eq:I:39:15} P = 2np_xv_x. \end{equation} Then, in the averaging, it becomes $n$ times the average of $p_xv_x$ (the same factor of $2$) and, finally, putting in the other two directions, we find \begin{equation} \label{Eq:I:39:16} PV = N\avg{\FLPp\cdot\FLPv}/3. \end{equation} This checks with the formula (39.9), because the momentum is $m\FLPv$; it is a little more general, that is all. The pressure times the volume is the total number of atoms times $\tfrac{1}{3}(\FLPp\cdot\FLPv)$, averaged. Now, for photons, what is $\FLPp\cdot\FLPv$?
The momentum and the velocity are in the same direction, and the velocity is the speed of light, so this is the momentum of each of the objects, times the speed of light. The momentum times the speed of light of every photon is its energy: $E = pc$, so these terms are the energies of each of the photons, and we should, of course, take an average energy, times the number of photons. So we have $\tfrac{1}{3}$ of the energy inside the gas: \begin{equation} \label{Eq:I:39:17} PV = U/3\text{ (photon gas)}. \end{equation} For photons, then, since we have $\tfrac{1}{3}$ in front, $(\gamma - 1)$ in (39.11) is $\tfrac{1}{3}$, or $\gamma = \tfrac{4}{3}$, and we have discovered that radiation in a box obeys the law \begin{equation} \label{Eq:I:39:18} PV^{4/3} = C. \end{equation} So we know the compressibility of radiation! That is what is used in an analysis of the contribution of radiation pressure in a star, that is how we calculate it, and how it changes when we compress it. What wonderful things are already within our power!
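A small numerical comparison of the two adiabats (arbitrary units): squeezing either gas to half its volume raises the pressure by the factor $2^\gamma$, which is larger for the monatomic gas than for the photon gas, so the matter is the "stiffer" of the two.

```python
# Halve the volume adiabatically and compare the pressure rise for a
# monatomic gas (P V^{5/3} = C) and a photon gas (P V^{4/3} = C).
def pressure_after_halving(gamma, P1=1.0):
    # P1 * V1**gamma = P2 * (V1/2)**gamma  =>  P2 = P1 * 2**gamma
    return P1 * 2.0**gamma

P_monatomic = pressure_after_halving(5.0 / 3.0)  # ~3.17
P_photon = pressure_after_halving(4.0 / 3.0)     # ~2.52
print(P_monatomic, P_photon)
```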
39–4 Temperature and kinetic energy

So far we have not dealt with temperature; we have purposely been avoiding the temperature. As we compress a gas, we know that the energy of the molecules increases, and we are used to saying that the gas gets hotter; we would like to understand what this has to do with the temperature. If we try to do the experiment, not adiabatically but at what we call constant temperature, what are we doing? We know that if we take two boxes of gas and let them sit next to each other long enough, even if at the start they were at what we call different temperatures, they will in the end come to the same temperature. Now what does that mean? That means that they get to a condition that they would get to if we left them alone long enough! What we mean by equal temperature is just that—the final condition when things have been sitting around interacting with each other long enough. Let us consider, now, what happens if we have two gases in containers separated by a movable piston as in Fig. 39–2 (just for simplicity we shall take two monatomic gases, say helium and neon). In container (1) the atoms have mass $m_1$, velocity $v_1$, and there are $n_1$ per unit volume, and in the other container the atoms have mass $m_2$, velocity $v_2$, there are $n_2$ atoms per unit volume. What are the conditions for equilibrium? Obviously, the bombardment from the left side must be such that it moves the piston to the right and compresses the other gas until its pressure builds up, and the thing will thus slosh back and forth, and will gradually come to rest at a place where the pressures are equal on both sides. So we can arrange that the pressures are equal; that just means that the internal energies per unit volume are equal, or that the numbers $n$ times the average kinetic energies on each side are equal. What we have to try to prove, eventually, is that the numbers themselves are equal.
So far, all we know is that the numbers times the kinetic energies are equal, \begin{equation*} n_1\avg{m_1v_1^2/2} = n_2\avg{m_2v_2^2/2}, \end{equation*} from (39.8), because the pressures are equal. We must realize that this is not the only condition over the long run, but something else must happen more slowly as the true complete equilibrium corresponding to equal temperatures sets in. To see the idea, suppose that the pressure on the left side were developed by having a very high density but a low velocity. By having a large $n$ and a small $v$, we can get the same pressure as by having a small $n$ and a large $v$. The atoms may be moving slowly but be packed nearly solidly, or there may be fewer but they are hitting harder. Will it stay like that forever? At first we might think so, but then we think again and find we have forgotten one important point. That is, that the intermediate piston does not receive a steady pressure; it wiggles, just like the eardrum that we were first talking about, because the bangings are not absolutely uniform. There is not a perpetual, steady pressure, but a tattoo—the pressure varies, and so the thing jiggles. Suppose that the atoms on the right side are not jiggling much, but those on the left are few and far between and very energetic. The piston will, now and then, get a big impulse from the left, and will be driven against the slow atoms on the right, giving them more speed. (As each atom collides with the piston, it either gains or loses energy, depending upon whether the piston is moving one way or the other when the atom strikes it.) So, as a result of the collisions, the piston finds itself jiggling, jiggling, jiggling, and this shakes the other gas—it gives energy to the other atoms, and they build up faster motions, until they balance the jiggling that the piston is giving to them. 
The system comes to some equilibrium where the piston is moving at such a mean square speed that it picks up energy from the atoms at about the same rate as it puts energy back into them. So the piston picks up a certain mean irregularity in speed, and it is our problem to find it. When we do find it, we can solve our problem better, because the gases will adjust their velocities until the rate at which they are trying to pour energy into each other through the piston will become equal. It is quite difficult to figure out the details of the piston in this particular circumstance; although it is ideally simple to understand, it turns out to be a little harder to analyze. Before we analyze that, let us analyze another problem in which we have a box of gas but now we have two different kinds of molecules in it, having masses $m_1$ and $m_2$, velocities $v_1$ and $v_2$, and so forth; there is now a much more intimate relationship. If all of the No. $2$ molecules are standing still, that condition is not going to last, because they get kicked by the No. $1$ molecules and so pick up speed. If they are all going much faster than the No. $1$ molecules, then maybe that will not last either—they will pass the energy back to the No. $1$ molecules. So when both gases are in the same box, the problem is to find the rule that determines the relative speeds of the two. This is still a very difficult problem, but we will solve it as follows. First we consider the following sub-problem (again this is one of those cases where—never mind the derivation—in the end the result is very simple to remember, but the derivation is just ingenious). Let us suppose that we have two molecules, of different mass, colliding, and that the collision is viewed in the center-of-mass (CM) system. In order to remove a complication, we look at the collision in the CM. 
As we know from the laws of collision, by the conservation of momentum and energy, after the molecules collide the only way they can move is such that each maintains its own original speed—and they just change their direction. So we have an average collision that looks like that in Fig. 39–3. Suppose, for a moment, that we watch all the collisions with the CM at rest. Suppose we imagine that they are all initially moving horizontally. Of course, after the first collision some of them are moving at an angle. In other words, if they were all going horizontally, then at least some would later be moving vertically. Now in some other collision, they would be coming in from another direction, and then they would be twisted at still another angle. So even if they were completely organized in the beginning, they would get sprayed around at all angles, and then the sprayed ones would get sprayed some more, and sprayed some more, and sprayed some more. Ultimately, what will be the distribution? Answer: It will be equally likely to find any pair moving in any direction in space. After that further collisions could not change the distribution. They are equally likely to go in all directions, but how do we say that? There is of course no likelihood that they will go in any specific direction, because a specific direction is too exact, so we have to talk about per unit “something.” The idea is that any area on a sphere centered at a collision point will have just as many molecules going through it as go through any other equal area on the sphere. So the result of the collisions will be to distribute the directions so that equal areas on a sphere will have equal probabilities. Incidentally, if we just want to discuss the original direction and some other direction an angle $\theta$ from it, it is an interesting property that the differential area of a sphere of unit radius is $\sin\theta\,d\theta$ times $2\pi$ (see Fig. 32–1). 
And $\sin\theta\,d\theta$ is the same as the differential of $-\cos\theta$. So what it means is that the cosine of the angle $\theta$ between any two directions is equally likely to be anything from $-1$ to $+1$. Next, we have to worry about the actual case, where we do not have the collision in the CM system, but we have two atoms which are coming together with vector velocities $\FLPv_1$ and $\FLPv_2$. What happens now? We can analyze this collision with the vector velocities $\FLPv_1$ and $\FLPv_2$ in the following way: We first say that there is a certain CM; the velocity of the CM is given by the “average” velocity, with weights proportional to the masses, so the velocity of the CM is $\FLPv_{\text{CM}} =(m_1\FLPv_1 + m_2\FLPv_2)/(m_1 + m_2)$. If we watch this collision in the CM system, then we see a collision just like that in Fig. 39–3, with a certain relative velocity $\FLPw$ coming in. The relative velocity is just $\FLPv_1 - \FLPv_2$. Now the idea is that, first, the whole CM is moving, and in the CM there is a relative velocity $\FLPw$, and the molecules collide and come off in some new direction. All this happens while the CM keeps right on moving, without any change. Now then, what is the distribution resulting from this? From our previous argument we conclude this: that at equilibrium, all directions for $\FLPw$ are equally likely, relative to the direction of the motion of the CM.1 There will be no particular correlation, in the end, between the direction of the motion of the relative velocity and that of the motion of the CM. Of course, if there were, the collisions would spray it about, so it is all sprayed around. So the cosine of the angle between $\FLPw$ and $\FLPv_{\text{CM}}$ is zero on the average. That is, \begin{equation} \label{Eq:I:39:19} \avg{\FLPw\cdot\FLPv_{\text{CM}}} = 0. 
\end{equation} But $\FLPw\cdot\FLPv_{\text{CM}}$ can be expressed in terms of $\FLPv_1$ and $\FLPv_2$ as well: \begin{align} \FLPw\cdot\FLPv_{\text{CM}} &= \frac{(\FLPv_1 - \FLPv_2)\cdot(m_1\FLPv_1 + m_2\FLPv_2)} {m_1 + m_2}\notag\\[1.5ex] \label{Eq:I:39:20} &= \frac{(m_1v_1^2 - m_2v_2^2) + (m_2 - m_1)(\FLPv_1\cdot\FLPv_2)} {m_1 + m_2}. \end{align}
First, let us look at the $\FLPv_1\cdot\FLPv_2$; what is the average of $\FLPv_1\cdot\FLPv_2$? That is, what is the average of the component of velocity of one molecule in the direction of another? Surely there is just as much likelihood of finding any given molecule moving one way as another. The average of the velocity $\FLPv_2$ in any direction is zero. Certainly, then, in the direction of $\FLPv_1$, $\FLPv_2$ has zero average. So, the average of $\FLPv_1\cdot\FLPv_2$ is zero! Therefore, we conclude that the average of $m_1v_1^2$ must be equal to the average of $m_2v_2^2$. That is, the average kinetic energy of the two must be equal: \begin{equation} \label{Eq:I:39:21} \avg{\tfrac{1}{2}m_1v_1^2} = \avg{\tfrac{1}{2}m_2v_2^2}. \end{equation} If we have two kinds of atoms in a gas, it can be shown, and we presume to have shown it, that the average of the kinetic energy of one is the same as the average of the kinetic energy of the other, when they are both in the same gas in the same box in equilibrium. That means that the heavy ones will move slower than the light ones; this is easily shown by experimentation with “atoms” of different masses in an air trough. Now we would like to go one step further, and say that if we have two different gases separated in a box, they will also have equal average kinetic energy when they have finally come to equilibrium, even though they are not in the same box. We can make the argument in a number of ways. One way is to argue that if we have a fixed partition with a tiny hole in it (Fig. 39–4) so that one gas could leak out through the holes while the other could not, because the molecules are too big, and these had attained equilibrium, then we know that in one part, where they are mixed, they have the same average kinetic energy, but some come through the hole without loss of kinetic energy, so the average kinetic energy in the pure gas and in the mixture must be the same. 
That is not too satisfactory, because maybe there are no holes, for this kind of molecule, that separate one kind from the other. Let us now go back to the piston problem. We can give an argument which shows that the kinetic energy of this piston must also be $\tfrac{1}{2}m_2v_2^2$. Actually, that would be the kinetic energy due to the purely horizontal motion of the piston, so, forgetting its up and down motion, it will have to be the same as $\tfrac{1}{2}m_2v_{2_x}^2$. Likewise, from the equilibrium on the other side, we can prove that the kinetic energy of the piston is $\tfrac{1}{2}m_1v_{1_x}^2$. Although this is not in the middle of the gas, but is on one side of the gas, we can still make the argument, although it is a little more difficult, that the average kinetic energy of the piston and of the gas molecules are equal as a result of all the collisions. If this still does not satisfy us, we may make an artificial example by which the equilibrium is generated by an object which can be hit on all sides. Suppose that we have a short rod with a ball on each end sticking through the piston, on a frictionless sliding universal joint. Each ball is round, like one of the molecules, and can be hit on all sides. This whole object has a certain total mass, $m$. Now, we have the gas molecules with mass $m_1$ and mass $m_2$ as before. The result of the collisions, by the analysis that was made before, is that the kinetic energy of $m$ because of collisions with the molecules on one side must be $\tfrac{1}{2}m_1v_1^2$, on the average. Likewise, because of the collisions with molecules on the other side, it has to be $\tfrac{1}{2}m_2v_2^2$ on the average. So, therefore, both sides have to have the same kinetic energy when they are in thermal equilibrium. So, although we only proved it for a mixture of gases, it is easily extended to the case where there are two different, separate gases at the same temperature. 
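The equalization argued above can be watched in a toy simulation. It is a deliberate simplification, not the hard piston analysis: random pairs of molecules in a mixture collide, and each collision keeps the magnitude of the relative velocity while re-emitting it in a random CM-frame direction, exactly the collision of Fig. 39–3. All numbers (masses, speeds, counts) are arbitrary choices.

```python
import math
import random

random.seed(1)

def random_direction():
    # uniform direction on the unit sphere: cos(theta) is uniform on [-1, 1]
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

# 500 light molecules (mass 1) moving fast, 500 heavy ones (mass 10) at rest
masses = [1.0] * 500 + [10.0] * 500
velocities = [tuple(3.0 * c for c in random_direction()) for _ in range(500)]
velocities += [(0.0, 0.0, 0.0)] * 500

for _ in range(200000):
    i, j = random.sample(range(1000), 2)          # a random colliding pair
    mi, mj = masses[i], masses[j]
    M = mi + mj
    vcm = tuple((mi * a + mj * b) / M
                for a, b in zip(velocities[i], velocities[j]))
    w = math.dist(velocities[i], velocities[j])   # relative speed, preserved
    d = random_direction()                        # new CM-frame direction
    velocities[i] = tuple(c + (mj / M) * w * dc for c, dc in zip(vcm, d))
    velocities[j] = tuple(c - (mi / M) * w * dc for c, dc in zip(vcm, d))

def mean_ke(mass):
    kes = [0.5 * masses[k] * sum(c * c for c in velocities[k])
           for k in range(1000) if masses[k] == mass]
    return sum(kes) / len(kes)

print(mean_ke(1.0), mean_ke(10.0))  # the two averages come out nearly equal
```

Each collision conserves momentum and energy exactly, yet however lopsided the start, the two species end up sharing the kinetic energy equally per molecule, with the heavy ones moving correspondingly slower.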
Thus when we have two gases at the same temperature, the mean kinetic energies of the CM motions are equal. The mean molecular kinetic energy is a property only of the “temperature.” Being a property of the “temperature,” and not of the gas, we can use it as a definition of the temperature. The mean kinetic energy of a molecule is thus some function of the temperature. But who is to tell us what scale to use for the temperature? We may arbitrarily define the scale of temperature so that the mean energy is linearly proportional to the temperature. The best way to do it would be to call the mean energy itself “the temperature.” That would be the simplest possible function. Unfortunately, the scale of temperature has been chosen differently, so instead of calling it temperature directly we use a constant conversion factor between the energy of a molecule and a degree of absolute temperature called a degree Kelvin. The constant of proportionality is $k = 1.38\times10^{-23}$ joule for every degree Kelvin.2 So if $T$ is absolute temperature, our definition says that the mean molecular kinetic energy is $\tfrac{3}{2}kT$. (The $\tfrac{3}{2}$ is put in as a matter of convenience, so as to get rid of it somewhere else.) We point out that the kinetic energy associated with the component of motion in any particular direction is only $\tfrac{1}{2}kT$. The three independent directions that are involved make it $\tfrac{3}{2}kT$.
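As a numerical illustration, the definition $\avg{\tfrac{1}{2}mv^2} = \tfrac{3}{2}kT$ fixes the rms molecular speed once the mass is known; the only inputs below are $k$ and the mass of a nitrogen molecule.

```python
import math

k = 1.380649e-23           # Boltzmann's constant, J per degree Kelvin
T = 300.0                  # room temperature, K
m_N2 = 28.0 * 1.6605e-27   # mass of an N2 molecule, kg

mean_ke = 1.5 * k * T                  # mean kinetic energy per molecule, J
v_rms = math.sqrt(3.0 * k * T / m_N2)  # rms speed from (1/2) m <v^2> = (3/2) kT

print(mean_ke)   # about 6.2e-21 J
print(v_rms)     # about 517 m/s
```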
39–5 The ideal gas law

Now, of course, we can put our definition of temperature into Eq. (39.9) and so find the law for the pressure of gases as a function of the temperature: it is that the pressure times the volume is equal to the total number of atoms times the universal constant $k$, times the temperature: \begin{equation} \label{Eq:I:39:22} PV = NkT. \end{equation} Furthermore, at the same temperature and pressure and volume, the number of atoms is determined; it too is a universal constant! So equal volumes of different gases, at the same pressure and temperature, have the same number of molecules, because of Newton’s laws. That is an amazing conclusion! In practice, when dealing with molecules, because the numbers are so large, the chemists have artificially chosen a specific number, a very large number, and called it something else. They have a number which they call a mole. A mole is merely a handy number. Why they did not choose $10^{24}$ objects, so it would come out even, is a historical question. They happened to choose, for the convenient number of objects on which they standardize, $N_0 = 6.02\times10^{23}$ objects, and this is called a mole of objects. So instead of measuring the number of molecules in units, they measure in terms of numbers of moles.3 In terms of $N_0$ we can write the number of moles, times the number of atoms in a mole, times $kT$, and if we want to, we can take the number of atoms in a mole times $k$, which is a mole’s worth of $k$, and call it something else, and we do—we call it $R$. A mole’s worth of $k$ is $8.317$ joules: $R = N_0k = 8.317$ J${}\cdot{}$mole$^{-1}\cdot{}^\circ$K$^{-1}$. Thus we also find the gas law written as the number of moles (also called $N$) times $RT$, or the number of atoms, times $kT$: \begin{equation} \label{Eq:I:39:23} PV = NRT. \end{equation} It is the same thing, just a different scale for measuring numbers.
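A quick check of Eq. (39.22) with real numbers: the values of $k$ and $N_0$ are the only inputs, and the conditions chosen are the conventional $0^\circ$C and one atmosphere.

```python
k = 1.380649e-23   # Boltzmann's constant, J per degree Kelvin
N0 = 6.022e23      # molecules per mole
T = 273.15         # 0 degrees C, in kelvin
P = 101325.0       # one atmosphere, in pascal

# PV = NkT with N = N0 gives the volume of one mole of any (ideal) gas
V = N0 * k * T / P
print(V * 1000.0)  # about 22.4 liters
```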
We use $1$ as a unit, and chemists use $6\times10^{23}$ as a unit! We now make one more remark about our gas law, and that has to do with the law for objects other than monatomic molecules. We have dealt only with the CM motion of the atoms of a monatomic gas. What happens if there are forces present? First, consider the case that the piston is held by a horizontal spring, and there are forces on it. The exchange of jiggling motion between atoms and piston at any moment does not depend on where the piston is at that moment, of course. The equilibrium conditions are the same. No matter where the piston is, its speed of motion must be such that it passes energy to the molecules in just the right way. So it makes no difference about the spring. The speed at which the piston has to move, on the average, is the same. So our theorem, that the mean value of the kinetic energy in one direction is $\tfrac{1}{2}kT$, is true whether there are forces present or not. Consider, for example, a diatomic molecule composed of atoms $m_A$ and $m_B$. What we have proved is that the motion of the CM of part $A$ and that of part $B$ are such that $\avg{\tfrac{1}{2}m_Av_A^2} = \avg{\tfrac{1}{2}m_Bv_B^2} = \tfrac{3}{2}kT$. How can this be, if they are held together? Although they are held together, when they are spinning and turning in there, when something hits them, exchanging energy with them, the only thing that counts is how fast they are moving. That alone determines how fast they exchange energy in collisions. At the particular instant, the force is not an essential point. Therefore the same principle is right, even when there are forces. Let us prove, finally, that the gas law is consistent also with a disregard of the internal motion. We did not really include the internal motions before; we just treated a monatomic gas. 
But we shall now show that an entire object, considered as a single body of total mass $M$, has a velocity of the CM such that \begin{equation} \label{Eq:I:39:24} \left\langle\tfrac{1}{2}Mv_{\text{CM}}^2\right\rangle = \tfrac{3}{2}kT. \end{equation} In other words, we can consider either the separate pieces or the whole thing! Let us see the reason for that: The mass of the diatomic molecule is $M = m_A + m_B$, and the velocity of the center of mass is equal to $\FLPv_{\text{CM}} = (m_A\FLPv_A + m_B\FLPv_B)/M$. Now we need $\avg{v_{\text{CM}}^2}$. If we square $\FLPv_{\text{CM}}$, we get \begin{equation*} v_{\text{CM}}^2 = \frac{m_A^2v_A^2 + 2m_Am_B\FLPv_A\cdot\FLPv_B + m_B^2v_B^2}{M^2}. \end{equation*} Now we multiply $\tfrac{1}{2}M$ and take the average, and thus we get \begin{align*} \left\langle\tfrac{1}{2}Mv_{\text{CM}}^2\right\rangle &= \frac{m_A\tfrac{3}{2}kT + m_Am_B\avg{\FLPv_A\cdot\FLPv_B} + m_B\tfrac{3}{2}kT}{M}\\[.5ex] &= \tfrac{3}{2}kT + \frac{m_Am_B\avg{\FLPv_A\cdot\FLPv_B}}{M}. \end{align*} (We have used the fact that $(m_A + m_B)/M = 1$.) Now what is $\avg{\FLPv_A\cdot\FLPv_B}$? (It had better be zero!) To find out, let us use our assumption that the relative velocity, $\FLPw = \FLPv_A - \FLPv_B$ is not any more likely to point in one direction than in another—that is, that its average component in any direction is zero. Thus we assume that \begin{equation*} \avg{\FLPw\cdot\FLPv_{\text{CM}}} = 0. \end{equation*} But what is $\FLPw\cdot\FLPv_{\text{CM}}$? It is \begin{align*} \FLPw\cdot\FLPv_{\text{CM}} &= \frac{(\FLPv_A - \FLPv_B)\cdot (m_A\FLPv_A + m_B\FLPv_B)}{M}\\[.5ex] &= \frac{m_Av_A^2 + (m_B - m_A)(\FLPv_A\cdot\FLPv_B) - m_Bv_B^2}{M}. \end{align*} Therefore, since $\avg{m_Av_A^2} = \avg{m_Bv_B^2}$, the first and last terms cancel out on the average, and we are left with \begin{equation*} (m_B - m_A)\avg{\FLPv_A\cdot\FLPv_B} = 0. 
\end{equation*} Thus if $m_A \neq m_B$, we find that $\avg{\FLPv_A\cdot\FLPv_B} = 0$, and therefore that the bodily motion of the entire molecule, regarded as a single particle of mass $M$, has a kinetic energy, on the average, equal to $\tfrac{3}{2}kT$. Incidentally, we have also proved at the same time that the average kinetic energy of the internal motions of the diatomic molecule, disregarding the bodily motion of the CM, is $\tfrac{3}{2}kT$! For, the total kinetic energy of the parts of the molecule is $\tfrac{1}{2}m_Av_A^2 + \tfrac{1}{2}m_Bv_B^2$, whose average is $\tfrac{3}{2}kT + \tfrac{3}{2}kT$, or $3kT$. The kinetic energy of the center-of-mass motion is $\tfrac{3}{2}kT$, so the average kinetic energy of the rotational and vibratory motions of the two atoms inside the molecule is the difference, $\tfrac{3}{2}kT$. The theorem concerning the average energy of the CM motion is general: for any object considered as a whole, with forces present or no, for every independent direction of motion that there is, the average kinetic energy in that motion is $\tfrac{1}{2}kT$. These “independent directions of motion” are sometimes called the degrees of freedom of the system. The number of degrees of freedom of a molecule composed of $r$ atoms is $3r$, since each atom needs three coordinates to define its position. The entire kinetic energy of the molecule can be expressed either as the sum of the kinetic energies of the separate atoms, or as the sum of the kinetic energy of the CM motion plus the kinetic energy of the internal motions. The latter can sometimes be expressed as a sum of rotational kinetic energy of the molecule and vibrational energy, but this is an approximation. 
Our theorem, applied to the $r$-atom molecule, says that the molecule will have, on the average, $3rkT/2$ joules of kinetic energy, of which $\tfrac{3}{2}kT$ is kinetic energy of the center-of-mass motion of the entire molecule, and the rest, $\tfrac{3}{2}(r - 1)kT$, is internal vibrational and rotational kinetic energy.
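The result (39.24) can also be checked by sampling: draw each component of $\FLPv_A$ and $\FLPv_B$ from a Gaussian whose variance is $kT/m$ (which gives each atom $\avg{\tfrac{1}{2}mv^2} = \tfrac{3}{2}kT$), form the CM velocity, and average its kinetic energy. Units with $kT = 1$ and arbitrary masses are used in this sketch.

```python
import random

random.seed(3)

kT = 1.0
mA, mB = 1.0, 3.0   # arbitrary atomic masses
M = mA + mB

total = 0.0
n = 200000
for _ in range(n):
    # each velocity component is Gaussian with variance kT/m
    vA = [random.gauss(0.0, (kT / mA) ** 0.5) for _ in range(3)]
    vB = [random.gauss(0.0, (kT / mB) ** 0.5) for _ in range(3)]
    vcm = [(mA * a + mB * b) / M for a, b in zip(vA, vB)]
    total += 0.5 * M * sum(c * c for c in vcm)

print(total / n)   # close to (3/2) kT = 1.5
```

The cross term $\avg{\FLPv_A\cdot\FLPv_B}$ averages away in the sampling, just as it cancels in the algebra.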
40–1 The exponential atmosphere

We have discussed some of the properties of large numbers of intercolliding atoms. The subject is called kinetic theory, a description of matter from the point of view of collisions between the atoms. Fundamentally, we assert that the gross properties of matter should be explainable in terms of the motion of its parts. We limit ourselves for the present to conditions of thermal equilibrium, that is, to a subclass of all the phenomena of nature. The laws of mechanics which apply just to thermal equilibrium are called statistical mechanics, and in this section we want to become acquainted with some of the central theorems of this subject. We already have one of the theorems of statistical mechanics, namely, the mean value of the kinetic energy for any motion at the absolute temperature $T$ is $\tfrac{1}{2}kT$ for each independent motion, i.e., for each degree of freedom. That tells us something about the mean square velocities of the atoms. Our objective now is to learn more about the positions of the atoms, to discover how many of them are going to be in different places at thermal equilibrium, and also to go into a little more detail on the distribution of the velocities. Although we have the mean square velocity, we do not know how to answer a question such as how many of them are going three times faster than the root mean square, or how many of them are going one-quarter of the root mean square speed. Or have they all the same speed exactly? So, these are the two questions that we shall try to answer: How are the molecules distributed in space when there are forces acting on them, and how are they distributed in velocity? It turns out that the two questions are completely independent, and that the distribution of velocities is always the same.
We already received a hint of the latter fact when we found that the average kinetic energy is the same, $\tfrac{1}{2}kT$ per degree of freedom, no matter what forces are acting on the molecules. The distribution of the velocities of the molecules is independent of the forces, because the collision rates do not depend upon the forces. Let us begin with an example: the distribution of the molecules in an atmosphere like our own, but without the winds and other kinds of disturbance. Suppose that we have a column of gas extending to a great height, and at thermal equilibrium—unlike our atmosphere, which as we know gets colder as we go up. We could remark that if the temperature differed at different heights, we could demonstrate lack of equilibrium by connecting a rod to some balls at the bottom (Fig. 40–1), where they would pick up $\tfrac{1}{2}kT$ from the molecules there and would shake, via the rod, the balls at the top and those would shake the molecules at the top. So, ultimately, of course, the temperature becomes the same at all heights in a gravitational field. If the temperature is the same at all heights, the problem is to discover by what law the atmosphere becomes tenuous as we go up. If $N$ is the total number of molecules in a volume $V$ of gas at pressure $P$, then we know $PV = NkT$, or $P = nkT$, where $n = N/V$ is the number of molecules per unit volume. In other words, if we know the number of molecules per unit volume, we know the pressure, and vice versa: they are proportional to each other, since the temperature is constant in this problem. But the pressure is not constant, it must increase as the altitude is reduced, because it has to hold, so to speak, the weight of all the gas above it. That is the clue by which we may determine how the pressure changes with height. If we take a unit area at height $h$, then the vertical force from below, on this unit area, is the pressure $P$. 
The vertical force per unit area pushing down at a height $h + dh$ would be the same, in the absence of gravity, but here it is not, because the force from below must exceed the force from above by the weight of gas in the section between $h$ and $h + dh$. Now $mg$ is the force of gravity on each molecule, where $g$ is the acceleration due to gravity, and $n\,dh$ is the total number of molecules in the unit section. So this gives us the differential equation $P_{h + dh} - P_h =$ $dP =$ $-mgn\,dh$. Since $P = nkT$, and $T$ is constant, we can eliminate either $P$ or $n$, say $P$, and get \begin{equation*} \ddt{n}{h} = -\frac{mg}{kT}\,n \end{equation*} for the differential equation, which tells us how the density goes down as we go up in energy. We thus have an equation for the particle density $n$, which varies with height, but which has a derivative which is proportional to itself. Now a function which has a derivative proportional to itself is an exponential, and the solution of this differential equation is \begin{equation} \label{Eq:I:40:1} n = n_0e^{-mgh/kT}. \end{equation} Here the constant of integration, $n_0$, is obviously the density at $h = 0$ (which can be chosen anywhere), and the density goes down exponentially with height. Note that if we have different kinds of molecules with different masses, they go down with different exponentials. The ones which were heavier would decrease with altitude faster than the light ones. Therefore we would expect that because oxygen is heavier than nitrogen, as we go higher and higher in an atmosphere with nitrogen and oxygen the proportion of nitrogen would increase. This does not really happen in our own atmosphere, at least at reasonable heights, because there is so much agitation which mixes the gases back together again. It is not an isothermal atmosphere. 
Nevertheless, there is a tendency for lighter materials, like hydrogen, to dominate at very great heights in the atmosphere, because the lowest masses continue to exist, while the other exponentials have all died out (Fig. 40–2).
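Equation (40.1) is easy to evaluate numerically. Here is a small sketch, not part of the text, in which the constants $k$, $g$, the molecular masses, and the choice $T = 300$ K are standard illustrative values; it shows the heavier oxygen molecule thinning out faster than nitrogen in an isothermal column:

```python
import math

# Standard constants (not from the text): Boltzmann's constant (J/K),
# atomic mass unit (kg), g (m/s^2), and an assumed temperature (K).
k = 1.380649e-23
amu = 1.66054e-27
m_N2 = 28.0 * amu   # nitrogen molecule
m_O2 = 32.0 * amu   # oxygen molecule
g = 9.81
T = 300.0

def density_ratio(m, h):
    """n(h)/n(0) = exp(-m*g*h/kT) for an isothermal column, Eq. (40.1)."""
    return math.exp(-m * g * h / (k * T))

h = 10_000.0              # 10 km
r_N2 = density_ratio(m_N2, h)
r_O2 = density_ratio(m_O2, h)
enrichment = r_N2 / r_O2  # > 1: the proportion of N2 grows with height
```

At 10 km the nitrogen density has fallen to about a third of its ground value, the oxygen density slightly more, so the nitrogen fraction is enriched by roughly 17 percent.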
40–2 The Boltzmann law

Here we note the interesting fact that the numerator in the exponent of Eq. (40.1) is the potential energy of an atom. Therefore we can also state this particular law as: the density at any point is proportional to \begin{equation*} e^{-\text{the potential energy of each atom}/kT}. \end{equation*} That may be an accident, i.e., may be true only for this particular case of a uniform gravitational field. However, we can show that it is a more general proposition. Suppose that there were some kind of force other than gravity acting on the molecules in a gas. For example, the molecules may be charged electrically, and may be acted on by an electric field or another charge that attracts them. Or, because of the mutual attractions of the atoms for each other, or for the wall, or for a solid, or something, there is some force of attraction which varies with position and which acts on all the molecules. Now suppose, for simplicity, that the molecules are all the same, and that the force acts on each individual one, so that the total force on a piece of gas would be simply the number of molecules times the force on each one. To avoid unnecessary complication, let us choose a coordinate system with the $x$-axis in the direction of the force, $\FLPF$. In the same manner as before, if we take two parallel planes in the gas, separated by a distance $dx$, then the force on each atom, times the $n$ atoms per cm³ (the generalization of the previous $nmg$), times $dx$, must be balanced by the pressure change: $Fn\,dx = dP = kT\,dn$. Or, to put this law in a form which will be useful to us later, \begin{equation} \label{Eq:I:40:2} F = kT\,\ddt{}{x}\,(\ln n).
\end{equation} For the present, observe that $-F\,dx$ is the work we would do in taking a molecule from $x$ to $x + dx$, and if $F$ comes from a potential, i.e., if the work done can be represented by a potential energy at all, then this would also be the difference in the potential energy (P.E.). The negative differential of potential energy is the work done, $F\,dx$, and we find that $d(\ln n) = -d(\text{P.E.})/kT$, or, after integrating, \begin{equation} \label{Eq:I:40:3} n = (\text{constant})e^{-\text{P.E.}/kT}. \end{equation} Therefore what we noticed in a special case turns out to be true in general. (What if $F$ does not come from a potential? Then (40.2) has no solution at all. Energy can be generated, or lost by the atoms running around in cyclic paths for which the work done is not zero, and no equilibrium can be maintained at all. Thermal equilibrium cannot exist if the external forces on the atoms are not conservative.) Equation (40.3), known as Boltzmann’s law, is another of the principles of statistical mechanics: that the probability of finding molecules in a given spatial arrangement varies exponentially with the negative of the potential energy of that arrangement, divided by $kT$. This, then, could tell us the distribution of molecules: Suppose that we had a positive ion in a liquid, attracting negative ions around it, how many of them would be at different distances? If the potential energy is known as a function of distance, then the proportion of them at different distances is given by this law, and so on, through many applications.
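As a check on this argument, one can integrate $d(\ln n) = F\,dx/kT$ numerically for a force derived from a potential and compare with Eq. (40.3). The harmonic potential below, and the unit values of $kT$ and $c$, are arbitrary illustrative choices:

```python
import math

# Integrate d(ln n)/dx = F/kT from x = 0 to x = 1 by midpoint steps and
# compare with exp(-P.E./kT).  V(x) = c*x^2 is an illustrative potential.
kT = 1.0
c = 2.0
V = lambda x: c * x * x
F = lambda x: -2.0 * c * x          # F = -dV/dx

n_steps = 10_000
dx = 1.0 / n_steps
ln_n = 0.0                          # ln n at x = 0, where V = 0
for i in range(n_steps):
    ln_n += (F((i + 0.5) * dx) / kT) * dx

numerical = math.exp(ln_n)          # n(1)/n(0) from the differential equation
boltzmann = math.exp(-V(1.0) / kT)  # the closed form of Eq. (40.3)
```

The two agree to machine precision: the density ratio is $e^{-\text{P.E.}/kT}$ whatever potential generates the force.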
40–3 Evaporation of a liquid

In more advanced statistical mechanics one tries to solve the following important problem. Consider an assembly of molecules which attract each other, and suppose that the force between any two, say $i$ and $j$, depends only on their separation $r_{ij}$, and can be represented as the derivative of a potential function $V(r_{ij})$. Figure 40–3 shows a form such a function might have. For $r > r_0$, the energy decreases as the molecules come together, because they attract, and then the energy increases very sharply as they come still closer together, because they repel strongly, which is characteristic of the way molecules behave, roughly speaking. Now suppose we have a whole box full of such molecules, and we would like to know how they arrange themselves on the average. The answer is $e^{-\text{P.E.}/kT}$. The total potential energy in this case would be the sum over all the pairs, supposing that the forces are all in pairs (there may be three-body forces in more complicated things, but in electricity, for example, the potential energy is all in pairs). Then the probability for finding molecules in any particular combination of $r_{ij}$’s will be proportional to \begin{equation*} \exp\Bigl[-\sum_{i,j}V(r_{ij})/kT\Bigr]. \end{equation*} Now, if the temperature is very high, so that $kT \gg \abs{V(r_0)}$, the exponent is relatively small almost everywhere, and the probability of finding a molecule is almost independent of position. Let us take the case of just two molecules: the $e^{-\text{P.E.}/kT}$ would be the probability of finding them at various mutual distances $r$. Clearly, where the potential goes most negative, the probability is largest, and where the potential goes toward infinity, the probability is almost zero, which occurs for very small distances. That means that for such atoms in a gas, there is no chance that they are on top of each other, since they repel so strongly.
But there is a greater chance of finding them per unit volume at the point $r_0$ than at any other point. How much greater, depends on the temperature. If the temperature is very large compared with the difference in energy between $r = r_0$ and $r = \infty$, the exponential is always nearly unity. In this case, where the mean kinetic energy (about $kT$) greatly exceeds the potential energy, the forces do not make much difference. But as the temperature falls, the probability of finding the molecules at the preferred distance $r_0$ gradually increases relative to the probability of finding them elsewhere and, in fact, if $kT$ is much less than $\abs{V(r_0)}$, we have a relatively large positive exponent in that neighborhood. In other words, in a given volume they are much more likely to be at the distance of minimum energy than far apart. As the temperature falls, the atoms fall together, clump in lumps, and reduce to liquids, and solids, and molecules, and as you heat them up they evaporate. The requirements for the determination of exactly how things evaporate, exactly how things should happen in a given circumstance, involve the following. First, to discover the correct molecular-force law $V(r)$, which must come from something else, quantum mechanics, say, or experiment. But, given the law of force between the molecules, to discover what a billion molecules are going to do merely consists of studying the function $e^{-\sum V_{ij}/kT}$. Surprisingly enough, since it is such a simple function and such an easy idea, given the potential, the labor is enormously complicated; the difficulty is the tremendous number of variables. In spite of such difficulties, the subject is quite exciting and interesting. It is often called an example of a “many-body problem,” and it really has been a very interesting thing. 
In that single formula must be contained all the details, for example, about the solidification of gas, or the forms of the crystals that the solid can take, and people have been trying to squeeze it out, but the mathematical difficulties are very great, not in writing the law, but in dealing with so enormous a number of variables. That, then, is the distribution of particles in space. That is the end of classical statistical mechanics, practically speaking, because if we know the forces, we can, in principle, find the distribution in space, and the distribution of velocities is something that we can work out once and for all, and is not something that is different for the different cases. The great problems are in getting particular information out of our formal solution, and that is the main subject of classical statistical mechanics.
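The temperature dependence just described is easy to see numerically for a single pair. The sketch below uses a Lennard-Jones shape as a stand-in for the curve of Fig. 40–3; the form of the potential and the unit choices are illustrative assumptions, not from the text:

```python
import math

# Relative probability per unit volume of a pair at separation r goes as
# exp(-V(r)/kT).  A Lennard-Jones well of depth eps with minimum at r0
# stands in for Fig. 40-3; eps, r0, and kT are in arbitrary units.
eps, r0 = 1.0, 1.0

def V(r):
    s = (r0 / r) ** 6
    return eps * (s * s - 2.0 * s)   # V(r0) = -eps, V(infinity) = 0

def rel_prob(r, kT):
    return math.exp(-V(r) / kT)

# Preference for the separation r0 over a large separation:
hot  = rel_prob(r0, kT=10.0) / rel_prob(3.0 * r0, kT=10.0)  # mild
cold = rel_prob(r0, kT=0.1)  / rel_prob(3.0 * r0, kT=0.1)   # enormous
overlap = rel_prob(0.7 * r0, kT=0.1)  # essentially zero: strong repulsion
```

When $kT \gg \abs{V(r_0)}$ the preference for $r_0$ is only a few percent; when $kT \ll \abs{V(r_0)}$ it is overwhelming, which is the clumping into liquids and solids.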
40–4 The distribution of molecular speeds

Now we go on to discuss the distribution of velocities, because sometimes it is interesting or useful to know how many of them are moving at different speeds. In order to do that, we may make use of the facts which we discovered with regard to the gas in the atmosphere. We take it to be a perfect gas, as we have already assumed in writing the potential energy, disregarding the energy of mutual attraction of the atoms. The only potential energy that we included in our first example was gravity. We would, of course, have something more complicated if there were forces between the atoms. Thus we assume that there are no forces between the atoms and, for a moment, disregard collisions also, returning later to the justification of this. Now we saw that there are fewer molecules at the height $h$ than there are at the height $0$; according to formula (40.1), they decrease exponentially with height. How can there be fewer at greater heights? After all, do not all the molecules which are moving up at height $0$ arrive at $h$? No!, because some of those which are moving up at $0$ are going too slowly, and cannot climb the potential hill to $h$. With that clue, we can calculate how many must be moving at various speeds, because from (40.1) we know how many are moving with less than enough speed to climb a given distance $h$. Those are just the ones that account for the fact that the density at $h$ is lower than at $0$. Now let us put that idea a little more precisely: let us count how many molecules are passing from below to above the plane $h = 0$ (by calling it height${} = 0$, we do not mean that there is a floor there; it is just a convenient label, and there is gas at negative $h$).
These gas molecules are moving around in every direction, but some of them are moving through the plane, and at any moment a certain number per second of them are passing through the plane from below to above with different velocities. Now we note the following: if we call $u$ the velocity which is just needed to get up to the height $h$ (kinetic energy $mu^2/2 = mgh$), then the number of molecules per second which are passing upward through the lower plane in a vertical direction with velocity component greater than $u$ is exactly the same as the number which pass through the upper plane with any upward velocity. Those molecules whose vertical velocity does not exceed $u$ cannot get through the upper plane. So therefore we see that
\begin{equation*} \begin{pmatrix} \text{Number passing }h = 0\\ \text{ with }v_z > u \end{pmatrix} = \begin{pmatrix} \text{number passing }h = h\\ \text{ with }v_z > 0 \end{pmatrix}. \end{equation*} But the number which pass through $h$ with any velocity greater than $0$ is less than the number which pass through the lower height with any velocity greater than $0$, because the number of atoms is greater; that is all we need. We know already that the distribution of velocities is the same, after the argument we made earlier about the temperature being constant all the way through the atmosphere. So, since the velocity distributions are the same, and it is just that there are more atoms lower down, clearly the number $n_{> 0}(h)$, passing with positive velocity at height $h$, and the number $n_{> 0}(0)$, passing with positive velocity at height $0$, are in the same ratio as the densities at the two heights, which is $e^{-mgh/kT}$. But $n_{> 0}(h) = n_{> u}(0)$, and therefore we find that \begin{equation*} \frac{n_{> u}(0)}{n_{> 0}(0)} = e^{-mgh/kT} = e^{-mu^2/2kT}, \end{equation*} since $\tfrac{1}{2}mu^2 = mgh$. Thus, in words, the number of molecules per unit area per second passing the height $0$ with a $z$-component of velocity greater than $u$ is $e^{-mu^2/2kT}$ times the total number that are passing through the plane with velocity greater than zero. Now this is not only true at the arbitrarily chosen height $0$, but of course it is true at any other height, and thus the distributions of velocities are all the same! (The final statement does not involve the height $h$, which appeared only in the intermediate argument.) The result is a general proposition that gives us the distribution of velocities. 
It tells us that if we drill a little hole in the side of a gas pipe, a very tiny hole, so that the collisions are few and far between, i.e., are farther apart than the diameter of the hole, then the particles which are coming out will have different velocities, but the fraction of particles which come out at a velocity greater than $u$ is $e^{-mu^2/2kT}$. Now we return to the question about the neglect of collisions: Why does it not make any difference? We could have pursued the same argument, not with a finite height $h$, but with an infinitesimal height $h$, which is so small that there would be no room for collisions between $0$ and $h$. But that was not necessary: the argument is evidently based on an analysis of the energies involved, the conservation of energy, and in the collisions that occur there is an exchange of energies among the molecules. However, we do not really care whether we follow the same molecule if energy is merely exchanged with another molecule. So it turns out that even if the problem is analyzed more carefully (and it is more difficult, naturally, to do a rigorous job), it still makes no difference in the result. It is interesting that the velocity distribution we have found is just \begin{equation} \label{Eq:I:40:4} n_{> u} \propto e^{-\text{kinetic energy}/kT}. \end{equation} This way of describing the distribution of velocities, by giving the number of molecules that pass a given area with a certain minimum $z$-component, is not the most convenient way of giving the velocity distribution. For instance, inside the gas, one more often wants to know how many molecules are moving with a $z$-component of velocity between two given values, and that, of course, is not directly given by Eq. (40.4). We would like to state our result in the more conventional form, even though what we already have written is quite general. 
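The flux relation derived above, that the number crossing a plane per second with vertical velocity greater than $u$ falls off as $e^{-mu^2/2kT}$, can be checked by direct numerical integration; units with $m = kT = 1$ are an arbitrary choice:

```python
import math

# Trapezoidal check that the flux integral of v*exp(-m*v^2/2kT) above a
# lower limit u equals exp(-m*u^2/2kT) times the flux above zero.
a = 0.5   # m/(2kT) in our units

def flux_above(u, vmax=40.0, n=200_000):
    """Estimate the integral of v*exp(-a*v^2) from u to vmax (tail ~ 0)."""
    dv = (vmax - u) / n
    total = 0.0
    for i in range(n + 1):
        v = u + i * dv
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * v * math.exp(-a * v * v) * dv
    return total

ratio = flux_above(1.5) / flux_above(0.0)
predicted = math.exp(-a * 1.5 ** 2)   # e^{-m u^2/2kT} with u = 1.5
```

The numerically integrated ratio and the exponential agree to many digits, as the analytic integration of $v\,e^{-av^2}$ guarantees.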
Note that it is not possible to say that any molecule has exactly some stated velocity; none of them has a velocity exactly equal to $1.7962899173$ meters per second. So in order to make a meaningful statement, we have to ask how many are to be found in some range of velocities. We have to say how many have velocities between $1.796$ and $1.797$, and so on. In mathematical terms, let $f(u)\,du$ be the fraction of all the molecules which have velocities between $u$ and $u + du$ or, what is the same thing (if $du$ is infinitesimal), all that have a velocity $u$ within a range $du$. Figure 40–5 shows a possible form for the function $f(u)$, and the shaded part, of width $du$ and mean height $f(u)$, represents this fraction $f(u)\,du$. That is, the ratio of the shaded area to the total area of the curve is the relative proportion of molecules with velocity $u$ within $du$. If we define $f(u)$ so that the fraction having a velocity in this range is given directly by the shaded area, then the total area must be $100$ percent of them, that is, \begin{equation} \label{Eq:I:40:5} \int_{-\infty}^\infty f(u)\,du = 1. \end{equation} Now we have only to get this distribution by comparing it with the theorem we derived before. First we ask, what is the number of molecules passing through an area per second with a velocity greater than $u$, expressed in terms of $f(u)$? At first we might think it is merely the integral $\int_u^\infty f(u)\,du$, but it is not, because we want the number that are passing the area per second. The faster ones pass more often, so to speak, than the slower ones, and in order to express how many pass, you have to multiply by the velocity. (We discussed that in the previous chapter when we talked about the number of collisions.) In a given time $t$ the total number which pass through the surface is all of those which have been able to arrive at the surface, and the number which arrive come from a distance $ut$.
So the number of molecules which arrive is not simply the number which are there, but the number that are there per unit volume, multiplied by the distance that they sweep through in racing for the area through which they are supposed to go, and that distance is proportional to $u$. Thus we need the integral of $u$ times $f(u)\,du$, an infinite integral with a lower limit $u$, and this must be the same as we found before, namely $e^{-mu^2/2kT}$, with a proportionality constant which we will get later: \begin{equation} \label{Eq:I:40:6} \int_u^\infty uf(u)\,du = \text{const}\cdot e^{-mu^2/2kT}. \end{equation} Now if we differentiate the integral with respect to $u$, we get the thing that is inside the integral, i.e., the integrand (with a minus sign, since $u$ is the lower limit), and if we differentiate the other side, we get $u$ times the same exponential (and some constants). The $u$’s cancel and we find \begin{equation} \label{Eq:I:40:7} f(u)\,du = Ce^{-mu^2/2kT}\,du. \end{equation} We retain the $du$ on both sides as a reminder that it is a distribution, and it tells what the proportion is for velocity between $u$ and $u + du$. The constant $C$ must be so determined that the integral is unity, according to Eq. (40.5). Now we can prove that \begin{equation*} \int_{-\infty}^\infty e^{-x^2}\,dx = \sqrt{\pi}. \end{equation*} Using this fact, it is easy to find that $C = \sqrt{m/2\pi kT}$. Since velocity and momentum are proportional, we may say that the distribution of momenta is also proportional to $e^{-\text{K.E.}/kT}$ per unit momentum range. It turns out that this theorem is true in relativity too, if it is in terms of momentum, while if it is in velocity it is not, so it is best to learn it in momentum instead of in velocity: \begin{equation} \label{Eq:I:40:8} f(p)\,dp = Ce^{-\text{K.E.}/kT}\,dp.
\end{equation} So we find that the probabilities of different conditions of energy, kinetic and potential, are both given by $e^{-\text{energy}/kT}$, a very easy thing to remember and a rather beautiful proposition. So far we have, of course, only the distribution of the velocities “vertically.” We might want to ask, what is the probability that a molecule is moving in another direction? Of course these distributions are connected, and one can obtain the complete distribution from the one we have, because the complete distribution depends only on the square of the magnitude of the velocity, not upon the $z$-component. It must be something that is independent of direction, and there is only one function involved, the probability of different magnitudes. We have the distribution of the $z$-component, and therefore we can get the distribution of the other components from it. The result is that the probability is still proportional to $e^{-\text{K.E.}/kT}$, but now the kinetic energy involves three parts, $mv_x^2/2$, $mv_y^2/2$, and $mv_z^2/2$, summed in the exponent. Or we can write it as a product: \begin{equation} \begin{aligned} f(v_x,v_y,v_z)&\;dv_x dv_y dv_z\\[.5ex] &\kern{-3.5em}\propto e^{-mv_x^2/2kT}\!\!\cdot e^{-mv_y^2/2kT}\!\!\cdot e^{-mv_z^2/2kT}\,dv_x dv_y dv_z. \end{aligned} \label{Eq:I:40:9} \end{equation} You can see that this formula must be right because, first, it is a function only of $v^2$, as required, and second, the probability of various values of $v_z$, obtained by integrating over all $v_x$ and $v_y$, is just (40.7). But this one function (40.9) can do both those things!
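Both the normalization constant of Eq. (40.7) and the equipartition value $\tfrac{3}{2}kT$ implied by the product form (40.9) can be verified numerically; the sketch below uses units with $m = kT = 1$, an arbitrary choice:

```python
import math, random

# (1) Check that C = sqrt(m/2*pi*kT) normalizes f(u) of Eq. (40.7).
# (2) Sample each velocity component from the product form (40.9), i.e.
#     an independent Gaussian with variance kT/m, and check <KE> = (3/2)kT.
C = math.sqrt(1.0 / (2.0 * math.pi))
f = lambda u: C * math.exp(-u * u / 2.0)

# (1) trapezoidal integral over (-10, 10); the tails beyond are negligible
n, L = 100_000, 10.0
du = 2.0 * L / n
norm = sum((0.5 if i in (0, n) else 1.0) * f(-L + i * du) * du
           for i in range(n + 1))

# (2) Monte Carlo estimate of <KE> = <(vx^2 + vy^2 + vz^2)/2>
random.seed(0)
N = 200_000
mean_ke = sum(0.5 * (random.gauss(0, 1)**2 + random.gauss(0, 1)**2
                     + random.gauss(0, 1)**2) for _ in range(N)) / N
```

The integral comes out as unity, and the sampled mean kinetic energy comes out near $\tfrac{3}{2}$, i.e., $\tfrac{1}{2}kT$ per velocity component.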
40–5 The specific heats of gases

Now we shall look at some ways to test the theory, and to see how successful is the classical theory of gases. We saw earlier that if $U$ is the internal energy of $N$ molecules, then $PV =$ $NkT =$ $(\gamma - 1)U$ holds, sometimes, for some gases, maybe. We know this is also equal to $\tfrac{2}{3}$ of the kinetic energy of the center-of-mass motion of the atoms. If it is a monatomic gas, then the kinetic energy is equal to the internal energy, and therefore $\gamma - 1 = \tfrac{2}{3}$. But suppose it is, say, a more complicated molecule, that can spin and vibrate, and let us suppose (it turns out to be true according to classical mechanics) that the energies of the internal motions are also proportional to $kT$. Then at a given temperature, in addition to kinetic energy $\tfrac{3}{2}kT$, it has internal vibrational and rotational energies. So the total $U$ includes not just the kinetic energy, but also the rotational and vibrational energies, and we get a different value of $\gamma$. Technically, the best way to measure $\gamma$ is by measuring the specific heat, which is the change in energy with temperature. We will return to that approach later. For our present purposes, we may suppose $\gamma$ is found experimentally from the $PV^\gamma$ curve for adiabatic compression. Let us make a calculation of $\gamma$ for some cases. First, for a monatomic gas $U$ is the total energy, the same as the kinetic energy, and we know already that $\gamma$ should be $\tfrac{5}{3}$. For a diatomic gas, we may take, as an example, oxygen, hydrogen iodide, hydrogen, etc., and suppose that the diatomic gas can be represented as two atoms held together by some kind of force like the one of Fig. 40–3.
We may also suppose, and it turns out to be quite true, that at the temperatures that are of interest for the diatomic gas, the pairs of atoms tend strongly to be separated by $r_0$, the distance of potential minimum. If this were not true, if the probability were not strongly varying enough to make the great majority sit near the bottom, we would have to remember that oxygen gas is a mixture of O$_2$ and single oxygen atoms in a nontrivial ratio. We know that there are, in fact, very few single oxygen atoms, which means that the potential energy minimum is very much greater in magnitude than $kT$, as we have seen. Since they are clustered strongly around $r_0$, the only part of the curve that is needed is the part near the minimum, which may be approximated by a parabola. A parabolic potential implies a harmonic oscillator, and in fact, to an excellent approximation, the oxygen molecule can be represented as two atoms connected by a spring. Now what is the total energy of this molecule at temperature $T$? We know that for each of the two atoms, each of the kinetic energies should be $\tfrac{3}{2}kT$, so the kinetic energy of both of them is $\tfrac{3}{2}kT + \tfrac{3}{2}kT$. We can also put this in a different way: the same $\tfrac{3}{2}$ plus $\tfrac{3}{2}$ can also be looked at as kinetic energy of the center of mass ($\tfrac{3}{2}$), kinetic energy of rotation ($\tfrac{2}{2}$), and kinetic energy of vibration ($\tfrac{1}{2}$). We know that the kinetic energy of vibration is $\frac{1}{2}$, since there is just one dimension involved and each degree of freedom has $\tfrac{1}{2}kT$. Regarding the rotation, it can turn about either of two axes, so there are two independent motions. We assume that the atoms are some kind of points, and cannot spin about the line joining them; this is something to bear in mind, because if we get a disagreement, maybe that is where the trouble is. But we have one more thing, which is the potential energy of vibration; how much is that? 
In a harmonic oscillator the average kinetic energy and average potential energy are equal, and therefore the potential energy of vibration is $\tfrac{1}{2}kT$, also. The grand total of energy is $U = \tfrac{7}{2}kT$, or $kT$ is $\tfrac{2}{7}U$ per molecule. That means, then, that $\gamma$ is $\tfrac{9}{7}$ instead of $\tfrac{5}{3}$, i.e., $\gamma = 1.286$. We may compare these numbers with the relevant measured values shown in Table 40–1. Looking first at helium, which is a monatomic gas, we find very nearly $\tfrac{5}{3}$, and the error is probably experimental, although at such a low temperature there may be some forces between the atoms. Krypton and argon, both monatomic, agree also within the accuracy of the experiment. We turn to the diatomic gases and find hydrogen with $1.404$, which does not agree with the theory, $1.286$. Oxygen, $1.399$, is very similar, but again not in agreement. Hydrogen iodide again is similar at $1.40$. It begins to look as though the right answer is $1.40$, but it is not, because if we look further at bromine we see $1.32$, and at iodine we see $1.30$. Since $1.30$ is reasonably close to $1.286$, iodine may be said to agree rather well, but oxygen is far off. So here we have a dilemma. We have it right for one molecule, we do not have it right for another molecule, and we may need to be pretty ingenious in order to explain both. Let us look further at a still more complicated molecule with large numbers of parts, for example, C$_2$H$_6$, which is ethane. It has eight different atoms, and they are all vibrating and rotating in various combinations, so the total amount of internal energy must be an enormous number of $kT$’s, at least $12kT$ for kinetic energy alone, and $\gamma - 1$ must be very close to zero, or $\gamma$ almost exactly $1$. In fact, it is lower, but $1.22$ is not so much lower, and is higher than the $1\tfrac{1}{12}$ calculated from the kinetic energy alone, and it is just not understandable!
Furthermore, the whole mystery is deep, because the diatomic molecule cannot be made rigid by a limit. Even if we made the couplings stiffer indefinitely, although it might not vibrate much, it would nevertheless keep vibrating. The vibrational energy inside is still $kT$, since it does not depend on the strength of the coupling. But if we could imagine absolute rigidity, stopping all vibration to eliminate a variable, then we would get $U = \tfrac{5}{2}kT$ and $\gamma = 1.40$ for the diatomic case. This looks good for H$_2$ or O$_2$. On the other hand, we would still have problems, because $\gamma$ for either hydrogen or oxygen varies with temperature! From the measured values shown in Fig. 40–6, we see that for H$_2$, $\gamma$ varies from about $1.6$ at $-185^\circ$C to $1.3$ at $2000^\circ$C. The variation is more substantial in the case of hydrogen than for oxygen, but nevertheless, even in oxygen, $\gamma$ tends definitely to go up as we go down in temperature.
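The values of $\gamma$ quoted above all follow from counting the $\tfrac{1}{2}kT$ terms per molecule, since $\gamma - 1 = NkT/U$; a few lines of arithmetic confirm them:

```python
from fractions import Fraction

# gamma - 1 = NkT/U and U = (f/2)NkT per molecule, so gamma = 1 + 2/f,
# where f counts the (1/2)kT terms of energy per molecule.
def gamma(f):
    return 1 + Fraction(2, f)

g_mono   = gamma(3)  # translation only: 5/3
g_vibrot = gamma(7)  # 3 translation + 2 rotation + 2 vibration (KE and PE)
g_rigid  = gamma(5)  # vibration "frozen out": 7/5 = 1.40
```

The three cases give $\tfrac{5}{3}$, $\tfrac{9}{7} \approx 1.286$, and $\tfrac{7}{5} = 1.40$, exactly the numbers the text compares against Table 40–1.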
40–6 The failure of classical physics

So, all in all, we might say that we have some difficulty. We might try some force law other than a spring, but it turns out that anything else will only make $U$ higher. If we include more forms of energy, $\gamma$ approaches unity more closely, contradicting the facts. All the classical theoretical things that one can think of will only make it worse. The fact is that there are electrons in each atom, and we know from their spectra that there are internal motions; each of the electrons should have at least $\tfrac{1}{2}kT$ of kinetic energy, and something for the potential energy, so when these are added in, $\gamma$ gets still smaller. It is ridiculous. It is wrong. The first great paper on the dynamical theory of gases was by Maxwell in 1859. On the basis of ideas we have been discussing, he was able accurately to explain a great many known relations, such as Boyle’s law, the diffusion theory, the viscosity of gases, and things we shall talk about in the next chapter. He listed all these great successes in a final summary, and at the end he said, “Finally, by establishing a necessary relation between the motions of translation and rotation (he is talking about the $\tfrac{1}{2}kT$ theorem) of all particles not spherical, we proved that a system of such particles could not possibly satisfy the known relation between the two specific heats.” He is referring to $\gamma$ (which we shall see later is related to two ways of measuring specific heat), and he says we know we cannot get the right answer. Ten years later, in a lecture, he said, “I have now put before you what I consider to be the greatest difficulty yet encountered by the molecular theory.” These words represent the first discovery that the laws of classical physics were wrong.
This was the first indication that there was something fundamentally impossible, because a rigorously proved theorem did not agree with experiment. About 1905, Sir James Hopwood Jeans and Lord Rayleigh (John William Strutt) were to talk about this puzzle again. One often hears it said that physicists at the latter part of the nineteenth century thought they knew all the significant physical laws and that all they had to do was to calculate more decimal places. Someone may have said that once, and others copied it. But a thorough reading of the literature of the time shows they were all worrying about something. Jeans said about this puzzle that it is a very mysterious phenomenon, and it seems as though as the temperature falls, certain kinds of motions “freeze out.” If we could assume that the vibrational motion, say, did not exist at low temperature and did exist at high temperature, then we could imagine that a gas might exist at a temperature sufficiently low that vibrational motion does not occur, so $\gamma = 1.40$, or a higher temperature at which it begins to come in, so $\gamma$ falls. The same might be argued for the rotation. If we can eliminate the rotation, say it “freezes out” at sufficiently low temperature, then we can understand the fact that the $\gamma$ of hydrogen approaches $1.66$ as we go down in temperature. How can we understand such a phenomenon? Of course that these motions “freeze out” cannot be understood by classical mechanics. It was only understood when quantum mechanics was discovered. Without proof, we may state the results for statistical mechanics of the quantum-mechanical theory. We recall that according to quantum mechanics, a system which is bound by a potential, for the vibrations, for example, will have a discrete set of energy levels, i.e., states of different energy. Now the question is: how is statistical mechanics to be modified according to quantum-mechanical theory? 
It turns out, interestingly enough, that although most problems are more difficult in quantum mechanics than in classical mechanics, problems in statistical mechanics are much easier in quantum theory! The simple result we have in classical mechanics, that $n = n_0e^{-\text{energy}/kT}$, becomes the following very important theorem: If the energies of the set of molecular states are called, say, $E_0$, $E_1$, $E_2$, …, $E_i$, …, then in thermal equilibrium the probability of finding a molecule in the particular state of having energy $E_i$ is proportional to $e^{-E_i/kT}$. That gives the probability of being in various states. In other words, the relative chance, the probability, of being in state $E_1$ relative to the chance of being in state $E_0$, is \begin{equation} \label{Eq:I:40:10} \frac{P_1}{P_0} = \frac{e^{-E_1/kT}}{e^{-E_0/kT}}, \end{equation} which, of course, is the same as \begin{equation} \label{Eq:I:40:11} n_1 = n_0e^{-(E_1 - E_0)/kT}, \end{equation} since $P_1 = n_1/N$ and $P_0 = n_0/N$. So it is less likely to be in a higher energy state than in a lower one. The ratio of the number of atoms in the upper state to the number in the lower state is $e$ raised to the power (minus the energy difference, over $kT$)—a very simple proposition. Now it turns out that for a harmonic oscillator the energy levels are evenly spaced. Calling the lowest energy $E_0 = 0$ (it actually is not zero, it is a little different, but it does not matter if we shift all energies by a constant), the first one is then $E_1 = \hbar\omega$, and the second one is $2\hbar\omega$, and the third one is $3\hbar\omega$, and so on. Now let us see what happens. We suppose we are studying the vibrations of a diatomic molecule, which we approximate as a harmonic oscillator. Let us ask what is the relative chance of finding a molecule in state $E_1$ instead of in state $E_0$. 
The answer is that the chance of finding it in state $E_1$, relative to that of finding it in state $E_0$, goes down as $e^{-\hbar\omega/kT}$. Now suppose that $kT$ is much less than $\hbar\omega$, and we have a low-temperature circumstance. Then the probability of its being in state $E_1$ is extremely small. Practically all the atoms are in state $E_0$. If we change the temperature but still keep it very small, then the chance of its being in state $E_1 = \hbar\omega$ remains infinitesimal—the energy of the oscillator remains nearly zero; it does not change with temperature so long as the temperature is much less than $\hbar\omega$. All oscillators are in the bottom state, and their motion is effectively “frozen”; there is no contribution of it to the specific heat. We can judge, then, from Table 40–1, that at $100^\circ$C, which is $373$ degrees absolute, $kT$ is much less than the vibrational energy in the oxygen or hydrogen molecules, but not so in the iodine molecule. The reason for the difference is that an iodine atom is very heavy, compared with hydrogen, and although the forces may be comparable in iodine and hydrogen, the iodine molecule is so heavy that the natural frequency of vibration is very low compared with the natural frequency of hydrogen. With $\hbar\omega$ higher than $kT$ at room temperature for hydrogen, but lower for iodine, only the latter, iodine, exhibits the classical vibrational energy. As we increase the temperature of a gas, starting from a very low value of $T$, with the molecules almost all in their lowest state, they gradually begin to have an appreciable probability to be in the second state, and then in the next state, and so on. When the probability is appreciable for many states, the behavior of the gas approaches that given by classical physics, because the quantized states become nearly indistinguishable from a continuum of energies, and the system can have almost any energy. 
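This freeze-out comparison can be made quantitative. The sketch below converts each molecule's vibrational frequency into a characteristic temperature $\hbar\omega/k$ and evaluates the Boltzmann ratio $N_1/N_0 = e^{-\hbar\omega/kT}$ of Eq. (40.11) at $100^\circ$C; the vibrational wavenumbers are approximate textbook spectroscopic values supplied here as assumptions, not taken from the lecture.

```python
import math

HC_OVER_K = 1.43877  # hc/k in cm*K (the second radiation constant)

# Approximate textbook vibrational wavenumbers in cm^-1 (assumed values):
molecules = {"H2": 4401.0, "O2": 1580.0, "I2": 214.0}

T = 373.0  # 100 C, the temperature of Table 40-1
for name, nu in molecules.items():
    theta = HC_OVER_K * nu        # hbar*omega/k in kelvins
    ratio = math.exp(-theta / T)  # N1/N0, Eq. (40.11)
    print(f"{name}: hbar*omega/kT = {theta / T:5.1f}, N1/N0 = {ratio:.2e}")
```

The output shows the point of the paragraph: for hydrogen and oxygen $\hbar\omega \gg kT$ and essentially every molecule sits in the ground vibrational state, while for heavy, slowly vibrating iodine the first excited level is appreciably populated.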
Thus, as the temperature rises, we should again get the results of classical physics, as indeed seems to be the case in Fig. 40–6. It is possible to show in the same way that the rotational states of molecules are also quantized, but the states are so much closer together that in ordinary circumstances $kT$ is bigger than the spacing. Then many levels are excited, and the rotational kinetic energy in the system participates in the classical way. The one example where this is not quite true at room temperature is for hydrogen. This is the first time that we have really deduced, by comparison with experiment, that there was something wrong with classical physics, and we have looked for a resolution of the difficulty in quantum mechanics in much the same way as it was done originally. It took 30 or 40 years before the next difficulty was discovered, and that had to do again with statistical mechanics, but this time the mechanics of a photon gas. That problem was solved by Planck in the early years of the 20th century. |
|
1 | 41 | The Brownian Movement | 1 | Equipartition of energy | The Brownian movement was discovered in 1827 by Robert Brown, a botanist. While he was studying microscopic life, he noticed little particles of plant pollens jiggling around in the liquid he was looking at in the microscope, and he was wise enough to realize that these were not living, but were just little pieces of dirt moving around in the water. In fact he helped to demonstrate that this had nothing to do with life by getting from the ground an old piece of quartz in which there was some water trapped. It must have been trapped for millions and millions of years, but inside he could see the same motion. What one sees is that very tiny particles are jiggling all the time. This was later proved to be one of the effects of molecular motion, and we can understand it qualitatively by thinking of a great push ball on a playing field, seen from a great distance, with a lot of people underneath, all pushing the ball in various directions. We cannot see the people because we imagine that we are too far away, but we can see the ball, and we notice that it moves around rather irregularly. We also know, from the theorems that we have discussed in previous chapters, that the mean kinetic energy of a small particle suspended in a liquid or a gas will be $\tfrac{3}{2}kT$ even though it is very heavy compared with a molecule. If it is very heavy, that means that the speeds are relatively slow, but it turns out, actually, that the speed is not really so slow. In fact, we cannot see the speed of such a particle very easily because although the mean kinetic energy is $\tfrac{3}{2}kT$, which represents a speed of a millimeter or so per second for an object a micron or two in diameter, this is very hard to see even in a microscope, because the particle continuously reverses its direction and does not get anywhere. How far it does get we will discuss at the end of the present chapter. 
This problem was first solved by Einstein at the beginning of the 20th century. Incidentally, when we say that the mean kinetic energy of this particle is $\tfrac{3}{2}kT$, we claim to have derived this result from the kinetic theory, that is, from Newton’s laws. We shall find that we can derive all kinds of things—marvelous things—from the kinetic theory, and it is most interesting that we can apparently get so much from so little. Of course we do not mean that Newton’s laws are “little”—they are enough to do it, really—what we mean is that we did not do very much. How do we get so much out? The answer is that we have been perpetually making a certain important assumption, which is that if a given system is in thermal equilibrium at some temperature, it will also be in thermal equilibrium with anything else at the same temperature. For instance, if we wanted to see how a particle would move if it was really colliding with water, we could imagine that there was a gas present, composed of another kind of particle, little fine pellets that (we suppose) do not interact with water, but only hit the particle with “hard” collisions. Suppose the particle has a prong sticking out of it; all our pellets have to do is hit the prong. We know all about this imaginary gas of pellets at temperature $T$—it is an ideal gas. Water is complicated, but an ideal gas is simple. Now, our particle has to be in equilibrium with the gas of pellets. Therefore, the mean motion of the particle must be what we get for gaseous collisions, because if it were not moving at the right speed relative to the water but, say, was moving faster, that would mean that the pellets would pick up energy from it and get hotter than the water. But we had started them at the same temperature, and we assume that if a thing is once in equilibrium, it stays in equilibrium—parts of it do not get hotter and other parts colder, spontaneously. 
This proposition is true and can be proved from the laws of mechanics, but the proof is very complicated and can be established only by using advanced mechanics. It is much easier to prove in quantum mechanics than it is in classical mechanics. It was proved first by Boltzmann, but for now we simply take it to be true, and then we can argue that our particle has to have $\tfrac{3}{2}kT$ of energy if it is hit with artificial pellets, so it also must have $\tfrac{3}{2}kT$ when it is being hit with water at the same temperature and we take away the pellets; so it is $\tfrac{3}{2}kT$. It is a strange line of argument, but perfectly valid. In addition to the motion of colloidal particles for which the Brownian movement was first discovered, there are a number of other phenomena, both in the laboratory and in other situations, where one can see Brownian movement. If we are trying to build the most delicate possible equipment, say a very small mirror on a thin quartz fiber for a very sensitive ballistic galvanometer (Fig. 41–1), the mirror does not stay put, but jiggles all the time—all the time—so that when we shine a light on it and look at the position of the spot, we do not have a perfect instrument because the mirror is always jiggling. Why? Because the kinetic energy of rotation of this mirror has to be, on the average, $\tfrac{1}{2}kT$. What is the mean-square angle over which the mirror will wobble? Suppose we find the natural vibration period of the mirror by tapping on one side and seeing how long it takes to oscillate back and forth, and we also know the moment of inertia, $I$. We know the formula for the kinetic energy of rotation—it is given by Eq. (19.8): $T = \tfrac{1}{2}I\omega^2$. That is the kinetic energy, and the potential energy that goes with it will be proportional to the square of the angle—it is $V = \tfrac{1}{2}\alpha\theta^2$. 
But, if we know the period $t_0$ and calculate from that the natural frequency $\omega_0 = 2\pi/t_0$, then the potential energy is $V = \tfrac{1}{2}I\omega_0^2\theta^2$. Now we know that the average kinetic energy is $\tfrac{1}{2}kT$, but since it is a harmonic oscillator the average potential energy is also $\tfrac{1}{2}kT$. Thus \begin{equation} \tfrac{1}{2}I\omega_0^2\avg{\theta^2} = \tfrac{1}{2}kT,\notag \end{equation} or \begin{equation} \label{Eq:I:41:1} \avg{\theta^2} = kT/I\omega_0^2. \end{equation} In this way we can calculate the oscillations of a galvanometer mirror, and thereby find what the limitations of our instrument will be. If we want to have smaller oscillations, we have to cool the mirror. An interesting question is, where to cool it. This depends upon where it is getting its “kicks” from. If it is through the fiber, we cool it at the top—if the mirror is surrounded by a gas and is getting hit mostly by collisions in the gas, it is better to cool the gas. As a matter of fact, if we know where the damping of the oscillations comes from, it turns out that that is always the source of the fluctuations also, a point which we will come back to. The same thing works, amazingly enough, in electrical circuits. Suppose that we are building a very sensitive, accurate amplifier for a definite frequency and have a resonant circuit (Fig. 41–2) in the input so as to make it very sensitive to this certain frequency, like a radio receiver, but a really good one. Suppose we wish to go down to the very lowest limit of things, so we take the voltage, say off the inductance, and send it into the rest of the amplifier. Of course, in any circuit like this, there is a certain amount of loss. It is not a perfect resonant circuit, but it is a very good one and there is a little resistance, say (we put the resistor in so we can see it, but it is supposed to be small). Now we would like to find out: How much does the voltage across the inductance fluctuate? 
Answer: We know that $\tfrac{1}{2}LI^2$ is the “kinetic energy”—the energy associated with a coil in a resonant circuit (Chapter 25). Therefore the mean value of $\tfrac{1}{2}LI^2$ is equal to $\tfrac{1}{2}kT$—that tells us what the rms current is and we can find out what the rms voltage is from the rms current. For if we want the voltage across the inductance the formula is $\hat{V}_L = i\omega L\hat{I}$, and the mean absolute square voltage on the inductance is $\avg{V_L^2} = L^2\omega_0^2\avg{I^2}$, and putting in $\tfrac{1}{2}L\avg{I^2} = \tfrac{1}{2}kT$, we obtain \begin{equation} \label{Eq:I:41:2} \avg{V_L^2} = L\omega_0^2 kT. \end{equation} So now we can design circuits and tell when we are going to get what is called Johnson noise, the noise associated with thermal fluctuations! Where do the fluctuations come from this time? They come again from the resistor—they come from the fact that the electrons in the resistor are jiggling around because they are in thermal equilibrium with the matter in the resistor, and they make fluctuations in the density of electrons. They thus make tiny electric fields which drive the resonant circuit. Electrical engineers represent the answer in another way. Physically, the resistor is effectively the source of noise. However, we may replace the real circuit having an honest, true physical resistor which is making noise, by an artificial circuit which contains a little generator that is going to represent the noise, and now the resistor is otherwise ideal—no noise comes from it. All the noise is in the artificial generator. And so if we knew the characteristics of the noise generated by a resistor, if we had the formula for that, then we could calculate what the circuit is going to do in response to that noise. So, we need a formula for the noise fluctuations. Now the noise that is generated by the resistor is at all frequencies, since the resistor by itself is not resonant. 
Of course the resonant circuit only “listens” to the part that is near the right frequency, but the resistor has many different frequencies in it. We may describe how strong the generator is, as follows: The mean power that the resistor would absorb if it were connected directly across the noise generator would be $\avg{E^2}/R$, if $E$ were the voltage from the generator. But we would like to know in more detail how much power there is at every frequency. There is very little power in any one frequency; it is a distribution. Let $P(\omega)\,d\omega$ be the power that the generator would deliver in the frequency range $d\omega$ into the very same resistor. Then we can prove (we shall prove it for another case, but the mathematics is exactly the same) that the power comes out \begin{equation} \label{Eq:I:41:3} P(\omega)\,d\omega = (2/\pi)kT\,d\omega, \end{equation} and is independent of the resistance when put this way. |
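The pieces above fit together in a way that is easy to verify numerically. The sketch below evaluates the mirror fluctuation of Eq. (41.1), then checks that driving the series resonant circuit with a flat generator spectrum of the strength given by Eq. (41.3) really does reproduce the equipartition result (41.2) for the voltage across the inductance. All the component values (moment of inertia, period, $L$, $C$, $R$) are illustrative assumptions, not numbers from the text.

```python
import math

k = 1.380649e-23                          # Boltzmann constant, J/K
L, C, R, T = 1.0e-3, 1.0e-9, 1.0, 300.0   # illustrative circuit values
omega0 = 1.0 / math.sqrt(L * C)           # resonant frequency, rad/s

# Eq. (41.1): rms thermal wobble of a galvanometer mirror
# (illustrative moment of inertia and free period).
I_mirror, period = 1.0e-13, 10.0          # kg m^2, s
theta_rms = math.sqrt(k * T / (I_mirror * (2.0 * math.pi / period)**2))

# Eq. (41.2): mean-square inductor voltage from equipartition.
VL2_equipartition = L * omega0**2 * k * T

def dVL2(omega):
    """Mean-square voltage across L per unit omega when the circuit is
    driven by a generator EMF of flat spectral density (2/pi)*k*T*R,
    i.e. a generator that would deliver (2/pi)*k*T per unit omega into
    the resistor alone, which is Eq. (41.3)."""
    S_E = (2.0 / math.pi) * k * T * R
    Z2 = R**2 + (omega * L - 1.0 / (omega * C))**2  # |series impedance|^2
    return S_E * (omega * L)**2 / Z2

# Trapezoid rule across the sharp resonance.
n = 200001
lo, hi = 0.5 * omega0, 1.5 * omega0
d = (hi - lo) / (n - 1)
VL2_spectrum = d * sum(dVL2(lo + i * d) for i in range(n))

print(f"theta_rms = {theta_rms:.2e} rad")
print(VL2_spectrum, VL2_equipartition)  # agree to within about 1%
```

The agreement of the last two numbers is the consistency check: the flat noise spectrum (41.3), filtered by the resonant circuit, stores exactly $\tfrac{1}{2}kT$ in the inductance on the average.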
|
1 | 41 | The Brownian Movement | 2 | Thermal equilibrium of radiation | Now we go on to consider a still more advanced and interesting proposition that is as follows. Suppose we have a charged oscillator like those we were talking about when we were discussing light, let us say an electron oscillating up and down in an atom. If it oscillates up and down, it radiates light. Now suppose that this oscillator is in a very thin gas of other atoms, and that from time to time the atoms collide with it. Then in equilibrium, after a long time, this oscillator will pick up energy such that its kinetic energy of oscillation is $\tfrac{1}{2}kT$, and since it is a harmonic oscillator, its entire energy will become $kT$. That is, of course, a wrong description so far, because the oscillator carries electric charge, and if it has an energy $kT$ it is shaking up and down and radiating light. Therefore it is impossible to have equilibrium of real matter alone without the charges in it emitting light, and as light is emitted, energy flows away, the oscillator loses its $kT$ as time goes on, and thus the whole gas which is colliding with the oscillator gradually cools off. And that is, of course, the way a warm stove cools, by radiating the light into the sky, because the atoms are jiggling their charge and they continually radiate, and slowly, because of this radiation, the jiggling motion slows down. On the other hand, if we enclose the whole thing in a box so that the light does not go away to infinity, then we can eventually get thermal equilibrium. We may either put the gas in a box where we can say that there are other radiators in the box walls sending light back or, to take a nicer example, we may suppose the box has mirror walls. It is easier to think about that case. Thus we assume that all the radiation that goes out from the oscillator keeps running around in the box. 
Then, of course, it is true that the oscillator starts to radiate, but pretty soon it can maintain its $kT$ of energy in spite of the fact that it is radiating, because it is being illuminated, we may say, by its own light reflected from the walls of the box. That is, after a while there is a great deal of light rushing around in the box, and although the oscillator is radiating some, the light comes back and returns some of the energy that was radiated. We shall now determine how much light there must be in such a box at temperature $T$ in order that the shining of the light on this oscillator will generate just enough energy to account for the light it radiated. Let the gas atoms be very few and far between, so that we have an ideal oscillator with no resistance except radiation resistance. Then we consider that at thermal equilibrium the oscillator is doing two things at the same time. First, it has a mean energy $kT$, and we calculate how much radiation it emits. Second, this radiation should be exactly the amount that would result because of the fact that the light shining on the oscillator is scattered. Since there is nowhere else the energy can go, this effective radiation is really just scattered light from the light that is in there. Thus we first calculate the energy that is radiated by the oscillator per second, if the oscillator has a certain energy. (We borrow from Chapter 32 on radiation resistance a number of equations without going back over their derivation.) The energy radiated per radian divided by the energy of the oscillator is called $1/Q$ (Eq. 32.8): $1/Q = (dW/dt)/\omega_0W$. Using the quantity $\gamma$, the damping constant, this can also be written as $1/Q = \gamma/\omega_0$, where $\omega_0$ is the natural frequency of the oscillator—if gamma is very small, $Q$ is very large. The energy radiated per second is then \begin{equation} \label{Eq:I:41:4} \ddt{W}{t} = \frac{\omega_0W}{Q} = \frac{\omega_0W\gamma}{\omega_0} = \gamma W. 
\end{equation} The energy radiated per second is thus simply gamma times the energy of the oscillator. Now the oscillator should have an average energy $kT$, so we see that gamma $kT$ is the average amount of energy radiated per second: \begin{equation} \label{Eq:I:41:5} \avg{dW/dt} = \gamma kT. \end{equation} Now we only have to know what gamma is. Gamma is easily found from Eq. (32.12). It is \begin{equation} \label{Eq:I:41:6} \gamma = \frac{\omega_0}{Q} = \frac{2}{3}\, \frac{r_0\omega_0^2}{c}, \end{equation} where $r_0 = e^2/mc^2$ is the classical electron radius, and we have set $\lambda = 2\pi c/\omega_0$. Our final result for the average rate of radiation of light near the frequency $\omega_0$ is therefore \begin{equation} \label{Eq:I:41:7} \avg{dW/dt} = \frac{2}{3}\, \frac{r_0\omega_0^2kT}{c}. \end{equation} Next we ask how much light must be shining on the oscillator. It must be enough that the energy absorbed from the light (and thereupon scattered) is just exactly this much. In other words, the emitted light is accounted for as scattered light from the light that is shining on the oscillator in the cavity. So we must now calculate how much light is scattered from the oscillator if there is a certain amount—unknown—of radiation incident on it. Let $I(\omega)\,d\omega$ be the amount of light energy there is at the frequency $\omega$, within a certain range $d\omega$ (because there is no light at exactly a certain frequency; it is spread all over the spectrum). So $I(\omega)$ is a certain spectral distribution which we are now going to find—it is the color of a furnace at temperature $T$ that we see when we open the door and look in the hole. Now how much light is absorbed? We worked out the amount of radiation absorbed from a given incident light beam, and we calculated it in terms of a cross section. It is just as though we said that all of the light that falls on a certain cross section is absorbed. 
So the total amount that is re-radiated (scattered) is the incident intensity $I(\omega)\,d\omega$ multiplied by the cross section $\sigma$. The formula for the cross section that we derived (Eq. 32.19) did not have the damping included. It is not hard to go through the derivation again and put in the resistance term, which we neglected. If we do that, and calculate the cross section the same way, we get \begin{equation} \label{Eq:I:41:8} \sigma_s = \frac{8\pi r_0^2}{3}\biggl( \frac{\omega^4}{(\omega^2 - \omega_0^2)^2 + \gamma^2\omega^2} \biggr). \end{equation} Now, as a function of frequency, $\sigma_s$ is of significant size only for $\omega$ very near to the natural frequency $\omega_0$. (Remember that the $Q$ for a radiating oscillator is about $10^8$.) The oscillator scatters very strongly when $\omega$ is equal to $\omega_0$, and very weakly for other values of $\omega$. Therefore we can replace $\omega$ by $\omega_0$ and $\omega^2 - \omega_0^2$ by $2\omega_0(\omega - \omega_0)$, and we get \begin{equation} \label{Eq:I:41:9} \sigma_s = \frac{2\pi r_0^2\omega_0^2} {3[(\omega - \omega_0)^2 + \gamma^2/4]}. \end{equation} Now the whole curve is localized near $\omega = \omega_0$. (We do not really have to make any approximations, but it is much easier to do the integrals if we simplify the equation a bit.) Now we multiply the intensity in a given frequency range by the cross section of scattering, to get the amount of energy scattered in the range $d\omega$. The total energy scattered is then the integral of this for all $\omega$. Thus \begin{equation} \begin{aligned} \ddt{W_s}{t} &= \int_0^\infty I(\omega)\sigma_s(\omega)\,d\omega\\[1ex] &= \int_0^\infty\frac{2\pi r_0^2\omega_0^2I(\omega)\,d\omega} {3[(\omega - \omega_0)^2 + \gamma^2/4]}. \end{aligned} \label{Eq:I:41:10} \end{equation} Now we set $dW_s/dt = 3\gamma kT$. Why three? 
Because when we made our analysis of the cross section in Chapter 32, we assumed that the polarization was such that the light could drive the oscillator. If we had used an oscillator which could move only in one direction, and the light, say, was polarized in the wrong way, it would not give any scattering. So we must either average the cross section of an oscillator which can go only in one direction, over all directions of incidence and polarization of the light or, more easily, we can imagine an oscillator which will follow the field no matter which way the field is pointing. Such an oscillator, which can oscillate equally in three directions, would have $3kT$ average energy because there are $3$ degrees of freedom in that oscillator. So we should use $3\gamma kT$ because of the $3$ degrees of freedom. Now we have to do the integral. Let us suppose that the unknown spectral distribution $I(\omega)$ of the light is a smooth curve and does not vary very much across the very narrow frequency region where $\sigma_s$ is peaked (Fig. 41–3). Then the only significant contribution comes when $\omega$ is very close to $\omega_0$, within an amount gamma, which is very small. So therefore, although $I(\omega)$ may be an unknown and complicated function, the only place where it is important is near $\omega = \omega_0$, and there we may replace the smooth curve by a flat one—a “constant”—at the same height. In other words, we simply take $I(\omega)$ outside the integral sign and call it $I(\omega_0)$. We may also take the rest of the constants out in front of the integral, and what we have left is \begin{equation} \label{Eq:I:41:11} \tfrac{2}{3}\pi r_0^2\omega_0^2I(\omega_0) \int_0^\infty\frac{d\omega} {(\omega - \omega_0)^2 + \gamma^2/4} = 3\gamma kT. 
\end{equation} Now, the integral should go from $0$ to $\infty$, but $0$ is so far from $\omega_0$ that the curve is all finished by that time, so we go instead to minus $\infty$—it makes no difference and it is much easier to do the integral. The integral is an inverse tangent function of the form $\int dx/(x^2 + a^2)$. If we look it up in a book we see that it is equal to $\pi/a$. So what it comes to for our case is $2\pi/\gamma$. Therefore we get, with some rearranging, \begin{equation} \label{Eq:I:41:12} I(\omega_0) = \frac{9\gamma^2kT}{4\pi^2r_0^2\omega_0^2}. \end{equation} Then we substitute the formula (41.6) for gamma (do not worry about writing $\omega_0$; since it is true of any $\omega_0$, we may just call it $\omega$) and the formula for $I(\omega)$ then comes out \begin{equation} \label{Eq:I:41:13} I(\omega) = \frac{\omega^2kT}{\pi^2c^2}. \end{equation} And that gives us the distribution of light in a hot furnace. It is called the blackbody radiation. Black, because the hole in the furnace that we look at is black when the temperature is zero. Inside a closed box at temperature $T$, (41.13) is the distribution of energy of the radiation, according to classical theory. First, let us notice a remarkable feature of that expression. The charge of the oscillator, the mass of the oscillator, all properties specific to the oscillator, cancel out, because once we have reached equilibrium with one oscillator, we must be at equilibrium with any other oscillator of a different mass, or we will be in trouble. So this is an important kind of check on the proposition that equilibrium does not depend on what we are in equilibrium with, but only on the temperature. Now let us draw a picture of the $I(\omega)$ curve (Fig. 41–4). It tells us how much light we have at different frequencies. 
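The chain of steps from (41.6) to (41.13) can be spot-checked numerically. The sketch below computes $\gamma$ and $Q$ for a visible-light oscillator (600 nm is an illustrative choice), verifies that the Lorentzian integral really equals $2\pi/\gamma$, and confirms that substituting $\gamma$ into (41.12) collapses to the Rayleigh form (41.13):

```python
import math

c = 2.99792458e8      # speed of light, m/s
r0 = 2.8179403e-15    # classical electron radius, m
k = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0

# Radiation damping for a visible-light oscillator, Eq. (41.6).
omega0 = 2.0 * math.pi * c / 600e-9
gamma = (2.0 / 3.0) * r0 * omega0**2 / c
Q = omega0 / gamma
print(f"Q = {Q:.2e}")  # of order 10^8, as stated in the text

# The integral in Eq. (41.11) should come out to 2*pi/gamma.
n = 200001
lo, hi = omega0 - 1.0e3 * gamma, omega0 + 1.0e3 * gamma
d = (hi - lo) / (n - 1)
integral = d * sum(1.0 / ((lo + i * d - omega0)**2 + gamma**2 / 4.0)
                   for i in range(n))
print(integral, 2.0 * math.pi / gamma)

# Substituting gamma into Eq. (41.12) must reproduce Eq. (41.13):
# all properties of the oscillator cancel out.
I_4112 = 9.0 * gamma**2 * k * T / (4.0 * math.pi**2 * r0**2 * omega0**2)
I_4113 = omega0**2 * k * T / (math.pi**2 * c**2)
print(I_4112, I_4113)  # identical
```

The last check makes the cancellation explicit: $9\gamma^2/(4\pi^2 r_0^2\omega_0^2) = \omega_0^2/(\pi^2 c^2)$, so the charge and mass of the oscillator drop out, as the text emphasizes.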
The amount of intensity that there is in our box, per unit frequency range, goes, as we see, as the square of the frequency, which means that if we have a box at any temperature at all, and if we look at the x-rays that are coming out, there will be a lot of them! Of course we know this is false. When we open the furnace and take a look at it, we do not burn our eyes out from x-rays at all. It is completely false. Furthermore, the total energy in the box, the total of all this intensity summed over all frequencies, would be the area under this infinite curve. Therefore, something is fundamentally, powerfully, and absolutely wrong. Thus was the classical theory absolutely incapable of correctly describing the distribution of light from a blackbody, just as it was incapable of correctly describing the specific heats of gases. Physicists went back and forth over this derivation from many different points of view, and there is no escape. This is the prediction of classical physics. Equation (41.13) is called Rayleigh’s law, and it is the prediction of classical physics, and is obviously absurd. |
|
1 | 41 | The Brownian Movement | 3 | Equipartition and the quantum oscillator | The difficulty above was another part of the continual problem of classical physics, which started with the difficulty of the specific heat of gases, and now has been focused on the distribution of light in a blackbody. Now, of course, at the time that theoreticians studied this thing, there were also many measurements of the actual curve. And it turned out that the correct curve looked like the dashed curves in Fig. 41–4. That is, the x-rays were not there. If we lower the temperature, the whole curve goes down in proportion to $T$, according to the classical theory, but the observed curve also cuts off sooner at a lower temperature. Thus the low-frequency end of the curve is right, but the high-frequency end is wrong. Why? When Sir James Jeans was worrying about the specific heats of gases, he noted that motions which have high frequency are “frozen out” as the temperature goes too low. That is, if the temperature is too low, if the frequency is too high, the oscillators do not have $kT$ of energy on the average. Now recall how our derivation of (41.13) worked: It all depends on the energy of an oscillator at thermal equilibrium. What the $kT$ of (41.5) was, and what the same $kT$ in (41.13) is, is the mean energy of a harmonic oscillator of frequency $\omega$ at temperature $T$. Classically, this is $kT$, but experimentally, no!—not when the temperature is too low or the oscillator frequency is too high. And so the reason that the curve falls off is the same reason that the specific heats of gases fail. It is easier to study the blackbody curve than it is the specific heats of gases, which are so complicated, therefore our attention is focused on determining the true blackbody curve, because this curve is a curve which correctly tells us, at every frequency, what the average energy of harmonic oscillators actually is as a function of temperature. Planck studied this curve. 
He first determined the answer empirically, by fitting the observed curve with a nice function that fitted very well. Thus he had an empirical formula for the average energy of a harmonic oscillator as a function of frequency. In other words, he had the right formula instead of $kT$, and then by fiddling around he found a simple derivation for it which involved a very peculiar assumption. That assumption was that the harmonic oscillator can take up energies only $\hbar\omega$ at a time. The idea that they can have any energy at all is false. Of course, that was the beginning of the end of classical mechanics. The very first correctly determined quantum-mechanical formula will now be derived. Suppose that the permitted energy levels of a harmonic oscillator were equally spaced at $\hbar\omega_0$ apart, so that the oscillator could take on only these different energies (Fig. 41–5). Planck made a somewhat more complicated argument than the one that is being given here, because that was the very beginning of quantum mechanics and he had to prove some things. But we are going to take it as a fact (which he demonstrated in this case) that the probability of occupying a level of energy $E$ is $P(E) = \alpha e^{-E/kT}$. If we go along with that, we will obtain the right result. Suppose now that we have a lot of oscillators, and each is a vibrator of frequency $\omega_0$. Some of these vibrators will be in the bottom quantum state, some will be in the next one, and so forth. What we would like to know is the average energy of all these oscillators. To find out, let us calculate the total energy of all the oscillators and divide by the number of oscillators. That will be the average energy per oscillator in thermal equilibrium, and will also be the energy that is in equilibrium with the blackbody radiation and that should go in Eq. (41.13) in place of $kT$. 
Thus we let $N_0$ be the number of oscillators that are in the ground state (the lowest energy state); $N_1$ the number of oscillators in the state $E_1$; $N_2$ the number that are in state $E_2$; and so on. According to the hypothesis (which we have not proved) that in quantum mechanics the law that replaced the probability $e^{-\text{P.E.}/kT}$ or $e^{-\text{K.E.}/kT}$ in classical mechanics is that the probability goes down as $e^{-\Delta E/kT}$, where $\Delta E$ is the excess energy, we shall assume that the number $N_1$ that are in the first state will be the number $N_0$ that are in the ground state, times $e^{-\hbar\omega/kT}$. Similarly, $N_2$, the number of oscillators in the second state, is $N_2 = N_0e^{-2\hbar\omega/kT}$. To simplify the algebra, let us call $e^{-\hbar\omega/kT} = x$. Then we simply have $N_1 = N_0x$, $N_2 = N_0x^2$, …, $N_n = N_0x^n$. The total energy of all the oscillators must first be worked out. If an oscillator is in the ground state, there is no energy. If it is in the first state, the energy is $\hbar\omega$, and there are $N_1$ of them. So $N_1\hbar\omega$, or $\hbar\omega N_0x$ is how much energy we get from those. Those that are in the second state have $2\hbar\omega$, and there are $N_2$ of them, so $N_2\cdot 2\hbar\omega = 2\hbar\omega N_0x^2$ is how much energy we get, and so on. Then we add it all together to get $E_{\text{tot}} = N_0\hbar\omega(0 + x +2x^2 + 3x^3 + \dotsb)$. And now, how many oscillators are there? Of course, $N_0$ is the number that are in the ground state, $N_1$ in the first state, and so on, and we add them together: $N_{\text{tot}} = N_0(1 + x + x^2 + x^3 + \dotsb)$. Thus the average energy is \begin{equation} \label{Eq:I:41:14} \avg{E} = \frac{E_{\text{tot}}}{N_{\text{tot}}} = \frac{N_0\hbar\omega(0 + x +2x^2 + 3x^3 + \dotsb)} {N_0(1 + x + x^2 + x^3 + \dotsb)}. \end{equation} Now the two sums which appear here we shall leave for the reader to play with and have some fun with. 
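One way to play with those sums is to evaluate them numerically. The sketch below computes the average energy from the level sums of Eq. (41.14) by brute force and compares it with the closed form $\hbar\omega/(e^{\hbar\omega/kT}-1)$ that the sums yield; the frequency and temperature are illustrative choices.

```python
import math

hbar = 1.054571817e-34  # J s
k = 1.380649e-23        # J/K

def avg_energy_sums(omega, T, nmax=500):
    """Average oscillator energy from the level sums of Eq. (41.14)."""
    x = math.exp(-hbar * omega / (k * T))
    E_tot = sum(n * hbar * omega * x**n for n in range(nmax))
    N_tot = sum(x**n for n in range(nmax))
    return E_tot / N_tot

def avg_energy_closed(omega, T):
    """The closed form hbar*omega / (exp(hbar*omega/kT) - 1)."""
    return hbar * omega / math.expm1(hbar * omega / (k * T))

omega, T = 1.0e14, 300.0
print(avg_energy_sums(omega, T), avg_energy_closed(omega, T))
# Classical limit: for hbar*omega << kT the average approaches kT.
print(avg_energy_closed(1.0e9, T), k * T)
```

The two evaluations agree, and the low-frequency (or high-temperature) limit recovers the classical equipartition value $kT$, while at high frequency the average energy falls well below $kT$.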
When we are all finished summing and substituting for $x$ in the sum, we should get—if we make no mistakes in the sum— \begin{equation} \label{Eq:I:41:15} \avg{E} = \frac{\hbar\omega}{e^{\hbar\omega/kT} - 1}. \end{equation} This, then, was the first quantum-mechanical formula ever known, or ever discussed, and it was the beautiful culmination of decades of puzzlement. Maxwell knew that there was something wrong, and the problem was, what was right? Here is the quantitative answer of what is right instead of $kT$. This expression should, of course, approach $kT$ as $\omega \to 0$ or as $T \to \infty$. See if you can prove that it does—learn how to do the mathematics. This is the famous cutoff factor that Jeans was looking for, and if we use it instead of $kT$ in (41.13), we obtain for the distribution of light in a black box \begin{equation} \label{Eq:I:41:16} I(\omega)\,d\omega = \frac{\hbar\omega^3\,d\omega} {\pi^2c^2(e^{\hbar\omega/kT} - 1)}. \end{equation} We see that for a large $\omega$, even though we have $\omega^3$ in the numerator, there is an $e$ raised to a tremendous power in the denominator, so the curve comes down again and does not “blow up”—we do not get ultraviolet light and x-rays where we do not expect them! One might complain that in our derivation of (41.16) we used the quantum theory for the energy levels of the harmonic oscillator, but the classical theory in determining the cross section $\sigma_s$. But the quantum theory of light interacting with a harmonic oscillator gives exactly the same result as that given by the classical theory. That, in fact, is why we were justified in spending so much time on our analysis of the index of refraction and the scattering of light, using a model of atoms like little oscillators—the quantum formulas are substantially the same. Now let us return to the Johnson noise in a resistor. 
We have already remarked that the theory of this noise power is really the same theory as that of the classical blackbody distribution. In fact, rather amusingly, we have already said that if the resistance in a circuit were not a real resistance, but were an antenna (an antenna acts like a resistance because it radiates energy), a radiation resistance, it would be easy for us to calculate what the power would be. It would be just the power that runs into the antenna from the light that is all around, and we would get the same distribution, changed by only one or two factors. We can suppose that the resistor is a generator with an unknown power spectrum $P(\omega)$. The spectrum is determined by the fact that this same generator, connected to a resonant circuit of any frequency, as in Fig. 41–2(b), generates in the inductance a voltage of the magnitude given in Eq. (41.2). One is thus led to the same integral as in (41.10), and the same method works to give Eq. (41.3). For low temperatures the $kT$ in (41.3) must of course be replaced by (41.15). The two theories (blackbody radiation and Johnson noise) are also closely related physically, for we may of course connect a resonant circuit to an antenna, so the resistance $R$ is a pure radiation resistance. Since (41.2) does not depend on the physical origin of the resistance, we know the generator $G$ for a real resistance and for radiation resistance is the same. What is the origin of the generated power $P(\omega)$ if the resistance $R$ is only an ideal antenna in equilibrium with its environment at temperature $T$? It is the radiation $I(\omega)$ in the space at temperature $T$ which impinges on the antenna and, as “received signals,” makes an effective generator. Therefore one can deduce a direct relation of $P(\omega)$ and $I(\omega)$, leading then from (41.13) to (41.3). 
All the things we have been talking about—the so-called Johnson noise and Planck’s distribution, and the correct theory of the Brownian movement which we are about to describe—are developments of the first decade or so of the 20th century. Now with those points and that history in mind, we return to the Brownian movement. |
|
1 | 41 | The Brownian Movement | 4 | The random walk | Let us consider how the position of a jiggling particle should change with time, for very long times compared with the time between “kicks.” Consider a little Brownian movement particle which is jiggling about because it is bombarded on all sides by irregularly jiggling water molecules. Query: After a given length of time, how far away is it likely to be from where it began? This problem was solved by Einstein and Smoluchowski. If we imagine that we divide the time into little intervals, let us say a hundredth of a second or so, then after the first hundredth of a second it moves here, and in the next hundredth it moves some more, in the next hundredth of a second it moves somewhere else, and so on. In terms of the rate of bombardment, a hundredth of a second is a very long time. The reader may easily verify that the number of collisions a single molecule of water receives in a second is about $10^{14}$, so in a hundredth of a second it has $10^{12}$ collisions, which is a lot! Therefore, after a hundredth of a second it is not going to remember what happened before. In other words, the collisions are all random, so that one “step” is not related to the previous “step.” It is like the famous drunken sailor problem: the sailor comes out of the bar and takes a sequence of steps, but each step is chosen at an arbitrary angle, at random (Fig. 41–6). The question is: After a long time, where is the sailor? Of course we do not know! It is impossible to say. What do we mean—he is just somewhere more or less random. Well then, on the average, where is he? On the average, how far away from the bar has he gone? We have already answered this question, because once we were discussing the superposition of light from a whole lot of different sources at different phases, and that meant adding a lot of arrows at different angles (Chapter 30). 
There we discovered that the mean square of the distance from one end to the other of the chain of random steps, which was the intensity of the light, is the sum of the intensities of the separate pieces. And so, by the same kind of mathematics, we can prove immediately that if $\FLPR_N$ is the vector distance from the origin after $N$ steps, the mean square of the distance from the origin is proportional to the number $N$ of steps. That is, $\avg{R_N^2} = NL^2$, where $L$ is the length of each step. Since the number of steps is proportional to the time in our present problem, the mean square distance is proportional to the time: \begin{equation} \label{Eq:I:41:17} \avg{R^2} = \alpha t. \end{equation} This does not mean that the mean distance is proportional to the time. If the mean distance were proportional to the time it would mean that the drifting is at a nice uniform velocity. The sailor is making some relatively sensible headway, but only such that his mean square distance is proportional to time. That is the characteristic of a random walk. We may show very easily that in each successive step the square of the distance increases, on the average, by $L^2$. For if we write $\FLPR_N = \FLPR_{N - 1} + \FLPL$, we find that $\FLPR_N^2$ is \begin{equation*} \FLPR_N\!\cdot\!\FLPR_N = R_N^2 = R_{N - 1}^2 + 2\FLPR_{N - 1}\!\cdot\!\FLPL + L^2, \end{equation*} and averaging over many trials, we have $\avg{R_N^2} = \avg{R_{N - 1}^2} + L^2$, since $\avg{\FLPR_{N - 1}\cdot\FLPL} = 0$. Thus, by induction, \begin{equation} \label{Eq:I:41:18} \avg{R_N^2} = NL^2. \end{equation} Now we would like to calculate the coefficient $\alpha$ in Eq. (41.17), and to do so we must add a feature. We are going to suppose that if we were to put a force on this particle (having nothing to do with the Brownian movement—we are taking a side issue for the moment), then it would react in the following way against the force. First, there would be inertia. 
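The result $\avg{R_N^2} = NL^2$ can also be checked by brute force: a minimal Monte-Carlo sketch of the drunken sailor in two dimensions, with each step of length $L$ taken at a random angle (the numbers of steps and trials are arbitrary choices):

```python
import math, random

random.seed(1)  # fixed seed so the sketch is reproducible

def mean_square_distance(N, L, trials=20000):
    """Average |R_N|^2 over many random walks of N steps of length L."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(N):
            theta = random.uniform(0.0, 2.0 * math.pi)  # each step at a random angle
            x += L * math.cos(theta)
            y += L * math.sin(theta)
        total += x * x + y * y
    return total / trials

L = 1.0
results = {N: mean_square_distance(N, L) for N in (4, 16, 64)}
for N, msd in results.items():
    print(N, msd)   # each hovers near N * L^2, not (N * L)^2
```

The mean square distance grows like $N$, while a sailor walking in a straight line would have $R^2 = (NL)^2$; that is the whole difference between diffusion and drift.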
Let $m$ be the coefficient of inertia, the effective mass of the object (not necessarily the same as the real mass of the real particle, because the water has to move around the particle if we pull on it). Thus if we talk about motion in one direction, there is a term like $m(d^2x/dt^2)$ on one side. And next, we want also to assume that if we kept a steady pull on the object, there would be a drag on it from the fluid, proportional to its velocity. Besides the inertia of the fluid, there is a resistance to flow due to the viscosity and the complexity of the fluid. It is absolutely essential that there be some irreversible losses, something like resistance, in order that there be fluctuations. There is no way to produce the $kT$ unless there are also losses. The source of the fluctuations is very closely related to these losses. What the mechanism of this drag is, we will discuss soon—we shall talk about forces that are proportional to the velocity and where they come from. But let us suppose for now that there is such a resistance. Then the formula for the motion under an external force, when we are pulling on it in a normal manner, is \begin{equation} \label{Eq:I:41:19} m\,\frac{d^2x}{dt^2} + \mu\,\ddt{x}{t} = F_{\text{ext}}. \end{equation} The quantity $\mu$ can be determined directly from experiment. For example, we can watch the drop fall under gravity. Then we know that the force is $mg$, and $\mu$ is $mg$ divided by the speed of fall the drop ultimately acquires. Or we could put the drop in a centrifuge and see how fast it sediments. Or if it is charged, we can put an electric field on it. So $\mu$ is a measurable thing, not an artificial thing, and it is known for many types of colloidal particles, etc. Now let us use the same formula in the case where the force is not external, but is equal to the irregular forces of the Brownian movement. We shall then try to determine the mean square distance that the object goes. 
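As the text says, $\mu$ can be determined directly: keep a known steady force on the particle and wait until the drag balances it. At terminal velocity the inertia term in Eq. (41.19) drops out, so $\mu = F_{\text{ext}}/v_{\text{terminal}}$. A trivial sketch of the arithmetic, with made-up illustrative numbers (not from any real measurement):

```python
g = 9.8                  # m/s^2
m_eff = 4.0e-15          # kg, hypothetical effective mass of a small droplet
v_terminal = 2.0e-7      # m/s, hypothetical observed steady speed of fall

# At terminal velocity, mu * v_terminal = F_ext = m_eff * g
mu = m_eff * g / v_terminal
print(mu)                # drag coefficient, in kg/s
```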
Instead of taking the distances in three dimensions, let us take just one dimension, and find the mean of $x^2$, just to prepare ourselves. (Obviously the mean of $x^2$ is the same as the mean of $y^2$ is the same as the mean of $z^2$, and therefore the mean square of the distance is just $3$ times what we are going to calculate.) The $x$-component of the irregular forces is, of course, just as irregular as any other component. What is the rate of change of $x^2$? It is $d(x^2)/dt = 2x(dx/dt)$, so what we have to find is the average of the position times the velocity. We shall show that this is a constant, and that therefore the mean square radius will increase proportionally to the time, and at what rate. Now if we multiply Eq. (41.19) by $x$, $mx(d^2x/dt^2) + \mu x(dx/dt) = xF_x$. We want the time average of $x(dx/dt)$, so let us take the average of the whole equation, and study the three terms. Now what about $x$ times the force? If the particle happens to have gone a certain distance $x$, then, since the irregular force is completely irregular and does not know where the particle started from, the next impulse can be in any direction relative to $x$. If $x$ is positive, there is no reason why the average force should also be in that direction. It is just as likely to be one way as the other. The bombardment forces are not driving it in a definite direction. So the average value of $x$ times $F$ is zero. On the other hand, for the term $mx(d^2x/dt^2)$ we will have to be a little fancy, and write this as \begin{equation*} mx\,\frac{d^2x}{dt^2} = m\,\frac{d[x(dx/dt)]}{dt} - m\biggl(\ddt{x}{t}\biggr)^2. \end{equation*} Thus we put in these two terms and take the average of both. So let us see how much the first term should be. Now $x$ times the velocity has a mean that does not change with time, because when it gets to some position it has no remembrance of where it was before, so things are no longer changing with time. So this quantity, on the average, is zero. 
We have left the quantity $mv^2$, and that is the only thing we know: $mv^2/2$ has a mean value $\tfrac{1}{2}kT$. Therefore we find that \begin{equation} \biggl\langle mx\,\frac{d^2x}{dt^2}\biggr\rangle + \mu\,\biggl\langle x\,\ddt{x}{t}\biggr\rangle = \avg{xF_x}\notag \end{equation} implies \begin{equation} -\avg{mv^2} + \frac{\mu}{2}\,\ddt{}{t}\,\avg{x^2} = 0,\notag \end{equation} or \begin{equation} \label{Eq:I:41:20} \ddt{\avg{x^2}}{t} = 2\,\frac{kT}{\mu}. \end{equation} Therefore the object has a mean square distance $\avg{R^2}$, at the end of a certain time $t$, equal to \begin{equation} \label{Eq:I:41:21} \avg{R^2} = 6kT\,\frac{t}{\mu}. \end{equation} And so we can actually determine how far the particles go! We first must determine how they react to a steady force, how fast they drift under a known force (to find $\mu$), and then we can determine how far they go in their random motions. This equation was of considerable importance historically, because it was one of the first ways by which the constant $k$ was determined. After all, we can measure $\mu$, the time, how far the particles go, and we can take an average. The reason that the determination of $k$ was important is that in the law $PV = RT$ for a mole, we know that $R$, which can also be measured, is equal to the number of atoms in a mole times $k$. A mole was originally defined as so and so many grams of oxygen-16 (now carbon is used), so the number of atoms in a mole was not known, originally. It is, of course, a very interesting and important problem. How big are atoms? How many are there? So one of the earliest determinations of the number of atoms was by the determination of how far a dirty little particle would move if we watched it patiently under a microscope for a certain length of time. And thus Boltzmann’s constant $k$ and the Avogadro number $N_0$ were determined because $R$ had already been measured.
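Rearranged, Eq. (41.21) gives $k = \mu\avg{R^2}/(6Tt)$, which is how the constant was pinned down historically: measure $\mu$ from a drift experiment, watch how far the particle wanders in a time $t$, and solve. A sketch with hypothetical "observations" chosen at a realistic scale for a micron-sized particle in water (every number below is an illustrative assumption, not real data):

```python
# Rearranging Eq. (41.21):  k = mu * <R^2> / (6 T t)
T  = 300.0      # K
mu = 9.4e-9     # kg/s, drag coefficient from a steady-force drift measurement
t  = 100.0      # s of patient watching under the microscope
R2 = 2.64e-10   # m^2, observed mean square displacement (about 16 microns rms)

k = mu * R2 / (6.0 * T * t)
print(k)        # near 1.4e-23 J/K for these illustrative numbers
```

Dividing the separately measured gas constant $R$ by this $k$ then gives the number of atoms in a mole, which is the route to Avogadro's number described in the text.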
|
1 | 42 | Applications of Kinetic Theory | 1 | Evaporation | In this chapter we shall discuss some further applications of kinetic theory. In the previous chapter we emphasized one particular aspect of kinetic theory, namely, that the average kinetic energy in any degree of freedom of a molecule or other object is $\tfrac{1}{2}kT$. The central feature of what we shall now discuss, on the other hand, is the fact that the probability of finding a particle in different places, per unit volume, varies as $e^{-\text{potential energy}/kT}$; we shall make a number of applications of this. The phenomena which we want to study are relatively complicated: a liquid evaporating, or electrons in a metal coming out of the surface, or a chemical reaction in which there are a large number of atoms involved. In such cases it is no longer possible to make from the kinetic theory any simple and correct statements, because the situation is too complicated. Therefore, this chapter, except where otherwise emphasized, is quite inexact. The idea to be emphasized is only that we can understand, from the kinetic theory, more or less how things ought to behave. By using thermodynamic arguments, or some empirical measurements of certain critical quantities, we can get a more accurate representation of the phenomena. However, it is very useful to know even only more or less why something behaves as it does, so that when the situation is a new one, or one that we have not yet started to analyze, we can say, more or less, what ought to happen. So this discussion is highly inaccurate but essentially right—right in idea, but a little bit simplified, let us say, in the specific details. The first example that we shall consider is the evaporation of a liquid. Suppose we have a box with a large volume, partially filled with liquid in equilibrium and with the vapor at a certain temperature. 
We shall suppose that the molecules of the vapor are relatively far apart, and that inside the liquid, the molecules are packed close together. The problem is to find out how many molecules there are in the vapor phase, compared with the number in the liquid. How dense is the vapor at a given temperature, and how does it depend on the temperature? Let us say that $n$ equals the number of molecules per unit volume in the vapor. That number, of course, varies with the temperature. If we add heat, we get more evaporation. Now let another quantity, $1/V_a$, equal the number of atoms per unit volume in the liquid: We suppose that each molecule in the liquid occupies a certain volume, so that if there are more molecules of liquid, then all together they occupy a bigger volume. Thus if $V_a$ is the volume occupied by one molecule, the number of molecules in a unit volume is a unit volume divided by the volume of each molecule. Furthermore, we suppose that there is a force of attraction between the molecules to hold them together in the liquid. Otherwise we cannot understand why it condenses. Thus suppose that there is such a force and that there is an energy of binding of the molecules in the liquid which is lost when they go into the vapor. That is, we are going to suppose that, in order to take a single molecule out of the liquid into the vapor, a certain amount of work $W$ has to be done. There is a certain difference, $W$, in the energy of a molecule in the liquid from what it would have if it were in the vapor, because we have to pull it away from the other molecules which attract it. Now we use the general principle that the ratio of the numbers of atoms per unit volume in two different regions is $n_2/n_1 = e^{-(E_2 - E_1)/kT}$. So the number $n$ per unit volume in the vapor, divided by the number $1/V_a$ per unit volume in the liquid, is equal to \begin{equation} \label{Eq:I:42:1} nV_a = e^{-W/kT}, \end{equation} because that is the general rule.
It is like the atmosphere in equilibrium under gravity, where the gas at the bottom is denser than that at the top because of the work $mgh$ needed to lift the gas molecules to the height $h$. In the liquid, the molecules are denser than in the vapor because we have to pull them out through the energy “hill” $W$, and the ratio of the densities is $e^{-W/kT}$. This is what we wanted to deduce—that the vapor density varies as $e$ to the minus some energy or other over $kT$. The factors in front are not really interesting to us, because in most cases the vapor density is very much lower than the liquid density. In those circumstances, where we are not near the critical point where they are almost the same, but where the vapor density is much lower than the liquid density, then the fact that $n$ is very much less than $1/V_a$ is occasioned by the fact that $W$ is very much greater than $kT$. So formulas such as (42.1) are interesting only when $W$ is very much bigger than $kT$, because in those circumstances, since we are raising $e$ to minus a tremendous amount, if we change $T$ a little bit, that tremendous power changes a bit, and the change produced in the exponential factor is very much more important than any change that might occur in the factors out in front. Why should there be any changes in such factors as $V_a$? Because ours was an approximate analysis. After all, there is not really a definite volume for each molecule; as we change the temperature, the volume $V_a$ does not stay constant—the liquid expands. There are other little features like that, and so the actual situation is more complicated. There are slowly varying temperature-dependent factors all over the place. In fact, we might say that $W$ itself varies slightly with temperature, because at a higher temperature, at a different molecular volume, there would be different average attractions, and so on. 
We might think that if we have a formula in which everything varies in an unknown way with temperature, then we have no formula at all. But if we realize that the exponent $W/kT$ is, in general, very large, we see that in the curve of the vapor density as a function of temperature most of the variation is occasioned by the exponential factor, and that taking $W$ as a constant and the coefficient $1/V_a$ as nearly constant is a good approximation for short intervals along the curve. Most of the variation, in other words, is of the general nature $e^{-W/kT}$. It turns out that there are many, many phenomena in nature which are characterized by having to borrow an energy from somewhere, and in which the central feature of the temperature variation is $e$ to the minus the energy over $kT$. This is a useful fact only when the energy is large compared with $kT$, so that most of the variation is contained in the exponential factor and not in the constants and other factors. Now let us consider another way of obtaining a somewhat similar result for the evaporation, but looking at it in more detail. To arrive at (42.1), we simply applied a rule which is valid at equilibrium, but in order to understand things better, there is no harm in trying to look at the details of what is going on. We may also describe what is going on in the following way: the molecules that are in the vapor continually bombard the surface of the liquid; when they hit it, they may bounce off or they may get stuck. There is an unknown factor for that—maybe $50$–$50$, maybe $10$ to $90$—we do not know. Let us say they always get stuck—we can analyze it over again later on the assumption that they do not always get stuck. Then at a given moment there will be a certain number of atoms which are condensing onto the surface of the liquid. The number of condensing molecules, the number that arrive on a unit area per unit time, is the number $n$ per unit volume times the velocity $v$.
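The claim that the exponential swamps the slowly varying prefactors is easy to make quantitative. In the sketch below, the evaporation energy $W$ is an assumed illustrative value of a few tenths of an electron volt; a change of a few percent in $T$ then changes $e^{-W/kT}$ by tens of percent, far more than any plausible drift in a factor like $V_a$:

```python
import math

k  = 1.380649e-23     # J/K
eV = 1.602176634e-19  # J

W = 0.45 * eV         # hypothetical evaporation energy, much larger than kT

def boltzmann(T):
    """The dominant factor e^{-W/kT} in the vapor density."""
    return math.exp(-W / (k * T))

n300 = boltzmann(300.0)
n310 = boltzmann(310.0)
ratio = n310 / n300
print(ratio)          # a ~3% rise in T raises the factor by roughly 75%
```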
This velocity of the molecules is related to the temperature, because we know that $\tfrac{1}{2}mv^2$ is equal to $\tfrac{3}{2}kT$ on the average. So $v$ is some kind of a mean velocity. Of course we should integrate over the angles and get some kind of an average, but it is roughly proportional to the root-mean-square velocity, within some factor. Thus \begin{equation} \label{Eq:I:42:2} N_c = nv \end{equation} is the rate at which the molecules arrive per unit area and are condensing. At the same time, however, the atoms in the liquid are jiggling about, and from time to time one of them gets kicked out. Now we have to estimate how fast they get kicked out. The idea will be that at equilibrium the number that are kicked out per second and the number that arrive per second are equal. How many get kicked out? In order to get kicked out, a particular molecule has to have acquired by accident an excess energy over its neighbors—a considerable excess energy, because it is attracted very strongly by the other molecules in the liquid. Ordinarily it does not leave because it is so strongly attracted, but in the collisions sometimes one of them gets an extra energy by accident. And the chance that it gets the extra energy $W$ which it needs in our case is very small if $W \gg kT$. In fact, $e^{-W/kT}$ is the chance that an atom has picked up more than this much energy. That is the general principle in kinetic theory: in order to borrow an excess energy $W$ over the average, the odds are $e$ to the minus the energy that we have to borrow, over $kT$. Now suppose that some molecules have borrowed this energy. We now have to estimate how many leave the surface per second. Of course, just because a molecule has the necessary energy does not mean that it will actually evaporate, since it may be buried too deeply inside the liquid or, even if it is near the surface, it may be travelling in the wrong direction. 
The number that are going to leave a unit area per second is going to be something like this: the number of atoms there are near the surface, per unit area, divided by the time it takes one to escape, multiplied by the probability $e^{-W/kT}$ that they are ready to escape in the sense that they have enough energy. We shall suppose that each molecule at the surface of the liquid occupies a certain cross-sectional area $A$. Then the number of molecules per unit area of liquid surface will be $1/A$. And now, how long does it take a molecule to escape? If the molecules have a certain average speed $v$, and have to move, say, one molecular diameter $D$, the thickness of the first layer, then the time it takes to get across that thickness is the time needed to escape, if the molecule has enough energy. The time will be $D/v$. Thus the number evaporating should be approximately \begin{equation} \label{Eq:I:42:3} N_e = (1/A)(v/D)e^{-W/kT}. \end{equation} Now the area of each atom times the thickness of the layer is approximately the same as the volume $V_a$ occupied by a single atom. And so, in order to get equilibrium, we must have $N_c = N_e$, or \begin{equation} \label{Eq:I:42:4} nv = (v/V_a)e^{-W/kT}. \end{equation} We may cancel the $v$’s, since they are equal; even though one is the velocity of a molecule in the vapor and the other is the velocity of an evaporating molecule, these are the same, because we know their mean kinetic energy (in one direction) is $\tfrac{1}{2}kT$. But one may object, “No! No! These are the especially fast-moving ones; these are the ones that have picked up excess energy.” Not really, because the moment they start to pull away from the liquid, they have to lose that excess energy against the potential energy. So, as they come to the surface they are slowed down to the velocity $v$! 
It is the same as it was in our discussion of the distribution of molecular velocities in the atmosphere—at the bottom, the molecules had a certain distribution of energy. The ones that arrive at the top have the same distribution of energy, because the slow ones did not arrive at all, and the fast ones were slowed down. The molecules that are evaporating have the same distribution of energy as the ones inside—a rather remarkable fact. Anyway, it is useless to try to argue so closely about our formula because of other inaccuracies, such as the probability of bouncing back rather than entering the liquid, and so on. Thus we have a rough idea of the rate of evaporation and condensation, and we see, of course, that the vapor density $n$ varies in the same way as before, but now we have understood it in some detail rather than just as an arbitrary formula. This deeper understanding permits us to analyze some things. For example, suppose that we were to pump away the vapor at such a great rate that we removed the vapor as fast as it formed (if we had very good pumps and the liquid was evaporating very slowly), how fast would evaporation occur if we maintained a liquid temperature $T$? Suppose that we have already experimentally measured the equilibrium vapor density, so that we know, at the given temperature, how many molecules per unit volume are in equilibrium with the liquid. Now we would like to know how fast it will evaporate. Even though we have used only a rough analysis so far as the evaporation part of it is concerned, the number of vapor molecules arriving was not done so badly, aside from the unknown factor of reflection coefficient. So therefore we may use the fact that the number that are leaving, at equilibrium, is the same as the number that arrive. 
True, the vapor is being swept away and so the molecules are only coming out, but if the vapor were left alone, it would attain the equilibrium density at which the number that come back would equal the number that are evaporating. Therefore, we can easily see that the number that are coming off the surface per second is equal to one minus the unknown reflection coefficient $R$ times the number that would come down to the surface per second were the vapor still there, because that is how many would balance the evaporation at equilibrium: \begin{equation} \label{Eq:I:42:5} N_e = nv(1-R) = (v(1-R)/V_a)e^{-W/kT}. \end{equation} Of course, the number of molecules that hit the liquid from the vapor is easy to calculate, since we do not need to know as much about the forces as we do when we are worrying about how they get to escape through the liquid surface; it is much easier to make the argument the other way. |
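The arrival rate $N_c = nv$ of Eq. (42.2), and hence the maximum evaporation rate into a vacuum in Eq. (42.5), can be put into rough numbers. The sketch below uses illustrative water-like values: the molecular mass and equilibrium vapor density are order-of-magnitude assumptions, and the angular factors the text waves away are ignored:

```python
import math

k = 1.380649e-23   # J/K

T = 300.0
m = 3.0e-26        # kg, roughly the mass of a water molecule (assumed)
n = 7.7e23         # molecules/m^3, rough equilibrium vapor density of water
                   # near room temperature (an order-of-magnitude assumption)

# "some kind of a mean velocity": take the rms speed from (1/2)mv^2 = (3/2)kT,
# ignoring the angular averaging, as the text does
v = math.sqrt(3.0 * k * T / m)

N_c = n * v        # molecules arriving per m^2 per second, Eq. (42.2)
print(v, N_c)      # with the reflection coefficient R unknown, evaporation
                   # into a vacuum proceeds at (1-R) times this rate
```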
|
1 | 42 | Applications of Kinetic Theory | 2 | Thermionic emission | We may give another example of a very practical situation that is similar to the evaporation of a liquid—so similar that it is not worth making a separate analysis. It is essentially the same problem. In a radio tube there is a source of electrons, namely a heated tungsten filament, and a positively charged plate to attract the electrons. Any electron that escapes from the surface of the tungsten is immediately swept away to the plate. That is our ideal “pump,” which is “pumping” the electrons away all the time. Now the question is: How many electrons per second can we get out of a piece of tungsten, and how does that number vary with temperature? The answer to that problem is the same as (42.5), because it turns out that in a piece of metal, electrons are attracted to the ions, or to atoms, of the metal. They are attracted, if we may say it crudely, to the metal. In order to get an electron out of a piece of metal, it takes a certain amount of energy or work to pull it out. This work varies with the different kinds of metal. In fact, it varies even with the character of the surface of a given kind of metal, but the total work may be a few electron volts, which, incidentally, is typical of the energy involved in chemical reactions. We can remember the latter fact by remembering that the voltage in a chemical cell like a flashlight battery, which is produced by chemical reactions, is about one volt. How can we find out how many electrons come out per second? It would be quite difficult to analyze the effects on the electrons going out; it is easier to analyze the situation the other way. So, we could start out by imagining that we did not draw the electrons away, and that the electrons were like a gas, and could come back to the metal. 
Then there would be a certain density of electrons at equilibrium which would, of course, be given by exactly the same formula as (42.1), where $V_a$ is the volume per electron in the metal, roughly, and $W$ is equal to $q_e\phi$, where $\phi$ is the so-called work function, or the voltage needed to pull an electron off the surface. This would tell us how many electrons would have to be in the surrounding space and striking the metal in order to balance the ones that are coming out. And thus it is easy to calculate how many are coming out if we sweep away all of them, because the number that are coming out is exactly equal to the number that would be going in with the above density of electron “vapor.” In other words, the answer is that the current of electricity that comes in per unit area is equal to the charge on each times the number that arrive per second per unit area, which is the number per unit volume times the velocity, as we have seen many times: \begin{equation} \label{Eq:I:42:6} I = q_env = (q_ev/V_a)e^{-q_e\phi/kT}. \end{equation} Now one electron volt corresponds to $kT$ at a temperature of $11{,}600$ degrees. The filament of the tube may be operating at a temperature of, say, $1100$ degrees, so the exponential factor is something like $e^{-10}$; when we change the temperature a little bit, the exponential factor changes a lot. Thus, again, the central feature of the formula is the $e^{-q_e\phi/kT}$. As a matter of fact, the factor in front is quite wrong—it turns out that the behavior of electrons in a metal is not correctly described by the classical theory, but by quantum mechanics, but this only changes the factor in front a little. Actually, no one has ever been able to get the thing straightened out very well, even though many people have used the high-class quantum-mechanical theory for their calculations. The big problem is, does $W$ change slightly with temperature? 
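The exponential sensitivity of Eq. (42.6) is worth a quick numerical look. Following the text's numbers, one electron volt corresponds to $kT$ at about $11{,}600$ degrees, so with an effective $q_e\phi$ of one volt (an illustrative value, not a tabulated work function) a filament near $1160$ K gives a factor of roughly $e^{-10}$, and a small change in filament temperature changes the emission a lot:

```python
import math

k  = 1.380649e-23     # J/K
eV = 1.602176634e-19  # J

phi = 1.0             # volts, illustrative effective work function

def boltzmann_factor(T):
    """The dominant factor e^{-q_e*phi/kT} in the emission current."""
    return math.exp(-phi * eV / (k * T))

f_low  = boltzmann_factor(1160.0)
f_high = boltzmann_factor(1218.0)   # a filament only 5% hotter
print(f_low)                        # about 4.5e-5, i.e. roughly e^-10
print(f_high / f_low)               # the emission jumps by about 60%
```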
If it does, one cannot distinguish a $W$ changing slowly with temperature from a different coefficient in front. That is, if $W$ changed linearly, say, with temperature, so that $W = W_0 + \alpha kT$, then we would have \begin{equation*} e^{-W/kT} = e^{-(W_0 + \alpha kT)/kT} = e^{-\alpha} e^{-W_0/kT}. \end{equation*} Thus a $W$ that depends linearly on temperature is equivalent to a different constant coefficient, $e^{-\alpha}$, in front. It is really quite difficult and usually fruitless to try to obtain the coefficient in front accurately.
|
1 | 42 | Applications of Kinetic Theory | 3 | Thermal ionization | Now we go on to another example of the same idea; always the same idea. This has to do with ionization. Suppose that in a gas we have a whole lot of atoms which are in the neutral state, say, but the gas is hot and the atoms can become ionized. We would like to know how many ions there are in a given circumstance if we have a certain density of atoms per unit volume at a certain temperature. Again we consider a box in which there are $N$ atoms which can hold electrons. (If an electron has come off an atom, it is called an ion, and if the atom is neutral, we simply call it an atom.) Then suppose that, at any given moment, the number of neutral atoms is $n_a$, the number of ions is $n_i$, and the number of electrons is $n_e$, all per unit volume. The problem is: What is the relationship of these three numbers? In the first place, we have two conditions or constraints on the numbers. For instance, as we vary different conditions, like the temperature and so on, $n_a + n_i$ would remain constant, because this would be simply the number $N$ of atomic nuclei that are in the box. If we keep the number of nuclei per unit volume fixed, and change, say, the temperature, then as the ionization proceeded some atoms would turn to ions, but the total number of atoms plus ions would be unchanged. That is, $n_a + n_i = N$. Another condition is that if the entire gas is to be electrically neutral (and if we neglect double or triple ionization), that means that the number of ions is equal to the number of electrons at all times, or $n_i = n_e$. These are subsidiary equations that simply express the conservation of charge and the conservation of atoms. These equations are true, and we ultimately will use them when we consider a real problem. But we want to obtain another relationship between the quantities. We can do this as follows. 
We again use the idea that it takes a certain amount of energy to lift the electron out of the atom, which we call the ionization energy, and we would write it as $W$, in order to make all of the formulas look the same. So we let $W$ equal the energy needed to pull an electron out of an atom and make an ion. Now we again say that the number of free electrons per unit volume in the “vapor” is equal to the number of bound electrons per unit volume in the atoms, times $e$ to the minus the energy difference between being bound and being free, over $kT$. That is the basic equation again. How can we write it? The number of free electrons per unit volume would, of course, be $n_e$, because that is the definition of $n_e$. Now what about the number of electrons per unit volume that are bound to atoms? The total number of places that we could put the electrons is apparently $n_a + n_i$, and we will suppose that when they are bound each one is bound within a certain volume $V_a$. So the total amount of volume which is available to electrons which would be bound is $(n_a + n_i)V_a$, so we might want to write our formula as \begin{equation*} n_e = \frac{n_a}{(n_a + n_i)V_a}\,e^{-W/kT}. \end{equation*} The formula is wrong, however, in one essential feature, which is the following: when an electron is already on an atom, another electron cannot come to that volume anymore! In other words, all the volumes of all the possible sites are not really available for the one electron which is trying to make up its mind whether or not to be in the vapor or in the condensed position, because in this problem there is an extra feature that when one electron is where another electron is, it is not allowed to go—it is repelled. For that reason, it comes out that we should count only that part of the volume which is available for an electron to sit on or not. 
That is, those which are already occupied do not count in the total available volume, but the only volume which is allowed is that of the ions, where there are vacant places for the electron to go. Then, in those circumstances, we find that a nicer way to write our formula is \begin{equation} \label{Eq:I:42:7} \frac{n_en_i}{n_a} = \frac{1}{V_a}\,e^{-W/kT}. \end{equation} This formula is called the Saha ionization equation. Now let us see if we can understand qualitatively why a formula like this is right, by arguing about the kinetic things that are happening. First, every once in a while an electron comes to an ion and they combine to make an atom. And also, every once in a while, an atom gets into a collision and breaks up into an ion and an electron. Now those two rates must be equal. How fast do electrons and ions find each other? The rate is certainly increased if the number of electrons per unit volume is increased. It is also increased if the number of ions per unit volume is increased. That is, the total rate at which recombination is occurring is certainly proportional to the number of electrons times the number of ions. Now the total rate at which ionization is occurring due to collisions must be dependent linearly on how many atoms there are to ionize. And so the rates will balance when there is some relationship between the product $n_en_i$ and the number of atoms, $n_a$. The fact that this relationship happens to be given by this particular formula, where $W$ is the ionization energy, is of course a little bit more information, but we can easily understand that the formula would necessarily involve the concentrations of the electrons, ions, and atoms in the combination $n_en_i/n_a$ to produce a constant independent of the $n$’s, and dependent only on temperature, the atomic cross sections, and other constant factors. 
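The rate-balancing argument can be sketched with a toy simulation. The rate constants below are arbitrary stand-ins for the cross sections and the $e^{-W/kT}$ factor; whatever their values, the steady state settles to $n_en_i/n_a = k_{\text{ion}}/k_{\text{rec}}$, independent of the starting densities:

```python
# Toy kinetic balance: ionization at a rate proportional to n_a,
# recombination proportional to n_e*n_i.  The constants k_ion and k_rec
# are arbitrary illustrative values; the point is only that the ratio
# n_e*n_i/n_a relaxes to k_ion/k_rec, a constant independent of the n's.
k_ion, k_rec = 2.0, 5.0    # illustrative rate constants
N = 1.0                    # nuclei per unit volume (arbitrary units)

n_e = 0.0                  # start fully neutral
dt = 1e-3
for _ in range(200_000):   # integrate to t = 200, far past relaxation
    n_i = n_e              # charge neutrality
    n_a = N - n_i          # conservation of nuclei
    n_e += (k_ion * n_a - k_rec * n_e * n_i) * dt

ratio = n_e * n_i / n_a
assert abs(ratio - k_ion / k_rec) < 1e-6
```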
We may also note that, since the equation involves the numbers per unit volume, if we were to do two experiments with a given total number $N$ of atoms plus ions, that is, a certain fixed number of nuclei, but using boxes with different volumes, the $n$’s would all be smaller in the larger box. But since the ratio $n_en_i/n_a$ stays the same, the total number of electrons and ions must be greater in the larger box. To see this, suppose that there are $N$ nuclei inside a box of volume $V$, and that a fraction $f$ of them are ionized. Then $n_e = fN/V = n_i$, and $n_a = (1 - f)N/V$. Then our equation becomes \begin{equation} \label{Eq:I:42:8} \frac{f^2}{1 - f}\,\frac{N}{V} = \frac{e^{-W/kT}}{V_a}. \end{equation} In other words, if we take a smaller and smaller density of atoms, or make the volume of the container bigger and bigger, the fraction $f$ of electrons and ions must increase. That ionization, just from “expansion” as the density goes down, is the reason why we believe that at very low densities, such as in the cold space between the stars, there may be ions present, even though we might not understand it from the point of view of the available energy. Although it takes many, many $kT$ of energy to make them, there are ions present. Why can there be ions present when there is so much space around, while if we increase the density, the ions tend to disappear? Answer: Consider an atom. Every once in a while, light, or another atom, or an ion, or whatever it is that maintains thermal equilibrium, strikes it. Very rarely, because it takes such a terrific amount of excess energy, an electron comes off and an ion is left. Now that electron, if the space is enormous, wanders and wanders and does not come near anything for years, perhaps. But once in a very great while, it does come back to an ion and they combine to make an atom. So the rate at which electrons are coming out from the atoms is very slow.
But if the volume is enormous, an electron which has escaped takes so long to find another ion to recombine with that its probability of recombination is very, very small; thus, in spite of the large excess energy needed, there may be a reasonable number of electrons. |
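Equation (42.8) can be solved explicitly for $f$. Writing $C = (V/N)\,e^{-W/kT}/V_a$, it becomes the quadratic $f^2 + Cf - C = 0$, whose physical root shows $f \to 1$ as the density $N/V \to 0$. A sketch with purely illustrative numbers:

```python
import math

# Solve Eq. (42.8) for the ionized fraction f.  With
# C = (V/N) * exp(-W/kT) / V_a, the equation f^2/(1-f) = C is a
# quadratic whose physical (positive, <1) root is below.  The combined
# constant exp(-W/kT)/V_a is an arbitrary illustrative value; the point
# is only that f grows toward 1 as the density N/V falls.
def ionized_fraction(C):
    return (math.sqrt(C * C + 4.0 * C) - C) / 2.0

exp_factor_over_Va = 1.0e-6          # exp(-W/kT)/V_a, arbitrary units
fractions = [ionized_fraction(exp_factor_over_Va / density)
             for density in (1.0, 1e-3, 1e-6, 1e-9)]  # N/V decreasing

assert all(f2 > f1 for f1, f2 in zip(fractions, fractions[1:]))
assert fractions[-1] > 0.9           # nearly fully ionized at low density
```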
|
1 | 42 | Applications of Kinetic Theory | 4 | Chemical kinetics | The same situation that we have just called “ionization” is also found in a chemical reaction. For instance, if two objects $A$ and $B$ combine into a compound $AB$, then if we think about it for a while we see that $AB$ is what we have called an atom, $B$ is what we call an electron, and $A$ is what we call an ion. With these substitutions the equations of equilibrium are exactly the same in form: \begin{equation} \label{Eq:I:42:9} \frac{n_An_B}{n_{AB}} = ce^{-W/kT}. \end{equation} This formula, of course, is not exact, since the “constant” $c$ depends on how much volume is allowed for the $A$ and $B$ to combine, and so on, but by thermodynamic arguments one can identify what the meaning of the $W$ in the exponential factor is, and it turns out that it is very close to the energy needed in the reaction. Suppose that we tried to understand this formula as a result of collisions, much in the way that we understood the evaporation formula, by arguing about how many electrons came off and how many of them came back per unit time. Suppose that $A$ and $B$ combine in a collision every once in a while to form a compound $AB$. And suppose that the compound $AB$ is a complicated molecule which jiggles around and is hit by other molecules, and from time to time it gets enough energy to explode and break up again into $A$ and $B$. Now it actually turns out, in chemical reactions, that if the atoms come together with too small an energy, even though energy may be released in the reaction $A + B \to AB$, the fact that $A$ and $B$ may touch each other does not necessarily make the reaction start. It usually is required that the collision be rather hard, in fact, to get the reaction to go at all—a “soft” collision between $A$ and $B$ may not do it, even though energy may be released in the process. 
So let us suppose that it is very common in chemical reactions that, in order for $A$ and $B$ to form $AB$, they cannot just hit each other, but they have to hit each other with sufficient energy. This energy is called the activation energy—the energy needed to “activate” the reaction. Call $A^*$ the activation energy, the excess energy needed in a collision in order that the reaction may really occur. Then the rate $R_f$ at which $A$ and $B$ produce $AB$ would involve the number of atoms of $A$ times the number of atoms of $B$, times the rate at which a single atom would strike a certain cross section $\sigma_{AB}$, times a factor $e^{-A^*/kT}$ which is the probability that they have enough energy: \begin{equation} \label{Eq:I:42:10} R_f = n_An_Bv\sigma_{AB}e^{-A^*/kT}. \end{equation} Now we have to find the opposite rate, $R_r$. There is a certain chance that $AB$ will fly apart. In order to fly apart, it not only must have the energy $W$ which it needs in order to get apart at all but, just as it was hard for $A$ and $B$ to come together, so there is a kind of hill that $A$ and $B$ have to climb over to get apart again; they must have not only enough energy just to get ready to pull apart, but a certain excess. It is like climbing a hill to get into a deep valley; they have to climb the hill coming in and they have to climb out of the valley and then over the hill coming back (Fig. 42–1). Thus the rate at which $AB$ goes to $A$ and $B$ will be proportional to the number $n_{AB}$ that are present, times $e^{-(W + A^*)/kT}$: \begin{equation} \label{Eq:I:42:11} R_r = c'n_{AB}e^{-(W + A^*)/kT}. \end{equation} The $c'$ will involve the volume of atoms and the rate of collisions, which we can work out, as we did the case of evaporation, with areas and times and thicknesses; but we shall not do this. The main feature of interest to us is that when these two rates are equal, the ratio of them is equal to unity.
This tells us that $n_An_B/n_{AB} = ce^{-W/kT}$, as before, where $c$ involves the cross sections, velocities, and other factors independent of the $n$’s. The interesting thing is that the rate of the reaction also varies as $e^{-\text{const}/kT}$, although the constant is not the same as that which governs the concentrations; the activation energy $A^*$ is quite different from the energy $W$. $W$ governs the proportions of $A$, $B$, and $AB$ that we have in equilibrium, but if we want to know how fast $A + B$ goes to $AB$, that is not a question of equilibrium, and here a different energy, the activation energy, governs the rate of reaction through an exponential factor. Furthermore, $A^*$ is not a fundamental constant like $W$. Suppose that at the surface of the wall—or at some other place—$A$ and $B$ could temporarily stick there in such a way that they could combine more easily. In other words, we might find a “tunnel” through the hill, or perhaps a lower hill. By the conservation of energy, when we are all finished we have still made $AB$ out of $A$ and $B$, so the energy difference $W$ will be quite independent of the way the reaction occurred, but the activation energy $A^*$ will depend very much on the way the reaction occurs. This is why the rates of chemical reactions are very sensitive to outside conditions. We can change the rate by putting in a surface of a different kind, we can put it in a “different barrel” and it will go at a different rate, if it depends on the nature of the surface. Or if we put in a third kind of object it may change the rate very much; some things produce enormous changes in rate simply by changing the $A^*$ a little bit—they are called catalysts. A reaction might practically not occur at all because $A^*$ is too big at the given temperature, but when we put in this special stuff, the catalyst, then the reaction goes very fast indeed, because $A^*$ is reduced.
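A sketch of rates (42.10) and (42.11) with the constant factors suppressed and the numbers purely illustrative: a catalyst that lowers the activation energy speeds both directions, but the ratio of the two rates, which fixes the equilibrium, depends only on $W$:

```python
import math

# Illustrative forward and reverse rates per unit concentration, with
# the cross-section and velocity factors set to 1.  kT, W, and the two
# barrier heights are arbitrary numbers in the same (arbitrary) units.
kT = 1.0
W = 5.0

def forward(Astar):          # rate per unit n_A*n_B, Eq. (42.10) shape
    return math.exp(-Astar / kT)

def reverse(Astar):          # rate per unit n_AB, Eq. (42.11) shape
    return math.exp(-(W + Astar) / kT)

for Astar in (10.0, 2.0):    # uncatalyzed vs. catalyzed barrier
    ratio = forward(Astar) / reverse(Astar)
    assert abs(ratio - math.exp(W / kT)) < 1e-9 * ratio  # barrier cancels

# The forward rate itself, however, is e^(10-2) ~ 3000 times faster
# with the lower barrier: that is the whole effect of a catalyst.
speedup = forward(2.0) / forward(10.0)
assert abs(speedup - math.exp(8.0)) < 1e-9 * speedup
```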
Incidentally, there is some trouble with such a reaction, $A$ plus $B$, making $AB$, because we cannot conserve both energy and momentum when we try to put two objects together to make one that is more stable. Therefore, we need at least a third object $C$, so the actual reaction is much more complicated. The forward rate would involve the product $n_An_Bn_C$, and it might seem that our formula is going wrong, but no! When we look at the rate at which $AB$ goes the other way, we find that it also needs to collide with $C$, so there is an $n_{AB}n_C$ in the reverse rate; the $n_C$’s cancel out in the formula for the equilibrium concentrations. The law of equilibrium, (42.9), which we first wrote down is absolutely guaranteed to be true, no matter what the mechanism of the reaction may be! |
|
1 | 42 | Applications of Kinetic Theory | 5 | Einstein’s laws of radiation | We now turn to an interesting analogous situation having to do with the blackbody radiation law. In the last chapter we worked out the distribution law for the radiation in a cavity the way Planck did, considering the radiation from an oscillator. The oscillator had to have a certain mean energy, and since it was oscillating, it would radiate and would keep pumping radiation into the cavity until it piled up enough radiation to balance the absorption and emission. In that way we found that the intensity of radiation at frequency $\omega$ was given by the formula \begin{equation} \label{Eq:I:42:12} I(\omega)\,d\omega = \frac{\hbar\omega^3\,d\omega} {\pi^2c^2(e^{\hbar\omega/kT} - 1)}. \end{equation} This result involved the assumption that the oscillator which was generating the radiation had definite, equally spaced energy levels. We did not say that light had to be a photon or anything like that. There was no discussion about how, when an atom goes from one level to another, the energy must come out in one unit of energy, $\hbar\omega$, in the form of light. Planck’s original idea was that the matter was quantized but not the light: material oscillators cannot take up just any energy, but have to take it in lumps. Furthermore, the trouble with the derivation is that it was partially classical. We calculated the rate of radiation from an oscillator according to classical physics; then we turned around and said, “No, this oscillator has a lot of energy levels.” So gradually, in order to find the right result, the completely quantum-mechanical result, there was a slow development which culminated in the quantum mechanics of 1927. But in the meantime, there was an attempt by Einstein to convert Planck’s viewpoint that only oscillators of matter were quantized, to the idea that light was really photons and could be considered in a certain way as particles with energy $\hbar\omega$. 
Furthermore, Bohr had pointed out that any system of atoms has energy levels, but they are not necessarily equally spaced like Planck’s oscillator. And so it became necessary to rederive or at least rediscuss the radiation law from a more completely quantum-mechanical viewpoint. Einstein assumed that Planck’s final formula was right, and he used that formula to obtain some new information, previously unknown, about the interaction of radiation with matter. His discussion went as follows: Consider any two of the many energy levels of an atom, say the $m$th level and the $n$th level (Fig. 42–2). Now Einstein proposed that when such an atom has light of the right frequency shining on it, it can absorb that photon of light and make a transition from state $n$ to state $m$, and that the probability that this occurs per second depends upon the two levels, of course, but is proportional to how intense the light is that is shining on it. Let us call the proportionality constant $B_{nm}$, merely to remind us that this is not a universal constant of nature, but depends on the particular pair of levels: some levels are easy to excite; some levels are hard to excite. Now what is the formula going to be for the rate of emission from $m$ to $n$? Einstein proposed that this must have two parts to it. First, even if there were no light present, there would be some chance that an atom in an excited state would fall to a lower state, emitting a photon; this we call spontaneous emission. It is analogous to the idea that an oscillator with a certain amount of energy, even in classical physics, does not keep that energy, but loses it by radiation. Thus the analog of spontaneous radiation of a classical system is that if the atom is in an excited state there is a certain probability $A_{mn}$, which depends on the levels again, for it to go down from $m$ to $n$, and this probability is independent of whether light is shining on the atom or not. 
But then Einstein went further, and by comparison with the classical theory and by other arguments, concluded that emission was also influenced by the presence of light—that when light of the right frequency is shining on an atom, it has an increased rate of emitting a photon that is proportional to the intensity of the light, with a proportionality constant $B_{mn}$. Later, if we deduce that this coefficient is zero, then we will have found that Einstein was wrong. Of course we will find he was right. Thus Einstein assumed that there are three kinds of processes: an absorption proportional to the intensity of light, an emission proportional to the intensity of light, called induced emission or sometimes stimulated emission, and a spontaneous emission independent of light. Now suppose that we have, in equilibrium at temperature $T$, a certain number of atoms $N_n$ in the state $n$ and another number $N_m$ in the state $m$. Then the total number of atoms that are going from $n$ to $m$ is the number that are in the state $n$ times the rate per second that, if one is in $n$, it goes up to $m$. So we have a formula for the number that are going from $n$ to $m$ per second: \begin{equation} \label{Eq:I:42:13} R_{n \to m} = N_nB_{nm}I(\omega). \end{equation} The number that will go from $m$ to $n$ is expressed in the same manner, as the number $N_{m}$ that are in $m$, times the chance per second that each one goes down to $n$. This time our expression is \begin{equation} \label{Eq:I:42:14} R_{m \to n} = N_m[A_{mn} + B_{mn}I(\omega)]. \end{equation} Now we shall suppose that in thermal equilibrium the number of atoms going up must equal the number coming down. That is one way, at least, in which the number will be sure to stay constant in each level.1 So we take these two rates to be equal at equilibrium. But we have one other piece of information: we know how large $N_m$ is compared with $N_n$—the ratio of those two is $e^{-(E_m - E_n)/kT}$. 
Now Einstein assumed that the only light which is effective in making the transition from $n$ to $m$ is the light which has the frequency corresponding to the energy difference, so $E_m - E_n = \hbar\omega$ in all our formulas. Thus \begin{equation} \label{Eq:I:42:15} N_m = N_ne^{-\hbar\omega/kT}. \end{equation} Thus if we set the two rates equal: $N_nB_{nm}I(\omega) = N_m[A_{mn} + B_{mn}I(\omega)]$, and divide by $N_m$, we get \begin{equation} \label{Eq:I:42:16} B_{nm}I(\omega)e^{\hbar\omega/kT} = A_{mn} + B_{mn}I(\omega). \end{equation} From this equation, we can calculate $I(\omega)$. It is simply \begin{equation} \label{Eq:I:42:17} I(\omega) = \frac{A_{mn}}{B_{nm}e^{\hbar\omega/kT} - B_{mn}}. \end{equation} But Planck has already told us that the formula must be (42.12). Therefore we can deduce something: First, that $B_{nm}$ must equal $B_{mn}$, since otherwise we cannot get the $(e^{\hbar\omega/kT} - 1)$. So Einstein discovered some things that he did not know how to calculate, namely that the induced emission probability and the absorption probability must be equal. This is interesting. And furthermore, in order for (42.17) and (42.12) to agree, \begin{equation} \label{Eq:I:42:18} A_{mn}/B_{mn}\quad \text{must be}\quad \hbar\omega^3/\pi^2c^2. \end{equation} So if we know, for instance, the absorption rate for a given level, we can deduce the spontaneous emission rate and the induced emission rate, or any combination. This is as far as Einstein or anyone else could go using such arguments. To actually compute the absolute spontaneous emission rate or the other rates for any specific atomic transition, of course, requires a knowledge of the machinery of the atom, called quantum electrodynamics, which was not discovered until eleven years later. This work of Einstein was done in 1916. The possibility of induced emission has, today, found interesting applications. If there is light present, it will tend to induce the downward transition. 
If there are atoms sitting in the upper state, each such transition adds its $\hbar\omega$ to the available light energy. Now we can arrange, by some nonthermal method, to have a gas in which the number in the state $m$ is very much greater than the number in the state $n$. This is far out of equilibrium, and so is not given by the formula $e^{-\hbar\omega/kT}$, which is for equilibrium. We can even arrange it so that the number in the upper state is very large, while the number in the lower state is practically zero. Then light which has the frequency corresponding to the energy difference $E_m - E_n$ will not be strongly absorbed, because there are not many atoms in state $n$ to absorb it. On the other hand, when that light is present, it will induce the emission from this upper state! So, if we had a lot of atoms in the upper state, there would be a sort of chain reaction, in which, the moment the atoms began to emit, more would be caused to emit, and the whole lot of them would dump down together. This is what is called a laser, or, in the case of the far infrared, a maser. Various tricks can be used to obtain the atoms in state $m$. There may be higher levels to which the atoms can get if we shine in a strong beam of light of high frequency. From these high levels, they may trickle down, emitting various photons, until they all get stuck in the state $m$. If they tend to stay in the state $m$ without emitting, the state is called metastable. And then they are all dumped down together by induced emissions. One more technical point—if we put this system in an ordinary box, it would radiate in so many different directions spontaneously, compared with the induced effect, that we would still be in trouble. But we can enhance the induced effect, increase its efficiency, by putting nearly perfect mirrors on each side of the box, so that the light which is emitted gets another chance, and another chance, and another chance, to induce more emission.
Although the mirrors are almost one hundred percent reflecting, there is a slight amount of transmission through the mirrors, and a little light gets out. In the end, of course, from the conservation of energy, all the light goes out in a nice uniform straight direction, which makes the strong light beams that are possible today with lasers.
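Einstein's detailed-balance argument above is easy to check numerically: with $B_{nm} = B_{mn}$ and $A_{mn}/B_{mn} = \hbar\omega^3/\pi^2c^2$, the Planck intensity (42.12) makes the up rate (42.13) exactly equal the down rate (42.14) for Boltzmann-distributed populations. The value of $B$, the frequency, and the temperature below are arbitrary; the constants are the SI values:

```python
import math

# Verify detailed balance: absorption rate N_n*B*I equals emission rate
# N_m*(A + B*I) when I is the Planck intensity, B_nm = B_mn, and
# A/B = hbar*omega^3/(pi^2 c^2).  B, omega, and T are arbitrary choices.
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
k = 1.380649e-23         # J/K
T = 300.0                # K, arbitrary
B = 1.0e10               # B_nm = B_mn, arbitrary
omega = 1.0e14           # rad/s, arbitrary

A = B * hbar * omega**3 / (math.pi**2 * c**2)        # Eq. (42.18)
x = hbar * omega / (k * T)
I = (hbar * omega**3 / (math.pi**2 * c**2)) / math.expm1(x)  # Eq. (42.12)

N_n = 1.0
N_m = N_n * math.exp(-x)     # Boltzmann ratio, Eq. (42.15)

up = N_n * B * I             # Eq. (42.13)
down = N_m * (A + B * I)     # Eq. (42.14)
assert abs(up - down) < 1e-9 * up
```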
|
1 | 43 | Diffusion | 1 | Collisions between molecules | We have considered so far only the molecular motions in a gas which is in thermal equilibrium. We want now to discuss what happens when things are near, but not exactly in, equilibrium. In a situation far from equilibrium, things are extremely complicated, but in a situation very close to equilibrium we can easily work out what happens. To see what happens, we must, however, return to the kinetic theory. Statistical mechanics and thermodynamics deal with the equilibrium situation, but away from equilibrium we can only analyze what occurs atom by atom, so to speak. As a simple example of a nonequilibrium circumstance, we shall consider the diffusion of ions in a gas. Suppose that in a gas there is a relatively small concentration of ions—electrically charged molecules. If we put an electric field on the gas, then each ion will have a force on it which is different from the forces on the neutral molecules of the gas. If there were no other molecules present, an ion would have a constant acceleration until it reached the wall of the container. But because of the presence of the other molecules, it cannot do that; its velocity increases only until it collides with a molecule and loses its momentum. It starts again to pick up more speed, but then it loses its momentum again. The net effect is that an ion works its way along an erratic path, but with a net motion in the direction of the electric force. We shall see that the ion has an average “drift” with a mean speed which is proportional to the electric field—the stronger the field, the faster it goes. While the field is on, and while the ion is moving along, it is, of course, not in thermal equilibrium, it is trying to get to equilibrium, which is to be sitting at the end of the container. By means of the kinetic theory we can compute the drift velocity. 
It turns out that with our present mathematical abilities we cannot really compute precisely what will happen, but we can obtain approximate results which exhibit all the essential features. We can find out how things will vary with pressure, with temperature, and so on, but it will not be possible to get precisely the correct numerical factors in front of all the terms. We shall, therefore, in our derivations, not worry about the precise value of numerical factors. They can be obtained only by a very much more sophisticated mathematical treatment. Before we consider what happens in nonequilibrium situations, we shall need to look a little closer at what goes on in a gas in thermal equilibrium. We shall need to know, for example, what the average time between successive collisions of a molecule is. Any molecule experiences a sequence of collisions with other molecules—in a random way, of course. A particular molecule will, in a long period of time $T$, have a certain number, $N$, of hits. If we double the length of time, there will be twice as many hits. So the number of collisions is proportional to the time $T$. We would like to write it this way: \begin{equation} \label{Eq:I:43:1} N = T/\tau. \end{equation} We have written the constant of proportionality as $1/\tau$, where $\tau$ will have the dimensions of a time. The constant $\tau$ is the average time between collisions. Suppose, for example, that in an hour there are $60$ collisions; then $\tau$ is one minute. We would say that $\tau$ (one minute) is the average time between the collisions. We may often wish to ask the following question: “What is the chance that a molecule will experience a collision during the next small interval of time $dt$?” The answer, we may intuitively understand, is $dt/\tau$. But let us try to make a more convincing argument. Suppose that there were a very large number $N$ of molecules. How many will have collisions in the next interval of time $dt$? 
If there is equilibrium, nothing is changing on the average with time. So $N$ molecules waiting the time $dt$ will have the same number of collisions as one molecule waiting for the time $N\,dt$. That number we know is $N\,dt/\tau$. So the number of hits of $N$ molecules is $N\,dt/\tau$ in a time $dt$, and the chance, or probability, of a hit for any one molecule is just $1/N$ as large, or $(1/N)(N\,dt/\tau) = dt/\tau$, as we guessed above. That is to say, the fraction of the molecules which will suffer a collision in the time $dt$ is $dt/\tau$. To take an example, if $\tau$ is one minute, then in one second the fraction of particles which will suffer collisions is $1/60$. What this means, of course, is that $1/60$ of the molecules happen to be close enough to what they are going to hit next that their collisions will occur in the next second. When we say that $\tau$, the mean time between collisions, is one minute, we do not mean that all the collisions will occur at times separated by exactly one minute. A particular particle does not have a collision, wait one minute, and then have another collision. The times between successive collisions are quite variable. We will not need it for our later work here, but we may make a small diversion to answer the question: “What are the times between collisions?” We know that for the case above, the average time is one minute, but we might like to know, for example, what is the chance that we get no collision for two minutes? We shall find the answer to the general question: “What is the probability that a molecule will go for a time $t$ without having a collision?” At some arbitrary instant—that we call $t = 0$—we begin to watch a particular molecule. What is the chance that it gets by until $t$ without colliding with another molecule? To compute the probability, we observe what is happening to all $N_0$ molecules in a container. After we have waited a time $t$, some of them will have had collisions. 
We let $N(t)$ be the number that have not had collisions up to the time $t$. $N(t)$ is, of course, less than $N_0$. We can find $N(t)$ because we know how it changes with time. If we know that $N(t)$ molecules have got by until $t$, then $N(t + dt)$, the number which get by until $t + dt$, is less than $N(t)$ by the number that have collisions in $dt$. The number that collide in $dt$ we have written above in terms of the mean time $\tau$ as $dN = N(t)\,dt/\tau$. We have the equation \begin{equation} \label{Eq:I:43:2} N(t + dt) = N(t) - N(t)\,\frac{dt}{\tau}. \end{equation} The quantity on the left-hand side, $N(t + dt)$, can be written, according to the definitions of calculus, as $N(t) + (dN/dt)\,dt$. Making this substitution, Eq. (43.2) yields \begin{equation} \label{Eq:I:43:3} \frac{dN(t)}{dt} = -\frac{N(t)}{\tau}. \end{equation} The number that are being lost in the interval $dt$ is proportional to the number that are present, and inversely proportional to the mean life $\tau$. Equation (43.3) is easily integrated if we rewrite it as \begin{equation} \label{Eq:I:43:4} \frac{dN(t)}{N(t)} = -\frac{dt}{\tau}. \end{equation} Each side is a perfect differential, so the integral is \begin{equation} \label{Eq:I:43:5} \ln N(t) = -t/\tau + (\text{a constant}), \end{equation} which says the same thing as \begin{equation} \label{Eq:I:43:6} N(t) = (\text{constant})e^{-t/\tau}. \end{equation} We know that the constant must be just $N_0$, the total number of molecules present, since all of them start at $t = 0$ to wait for their “next” collision. We can write our result as \begin{equation} \label{Eq:I:43:7} N(t) = N_0e^{-t/\tau}. \end{equation} If we wish the probability of no collision, $P(t)$, we can get it by dividing $N(t)$ by $N_0$, so \begin{equation} \label{Eq:I:43:8} P(t) = e^{-t/\tau}. \end{equation} Our result is: the probability that a particular molecule survives a time $t$ without a collision is $e^{-t/\tau}$, where $\tau$ is the mean time between collisions.
The probability starts out at $1$ (or certainty) for $t = 0$, and gets less as $t$ gets bigger and bigger. The probability that the molecule avoids a collision for a time equal to $\tau$ is $e^{-1} \approx 0.37$. The chance is less than one-half that it will have a greater than average time between collisions. That is all right, because there are enough molecules which go collision-free for times much longer than the mean time before colliding, so that the average time can still be $\tau$. We originally defined $\tau$ as the average time between collisions. The result we have obtained in Eq. (43.7) also says that the mean time from an arbitrary starting instant to the next collision is also $\tau$. We can demonstrate this somewhat surprising fact in the following way. The number of molecules which experience their next collision in the interval $dt$ at the time $t$ after an arbitrarily chosen starting time is $N(t)\,dt/\tau$. Their “time until the next collision” is, of course, just $t$. The “average time until the next collision” is obtained in the usual way: \begin{equation*} \text{Average time until the next collision} = \frac{1}{N_0}\int_0^\infty t\,\frac{N(t)\,dt}{\tau}. \end{equation*} Using $N(t)$ obtained in (43.7) and evaluating the integral, we find indeed that $\tau$ is the average time from any instant until the next collision. |
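The whole argument can be sketched as a Monte Carlo experiment: give each molecule a chance $dt/\tau$ of colliding in every small interval $dt$, and follow many molecules to their first collision. The survival fraction at $t = \tau$ comes out near $e^{-1} \approx 0.37$, and the mean waiting time comes out near $\tau$:

```python
import random

# Simulate the rule "chance of a collision in dt is dt/tau" for many
# molecules and record each one's waiting time until its first collision.
random.seed(1)
tau = 1.0
dt = 0.01
n_molecules = 20_000

waits = []
for _ in range(n_molecules):
    t = 0.0
    while random.random() > dt / tau:   # no collision in this step
        t += dt
    waits.append(t + dt)                # collision occurred in this step

mean_wait = sum(waits) / n_molecules
surviving = sum(1 for t in waits if t > tau) / n_molecules

assert abs(mean_wait - tau) < 0.05      # mean time between collisions ~ tau
assert abs(surviving - 0.3679) < 0.02   # P(tau) ~ e^{-1}
```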
|
1 | 43 | Diffusion | 2 | The mean free path | Another way of describing the molecular collisions is to talk not about the time between collisions, but about how far the particle moves between collisions. If we say that the average time between collisions is $\tau$, and that the molecules have a mean velocity $v$, we can expect that the average distance between collisions, which we shall call $l$, is just the product of $\tau$ and $v$. This distance between collisions is usually called the mean free path: \begin{equation} \label{Eq:I:43:9} \text{Mean free path $l$} = \tau v. \end{equation} In this chapter we shall be a little careless about what kind of average we mean in any particular case. The various possible averages—the mean, the root-mean-square, etc.—are all nearly equal and differ by factors which are near to one. Since a detailed analysis is required to obtain the correct numerical factors anyway, we need not worry about which average is required at any particular point. We may also warn the reader that the algebraic symbols we are using for some of the physical quantities (e.g., $l$ for the mean free path) do not follow a generally accepted convention, mainly because there is no general agreement. Just as the chance that a molecule will have a collision in a short time $dt$ is equal to $dt/\tau$, the chance that it will have a collision in going a distance $dx$ is $dx/l$. Following the same line of argument used above, the reader can show that the probability that a molecule will go at least the distance $x$ before having its next collision is $e^{-x/l}$. The average distance a molecule goes before colliding with another molecule—the mean free path $l$—will depend on how many molecules there are around and on the “size” of the molecules, i.e., how big a target they represent. The effective “size” of a target in a collision we usually describe by a “collision cross section,” the same idea that is used in nuclear physics, or in light-scattering problems. 
Consider a moving particle which travels a distance $dx$ through a gas which has $n_0$ scatterers (molecules) per unit volume (Fig. 43–1). If we look at each unit of area perpendicular to the direction of motion of our selected particle, we will find there $n_0\,dx$ molecules. If each one presents an effective collision area or, as it is usually called, “collision cross section,” $\sigma_c$, then the total area covered by the scatterers is $\sigma_cn_0\,dx$. By “collision cross section” we mean the area within which the center of our particle must be located if it is to collide with a particular molecule. If molecules were little spheres (a classical picture) we would expect that $\sigma_c = \pi(r_1 + r_2)^2$, where $r_1$ and $r_2$ are the radii of the two colliding objects. The chance that our particle will have a collision is the ratio of the area covered by scattering molecules to the total area, which we have taken to be one. So the probability of a collision in going a distance $dx$ is just $\sigma_cn_0\,dx$: \begin{equation} \label{Eq:I:43:10} \text{Chance of a collision in $dx$} = \sigma_cn_0\,dx. \end{equation} We have seen above that the chance of a collision in $dx$ can also be written in terms of the mean free path $l$ as $dx/l$. Comparing this with (43.10), we can relate the mean free path to the collision cross section: \begin{equation} \label{Eq:I:43:11} \frac{1}{l} = \sigma_cn_0, \end{equation} which is easier to remember if we write it as \begin{equation} \label{Eq:I:43:12} \sigma_cn_0l = 1. \end{equation} This formula can be thought of as saying that there should be one collision, on the average, when the particle goes through a distance $l$ in which the scattering molecules could just cover the total area. In a cylindrical volume of length $l$ and a base of unit area, there are $n_0l$ scatterers; if each one has an area $\sigma_c$ the total area covered is $n_0l\sigma_c$, which is just one unit of area. 
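As a small numerical sketch of Eq. (43.11), we can estimate the mean free path for a gas at standard conditions. The molecular radius and the conditions below are rough assumptions chosen only to give the right order of magnitude:

```python
import math

# Illustrative numbers (assumptions, not precise data): number density
# from p = n0*k*T, and a hard-sphere cross section
# sigma_c = pi*(r1 + r2)^2 with equal radii of about 0.185 nm.
k = 1.380649e-23      # Boltzmann constant, J/K
p = 101325.0          # pressure, Pa
T = 273.15            # temperature, K

n0 = p / (k * T)      # scatterers per unit volume

r = 0.185e-9          # assumed molecular radius, m
sigma_c = math.pi * (2 * r) ** 2   # pi*(r1 + r2)^2 with r1 = r2 = r

l = 1.0 / (sigma_c * n0)           # Eq. (43.11)

print(f"n0      = {n0:.3e} 1/m^3")
print(f"sigma_c = {sigma_c:.3e} m^2")
print(f"l       = {l:.3e} m")      # a few tens of nanometers
```

The mean free path comes out at some tens of nanometers, far smaller than any laboratory vessel. A more careful treatment includes a numerical factor for the relative motion of the scatterers, but, in the spirit of this chapter, we ignore factors near one.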
The whole area is not covered, of course, because some molecules are partly hidden behind others. That is why some molecules go farther than $l$ before having a collision. It is only on the average that the molecules have a collision by the time they go the distance $l$. From measurements of the mean free path $l$ we can determine the scattering cross section $\sigma_c$, and compare the result with calculations based on a detailed theory of atomic structure. But that is a different subject! So we return to the problem of nonequilibrium states. |
|
1 | 43 | Diffusion | 3 | The drift speed | We want to describe what happens to a molecule, or several molecules, which are different in some way from the large majority of the molecules in a gas. We shall refer to the “majority” molecules as the “background” molecules, and we shall call the molecules which are different from the background molecules “special” molecules or, for short, the $S$-molecules. A molecule could be special for any number of reasons: It might be heavier than the background molecules. It might be a different chemical. It might have an electric charge—i.e., be an ion in a background of uncharged molecules. Because of their different masses or charges the $S$-molecules may have forces on them which are different from the forces on the background molecules. By considering what happens to these $S$-molecules we can understand the basic effects which come into play in a similar way in many different phenomena. To list a few: the diffusion of gases, electric currents in batteries, sedimentation, centrifugal separation, etc. We begin by concentrating on the basic process: an $S$-molecule in a background gas is acted on by some specific force $\FLPF$ (which might be, e.g., gravitational or electrical) and in addition by the not-so-specific forces due to collisions with the background molecules. We would like to describe the general behavior of the $S$-molecule. What happens to it, in detail, is that it darts around hither and yon as it collides over and over again with other molecules. But if we watch it carefully we see that it does make some net progress in the direction of the force $\FLPF$. We say that there is a drift, superposed on its random motion. We would like to know what the speed of its drift is—its drift velocity—due to the force $\FLPF$. If we start to observe an $S$-molecule at some instant we may expect that it is somewhere between two collisions. 
In addition to the velocity it was left with after its last collision it is picking up some velocity component due to the force $\FLPF$. In a short time (on the average, in a time $\tau$) it will experience a collision and start out on a new piece of its trajectory. It will have a new starting velocity, but the same acceleration from $\FLPF$. To keep things simple for the moment, we shall suppose that after each collision our $S$-molecule gets a completely “fresh” start. That is, that it keeps no remembrance of its past acceleration by $\FLPF$. This might be a reasonable assumption if our $S$-molecule were much lighter than the background molecules, but it is certainly not valid in general. We shall discuss later an improved assumption. For the moment, then, our assumption is that the $S$-molecule leaves each collision with a velocity which may be in any direction with equal likelihood. The starting velocity will take it equally in all directions and will not contribute to any net motion, so we shall not worry further about its initial velocity after a collision. In addition to its random motion, each $S$-molecule will have, at any moment, an additional velocity in the direction of the force $\FLPF$, which it has picked up since its last collision. What is the average value of this part of the velocity? It is just the acceleration $\FLPF/m$ (where $m$ is the mass of the $S$-molecule) times the average time since the last collision. Now the average time since the last collision must be the same as the average time until the next collision, which we have called $\tau$, above. The average velocity from $\FLPF$, of course, is just what is called the drift velocity, so we have the relation \begin{equation} \label{Eq:I:43:13} v_{\text{drift}} = \frac{F\tau}{m}. \end{equation} This basic relation is the heart of our subject. There may be some complication in determining what $\tau$ is, but the basic process is defined by Eq. (43.13). 
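To get a feeling for the size of the effect, here is Eq. (43.13) with made-up but plausible numbers for an ion in a gas; every value below is an assumption for illustration:

```python
# Drift velocity from Eq. (43.13), v_drift = F*tau/m.
e = 1.602176634e-19    # elementary charge, C
E = 1.0e4              # electric field, V/m (assumed)
F = e * E              # force on a singly charged ion, N

m = 4.8e-26            # ion mass, kg (roughly a nitrogen ion)
tau = 1.0e-10          # mean time between collisions, s (assumed)

v_drift = F * tau / m
print(f"F       = {F:.3e} N")
print(f"v_drift = {v_drift:.3e} m/s")
```

With these numbers the drift is a few meters per second, small compared with thermal speeds of several hundred meters per second; the drift really is a slight bias superposed on the random motion.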
You will notice that the drift velocity is proportional to the force. There is, unfortunately, no generally used name for the constant of proportionality. Different names have been used for each different kind of force. If in an electrical problem the force is written as the charge times the electric field, $\FLPF = q\FLPE$, then the constant of proportionality between the velocity and the electric field $\FLPE$ is usually called the “mobility.” In spite of the possibility of some confusion, we shall use the term mobility for the ratio of the drift velocity to the force for any force. We write \begin{equation} \label{Eq:I:43:14} v_{\text{drift}} = \mu F \end{equation} in general, and we shall call $\mu$ the mobility. We have from Eq. (43.13) that \begin{equation} \label{Eq:I:43:15} \mu = \tau/m. \end{equation} The mobility is proportional to the mean time between collisions (there are fewer collisions to slow it down) and inversely proportional to the mass (more inertia means less speed picked up between collisions). To get the correct numerical coefficient in Eq. (43.13), which is correct as given, takes some care. Without intending to confuse, we should still point out that the arguments have a subtlety which can be appreciated only by a careful and detailed study. To illustrate that there are difficulties, in spite of appearances, we shall make over again the argument which led to Eq. (43.13) in a reasonable but erroneous way (and the way one will find in many textbooks!). We might have said: The mean time between collisions is $\tau$. After a collision the particle starts out with a random velocity, but it picks up an additional velocity between collisions, which is equal to the acceleration times the time. Since it takes the time $\tau$ to arrive at the next collision it gets there with the velocity $(F/m)\tau$. At the beginning of the collision it had zero velocity. 
So between the two collisions it has, on the average, a velocity one-half of the final velocity, so the mean drift velocity is $\tfrac{1}{2}F\tau/m$. (Wrong!) This result is wrong and the result in Eq. (43.13) is right, although the arguments may sound equally satisfactory. The reason the second result is wrong is somewhat subtle, and has to do with the following: The argument is made as though all collisions were separated by the mean time $\tau$. The fact is that some times are shorter and others are longer than the mean. Short times occur more often but make less contribution to the drift velocity because they have less chance “to really get going.” If one takes proper account of the distribution of free times between collisions, one can show that there should not be the factor $\tfrac{1}{2}$ that was obtained from the second argument. The error was made in trying to relate by a simple argument the average final velocity to the average velocity itself. This relationship is not simple, so it is best to concentrate on what is wanted: the average velocity itself. The first argument we gave determines the average velocity directly—and correctly! But we can perhaps see now why we shall not in general try to get all of the correct numerical coefficients in our elementary derivations! We return now to our simplifying assumption that each collision knocks out all memory of the past motion—that a fresh start is made after each collision. Suppose our $S$-molecule is a heavy object in a background of lighter molecules. Then our $S$-molecule will not lose its “forward” momentum in each collision. It would take several collisions before its motion was “randomized” again. We should assume, instead, that at each collision—in each time $\tau$ on the average—it loses a certain fraction of its momentum. 
We shall not work out the details, but just state that the result is equivalent to replacing $\tau$, the average collision time, by a new—and longer—$\tau$ which corresponds to the average “forgetting time,” i.e., the average time to forget its forward momentum. With such an interpretation of $\tau$ we can use our formula (43.15) for situations which are not quite as simple as we first assumed. |
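The role of the distribution of free times in fixing the coefficient of Eq. (43.13) can be seen in a short simulation. Over one free flight of duration $t$ the particle gains velocity $(F/m)t$, so the extra distance contributed by that flight is $(F/m)t^2/2$; averaging over exponentially distributed flight times, the time-averaged drift velocity is $(F/m)\,E[t^2]/(2E[t]) = F\tau/m$, since $E[t^2] = 2\tau^2$ for an exponential distribution—there is no factor $\tfrac{1}{2}$:

```python
import random

random.seed(2)

# Free times between collisions, exponentially distributed with mean tau.
F_over_m = 1.0     # acceleration F/m, arbitrary units
tau = 1.0          # mean free time, arbitrary units
N = 400_000

times = [random.expovariate(1.0 / tau) for _ in range(N)]

# Distance gained from F during a flight of length t is (F/m)*t^2/2;
# divide total extra distance by total time to get the drift velocity.
drift_distance = sum(F_over_m * t * t / 2.0 for t in times)
total_time = sum(times)
v_drift = drift_distance / total_time

print(f"simulated v_drift = {v_drift:.4f}")   # close to F*tau/m = 1.0
print(f"naive 1/2 answer  = {0.5 * F_over_m * tau}")
```

The simulation lands near $F\tau/m$, not $\tfrac{1}{2}F\tau/m$: the long flights, though rarer, dominate the time average, which is exactly the subtlety the erroneous argument misses.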
|
1 | 43 | Diffusion | 4 | Ionic conductivity | We now apply our results to a special case. Suppose we have a gas in a vessel in which there are also some ions—atoms or molecules with a net electric charge. We show the situation schematically in Fig. 43–2. If two opposite walls of the container are metallic plates, we can connect them to the terminals of a battery and thereby produce an electric field in the gas. The electric field will result in a force on the ions, so they will begin to drift toward one or the other of the plates. An electric current will be induced, and the gas with its ions will behave like a resistor. By computing the ion flow from the drift velocity we can compute the resistance. We ask, specifically: How does the flow of electric current depend on the voltage difference $V$ that we apply across the two plates? We consider the case that our container is a rectangular box of length $b$ and cross-sectional area $A$ (Fig. 43–2). If the potential difference, or voltage, from one plate to the other is $V$, the electric field $E$ between the plates is $V/b$. (The electric potential is the work done in carrying a unit charge from one plate to the other. The force on a unit charge is $\FLPE$. If $\FLPE$ is the same everywhere between the plates, which is a good enough approximation for now, the work done on a unit charge is just $Eb$, so $V = Eb$.) The special force on an ion of the gas is $q\FLPE$, where $q$ is the charge on the ion. The drift velocity of the ion is then $\mu$ times this force, or \begin{equation} \label{Eq:I:43:16} v_{\text{drift}} = \mu F = \mu qE = \mu q\,\frac{V}{b}. \end{equation} An electric current $I$ is the flow of charge in a unit time. The electric current to one of the plates is given by the total charge of the ions which arrive at the plate in a unit of time. 
If the ions drift toward the plate with the velocity $v_{\text{drift}}$, then those which are within a distance ($v_{\text{drift}}\cdot T$) will arrive at the plate in the time $T$. If there are $n_i$ ions per unit volume, the number which reach the plate in the time $T$ is ($n_i\cdot A\cdot v_{\text{drift}}\cdot T$). Each ion carries the charge $q$, so we have that \begin{equation} \label{Eq:I:43:17} \text{Charge collected in $T$} = qn_iAv_{\text{drift}}T. \end{equation} The current $I$ is the charge collected in $T$ divided by $T$, so \begin{equation} \label{Eq:I:43:18} I = qn_iAv_{\text{drift}}. \end{equation} Substituting $v_{\text{drift}}$ from (43.16), we have \begin{equation} \label{Eq:I:43:19} I = \mu q^2n_i\,\frac{A}{b}\,V. \end{equation} We find that the current is proportional to the voltage, which is just the form of Ohm’s law, and the resistance $R$ is the inverse of the proportionality constant: \begin{equation} \label{Eq:I:43:20} \frac{1}{R} = \mu q^2n_i\,\frac{A}{b}. \end{equation} We have a relation between the resistance and the molecular properties $n_i$, $q$, and $\mu$, which depends in turn on $m$ and $\tau$. If we know $n_i$ and $q$ from atomic measurements, a measurement of $R$ could be used to determine $\mu$, and from $\mu$ also $\tau$. |
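A numeric sketch of the chain of results (43.15), (43.16), and (43.19)–(43.20) is given below; all of the values are assumptions chosen for illustration, not data for any real ionized gas:

```python
# Ohm's law for an ionized gas: 1/R = mu*q^2*n_i*(A/b), Eq. (43.20).
q = 1.602176634e-19    # ion charge, C
m = 4.8e-26            # ion mass, kg (assumed)
tau = 1.0e-10          # mean collision time, s (assumed)
mu = tau / m           # mobility, Eq. (43.15)

n_i = 1.0e16           # ions per m^3 (assumed)
A = 1.0e-2             # plate area, m^2
b = 1.0e-1             # plate separation, m
V = 100.0              # applied voltage, V

G = mu * q**2 * n_i * A / b        # conductance, 1/R
I = G * V                          # current, Eq. (43.19)
R = 1.0 / G

print(f"mobility mu = {mu:.3e} s/kg")
print(f"R = {R:.3e} ohm,  I = {I:.3e} A")
```

As a consistency check, the same current follows from counting ion arrivals: $I = qn_iAv_{\text{drift}}$ with $v_{\text{drift}} = \mu qV/b$ from Eq. (43.16).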
|
1 | 43 | Diffusion | 5 | Molecular diffusion | We turn now to a different kind of problem, and a different kind of analysis: the theory of diffusion. Suppose that we have a container of gas in thermal equilibrium, and that we introduce a small amount of a different kind of gas at some place in the container. We shall call the original gas the “background” gas and the new one the “special” gas. The special gas will start to spread out through the whole container, but it will spread slowly because of the presence of the background gas. This slow spreading-out process is called diffusion. The diffusion is controlled mainly by the molecules of the special gas getting knocked about by the molecules of the background gas. After a large number of collisions, the special molecules end up spread out more or less evenly throughout the whole volume. We must be careful not to confuse diffusion of a gas with the gross transport that may occur due to convection currents. Most commonly, the mixing of two gases occurs by a combination of convection and diffusion. We are interested now only in the case that there are no “wind” currents. The gas is spreading only by molecular motions, by diffusion. We wish to compute how fast diffusion takes place. We now compute the net flow of molecules of the “special” gas due to the molecular motions. There will be a net flow only when there is some nonuniform distribution of the molecules, otherwise all of the molecular motions would average to give no net flow. Let us consider first the flow in the $x$-direction. To find the flow, we consider an imaginary plane surface perpendicular to the $x$-axis and count the number of special molecules that cross this plane. To obtain the net flow, we must count as positive those molecules which cross in the direction of positive $x$ and subtract from this number the number which cross in the negative $x$-direction. 
As we have seen many times, the number which cross a surface area in a time $\Delta T$ is given by the number which start the interval $\Delta T$ in a volume which extends the distance $v\,\Delta T$ from the plane. (Note that $v$, here, is the actual molecular velocity, not the drift velocity.) We shall simplify our algebra by giving our surface one unit of area. Then the number of special molecules which pass from left to right (taking the $+x$-direction to the right) is $n_-v\,\Delta T$, where $n_-$ is the number of special molecules per unit volume to the left (within a factor of $2$ or so, but we are ignoring such factors!). The number which cross from right to left is, similarly, $n_+v\,\Delta T$, where $n_+$ is the number density of special molecules on the right-hand side of the plane. If we call the molecular current $J$, by which we mean the net flow of molecules per unit area per unit time, we have \begin{equation} \label{Eq:I:43:21} J = \frac{n_-v\,\Delta T - n_+v\,\Delta T}{\Delta T}, \end{equation} or \begin{equation} \label{Eq:I:43:22} J = (n_- - n_+)v. \end{equation} What shall we use for $n_-$ and $n_+$? When we say “the density on the left,” how far to the left do we mean? We should choose the density at the place from which the molecules started their “flight,” because the number which start such trips is determined by the number present at that place. So by $n_-$ we should mean the density a distance to the left equal to the mean free path $l$, and by $n_+$, the density at the distance $l$ to the right of our imaginary surface. It is convenient to consider that the distribution of our special molecules in space is described by a continuous function of $x$, $y$, and $z$ which we shall call $n_a$. By $n_a(x,y,z)$ we mean the number density of special molecules in a small volume element centered on $(x,y,z)$. 
In terms of $n_a$ we can express the difference $(n_+ - n_-)$ as \begin{equation} \label{Eq:I:43:23} (n_+ - n_-) = \ddt{n_a}{x}\,\Delta x = \ddt{n_a}{x}\cdot 2l. \end{equation} Substituting this result in Eq. (43.22) and neglecting the factor of $2$, we get \begin{equation} \label{Eq:I:43:24} J_x = -lv\,\ddt{n_a}{x}. \end{equation} We have found that the flow of special molecules is proportional to the derivative of the density, or to what is sometimes called the “gradient” of the density. It is clear that we have made several rough approximations. Besides various factors of two we have left out, we have used $v$ where we should have used $v_x$, and we have assumed that $n_+$ and $n_-$ refer to places at the perpendicular distance $l$ from our surface, whereas for those molecules which do not travel perpendicular to the surface element, $l$ should correspond to the slant distance from the surface. All of these refinements can be made; the result of a more careful analysis shows that the right-hand side of Eq. (43.24) should be multiplied by $1/3$. So a better answer is \begin{equation} \label{Eq:I:43:25} J_x = -\frac{lv}{3}\,\ddt{n_a}{x}. \end{equation} Similar equations can be written for the currents in the $y$- and $z$-directions. The current $J_x$ and the density gradient $dn_a/dx$ can be measured by macroscopic observations. Their experimentally determined ratio is called the “diffusion coefficient,” $D$. That is, \begin{equation} \label{Eq:I:43:26} J_x = -D\,\ddt{n_a}{x}. \end{equation} We have been able to show that for a gas we expect \begin{equation} \label{Eq:I:43:27} D = \tfrac{1}{3}lv. \end{equation} So far in this chapter we have considered two distinct processes: mobility, the drift of molecules due to “outside” forces; and diffusion, the spreading determined only by the internal forces, the random collisions. 
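Putting numbers into Eqs. (43.26) and (43.27) shows the sizes involved; the mean free path, speed, and gradient below are assumptions for illustration only:

```python
# Diffusion coefficient and flux from Eqs. (43.26)-(43.27).
l = 8.6e-8        # mean free path, m (assumed)
v = 450.0         # mean molecular speed, m/s (assumed)

D = l * v / 3.0   # Eq. (43.27): D = l*v/3

dn_dx = -1.0e22   # density gradient of the special molecules, 1/m^4
Jx = -D * dn_dx   # Eq. (43.26): the flow is *down* the gradient

print(f"D  = {D:.3e} m^2/s")
print(f"Jx = {Jx:.3e} molecules per m^2 per s")
```

A diffusion coefficient of order $10^{-5}\,\text{m}^2/\text{s}$ is indeed typical of gases, and the minus sign guarantees that a decreasing density produces a flow toward the low-density side.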
There is, however, a relation between them, since they both depend basically on the thermal motions, and the mean free path $l$ appears in both calculations. If, in Eq. (43.25), we substitute $l = v\tau$ and $\tau = \mu m$, we have \begin{equation} \label{Eq:I:43:28} J_x = -\tfrac{1}{3}mv^2\mu\,\ddt{n_a}{x}. \end{equation} But $mv^2$ depends only on the temperature. We recall that \begin{equation} \label{Eq:I:43:29} \tfrac{1}{2}mv^2 = \tfrac{3}{2}kT, \end{equation} so \begin{equation} \label{Eq:I:43:30} J_x = -\mu kT\,\ddt{n_a}{x}. \end{equation} We find that $D$, the diffusion coefficient, is just $kT$ times $\mu$, the mobility coefficient: \begin{equation} \label{Eq:I:43:31} D = \mu kT. \end{equation} And it turns out that the numerical coefficient in (43.31) is exactly right—no extra factors have to be thrown in to adjust for our rough assumptions. We can show, in fact, that (43.31) must always be correct—even in complicated situations (for example, the case of a suspension in a liquid) where the details of our simple calculations would not apply at all. To show that (43.31) must be correct in general, we shall derive it in a different way, using only our basic principles of statistical mechanics. Imagine a situation in which there is a gradient of “special” molecules, and we have a diffusion current proportional to the density gradient, according to Eq. (43.26). We now apply a force field in the $x$-direction, so that each special molecule feels the force $F$. According to the definition of the mobility $\mu$ there will be a drift velocity given by \begin{equation} \label{Eq:I:43:32} v_{\text{drift}} = \mu F. \end{equation} By our usual arguments, the drift current (the net number of molecules which pass a unit of area in a unit of time) will be \begin{equation} \label{Eq:I:43:33} J_{\text{drift}} = n_av_{\text{drift}}, \end{equation} or \begin{equation} \label{Eq:I:43:34} J_{\text{drift}} = n_a\mu F. 
\end{equation} We now adjust the force $F$ so that the drift current due to $F$ just balances the diffusion, so that there is no net flow of our special molecules. We have $J_x + J_{\text{drift}} = 0$, or \begin{equation} \label{Eq:I:43:35} D\,\ddt{n_a}{x} = n_a\mu F. \end{equation} Under the “balance” conditions we find a steady (with time) gradient of density given by \begin{equation} \label{Eq:I:43:36} \ddt{n_a}{x} = \frac{n_a\mu F}{D}. \end{equation} But notice! We are describing an equilibrium condition, so our equilibrium laws of statistical mechanics apply. According to these laws the probability of finding a molecule at the coordinate $x$ is proportional to $e^{-U/kT}$, where $U$ is the potential energy. In terms of the number density $n_a$, this means that \begin{equation} \label{Eq:I:43:37} n_a = n_0e^{-U/kT}. \end{equation} If we differentiate (43.37) with respect to $x$, we find \begin{equation} \label{Eq:I:43:38} \ddt{n_a}{x} = -n_0e^{-U/kT}\cdot\frac{1}{kT}\,\ddt{U}{x}, \end{equation} or \begin{equation} \label{Eq:I:43:39} \ddt{n_a}{x} = -\frac{n_a}{kT}\,\ddt{U}{x}. \end{equation} In our situation, since the force $F$ is in the $x$-direction, the potential energy $U$ is just $-Fx$, and $-dU/dx = F$. Equation (43.39) then gives \begin{equation} \label{Eq:I:43:40} \ddt{n_a}{x} = \frac{n_aF}{kT}. \end{equation} [This is just exactly Eq. (40.2), from which we deduced $e^{-U/kT}$ in the first place, so we have come in a circle]. Comparing (43.40) with (43.36), we get exactly Eq. (43.31). We have shown that Eq. (43.31), which gives the diffusion current in terms of the mobility, has the correct coefficient and is very generally true. Mobility and diffusion are intimately connected. This relation was first deduced by Einstein. |
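The Einstein relation can also be checked arithmetically from the elementary formulas themselves: with $\mu = \tau/m$, $l = v\tau$, and $\tfrac{1}{2}mv^2 = \tfrac{3}{2}kT$, the kinetic-theory value $D = lv/3$ reduces identically to $\mu kT$. A short sketch (the mass and collision time are assumed values):

```python
import math

# Consistency check of D = mu*k*T, Eq. (43.31).
k = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0              # temperature, K
m = 4.8e-26            # molecular mass, kg (assumed)
tau = 1.0e-10          # mean free time, s (assumed)

v = math.sqrt(3.0 * k * T / m)   # speed from (1/2)m v^2 = (3/2)kT
l = v * tau                      # mean free path, l = v*tau
D_kinetic = l * v / 3.0          # Eq. (43.27)
D_einstein = (tau / m) * k * T   # Eq. (43.31) with mu = tau/m

print(f"D from l*v/3  = {D_kinetic:.6e} m^2/s")
print(f"D from mu*k*T = {D_einstein:.6e} m^2/s")
```

The two expressions agree to rounding error, as they must, since $lv/3 = v^2\tau/3 = (3kT/m)\tau/3 = \mu kT$.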
|
1 | 43 | Diffusion | 6 | Thermal conductivity | The methods of the kinetic theory that we have been using above can be used also to compute the thermal conductivity of a gas. If the gas at the top of a container is hotter than the gas at the bottom, heat will flow from the top to the bottom. (We think of the top being hotter because otherwise convection currents would be set up and the problem would no longer be one of heat conduction.) The transfer of heat from the hotter gas to the colder gas is by the diffusion of the “hot” molecules—those with more energy—downward and the diffusion of the “cold” molecules upward. To compute the flow of thermal energy we can ask about the energy carried downward across an element of area by the downward-moving molecules, and about the energy carried upward across the surface by the upward-moving molecules. The difference will give us the net downward flow of energy. The thermal conductivity $\kappa$ is defined as the ratio of the rate at which thermal energy is carried across a unit surface area, to the temperature gradient: \begin{equation} \label{Eq:I:43:41} \frac{1}{A}\,\ddt{Q}{t} = -\kappa\,\ddt{T}{z}. \end{equation} Since the details of the calculations are quite similar to those we have done above in considering molecular diffusion, we shall leave it as an exercise for the reader to show that \begin{equation} \label{Eq:I:43:42} \kappa = \frac{knlv}{\gamma - 1}, \end{equation} where $kT/(\gamma - 1)$ is the average energy of a molecule at the temperature $T$. If we use our relation $nl\sigma_c = 1$, the heat conductivity can be written as \begin{equation} \label{Eq:I:43:43} \kappa = \frac{1}{\gamma - 1}\,\frac{kv}{\sigma_c}. \end{equation} We have a rather surprising result. We know that the average velocity of gas molecules depends on the temperature but not on the density. We expect $\sigma_c$ to depend only on the size of the molecules. 
So our simple result says that the thermal conductivity $\kappa$ (and therefore the rate of flow of heat in any particular circumstance) is independent of the density of the gas! The change in the number of “carriers” of energy with a change in density is just compensated by the larger distance the “carriers” can go between collisions. One may ask: “Is the heat flow independent of the gas density in the limit as the density goes to zero? When there is no gas at all?” Certainly not! The formula (43.43) was derived, as were all the others in this chapter, under the assumption that the mean free path between collisions is much smaller than any of the dimensions of the container. Whenever the gas density is so low that a molecule has a fair chance of crossing from one wall of its container to the other without having a collision, none of the calculations of this chapter apply. We must in such cases go back to kinetic theory and calculate again the details of what will occur. |
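The density independence of Eq. (43.43) can be made concrete with a numerical sketch; the speed, cross section, and density below are assumptions for a monatomic gas:

```python
# Eq. (43.43): kappa = (1/(gamma-1)) * k*v/sigma_c -- no density appears.
k = 1.380649e-23       # Boltzmann constant, J/K
gamma = 5.0 / 3.0      # monatomic ideal gas
v = 500.0              # mean molecular speed, m/s (assumed)
sigma_c = 4.3e-19      # collision cross section, m^2 (assumed)

kappa = (1.0 / (gamma - 1.0)) * k * v / sigma_c
print(f"kappa = {kappa:.3e} W/(m K)")

# Starting instead from Eq. (43.42), kappa = k*n*l*v/(gamma-1), any
# assumed density n gives the same answer, since n*l = 1/sigma_c.
n = 2.5e25             # assumed density, 1/m^3
l = 1.0 / (sigma_c * n)
kappa_from_nl = k * n * l * v / (gamma - 1.0)
print(f"same kappa via Eq. (43.42): {kappa_from_nl:.3e} W/(m K)")
```

Doubling $n$ halves $l$ and the two changes cancel, which is the compensation of "more carriers" against "shorter trips" described in the text.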
|
1 | 44 | The Laws of Thermodynamics | 1 | Heat engines; the first law | So far we have been discussing the properties of matter from the atomic point of view, trying to understand roughly what will happen if we suppose that things are made of atoms obeying certain laws. However, there are a number of relationships among the properties of substances which can be worked out without consideration of the detailed structure of the materials. The determination of the relationships among the various properties of materials, without knowing their internal structure, is the subject of thermodynamics. Historically, thermodynamics was developed before an understanding of the internal structure of matter was achieved. To give an example: we know from the kinetic theory that the pressure of a gas is caused by molecular bombardment, and we know that if we heat a gas, so that the bombardment increases, the pressure must increase. Conversely, if the piston in a container of the gas is moved inward against the force of bombardment, the energy of the molecules bombarding the piston will increase, and consequently the temperature will increase. So, on the one hand, if we increase the temperature at a given volume, we increase the pressure. On the other hand, if we compress the gas, we will find that the temperature will rise. From the kinetic theory, one can derive a quantitative relationship between these two effects, but instinctively one might guess that they are related in some necessary fashion which is independent of the details of the collisions. Let us consider another example. Many people are familiar with this interesting property of rubber: If we take a rubber band and pull it, it gets warm. If one puts it between his lips, for example, and pulls it out, he can feel a distinct warming, and this warming is reversible in the sense that if he relaxes the rubber band quickly while it is between his lips, it is distinctly cooled. 
That means that when we stretch a rubber band it heats, and when we release the tension of the band it cools. Now our instincts might suggest that if we heated a band, it might pull: that the fact that pulling a band heats it might imply that heating a band should cause it to contract. And, in fact, if we apply a gas flame to a rubber band holding a weight, we will see that the band contracts abruptly (Fig. 44–1). So it is true that when we heat a rubber band it pulls, and this fact is definitely related to the fact that when we release the tension of it, it cools. The internal machinery of rubber that causes these effects is quite complicated. We will describe it from a molecular point of view to some extent, although our main purpose in this chapter is to understand the relationship of these effects independently of the molecular model. Nevertheless, we can show from the molecular model that the effects are closely related. One way to understand the behavior of rubber is to recognize that this substance consists of an enormous tangle of long chains of molecules, a kind of “molecular spaghetti,” with one extra complication: between the chains there are cross-links—like spaghetti that is sometimes welded together where it crosses another piece of spaghetti—a grand tangle. When we pull out such a tangle, some of the chains tend to line up along the direction of the pull. At the same time, the chains are in thermal motion, so they hit each other continually. It follows that such a chain, if stretched, would not by itself remain stretched, because it would be hit from the sides by the other chains and other molecules, and would tend to kink up again. So the real reason why a rubber band tends to contract is this: when one pulls it out, the chains are lengthwise, and the thermal agitations of the molecules on the sides of the chains tend to kink the chains up, and make them shorten. 
One can then appreciate that if the chains are held stretched and the temperature is increased, so that the vigor of the bombardment on the sides of the chains is also increased, the chains tend to pull in, and they are able to pull a stronger weight when heated. If, after being stretched for a time, a rubber band is allowed to relax, each chain becomes soft, and the molecules striking it lose energy as they pound into the relaxing chain. So the temperature falls. We have seen how these two processes, contraction when heated and cooling during relaxation, can be related by the kinetic theory, but it would be a tremendous challenge to determine from the theory the precise relationship between the two. We would have to know how many collisions there were each second and what the chains look like, and we would have to take account of all kinds of other complications. The detailed mechanism is so complex that we cannot, by kinetic theory, really determine exactly what happens; still, a definite relation between the two effects we observe can be worked out without knowing anything about the internal machinery! The whole subject of thermodynamics depends essentially upon the following kind of consideration: because a rubber band is “stronger” at higher temperatures than it is at lower temperatures, it ought to be possible to lift weights, and to move them around, and thus to do work with heat. In fact, we have already seen experimentally that a heated rubber band can lift a weight. The study of the way that one does work with heat is the beginning of the science of thermodynamics. Can we make an engine which uses the heating effect on a rubber band to do work? One can make a silly looking engine that does just this. It consists of a bicycle wheel in which all the spokes are rubber bands (Fig. 44–2). If one heats the rubber bands on one side of the wheel with a pair of heat lamps, they become “stronger” than the rubber bands on the other side. 
The center of gravity of the wheel will be pulled to one side, away from the bearing, so that the wheel turns. As it turns, cool rubber bands move toward the heat, and the heated bands move away from the heat and cool, so that the wheel turns slowly so long as the heat is applied. The efficiency of this engine is extremely low. Four hundred watts of power pour into the two lamps, but it is just possible to lift a fly with such an engine! An interesting question, however, is whether we can get heat to do the work in more efficient ways. In fact, the science of thermodynamics began with an analysis, by the great engineer Sadi Carnot, of the problem of how to build the best and most efficient engine, and this constitutes one of the few famous cases in which engineering has contributed fundamentally to physical theory. Another example that comes to mind is the more recent analysis of information theory by Claude Shannon. These two analyses, incidentally, turn out to be closely related. Now the way a steam engine ordinarily operates is that heat from a fire boils some water, and the steam so formed expands and pushes on a piston which makes a wheel go around. So the steam pushes the piston—what then? One has to finish the job: a stupid way to complete the cycle would be to let the steam escape into the air, for then one has to keep supplying water. It is cheaper—more efficient—to let the steam go into another box, where it is condensed by cool water, and then pump the water back into the boiler, so that it circulates continuously. Heat is thus supplied to the engine and converted into work. Now would it be better to use alcohol? What property should a substance have so that it makes the best possible engine? That was the question to which Carnot addressed himself, and one of the by-products was the discovery of the type of relationship that we have just explained above. 
The results of thermodynamics are all contained implicitly in certain apparently simple statements called the laws of thermodynamics. At the time when Carnot lived, the first law of thermodynamics, the conservation of energy, was not known. Carnot’s arguments were so carefully drawn, however, that they are valid even though the first law was not known in his time! Some time afterwards, Clapeyron made a simpler derivation that could be understood more easily than Carnot’s very subtle reasoning. But it turned out that Clapeyron assumed, not the conservation of energy in general, but that heat was conserved according to the caloric theory, which was later shown to be false. So it has often been said that Carnot’s logic was wrong. But his logic was quite correct. Only Clapeyron’s simplified version, that everybody read, was incorrect. The so-called second law of thermodynamics was thus discovered by Carnot before the first law! It would be interesting to give Carnot’s argument that did not use the first law, but we shall not do so because we want to learn physics, not history. We shall use the first law from the start, in spite of the fact that a great deal can be done without it. Let us begin by stating the first law, the conservation of energy: if one has a system and puts heat into it, and does work on it, then its energy is increased by the heat put in and the work done. We can write this as follows: The heat $Q$ put into the system, plus the work $W$ done on the system, is the increase in the energy $U$ of the system; the latter energy is sometimes called the internal energy: \begin{equation} \label{Eq:I:44:1} \text{Change in $U$} = Q + W. \end{equation} The change in $U$ can be represented as adding a little heat $\Delta Q$ and adding a little work $\Delta W$: \begin{equation} \label{Eq:I:44:2} \Delta U = \Delta Q + \Delta W, \end{equation} which is a differential form of the same law. We know that very well, from an earlier chapter. |
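The bookkeeping of Eq. (44.2) can be written as a line of arithmetic. This is only an illustrative sketch: the joule values and the function name are invented, and the sign convention is the one in the text (heat put into the system and work done on the system both count as positive).

```python
# First-law bookkeeping, Eq. (44.2): Delta U = Delta Q + Delta W.
# Sign convention as in the text: Q is heat put INTO the system,
# W is work done ON the system.  Numbers are made up for illustration.
def delta_U(heat_in, work_on):
    """Change in internal energy of the system (joules)."""
    return heat_in + work_on

# 150 J of heat flows in while the system does 40 J of work on its
# surroundings (so the work done ON it is -40 J):
print(delta_U(150.0, -40.0))   # 110.0
```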
44–2 The second law

Now, what about the second law of thermodynamics? We know that if we do work against friction, say, the work lost to us is equal to the heat produced. If we do work in a room at temperature $T$, and we do the work slowly enough, the room temperature does not change much, and we have converted work into heat at a given temperature. What about the reverse possibility? Is it possible to convert the heat back into work at a given temperature? The second law of thermodynamics asserts that it is not. It would be very convenient to be able to convert heat into work merely by reversing a process like friction. If we consider only the conservation of energy, we might think that heat energy, such as that in the vibrational motions of molecules, might provide a goodly supply of useful energy. But Carnot assumed that it is impossible to extract the energy of heat at a single temperature. In other words, if the whole world were at the same temperature, one could not convert any of its heat energy into work: while the process of making work go into heat can take place at a given temperature, one cannot reverse it to get the work back again. Specifically, Carnot assumed that heat cannot be taken in at a certain temperature and converted into work with no other change in the system or the surroundings. That last phrase is very important. Suppose we have a can of compressed air at a certain temperature, and we let the air expand. It can do work; it can make hammers go, for example. It cools off a little in the expansion, but if we had a big sea, like the ocean, at a given temperature—a heat reservoir—we could warm it up again. So we have taken the heat out of the sea, and we have done work with the compressed air. But Carnot was not wrong, because we did not leave everything as it was.
If we recompress the air that we let expand, we will find we are doing extra work, and when we are finished we will discover that we not only got no work out of the system at temperature $T$, but we actually put some in. We must talk only about situations in which the net result of the whole process is to take heat away and convert it into work, just as the net result of the process of doing work against friction is to take work and convert it into heat. If we move in a circle, we can bring the system back precisely to its starting point, with the net result that we did work against friction and produced heat. Can we reverse the process? Turn a switch, so that everything goes backwards, so the friction does work against us, and cools the sea? According to Carnot: no! So let us suppose that this is impossible. If it were possible it would mean, among other things, that we could take heat out of a cold body and put it into a hot body at no cost, as it were. Now we know it is natural that a hot thing can warm up a cool thing; if we simply put a hot body and a cold one together, and change nothing else, our experience assures us that it is not going to happen that the hot one gets hotter, and the cold one gets colder! But if we could obtain work by extracting the heat out of the ocean, say, or from anything else at a single temperature, then that work could be converted back into heat by friction at some other temperature. For instance, the other arm of a working machine could be rubbing something that is already hot. The net result would be to take heat from a “cold” body, the ocean, and to put it into a hot body. Now, the hypothesis of Carnot, the second law of thermodynamics, is sometimes stated as follows: heat cannot, of itself, flow from a cold to a hot object. 
But, as we have just seen, these two statements are equivalent: first, that one cannot devise a process whose only result is to convert heat to work at a single temperature, and second, that one cannot make heat flow by itself from a cold to a hot place. We shall mostly use the first form. Carnot’s analysis of heat engines is quite similar to the argument that we gave about weight-lifting engines in our discussion of the conservation of energy in Chapter 4. In fact, that argument was patterned after Carnot’s argument about heat engines, and so the present treatment will sound very much the same. Suppose we build a heat engine that has a “boiler” somewhere at a temperature $T_1$. A certain heat $Q_1$ is taken from the boiler, the steam engine does some work $W$, and it then delivers some heat $Q_2$ into a “condenser” at another temperature $T_2$ (Fig. 44–3). Carnot did not say how much heat, because he did not know the first law, and he did not use the law that $Q_2$ was equal to $Q_1$ because he did not believe it. Although everybody thought that, according to the caloric theory, the heats $Q_1$ and $Q_2$ would have to be the same, Carnot did not say they were the same—that is part of the cleverness of his argument. If we do use the first law, we find that the heat delivered, $Q_2$, is the heat $Q_1$ that was put in minus the work $W$ that was done: \begin{equation} \label{Eq:I:44:3} Q_2 = Q_1 - W. \end{equation} (If we have some kind of cyclic process where water is pumped back into the boiler after it is condensed, we will say that we have heat $Q_1$ absorbed and work $W$ done, during each cycle, for a certain amount of water that goes around the cycle.) Now we shall build another engine, and see if we cannot get more work from the same amount of heat being delivered at the temperature $T_1$, with the condenser still at the temperature $T_2$. 
We shall use the same amount of heat $Q_1$ from the boiler, and we shall try to get more work than we did out of the steam engine, perhaps by using another fluid, such as alcohol. |
44–3 Reversible engines

Now we must analyze our engines. One thing is clear: we will lose something if the engines contain devices in which there is friction. The best engine will be a frictionless engine. We assume, then, the same idealization that we did when we studied the conservation of energy; that is, a perfectly frictionless engine. We must also consider the analog of frictionless motion, “frictionless” heat transfer. If we put a hot object at a high temperature against a cold object, so that the heat flows, then it is not possible to make that heat flow in a reverse direction by a very small change in the temperature of either object. But when we have a practically frictionless machine, if we push it with a little force one way, it goes that way, and if we push it with a little force the other way, it goes the other way. We need to find the analog of frictionless motion: heat transfer whose direction we can reverse with only a tiny change. If the difference in temperature is finite, that is impossible, but if one makes sure that heat flows always between two things at essentially the same temperature, with just an infinitesimal difference to make it flow in the desired direction, the flow is said to be reversible (Fig. 44–4). If we heat the object on the left a little, heat will flow to the right; if we cool it a little, heat will flow to the left. So we find that the ideal engine is a so-called reversible engine, in which every process is reversible in the sense that, by minor changes, infinitesimal changes, we can make the engine go in the opposite direction. That means that nowhere in the machine must there be any appreciable friction, and nowhere in the machine must there be any place where the heat of the reservoirs, or the flame of the boiler, is in direct contact with something definitely cooler or warmer. Let us now consider an idealized engine in which all the processes are reversible.
To show that such a thing is possible in principle, we will give an example of an engine cycle which may or may not be practical, but which is at least reversible, in the sense of Carnot’s idea. Suppose that we have a gas in a cylinder equipped with a frictionless piston. The gas is not necessarily a perfect gas. The fluid does not even have to be a gas, but to be specific let us say we do have a perfect gas. Also, suppose that we have two heat pads, $T_1$ and $T_2$—great big things that have definite temperatures, $T_1$ and $T_2$. We will suppose in this case that $T_1$ is higher than $T_2$. Let us first heat the gas and at the same time expand it, while it is in contact with the heat pad at $T_1$. As we do this, pulling the piston out very slowly as the heat flows into the gas, we will make sure that the temperature of the gas never gets very far from $T_1$. If we pull the piston out too fast, the temperature of the gas will fall too much below $T_1$ and then the process will not be quite reversible, but if we pull it out slowly enough, the temperature of the gas will never depart much from $T_1$. On the other hand, if we push the piston back slowly, the temperature would be only infinitesimally higher than $T_1$, and the heat would pour back. We see that such an isothermal (constant-temperature) expansion, done slowly and gently enough, is a reversible process. To understand what we are doing, we shall use a plot (Fig. 44–6) of the pressure of the gas against its volume. As the gas expands, the pressure falls. The curve marked (1) tells us how the pressure and volume change if the temperature is kept fixed at the value $T_1$. For an ideal gas this curve would be $PV = NkT_1$. During an isothermal expansion the pressure falls as the volume increases until we stop at the point $b$. 
At the same time, a certain heat $Q_1$ must flow into the gas from the reservoir, for if the gas were expanded without being in contact with the reservoir it would cool off, as we already know. Having completed the isothermal expansion, stopping at the point $b$, let us take the cylinder away from the reservoir and continue the expansion. This time we permit no heat to enter the cylinder. Again we perform the expansion slowly, so there is no reason why we cannot reverse it, and we again assume there is no friction. The gas continues to expand and the temperature falls, since there is no longer any heat entering the cylinder. We let the gas expand, following the curve marked (2), until the temperature falls to $T_2$, at the point marked $c$. This kind of expansion, made without adding heat, is called an adiabatic expansion. For an ideal gas, we already know that curve (2) has the form $PV^\gamma = \text{constant}$, where $\gamma$ is a constant greater than $1$, so that the adiabatic curve has a more negative slope than the isothermal curve. The gas cylinder has now reached the temperature $T_2$, so that if we put it on the heat pad at temperature $T_2$ there will be no irreversible changes. Now we slowly compress the gas while it is in contact with the reservoir at $T_2$, following the curve marked (3) (Fig. 44–5, Step 3). Because the cylinder is in contact with the reservoir, the temperature does not rise, but heat $Q_2$ flows from the cylinder into the reservoir at the temperature $T_2$. Having compressed the gas isothermally along curve (3) to the point $d$, we remove the cylinder from the heat pad at temperature $T_2$ and compress it still further, without letting any heat flow out. The temperature will rise, and the pressure will follow the curve marked (4). If we carry out each step properly, we can return to the point $a$ at temperature $T_1$ where we started, and repeat the cycle. 
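The four steps just described can be carried out numerically for a perfect gas. The sketch below is my own illustration, not part of the text: the units are chosen so that $Nk = 1$, the gas is taken monatomic ($\gamma = 5/3$), and the reservoir temperatures and starting volumes are invented. The work in each leg is obtained by integrating $P\,dV$ along that leg, and the net work of the cycle comes out equal to $Q_1 - Q_2$, as the first law requires.

```python
import math

# Ideal-gas Carnot cycle with made-up parameters.  Units chosen so
# that Nk = 1; gamma = 5/3 is my assumption of a monatomic gas.
Nk = 1.0
gamma = 5.0 / 3.0
T1, T2 = 400.0, 300.0      # hot and cold reservoir temperatures
Va, Vb = 1.0, 2.0          # endpoints of the isothermal expansion at T1

# On an adiabat PV^gamma = const; with PV = NkT this is equivalent
# to T V^(gamma-1) = const, which fixes the other two corner volumes:
r = (T1 / T2) ** (1.0 / (gamma - 1.0))
Vc, Vd = Vb * r, Va * r

def work(P, V_from, V_to, n=100_000):
    """Numerically integrate P(V) dV by the trapezoid rule."""
    h = (V_to - V_from) / n
    Ps = [P(V_from + i * h) for i in range(n + 1)]
    return h * (sum(Ps) - 0.5 * (Ps[0] + Ps[-1]))

iso = lambda T: (lambda V: Nk * T / V)                   # P V = NkT
adi = lambda P0, V0: (lambda V: P0 * (V0 / V) ** gamma)  # P V^gamma = const

W1 = work(iso(T1), Va, Vb)                 # (1) isothermal expansion at T1
W2 = work(adi(Nk * T1 / Vb, Vb), Vb, Vc)   # (2) adiabatic expansion
W3 = work(iso(T2), Vc, Vd)                 # (3) isothermal compression at T2
W4 = work(adi(Nk * T2 / Vd, Vd), Vd, Va)   # (4) adiabatic compression

Q1 = Nk * T1 * math.log(Vb / Va)   # heat absorbed from the T1 reservoir
Q2 = Nk * T2 * math.log(Vc / Vd)   # heat delivered to the T2 reservoir
W_net = W1 + W2 + W3 + W4

print(W_net, Q1 - Q2)   # the two agree: net work = Q1 - Q2
```

The two adiabatic legs contribute equal and opposite work, so all the net work traces back to the two isothermal legs.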
We see that on this diagram we have carried the gas around a complete cycle, and during one cycle we have put $Q_1$ in at temperature $T_1$, and have removed $Q_2$ at temperature $T_2$. Now the point is that this cycle is reversible, so that we could represent all the steps the other way around. We could have gone backwards instead of forwards: we could have started at point $a$, at temperature $T_1$, expanded along the curve (4), expanded further at the temperature $T_2$, absorbing heat $Q_2$, and so on, going around the cycle backward. If we go around the cycle in one direction, we must do work on the gas; if we go in the other direction, the gas does work on us. Incidentally, it is easy to find out what the total amount of work is, because the work during any expansion is the pressure times the change in volume, $\int P\,dV$. On this particular diagram, we have plotted $P$ vertically and $V$ horizontally. So if we call the vertical distance $y$ and the horizontal distance $x$, this is $\int y\,dx$—in other words, the area under the curve. So the area under each of the numbered curves is a measure of the work done by or on the gas in the corresponding step. It is easy to see that the net work done is the shaded area of the picture. Now that we have given a single example of a reversible machine, we shall suppose that other such engines are also possible. Let us assume that we have a reversible engine $A$ which takes $Q_1$ at $T_1$, does work $W$, and delivers some heat at $T_2$. Now let us assume we have any other engine $B$, made by man, already designed or not yet invented, made of rubber bands, steam, or whatever, reversible or not, which is designed so that it takes in the same amount of heat $Q_1$ at $T_1$, and rejects the heat at the lower temperature $T_2$ (Fig. 44–7). Assume that engine $B$ does some work, $W'$. Now we shall show that $W'$ is not greater than $W$—that no engine can do more work than a reversible one. Why? 
Suppose that, indeed, $W'$ were bigger than $W$. Then we could take the heat $Q_1$ out of the reservoir at $T_1$, and with engine $B$ we could do work $W'$ and deliver some heat to the reservoir at $T_2$; we do not care how much. That done, we could save some of the work $W'$, which is supposed to be greater than $W$; we could use a part of it, $W$, and save the remainder, $W' - W$, for useful work. With the work $W$ we could run engine $A$ backwards because it is a reversible engine. It will absorb some heat from the reservoir at $T_2$ and deliver $Q_1$ back to the reservoir at $T_1$. After this double cycle, the net result would be that we would have put everything back the way it was before, and we would have done some excess work, namely $W' - W$, and all we would have done would be to extract energy from the reservoir at $T_2$! We were careful to restore the heat $Q_1$ to the reservoir at $T_1$. So that reservoir can be small and “inside” our combined machine $A + B$, whose net effect is therefore to extract a net heat $W' - W$ from the reservoir at $T_2$ and convert it into work. But to obtain useful work from a reservoir at a single temperature with no other changes is impossible according to Carnot’s postulate; it cannot be done. Therefore no engine which absorbs a given amount of heat from a higher temperature $T_1$ and delivers it at the temperature $T_2$ can do more work than a reversible engine operating under the same temperature conditions. Now suppose that engine $B$ is also reversible. Then, of course, not only must $W'$ be not greater than $W$, but now we can reverse the argument and show that $W$ cannot be greater than $W'$. 
So, if both engines are reversible they must both do the same amount of work, and we thus come to Carnot’s brilliant conclusion: that if an engine is reversible, it makes no difference how it is designed, because the amount of work one will obtain if the engine absorbs a given amount of heat at temperature $T_1$ and delivers heat at some other temperature $T_2$ does not depend on the design of the engine. It is a property of the world, not a property of a particular engine. If we could find out what the law is that determines how much work we obtain when we absorb the heat $Q_1$ at $T_1$ and deliver heat at $T_2$, this quantity would be a universal thing, independent of the substance. Of course if we knew the properties of a particular substance, we could work it out and then say that all other substances must give the same amount of work in a reversible engine. That is the key idea, the clue by which we can find the relationship between how much, for instance, a rubber band contracts when we heat it, and how much it cools when we let it contract. Imagine that we put that rubber band in a reversible machine, and that we make it go around a reversible cycle. The net result, the total amount of work done, is that universal function, that great function which is independent of substance. So we see that a substance’s properties must be limited in a certain way; one cannot make up anything he wants, or he would be able to invent a substance which he could use to produce more than the maximum allowable work when he carried it around a reversible cycle. This principle, this limitation, is the only real rule that comes out of thermodynamics.
44–4 The efficiency of an ideal engine

Now we shall try to find the law which determines the work $W$ as a function of $Q_1$, $T_1$, and $T_2$. It is clear that $W$ is proportional to $Q_1$, for if we consider two reversible engines in parallel, working together as a single double engine, the combination is also a reversible engine. If each one absorbed heat $Q_1$, the two together absorb $2Q_1$ and the work done is $2W$, and so on. So it is not unreasonable that $W$ is proportional to $Q_1$. Now the next important step is to find this universal law. We can, and will, do so by studying a reversible engine with the one particular substance whose laws we know, a perfect gas. It is also possible to obtain the rule by a purely logical argument, using no particular substance at all. This is one of the very beautiful pieces of reasoning in physics and we are reluctant not to show it to you, so for those who would like to see it we shall discuss it in just a moment. But first we shall use the much less abstract and simpler method of direct calculation for a perfect gas. We need only obtain formulas for $Q_1$ and $Q_2$ (for $W$ is just $Q_1 - Q_2$), the heats exchanged with the reservoirs during the isothermal expansion or contraction. For example, how much heat $Q_1$ is absorbed from the reservoir at temperature $T_1$ during the isothermal expansion [marked (1) in Fig. 44–6] from point $a$, at pressure $p_a$, volume $V_a$, temperature $T_1$, to point $b$ with pressure $p_b$, volume $V_b$, and the same temperature $T_1$? For a perfect gas each molecule has an energy that depends only on the temperature, and since the temperature and the number of molecules are the same at $a$ and at $b$, the internal energy is the same. There is no change in $U$; all the work done by the gas, \begin{equation*} W = \int_a^bp\,dV, \end{equation*} during the expansion is energy $Q_1$ taken from the reservoir.
During the expansion, $pV = NkT_1$, or \begin{equation} p = \frac{NkT_1}{V}\notag \end{equation} or \begin{equation} \label{Eq:I:44:4} Q_1 = \int_a^bp\,dV = \int_a^bNkT_1\,\frac{dV}{V} \end{equation} or \begin{equation} Q_1 = NkT_1\ln\frac{V_b}{V_a}\notag \end{equation} is the heat taken from the reservoir at $T_1$. In the same way, for the compression at $T_2$ [curve (3) of Fig. 44–6] the heat delivered to the reservoir at $T_2$ is \begin{equation} \label{Eq:I:44:5} Q_2 = NkT_2\ln\frac{V_c}{V_d}. \end{equation} To finish our analysis we need only find a relation between $V_c/V_d$ and $V_b/V_a$. This we do by noting that (2) is an adiabatic expansion from $b$ to $c$, during which $pV^\gamma$ is a constant. Since $pV = NkT$, we can write this as $(pV)V^{\gamma - 1} = \text{const}$ or, in terms of $T$ and $V$, as $TV^{\gamma - 1} = \text{const}$, or \begin{equation} \label{Eq:I:44:6} T_1V_b^{\gamma - 1} = T_2V_c^{\gamma - 1}. \end{equation} Likewise, since (4), the compression from $d$ to $a$, is also adiabatic, we find \begin{equation*} T_1V_a^{\gamma - 1} = T_2V_d^{\gamma - 1}. \tag{44.6a} \label{Eq:I:44:6a} \end{equation*} If we divide this equation by the previous one, we find that $V_b/V_a$ must equal $V_c/V_d$, so the $\ln$’s in (44.4) and (44.5) are equal, and that \begin{equation} \label{Eq:I:44:7} \frac{Q_1}{T_1} = \frac{Q_2}{T_2}. \end{equation} This is the relation we were seeking. Although proved for a perfect gas engine, we know it must be true for any reversible engine at all. Now we shall see how this universal law could also be obtained by logical argument, without knowing the properties of any specific substances, as follows. Suppose that we have three engines and three temperatures, let us say $T_1$, $T_2$, and $T_3$. Let one engine absorb heat $Q_1$ from the temperature $T_1$ and do a certain amount of work $W_{13}$, and let it deliver heat $Q_3$ to the temperature $T_3$ (Fig. 44–8). Let another engine run backwards between $T_2$ and $T_3$. 
Suppose that we let the second engine be of such a size that it will absorb the same heat $Q_3$, and deliver the heat $Q_2$. We will have to put a certain amount of work, $W_{32}$, into it—negative because the engine is running backwards. When the first machine goes through a cycle, it absorbs heat $Q_1$ and delivers $Q_3$ at the temperature $T_3$; then the second machine takes the same heat $Q_3$ out of the reservoir at the temperature $T_3$ and delivers it into the reservoir at temperature $T_2$. Therefore the net result of the two machines in tandem is to take the heat $Q_1$ from $T_1$, and deliver $Q_2$ at $T_2$. The two machines are thus equivalent to a third one, which absorbs $Q_1$ at $T_1$, does work $W_{12}$, and delivers heat $Q_2$ at $T_2$, because $W_{12} = W_{13} - W_{32}$, as one can immediately show from the first law, as follows: \begin{equation} \label{Eq:I:44:8} W_{13} - W_{32} = (Q_1 - Q_3) - (Q_2 - Q_3) = Q_1 - Q_2 = W_{12}. \end{equation}
We can now obtain the laws which relate the efficiencies of the engines, because there clearly must be some kind of relationship between the efficiencies of engines running between the temperatures $T_1$ and $T_3$, and between $T_2$ and $T_3$, and between $T_1$ and $T_2$. We can make the argument very clear in the following way: We have just seen that we can always relate the heat absorbed at $T_1$ to the heat delivered at $T_2$ by finding the heat delivered at some other temperature $T_3$. Therefore we can get all the engines’ properties if we introduce a standard temperature, analyzing everything with that standard temperature. In other words, if we knew the efficiency of an engine running between a certain temperature $T$ and a certain arbitrary standard temperature, then we could work out the efficiency for any other difference in temperature. Because we assume we are using only reversible engines, we can work from the initial temperature down to the standard temperature and back up to the final temperature again. We shall define the standard temperature arbitrarily as one degree. We shall also adopt a special symbol for the heat which is delivered at this standard temperature: we shall call it $Q_S$. In other words, when a reversible engine absorbs the heat $Q$ at temperature $T$, it will deliver, at the unit temperature, a heat $Q_S$. If one engine, absorbing heat $Q_1$ at $T_1$, delivers the heat $Q_S$ at one degree, and if an engine absorbing heat $Q_2$ at temperature $T_2$ will also deliver the same heat $Q_S$ at one degree, then it follows that an engine which absorbs heat $Q_1$ at the temperature $T_1$ will deliver heat $Q_2$ if it runs between $T_1$ and $T_2$, as we have already proved by considering engines running between three temperatures.
So all we really have to do is to find how much heat $Q_1$ we need to put in at the temperature $T_1$ in order to deliver a certain amount of heat $Q_S$ at the unit temperature. If we discover that, we have everything. The heat $Q$, of course, is a function of the temperature $T$. It is easy to see that the heat must increase as the temperature increases, for we know that it takes work to run an engine backwards and deliver heat at a higher temperature. It is also easy to see that the heat $Q_1$ must be proportional to $Q_S$. So the great law is something like this: for a given amount of heat $Q_S$ delivered at one degree from an engine running at temperature $T$ degrees, the heat $Q$ absorbed must be that amount $Q_S$ times some increasing function of the temperature: \begin{equation} \label{Eq:I:44:9} Q = Q_Sf(T). \end{equation} |
44–5 The thermodynamic temperature

At this stage we are not going to try to find the formula for the above increasing function of the temperature in terms of our familiar mercury temperature scale, but instead we shall define temperature by a new scale. At one time “the temperature” was defined arbitrarily by dividing the expansion of water into even degrees of a certain size. But when one then measures temperature with a mercury thermometer, one finds that the degrees are no longer even. But now we can make a definition of temperature which is independent of any particular substance. We can use that function $f(T)$, which does not depend on what device we use, because the efficiency of these reversible engines is independent of their working substances. Since the function we found is rising with temperature, we will define the function itself as the temperature, measured in units of the standard one-degree temperature, as follows: \begin{equation} \label{Eq:I:44:10} Q = ST, \end{equation} where \begin{equation} \label{Eq:I:44:11} Q_S = S\cdot 1^\circ. \end{equation} This means that we can tell how hot an object is by finding out how much heat is absorbed by a reversible engine working between the temperature of the object and the unit temperature (Fig. 44–9). If seven times more heat is taken out of a boiler than is delivered at a one-degree condenser, the temperature of the boiler will be called seven degrees, and so forth. So, by measuring how much heat is absorbed at different temperatures, we determine the temperature. The temperature defined in this way is called the absolute thermodynamic temperature, and it is independent of the substance.
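The seven-degree example can be written as a one-line rule. This is only a sketch of the definition in Eqs. (44.10)–(44.11); the function name and numbers are my own, for illustration.

```python
# Thermodynamic temperature from heat ratios, Eqs. (44.10)-(44.11):
# a reversible engine between an object and the one-degree reservoir
# absorbs Q at the object and delivers Q_S at one degree, so the
# object's temperature is Q/Q_S degrees.
def thermodynamic_temperature(Q, Q_S):
    """Temperature in units of the standard one-degree reservoir."""
    return Q / Q_S

# Seven units of heat taken from the boiler per unit delivered at the
# one-degree condenser: the boiler is at seven degrees.
print(thermodynamic_temperature(7.0, 1.0))   # 7.0
```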
We shall use this definition exclusively from now on. Now we see that when we have two engines, one working between $T_1$ and one degree, the other working between $T_2$ and one degree, delivering the same heat at unit temperature, then the heats absorbed must be related by \begin{equation} \label{Eq:I:44:12} \frac{Q_1}{T_1} = S = \frac{Q_2}{T_2}. \end{equation} But that means that if we have a single engine running between $T_1$ and $T_2$, then the result of the whole analysis, the grand finale, is that $Q_1$ is to $T_1$ as $Q_2$ is to $T_2$, if the engine absorbs energy $Q_1$ at temperature $T_1$ and delivers heat $Q_2$ at temperature $T_2$. Whenever the engine is reversible, this relationship between the heats must follow. That is all there is to it: that is the center of the universe of thermodynamics. If this is all there is to thermodynamics, why is it considered such a difficult subject? In doing a problem involving a given mass of some substance, the condition of the substance at any moment can be described by telling what its temperature is and what its volume is. If we know the temperature and volume of a substance, and that the pressure is some function of the temperature and volume, then we know the internal energy. One could say, “I do not want to do it that way. Tell me the temperature and the pressure, and I will tell you the volume. I can think of the volume as a function of temperature and pressure, and the internal energy as a function of temperature and pressure, and so on.” That is why thermodynamics is hard, because everyone uses a different approach. If we could only sit down once and decide on our variables, and stick to them, it would be fairly easy. Now we start to make deductions. Just as $F = ma$ is the center of the universe in mechanics, and it goes on and on and on after that, in the same way the principle just found is all there is to thermodynamics. But can one make conclusions out of it? We begin.
To obtain our first conclusion, we shall combine both laws, the law of conservation of energy and this law which relates the heats $Q_2$ and $Q_1$, and we can easily obtain the efficiency of a reversible engine. From the first law, we have $W = Q_1 - Q_2$. According to our new principle, \begin{equation} Q_2 = \frac{T_2}{T_1}\,Q_1,\notag \end{equation} so the work becomes \begin{equation} \label{Eq:I:44:13} W = Q_1\biggl(1 - \frac{T_2}{T_1}\biggr) = Q_1\, \frac{T_1 - T_2}{T_1}, \end{equation} which tells us the efficiency of the engine—how much work we get out of so much heat. The efficiency of an engine is proportional to the difference in the temperatures between which the engine runs, divided by the higher temperature: \begin{equation} \label{Eq:I:44:14} \text{Efficiency} = \frac{W}{Q_1} = \frac{T_1 - T_2}{T_1}. \end{equation} The efficiency cannot be greater than unity and the absolute temperature cannot be less than zero, absolute zero. So, since $T_2$ must be positive, the efficiency is always less than unity. That is our first conclusion. |
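Eq. (44.14) can be sketched in a few lines of code. The temperatures here are my own illustrative choice: a boiler near the boiling point of water (373 K) rejecting heat to a room-temperature condenser (293 K).

```python
# Efficiency of a reversible engine, Eq. (44.14): (T1 - T2)/T1.
def carnot_efficiency(T1, T2):
    """Fraction of the absorbed heat Q1 that comes out as work."""
    if T2 < 0 or T1 <= T2:
        raise ValueError("need absolute temperatures with T1 > T2 >= 0")
    return (T1 - T2) / T1

# Illustrative numbers: steam near boiling, room-temperature condenser.
eff = carnot_efficiency(373.0, 293.0)
print(round(eff, 3))   # 0.214: at best about 21% of Q1 becomes work
```

Since $T_2$ is positive for any real condenser, the efficiency is always strictly less than one, exactly as the text concludes.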
44–6 Entropy

Equation (44.7) or (44.12) can be interpreted in a special way. Working always with reversible engines, a heat $Q_1$ at temperature $T_1$ is “equivalent” to $Q_2$ at $T_2$ if $Q_1/T_1 = Q_2/T_2$, in the sense that as one is absorbed the other is delivered. This suggests that if we call $Q/T$ something, we can say: in a reversible process as much $Q/T$ is absorbed as is liberated; there is no gain or loss of $Q/T$. This $Q/T$ is called entropy, and we say “there is no net change in entropy in a reversible cycle.” If $T$ is $1^\circ$, then the entropy is $Q_S/1^\circ$ or, as we symbolized it, $Q_S/1^\circ = S$. Actually, $S$ is the letter usually used for entropy, and it is numerically equal to the heat (which we have called $Q_S$) delivered to a $1^\circ$-reservoir (entropy is not itself a heat, it is heat divided by a temperature, hence it is measured in joules per degree). Now it is interesting that besides the pressure, which is a function of the temperature and the volume, and the internal energy, which is a function of temperature and volume, we have found another quantity which is a function of the condition, i.e., the entropy of the substance. Let us try to explain how we compute it, and what we mean when we call it a “function of the condition.” Consider the system in two different conditions, much as we had in the experiment where we did the adiabatic and isothermal expansions. (Incidentally, there is no need that a heat engine have only two reservoirs, it could have three or four different temperatures at which it takes in and delivers heats, and so on.) We can move around on a $pV$ diagram all over the place, and go from one condition to another. In other words, we could say the gas is in a certain condition $a$, and then it goes over to some other condition, $b$, and we will require that this transition, made from $a$ to $b$, be reversible.
Now suppose that all along the path from $a$ to $b$ we have little reservoirs at different temperatures, so that the heat $dQ$ removed from the substance at each little step is delivered to each reservoir at the temperature corresponding to that point on the path. Then let us connect all these reservoirs, by reversible heat engines, to a single reservoir at the unit temperature. When we are finished carrying the substance from $a$ to $b$, we shall bring all the reservoirs back to their original condition. Any heat $dQ$ that has been absorbed from the substance at temperature $T$ has now been converted by a reversible machine, and a certain amount of entropy $dS$ has been delivered at the unit temperature as follows: \begin{equation} \label{Eq:I:44:15} dS = dQ/T. \end{equation} Let us compute the total amount of entropy which has been delivered. The entropy difference, or the entropy needed to go from $a$ to $b$ by this particular reversible transformation, is the total entropy, the total of the entropy taken out of the little reservoirs, and delivered at the unit temperature: \begin{equation} \label{Eq:I:44:16} S_b - S_a = \int_a^b\frac{dQ}{T}. \end{equation} The question is, does the entropy difference depend upon the path taken? There is more than one way to go from $a$ to $b$. Remember that in the Carnot cycle we could go from $a$ to $c$ in Fig. 44–6 by first expanding isothermally and then adiabatically; or we could first expand adiabatically and then isothermally. So the question is whether the entropy change which occurs when we go from $a$ to $b$ in Fig. 44–10 is the same on one route as it is on another. It must be the same, because if we went all the way around the cycle, going forward on one path and backward on another, we would have a reversible engine, and there would be no loss of heat to the reservoir at unit temperature. 
In a totally reversible cycle, no heat must be taken from the reservoir at the unit temperature, so the entropy needed to go from $a$ to $b$ is the same over one path as it is over another. It is independent of path, and depends only on the endpoints. We can, therefore, say that there is a certain function, which we call the entropy of the substance, that depends only on the condition, i.e., only on the volume and temperature. We can find a function $S(V,T)$ which has the property that if we compute the change in entropy, as the substance is moved along any reversible path, in terms of the heat rejected at unit temperature, then \begin{equation} \label{Eq:I:44:17} \Delta S = \int\frac{dQ}{T}, \end{equation} where $dQ$ is the heat removed from the substance at temperature $T$. This total entropy change is the difference between the entropy calculated at the initial and final points: \begin{equation} \label{Eq:I:44:18} \Delta S = S(V_b,T_b) - S(V_a,T_a) = \int_a^b\frac{dQ}{T}. \end{equation} This expression does not completely define the entropy, but rather only the difference of entropy between two different conditions. Only if we can evaluate the entropy for one special condition can we really define $S$ absolutely. For a long time it was believed that absolute entropy meant nothing—that only differences could be defined—but finally Nernst proposed what he called the heat theorem, which is also called the third law of thermodynamics. It is very simple. We will say what it is, but we will not explain why it is true. Nernst’s postulate states simply that the entropy of any object at absolute zero is zero. We know of one case of $T$ and $V$, namely $T = 0$, where $S$ is zero; and so we can get the entropy at any other point. To give an illustration of these ideas, let us calculate the entropy of a perfect gas. In an isothermal (and therefore reversible) expansion, $\int dQ/T$ is $Q/T$, since $T$ is constant. 
Therefore (from 44.4) the change in entropy is \begin{equation*} S(V_a,T) - S(V_b,T) = Nk\ln\frac{V_a}{V_b}, \end{equation*} so $S(V,T) = Nk\ln V$ plus some function of $T$ only. How does $S$ depend on $T$? We know that for a reversible adiabatic expansion, no heat is exchanged. Thus the entropy does not change even though $V$ changes, provided that $T$ changes also, such that $TV^{\gamma - 1} = \text{constant}$. Can you see that this implies that \begin{equation*} S(V,T) = Nk\biggl[\ln V + \frac{1}{\gamma - 1}\ln T\biggr] + a, \end{equation*} where $a$ is some constant independent of both $V$ and $T$? [$a$ is called the chemical constant. It depends on the gas in question, and may be determined experimentally from the Nernst theorem by measuring the heat liberated in cooling and condensing the gas until it is brought to a solid (or for helium, a liquid) at $0^\circ$, by integrating $\int dQ/T$. It can also be determined theoretically by means of Planck’s constant and quantum mechanics, but we shall not study it in this course.] Now we shall remark on some of the properties of the entropy of things. We first remember that if we go along a reversible cycle from $a$ to $b$, then the entropy of the substance will change by $S_b - S_a$. And we remember that as we go along the path, the entropy—the heat delivered at unit temperature—increases according to the rule $dS = dQ/T$, where $dQ$ is the heat we remove from the substance when its temperature is $T$. We already know that if we have a reversible cycle, the total entropy of everything is not changed, because the heat $Q_1$ absorbed at $T_1$ and the heat $Q_2$ delivered at $T_2$ correspond to equal and opposite changes in entropy, so that the net change in the entropy is zero. So for a reversible cycle there is no change in the entropy of anything, including the reservoirs. This rule may look like the conservation of energy again, but it is not; it applies only to reversible cycles. 
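This form of $S(V,T)$ is easy to check numerically. The sketch below works per mole (so $Nk = R$), sets the chemical constant $a$ to zero, and takes $\gamma = 5/3$, the monatomic value; it verifies both the isothermal result and the constancy of $S$ along an adiabat:

```python
import math

R = 8.314          # gas constant per mole, J/(mol K)
gamma = 5.0 / 3.0  # monatomic ideal gas

def entropy(V, T):
    # S(V, T) = Nk[ln V + ln T/(gamma - 1)], chemical constant taken as zero
    return R * (math.log(V) + math.log(T) / (gamma - 1.0))

# Isothermal expansion from Va to Vb: Delta S should be Nk ln(Vb/Va)
Va, Vb, T1 = 1.0, 2.0, 300.0
print(abs((entropy(Vb, T1) - entropy(Va, T1)) - R * math.log(Vb / Va)) < 1e-12)  # True

# Reversible adiabatic expansion: T V^(gamma-1) stays constant, so S should too
T2 = T1 * (Va / Vb) ** (gamma - 1.0)
print(abs(entropy(Vb, T2) - entropy(Va, T1)) < 1e-9)  # True
```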
If we include irreversible cycles there is no law of conservation of entropy. We shall give two examples. First, suppose that we do irreversible work on an object by friction, generating a heat $Q$ on some object at temperature $T$. The entropy is increased by $Q/T$. The heat $Q$ is equal to the work, and thus when we do a certain amount of work by friction against an object whose temperature is $T$, the entropy of the whole world increases by $W/T$. Another example of irreversibility is this: If we put together two objects that are at different temperatures, say $T_1$ and $T_2$, a certain amount of heat will flow from one to the other by itself. Suppose, for instance, we put a hot stone in cold water. Then when a certain heat $\Delta Q$ is transferred from $T_1$ to $T_2$, how much does the entropy of the hot stone change? It decreases by $\Delta Q/T_1$. How much does the water entropy change? It increases by $\Delta Q/T_2$. The heat will, of course, flow only from the higher temperature $T_1$ to the lower temperature $T_2$, so that $\Delta Q$ is positive if $T_1$ is greater than $T_2$. So the change in entropy of the whole world is positive, and it is the difference of the two fractions: \begin{equation} \label{Eq:I:44:19} \Delta S = \frac{\Delta Q}{T_2} - \frac{\Delta Q}{T_1}. \end{equation} So the following proposition is true: in any process that is irreversible, the entropy of the whole world is increased. Only in reversible processes does the entropy remain constant. Since no process is absolutely reversible, there is always at least a small gain in the entropy; a reversible process is an idealization in which we have made the gain of entropy minimal. Unfortunately, we are not going to enter into the field of thermodynamics very far. Our purpose is only to illustrate the principal ideas involved and the reasons why it is possible to make such arguments, but we will not use thermodynamics very much in this course. 
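The stone-in-water bookkeeping of Eq. (44.19) is one line of arithmetic; a sketch with made-up numbers:

```python
def entropy_change_of_world(dQ, t_hot, t_cold):
    """Net entropy change, Eq. (44.19), when heat dQ flows by itself from a
    body at t_hot (T1) to a body at t_cold (T2): dS = dQ/T2 - dQ/T1."""
    return dQ / t_cold - dQ / t_hot

# Hot stone at 350 K in water at 290 K; say 100 J of heat flows (made-up values)
dS = entropy_change_of_world(100.0, 350.0, 290.0)
print(dS > 0)  # True: the entropy of the whole world went up
```

Because $T_1 > T_2$ whenever heat flows by itself, the two fractions can never make the result negative.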
Thermodynamics is used very often by engineers and, particularly, by chemists. So we must learn our thermodynamics in practice in chemistry or engineering. Because it is not worthwhile duplicating everything, we shall just give some discussion of the origin of the theory, rather than much detail for special applications. The two laws of thermodynamics are often stated this way: First law: the energy of the universe is always constant. Second law: the entropy of the universe is always increasing. That is not a very good statement of the second law; it does not say, for example, that in a reversible cycle the entropy stays the same, and it does not say exactly what the entropy is. It is just a clever way of remembering the two laws, but it does not really tell us exactly where we stand. We have summarized the laws discussed in this chapter in Table 44–1. In the next chapter we shall apply these laws to discover the relationship between the heat generated in the expansion of a rubber band, and the extra tension when it is heated.
Chapter 45: Illustrations of Thermodynamics

45–1 Internal energy

Thermodynamics is a rather difficult and complex subject when we come to apply it, and it is not appropriate for us to go very far into the applications in this course. The subject is of very great importance, of course, to engineers and chemists, and those who are interested in the subject can learn about the applications in physical chemistry or in engineering thermodynamics. There are also good reference books, such as Zemansky’s Heat and Thermodynamics, where one can learn more about the subject. In the Encyclopedia Britannica, fourteenth edition, one can find excellent articles on thermodynamics and thermochemistry, and in the article on chemistry, the sections on physical chemistry, vaporization, liquefaction of gases, and so on. The subject of thermodynamics is complicated because there are so many different ways of describing the same thing. If we wish to describe the behavior of a gas, we can say that the pressure depends on the temperature and on the volume, or we can say that the volume depends on the temperature and the pressure. Or with respect to the internal energy $U$, we might say that it depends on the temperature and volume, if those are the variables we have chosen—but we might also say that it depends on the temperature and the pressure, or the pressure and the volume, and so on. In the last chapter we discussed another function of temperature and volume, called the entropy $S$, and we can of course construct as many other functions of these variables as we like: $U - TS$ is a function of temperature and volume. So we have a large number of different quantities which can be functions of many different combinations of variables. To keep the subject simple in this chapter, we shall decide at the start to use temperature and volume as the independent variables.
Chemists use temperature and pressure, because they are easier to measure and control in chemical experiments, but we shall use temperature and volume throughout this chapter, except in one place where we shall see how to make the transformation into the chemists’ system of variables. We shall first, then, consider only one system of independent variables: temperature and volume. Secondly, we shall discuss only two dependent functions: the internal energy and the pressure. All the other functions can be derived from these, so it is not necessary to discuss them. With these limitations, thermodynamics is still a fairly difficult subject, but it is not quite so impossible! First we shall review some mathematics. If a quantity is a function of two variables, the idea of the derivative of the quantity requires a little more careful thought than for the case where there is only one variable. What do we mean by the derivative of the pressure with respect to the temperature? The pressure change accompanying a change in the temperature depends partly, of course, on what happens to the volume while $T$ is changing. We must specify the change in $V$ before the concept of a derivative with respect to $T$ has a precise meaning. We might ask, for example, for the rate of change of $P$ with respect to $T$ if $V$ is held constant. This ratio is just the ordinary derivative that we usually write as $dP/dT$. We customarily use a special symbol, $\ddpl{P}{T}$, to remind us that $P$ depends on another variable $V$ as well as on $T$, and that this other variable is held constant. We shall not only use the symbol $\partial$ to call attention to the fact that the other variable is held constant, but we shall also write the variable that is held constant as a subscript, $(\ddpl{P}{T})_V$. Since we have only two independent variables, this notation is redundant, but it will help us keep our wits about us in the thermodynamic jungle of partial derivatives. 
Let us suppose that the function $f(x,y)$ depends on the two independent variables $x$ and $y$. By $(\ddpl{f}{x})_y$ we mean simply the ordinary derivative, obtained in the usual way, if we treat $y$ as a constant: \begin{equation*} \biggl(\ddp{f}{x}\biggr)_y = \operatorname*{limit}_{\Delta x \to 0} \frac{f(x + \Delta x,y) - f(x,y)}{\Delta x}. \end{equation*} Similarly, we define \begin{equation*} \biggl(\ddp{f}{y}\biggr)_x = \operatorname*{limit}_{\Delta y \to 0} \frac{f(x, y + \Delta y) - f(x,y)}{\Delta y}. \end{equation*} For example, if $f(x,y) = x^2 + yx$, then $(\ddpl{f}{x})_y = 2x + y$, and $(\ddpl{f}{y})_x = x$. We can extend this idea to higher derivatives: $\partial^2f/\partial y^2$ or $\partial^2f/\partial y\partial x$. The latter symbol indicates that we first differentiate $f$ with respect to $x$, treating $y$ as a constant, then differentiate the result with respect to $y$, treating $x$ as a constant. The actual order of differentiation is immaterial: $\partial^2f/\partial x\partial y = \partial^2f/\partial y\partial x$. We will need to compute the change $\Delta f$ in $f(x,y)$ when $x$ changes to $x + \Delta x$ and $y$ changes to $y + \Delta y$. We assume throughout the following that $\Delta x$ and $\Delta y$ are infinitesimally small: \begin{align} \Delta f &= f(x + \Delta x, y + \Delta y) - f(x,y)\notag\\[2.5ex] &= \underbrace{f(x + \Delta x, y + \Delta y) - f(x,y + \Delta y)} + \underbrace{f(x,y + \Delta y) - f(x,y)}\notag\\[.5ex] \label{Eq:I:45:1} &= \kern{5em}\displaystyle{\Delta x\biggl(\ddp{f}{x}\biggr)_y} \kern{5em}+ \kern{2.2em}\displaystyle{\Delta y\biggl(\ddp{f}{y}\biggr)_x} \end{align}
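Both the partial derivatives of the worked example and the $\Delta f$ relation of Eq. (45.1) can be checked by finite differences; a small Python sketch:

```python
# For f(x, y) = x^2 + y*x we should find (df/dx)_y = 2x + y and (df/dy)_x = x,
# and Delta f should be Dx*(df/dx)_y + Dy*(df/dy)_x for small increments.
f = lambda x, y: x**2 + y * x
x, y, h = 1.3, 0.7, 1e-6

dfdx = (f(x + h, y) - f(x, y)) / h      # y held constant
dfdy = (f(x, y + h) - f(x, y)) / h      # x held constant
print(abs(dfdx - (2 * x + y)) < 1e-4)   # True
print(abs(dfdy - x) < 1e-4)             # True

dx, dy = 1e-4, 2e-4
df_exact = f(x + dx, y + dy) - f(x, y)
df_linear = dx * (2 * x + y) + dy * x   # Eq. (45.1) with the exact partials
print(abs(df_exact - df_linear) < 1e-6) # True to first order in the increments
```

The leftover discrepancy in the last comparison is of second order, $\Delta x^2 + \Delta x\,\Delta y$, which is why Eq. (45.1) is stated for infinitesimal increments.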
Equation (45.1) is the fundamental relation that expresses $\Delta f$ in terms of $\Delta x$ and $\Delta y$. As an example of the use of this relation, let us calculate the change in the internal energy $U(T,V)$ when the temperature changes from $T$ to $T + \Delta T$ and the volume changes from $V$ to $V + \Delta V$. Using Eq. (45.1), we write \begin{equation} \label{Eq:I:45:2} \Delta U = \Delta T\biggl(\ddp{U}{T}\biggr)_V + \Delta V\biggl(\ddp{U}{V}\biggr)_T. \end{equation} In our last chapter we found another expression for the change $\Delta U$ in the internal energy when a quantity of heat $\Delta Q$ was added to the gas: \begin{equation} \label{Eq:I:45:3} \Delta U = \Delta Q - P\,\Delta V. \end{equation} In comparing Eqs. (45.2) and (45.3) one might at first be inclined to think that $P = -(\ddpl{U}{V})_T$, but this is not correct. To obtain the correct relation, let us first suppose that we add a quantity of heat $\Delta Q$ to the gas while keeping the volume constant, so that $\Delta V = 0$. With $\Delta V = 0$, Eq. (45.3) tells us that $\Delta U = \Delta Q$, and Eq. (45.2) tells us that $\Delta U =(\ddpl{U}{T})_V\,\Delta T$, so that $(\ddpl{U}{T})_V = \Delta Q/\Delta T$. The ratio $\Delta Q/\Delta T$, the amount of heat one must put into a substance in order to change its temperature by one degree with the volume held constant, is called the specific heat at constant volume and is designated by the symbol $C_V$. By this argument we have shown that \begin{equation} \label{Eq:I:45:4} \biggl(\ddp{U}{T}\biggr)_V = C_V.
\end{equation} Now let us again add a quantity of heat $\Delta Q$ to the gas, but this time we will hold $T$ constant and allow the volume to change by $\Delta V$. The analysis in this case is more complex, but we can calculate $\Delta U$ by the argument of Carnot, making use of the Carnot cycle we introduced in the last chapter. The pressure-volume diagram for the Carnot cycle is shown in Fig. 45–1. As we have already shown, the total amount of work done by the gas in a reversible cycle is $\Delta Q(\Delta T/T)$, where $\Delta Q$ is the amount of heat energy added to the gas as it expands isothermally at temperature $T$ from volume $V$ to $V + \Delta V$, and $T - \Delta T$ is the final temperature reached by the gas as it expands adiabatically on the second leg of the cycle. Now we will show that this work done is also given by the shaded area in Fig. 45–1. In any circumstances, the work done by the gas is $\int P\,dV$, and is positive when the gas expands and negative when the gas is compressed. If we plot $P$ vs. $V$, the variation of $P$ and $V$ is represented by a curve which gives the value of $P$ corresponding to a particular value of $V$. As the volume changes from one value to another, the work done by the gas, the integral $\int P\,dV$, is the area under the curve connecting the initial and final values of $V$. When we apply this idea to the Carnot cycle, we see that as we go around the cycle, paying attention to the sign of the work done by the gas, the net work done by the gas is just the shaded area in Fig. 45–1. Now we want to evaluate the shaded area geometrically. The cycle we have used in Fig. 45–1 differs from that used in the previous chapter in that we now suppose that $\Delta T$ and $\Delta Q$ are infinitesimally small. We are working between adiabatic lines and isothermal lines that are very close together, and the figure described by the heavy lines in Fig. 
45–1 will approach a parallelogram as the increments $\Delta T$ and $\Delta Q$ approach zero. The area of this parallelogram is just $\Delta V\,\Delta P$, where $\Delta V$ is the change in volume as energy $\Delta Q$ is added to the gas at constant temperature, and $\Delta P$ is the change in pressure as the temperature changes by $\Delta T$ at constant volume. One can easily show that the shaded area in Fig. 45–1 is given by $\Delta V\,\Delta P$ by recognizing that the shaded area is equal to the area enclosed by the dotted lines in Fig. 45–2, which in turn differs from the rectangle bounded by $\Delta P$ and $\Delta V$ only by the addition and subtraction of the equal triangular areas in Fig. 45–2. Now let us summarize the results of the arguments we have developed so far: \begin{equation} \left.\! \begin{array}{l} \displaystyle\qquad\text{Work done by the gas} = \text{shaded area} = \Delta V\,\Delta P = \Delta Q\biggl(\frac{\Delta T}{T}\biggr)\\ \text{or}\\[1ex] \displaystyle\qquad\frac{\Delta T}{T}\cdot(\text{heat needed to change $V$ by $\Delta V$})_{\text{constant $T$}}\\[1.5ex] \displaystyle\quad = \Delta V\cdot(\text{change in $P$ when $T$ changes by $\Delta T$})_{\text{constant $V$}}\\[2ex] \text{or}\\[.5ex] \displaystyle\qquad\frac{1}{\Delta V}\cdot(\text{heat needed to change $V$ by $\Delta V$})_T = T(\ddpl{P}{T})_V. \end{array}\!\right\} \label{Eq:I:45:5} \end{equation}
Equation (45.5) expresses the essential result of Carnot’s argument. The whole of thermodynamics can be deduced from Eq. (45.5) and the First Law, which is stated in Eq. (45.3). Equation (45.5) is essentially the Second Law, although it was originally deduced by Carnot in a slightly different form, since he did not use our definition of temperature. Now we can proceed to calculate $(\ddpl{U}{V})_T$. By how much would the internal energy $U$ change if we changed the volume by $\Delta V$? First, $U$ changes because heat is put in, and second, $U$ changes because work is done. The heat put in is \begin{equation*} \Delta Q = T\biggl(\ddp{P}{T}\biggr)_V\Delta V, \end{equation*} according to Eq. (45.5), and the work done on the substance is $-P\,\Delta V$. Therefore the change $\Delta U$ in internal energy has two pieces: \begin{equation} \label{Eq:I:45:6} \Delta U = T\biggl(\ddp{P}{T}\biggr)_V\Delta V - P\,\Delta V.
\end{equation} Dividing both sides by $\Delta V$, we find for the rate of change of $U$ with $V$ at constant $T$ \begin{equation} \label{Eq:I:45:7} \biggl(\ddp{U}{V}\biggr)_T = T\biggl(\ddp{P}{T}\biggr)_V - P. \end{equation} In our thermodynamics, in which $T$ and $V$ are the only variables and $P$ and $U$ are the only functions, Eqs. (45.3) and (45.7) are the basic equations from which all the results of the subject can be deduced.
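Equation (45.7) can be tested numerically on any pressure function $P(T,V)$. As a sketch, take the van der Waals form $P = RT/(V-b) - a/V^2$ (a model not discussed in the text; the constants below are rough per-mole values for nitrogen). Analytically the right-hand side of Eq. (45.7) reduces to $a/V^2$, so $(\ddpl{U}{V})_T$ is the attraction term alone:

```python
# Numerical test of Eq. (45.7): (dU/dV)_T = T (dP/dT)_V - P.
# Van der Waals gas used purely as an illustration; a, b are rough SI values.
R = 8.314
a, b = 0.1364, 3.19e-5   # Pa m^6/mol^2 and m^3/mol, approximate nitrogen values

def P(T, V):
    return R * T / (V - b) - a / V**2

T, V, h = 300.0, 1.0e-3, 1e-3
dPdT = (P(T + h, V) - P(T - h, V)) / (2 * h)   # central difference, V held fixed
dUdV = T * dPdT - P(T, V)

print(abs(dUdV / (a / V**2) - 1.0) < 1e-6)  # True: only the a/V^2 term survives
```

For the ideal gas the same right-hand side vanishes, as is shown later in the chapter; the internal energy then does not depend on the volume at all.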
45–2 Applications

Now let us discuss the meaning of Eq. (45.7) and see why it answers the questions which we proposed in our last chapter. We considered the following problem: in kinetic theory it is obvious that an increase in temperature leads to an increase in pressure, because of the bombardments of the atoms on a piston. For the same physical reason, when we let the piston move back, heat is taken out of the gas and, in order to keep the temperature constant, heat will have to be put back in. The gas cools when it expands, and the pressure rises when it is heated. There must be some connection between these two phenomena, and this connection is given explicitly in Eq. (45.7). If we hold the volume fixed and increase the temperature, the pressure rises at a rate $(\ddpl{P}{T})_V$. Related to that fact is this: if we increase the volume, the gas will cool unless we pour some heat in to maintain the temperature constant, and $(\ddpl{U}{V})_T$ tells us the amount of heat needed to maintain the temperature. Equation (45.7) expresses the fundamental interrelationship between these two effects. That is what we promised we would find when we came to the laws of thermodynamics. Without knowing the internal mechanism of the gas, and knowing only that we cannot make perpetual motion of the second kind, we can deduce the relationship between the amount of heat needed to maintain a constant temperature when the gas expands, and the pressure change when the gas is heated at constant volume! Now that we have the result we wanted for a gas, let us consider the rubber band. When we stretch a rubber band, we find that its temperature rises, and when we heat a rubber band, we find that it pulls itself in. What is the equation that gives the same relation for a rubber band as Eq. (45.3) gives for a gas?
For a rubber band the situation will be something like this: when heat $\Delta Q$ is put in, the internal energy is changed by $\Delta U$ and some work is done. The only difference will be that the work done by the rubber band is $-F\,\Delta L$ instead of $P\,\Delta V$, where $F$ is the force on the band, and $L$ is the length of the band. The force $F$ is a function of temperature and of length of the band. Replacing $P\,\Delta V$ in Eq. (45.3) by $-F\,\Delta L$, we get \begin{equation} \label{Eq:I:45:8} \Delta U = \Delta Q + F\,\Delta L. \end{equation} Comparing Eqs. (45.3) and (45.8), we see that the rubber band equation is obtained by a mere substitution of one letter for another. Furthermore, if we substitute $L$ for $V$, and $-F$ for $P$, all of our discussion of the Carnot cycle applies to the rubber band. We can immediately deduce, for instance, that the heat $\Delta Q$ needed to change the length by $\Delta L$ is given by the analog to Eq. (45.5): $\Delta Q = -T(\ddpl{F}{T})_L\,\Delta L$. This equation tells us that if we keep the length of a rubber band fixed and heat the band, we can calculate how much the force will increase in terms of the heat needed to keep the temperature constant when the band is relaxed a little bit. So we see that the same equation applies to both gas and a rubber band. In fact, if one can write $\Delta U = \Delta Q + A\,\Delta B$, where $A$ and $B$ represent different quantities, force and length, pressure and volume, etc., one can apply the results obtained for a gas by substituting $A$ and $B$ for $-P$ and $V$. For example, consider the electric potential difference, or “voltage,” $E$ in a battery and the charge $\Delta Z$ that moves through the battery. We know that the work done in a reversible electric cell, like a storage battery, is $E\,\Delta Z$. (Since we include no $P\,\Delta V$ term in the work, we require that our battery maintain a constant volume.) 
Let us see what thermodynamics can tell us about the performance of a battery. If we substitute $E$ for $P$ and $Z$ for $V$ in Eq. (45.6), we obtain \begin{equation} \label{Eq:I:45:9} \frac{\Delta U}{\Delta Z} = T\biggl(\ddp{E}{T}\biggr)_Z - E. \end{equation} Equation (45.9) says that the internal energy $U$ is changed when a charge $\Delta Z$ moves through the cell. Why is $\Delta U/\Delta Z$ not simply the voltage $E$ of the battery? (The answer is that a real battery gets warm when charge moves through the cell. The internal energy of the battery is changed, first, because the battery did some work on the outside circuit, and second, because the battery is heated.) The remarkable thing is that the second part can again be expressed in terms of the way in which the battery voltage changes with temperature. Incidentally, when the charge moves through the cell, chemical reactions occur, and Eq. (45.9) suggests a nifty way of measuring the amount of energy required to produce a chemical reaction. All we need to do is construct a cell that works on the reaction, measure the voltage, and measure how much the voltage changes with temperature when we draw no charge from the battery! Now we have assumed that the volume of the battery can be maintained constant, since we have omitted the $P\,\Delta V$ term when we set the work done by the battery equal to $E\,\Delta Z$. It turns out that it is technically quite difficult to keep the volume constant. It is much easier to keep the cell at constant atmospheric pressure. For that reason, the chemists do not like any of the equations we have written above: they prefer equations which describe performance under constant pressure. We chose at the beginning of this chapter to use $V$ and $T$ as independent variables. The chemists prefer $P$ and $T$, and we will now consider how the results we have obtained so far can be transformed into the chemists’ system of variables. 
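Before changing variables, Eq. (45.9) can be illustrated with numbers. The sketch below assumes a hypothetical cell whose open-circuit voltage falls linearly with temperature; every value in it is made up for illustration:

```python
# Sketch of Eq. (45.9): dU/dZ = T (dE/dT)_Z - E, for a hypothetical cell
# (made-up numbers, not data for any real battery).
E0, dEdT, T0 = 1.50, -4.0e-4, 293.0   # volts, volts per kelvin, kelvin

def E(T):
    """Open-circuit voltage, assumed linear in T."""
    return E0 + dEdT * (T - T0)

T = 293.0
dU_per_charge = T * dEdT - E(T)       # joules per coulomb
print(round(dU_per_charge, 4))        # -1.6172
```

Per coulomb, the internal energy drops by $1.6172$ J while only $1.5$ J is delivered as electrical work; the remaining $0.1172$ J appears as heat given off by the cell, which is the second effect described in the text (here $dE/dT$ is negative, so the cell warms its surroundings as it discharges).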
Remember that in the following treatment confusion can easily set in because we are shifting gears from $T$ and $V$ to $T$ and $P$. We started in Eq. (45.3) with $\Delta U = \Delta Q - P\,\Delta V$; $P\,\Delta V$ may be replaced by $E\,\Delta Z$ or $A\,\Delta B$. If we could somehow replace the last term, $P\,\Delta V$, by $V\,\Delta P$, then we would have interchanged $V$ and $P$, and the chemists would be happy. Well, a clever man noticed that the differential of the product $PV$ is $d(PV) = P\,dV + V\,dP$, and if he added this equality to Eq. (45.3), he obtained \begin{alignat*}{2} \Delta(PV) &\,= P\,\Delta V &&+ V\,\Delta P\\[.65ex] \Delta U &\,= \Delta Q &&- P\,\Delta V\\ \hline \Delta(U + PV) &\,= \Delta Q &&+ V\,\Delta P \end{alignat*} In order that the result look like Eq. (45.3), we define $U + PV$ to be something new, called the enthalpy, $H$, and we write $\Delta H = \Delta Q + V\,\Delta P$. Now we are ready to transform our results into chemists’ language with the following rules: $U \to H$, $P \to -V$, $V \to P$. For example, the fundamental relationship that chemists would use instead of Eq. (45.7) is \begin{equation*} \biggl(\ddp{H}{P}\biggr)_T = -T\biggl(\ddp{V}{T}\biggr)_P + V. \end{equation*} It should now be clear how one transforms to the chemists’ variables $T$ and $P$. We now go back to our original variables: for the remainder of this chapter, $T$ and $V$ are the independent variables. Now let us apply the results we have obtained to a number of physical situations. Consider first the ideal gas. From kinetic theory we know that the internal energy of a gas depends only on the motion of the molecules and the number of molecules. The internal energy depends on $T$, but not on $V$. If we change $V$, but keep $T$ constant, $U$ is not changed. Therefore $(\ddpl{U}{V})_T = 0$, and Eq. (45.7) tells us that for an ideal gas \begin{equation} \label{Eq:I:45:10} T\biggl(\ddp{P}{T}\biggr)_V - P = 0. 
\end{equation} Equation (45.10) is a differential equation that can tell us something about $P$. We take account of the partial derivatives in the following way: Since the partial derivative is at constant $V$, we will replace the partial derivative by an ordinary derivative and write explicitly, to remind us, “constant $V$.” Equation (45.10) then becomes \begin{equation} \label{Eq:I:45:11} T\,\frac{\Delta P}{\Delta T} - P = 0;\quad \text{const $V$}, \end{equation} which we can integrate to get \begin{alignat}{2} \ln P &\;= \ln T + \text{const};\quad &&\text{const $V$},\notag\\[1ex] \label{Eq:I:45:12} P &\;= \text{const}\times T;\quad &&\text{const $V$}. \end{alignat} We know that for an ideal gas the pressure per mole is equal to \begin{equation} \label{Eq:I:45:13} P = \frac{RT}{V}, \end{equation} which is consistent with (45.12), since $V$ and $R$ are constants. Why did we bother to go through this calculation if we already knew the results? Because we have been using two independent definitions of temperature! At one stage we assumed that the kinetic energy of the molecules was proportional to the temperature, an assumption that defines one scale of temperature which we will call the ideal gas scale. The $T$ in Eq. (45.13) is based on the gas scale. We also call temperatures measured on the gas scale kinetic temperatures. Later, we defined the temperature in a second way which was completely independent of any substance. From arguments based on the Second Law we defined what we might call the “grand thermodynamic absolute temperature” $T$, the $T$ that appears in Eq. (45.12). What we proved here is that the pressure of an ideal gas (defined as one for which the internal energy does not depend on the volume) is proportional to the grand thermodynamic absolute temperature. We also know that the pressure is proportional to the temperature measured on the gas scale. 
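Both the original form (45.7) and the chemists' form of it can be verified for the ideal gas by finite differences; a sketch with arbitrary per-mole values of $T$ and $V$:

```python
# For an ideal gas (per mole) P = RT/V, Eq. (45.7) gives (dU/dV)_T = 0; the
# chemists' analog (dH/dP)_T = -T(dV/dT)_P + V likewise gives zero, since
# V = RT/P. Central differences with step h:
R = 8.314
T, V, h = 300.0, 2.0e-3, 1.0e-2

P = lambda t, v: R * t / v      # pressure from T and V
Vol = lambda t, p: R * t / p    # inverted: volume from T and P

dPdT = (P(T + h, V) - P(T - h, V)) / (2 * h)
dUdV = T * dPdT - P(T, V)                      # right-hand side of Eq. (45.7)
print(abs(dUdV) < 1e-3)   # True: U of an ideal gas does not depend on V

p0 = P(T, V)
dVdT = (Vol(T + h, p0) - Vol(T - h, p0)) / (2 * h)
dHdP = -T * dVdT + Vol(T, p0)                  # chemists' form of Eq. (45.7)
print(abs(dHdP) < 1e-9)   # True: H of an ideal gas does not depend on P
```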
Therefore we can deduce that the kinetic temperature is proportional to the “grand thermodynamic absolute temperature.” That means, of course, that if we were sensible we could make two scales agree. In this instance, at least, the two scales have been chosen so that they coincide; the proportionality constant has been chosen to be $1$. Most of the time man chooses trouble for himself, but in this case he made them equal!
45–3 The Clausius-Clapeyron equation

The vaporization of a liquid is another application of the results we have derived. Suppose we have some liquid in a cylinder, such that we can compress it by pushing on the piston, and we ask ourselves, “If we keep the temperature constant, how does the pressure vary with volume?” In other words, we want to draw an isothermal line on the $P$-$V$ diagram. The substance in the cylinder is not the ideal gas that we considered earlier; now it may be in the liquid or the vapor phase, or both may be present. If we apply sufficient pressure, the substance will condense to a liquid. Now if we squeeze still harder, the volume changes very little, and our isothermal line rises rapidly with decreasing volume, as shown at the left in Fig. 45–3. If we increase the volume by pulling the piston out, the pressure drops until we reach the point at which the liquid starts to boil, and then vapor starts to form. If we pull the piston out farther, all that happens is that more liquid vaporizes. When there is part liquid and part vapor in the cylinder, the two phases are in equilibrium—liquid is evaporating and vapor is condensing at the same rate. If we make more room for the vapor, more vapor is needed to maintain the pressure, so a little more liquid evaporates, but the pressure remains constant. On the flat part of the curve in Fig. 45–3 the pressure does not change, and the value of the pressure here is called the vapor pressure at temperature $T$. As we continue to increase the volume, there comes a time when there is no more liquid to evaporate. At this juncture, if we expand the volume further, the pressure will fall as for an ordinary gas, as shown at the right of the $P$-$V$ diagram. The lower curve in Fig. 45–3 is the isothermal line at a slightly lower temperature $T - \Delta T$.
The pressure in the liquid phase is slightly reduced because liquid expands with an increase in temperature (for most substances, but not for water near the freezing point) and, of course, the vapor pressure is lower at the lower temperature. We will now make a cycle out of the two isothermal lines by connecting them (say by adiabatic lines) at both ends of the upper flat section, as shown in Fig. 45–4. We are going to use the argument of Carnot, which tells us that the heat added to the substance in changing it from a liquid to a vapor is related to the work done by the substance as it goes around the cycle. Let us call $L$ the heat needed to vaporize the substance in the cylinder. As in the argument immediately preceding Eq. (45.5), we know that $L(\Delta T/T) = {}$work done by the substance. As before, the work done by the substance is the shaded area, which is approximately $\Delta P(V_G - V_L)$, where $\Delta P$ is the difference in vapor pressure at the two temperatures $T$ and $T - \Delta T$, $V_G$ is the volume of the gas, and $V_L$ is the volume of the liquid, both volumes measured at the vapor pressure at temperature $T$. Setting these two expressions for the area equal, we get $L\,\Delta T/T = \Delta P(V_G - V_L)$, or \begin{equation} \label{Eq:I:45:14} \frac{L}{T(V_G - V_L)} = (\ddpl{P_{\text{vap}}}{T}). \end{equation} Equation (45.14) gives the relationship between the rate of change of vapor pressure with temperature and the amount of heat required to evaporate the liquid. This relationship was deduced by Carnot, but it is called the Clausius-Clapeyron equation. Now let us compare Eq. (45.14) with the results deduced from kinetic theory. Usually $V_G$ is very much larger than $V_L$. So $V_G - V_L \approx V_G = RT/P$ per mole. If we further assume that $L$ is a constant, independent of temperature—not a very good approximation—then we would have $\ddpl{P}{T} = L/(RT^2/P)$. 
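As a quick numerical sketch of Eq. (45.14), we can put in rough handbook values for water boiling at one atmosphere. The numbers below are approximate illustration values, not taken from the text; the vapor volume is estimated from the ideal gas law, as suggested above.

```python
# Clausius-Clapeyron, Eq. (45.14): dP/dT = L / (T (V_G - V_L)).
# A rough check for water boiling at 1 atm; inputs are approximate
# handbook values, not taken from the text.
R = 8.314          # gas constant, J/(mol K)
T = 373.15         # K, normal boiling point of water
P = 101325.0       # Pa
L = 4.07e4         # J/mol, latent heat of vaporization (approximate)
V_G = R * T / P    # m^3/mol, ideal-gas estimate for the vapor volume
V_L = 1.88e-5      # m^3/mol, molar volume of liquid water (approximate)

dPdT = L / (T * (V_G - V_L))
print(f"dP/dT ~ {dPdT:.0f} Pa/K")
```

The slope comes out near 3.6 kPa/K, which is about the observed rate of change of the vapor pressure of water at its boiling point.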
The solution of this differential equation is \begin{equation} \label{Eq:I:45:15} P = \text{const}\,e^{-L/RT}. \end{equation} Let us compare this with the pressure variation with temperature that we deduced earlier from kinetic theory. Kinetic theory indicated the possibility, at least roughly, that the number of molecules per unit volume of vapor above a liquid would be \begin{equation} \label{Eq:I:45:16} n = \biggl(\frac{1}{V_a}\biggr)e^{-(U_G - U_L)/RT}, \end{equation} where $U_G - U_L$ is the internal energy per mole in the gas minus the internal energy per mole in the liquid, i.e., the energy needed to vaporize a mole of liquid. Equation (45.15) from thermodynamics and Eq. (45.16) from kinetic theory are very closely related because the pressure is $nkT$, but they are not exactly the same. However, they will turn out to be exactly the same if we assume $U_G - U_L = \text{const}$, instead of $L = \text{const}$. If we assume $U_G - U_L = \text{const}$, independent of temperature, then the argument leading to Eq. (45.15) will produce Eq. (45.16). Since the pressure is constant while the volume is changing, the change in internal energy $U_G-U_L$ is equal to the heat $L$ put in minus the work done $P(V_G-V_L)$, so $L=(U_G+PV_G)-(U_L+PV_L)$. This comparison shows the advantages and disadvantages of thermodynamics over kinetic theory: First of all, Eq. (45.14) obtained by thermodynamics is exact, while Eq. (45.16) can only be approximated, for instance, if $U$ is nearly constant, and if the model is right. Second, we may not understand correctly how the gas goes into the liquid; nevertheless, Eq. (45.14) is right, while (45.16) is only approximate. Third, although our treatment applies to a gas condensing into a liquid, the argument is true for any other change of state. For instance, the solid-to-liquid transition has the same kind of curve as that shown in Figs. 45–3 and 45–4. Introducing the latent heat for melting, $M$/mole, the formula analogous to Eq. 
(45.14) then is $(\ddpl{P_{\text{melt}}}{T})_V = M/[T(V_{\text{liq}} - V_{\text{solid}})]$. Although we may not understand the kinetic theory of the melting process, we nevertheless have a correct equation. However, when we can understand the kinetic theory, we have another advantage. Equation (45.14) is only a differential relationship, and we have no way of obtaining the constants of integration. In the kinetic theory we can obtain the constants also if we have a good model that describes the phenomenon completely. So there are advantages and disadvantages to each. When knowledge is weak and the situation is complicated, thermodynamic relations are really the most powerful. When the situation is very simple and a theoretical analysis can be made, then it is better to try to get more information from theoretical analysis. One more example: blackbody radiation. We have discussed a box containing radiation and nothing else. We have talked about the equilibrium between the oscillator and the radiation. We also found that the photons hitting the wall of the box would exert the pressure $P$, and we found $PV = U/3$, where $U$ is the total energy of all the photons and $V$ is the volume of the box. If we substitute $U = 3PV$ in the basic Eq. (45.7), we find \begin{equation} \label{Eq:I:45:17} \biggl(\ddp{U}{V}\biggr)_T = 3P = T\biggl(\ddp{P}{T}\biggr)_V - P. \end{equation} Since the volume of our box is constant, we can replace $(\ddpl{P}{T})_V$ by $dP/dT$ to obtain an ordinary differential equation we can integrate: $\ln P = 4\ln T + \text{const}$, or $P = \text{const}\times T^4$. The pressure of radiation varies as the fourth power of the temperature, and the total energy density of the radiation, $U/V = 3P$, also varies as $T^4$. It is usual to write $U/V = (4\sigma/c)T^4$, where $c$ is the speed of light and $\sigma$ is called the Stefan-Boltzmann constant. It is not possible to get $\sigma$ from thermodynamics alone. 
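The fourth-power law can also be checked by integrating the differential equation $T\,dP/dT = 4P$ numerically, as a sketch in arbitrary units; the starting pressure and the step size below are arbitrary choices.

```python
# Integrating T dP/dT = 4P (Eq. 45.17 with U = 3PV, volume fixed)
# from T = 1 to T = 2 with a standard RK4 stepper; the pressure
# should grow by a factor 2**4 = 16.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda T, P: 4.0 * P / T   # the ODE dP/dT = 4P/T
h, P = 1e-3, 1.0
for i in range(1000):
    P = rk4_step(f, 1.0 + i * h, P, h)
print(P)   # very close to 16, i.e. P ~ T^4
```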
Here is a good example of its power, and its limitations. To know that $U/V$ goes as $T^4$ is a great deal, but to know how big $U/V$ actually is at any temperature requires that we go into the kind of detail that only a complete theory can supply. For blackbody radiation we have such a theory and we can derive an expression for the constant $\sigma$ in the following manner. Let $I(\omega)\,d\omega$ be the intensity distribution, the energy flow through $1$ m² in one second with frequency between $\omega$ and $\omega + d\omega$. The energy density distribution${}={}$energy/volume${}= I(\omega)\,d\omega/c$ is \begin{align*} \frac{U}{V} &= \text{total energy density}\\ &= \int_{\omega = 0}^\infty\text{energy density between $\omega$ and $\omega + d\omega$}\\[1.5ex] &= \int_0^\infty \frac{I(\omega)\,d\omega}{c}. \end{align*} From our earlier discussions, we know that \begin{equation*} I(\omega) = \frac{\hbar\omega^3}{\pi^2c^2(e^{\hbar\omega/kT} - 1)}. \end{equation*} Substituting this expression for $I(\omega)$ in our equation for $U/V$, we get \begin{equation*} \frac{U}{V} = \frac{1}{\pi^2c^3}\int_0^\infty \frac{\hbar\omega^3\,d\omega}{e^{\hbar\omega/kT} - 1}. \end{equation*} If we substitute $x = \hbar\omega/kT$, the expression becomes \begin{equation*} \frac{U}{V} = \frac{(kT)^4}{\hbar^3\pi^2c^3}\int_0^\infty \frac{x^3\,dx}{e^x - 1}. \end{equation*} This integral is just some number that we can get, approximately, by drawing a curve and taking the area by counting squares. It is roughly $6.5$. The mathematicians among us can show that the integral is exactly $\pi^4/15$. Comparing this expression with $U/V = (4\sigma/c)T^4$, we find \begin{equation*} \sigma = \frac{k^4\pi^2}{60\hbar^3c^2} = 5.67\times10^{-8}\,\frac{\text{watts}} {(\text{meter})^2(\text{degree})^4}. \end{equation*} If we make a small hole in our box, how much energy will flow per second through the hole of unit area? 
To go from energy density to energy flow, we multiply the energy density $U/V$ by $c$. We also multiply by $\tfrac{1}{4}$, which arises as follows: first, a factor of $\tfrac{1}{2}$, because only the energy which is flowing out escapes; and second, another factor $\tfrac{1}{2}$, because energy which approaches the hole at an angle to the normal is less effective in getting through the hole by a cosine factor. The average value of the cosine factor is $\tfrac{1}{2}$. It is clear now why we write $U/V =(4\sigma/c)T^4$: so that we can ultimately say that the flux from a small hole is $\sigma T^4$ per unit area. |
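Three of the numbers quoted in this section can be reproduced in a few lines: the integral by a crude quadrature ("counting squares" by machine), $\sigma$ from the formula above with CODATA values of the constants, and the geometric factor $\tfrac{1}{4}$ by averaging the cosine over the outgoing hemisphere. The integration cutoffs and step counts below are arbitrary choices.

```python
import math

# 1. The integral of x^3/(e^x - 1) from 0 to infinity, vs the exact pi^4/15.
def integrand(x):
    return x**3 / math.expm1(x)   # expm1 keeps accuracy near x = 0

a, b, n = 1e-9, 50.0, 200_000     # the tail beyond x = 50 is negligible
h = (b - a) / n
area = h * (0.5 * (integrand(a) + integrand(b))
            + sum(integrand(a + i * h) for i in range(1, n)))
print(area, math.pi**4 / 15)      # both about 6.494

# 2. The Stefan-Boltzmann constant from sigma = pi^2 k^4 / (60 hbar^3 c^2).
k_B, hbar, c = 1.380649e-23, 1.054571817e-34, 2.99792458e8
sigma = math.pi**2 * k_B**4 / (60 * hbar**3 * c**2)
print(sigma)                      # about 5.67e-8 W m^-2 K^-4

# 3. The factor 1/4: integrate cos(theta) over the outgoing hemisphere
# of directions, divided by the full 4*pi of directions.
m = 100_000
dth = (math.pi / 2) / m
quarter = sum(math.cos((i + 0.5) * dth) * 2 * math.pi
              * math.sin((i + 0.5) * dth) * dth
              for i in range(m)) / (4 * math.pi)
print(quarter)                    # 0.25
```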
|
1 | 46 | Ratchet and pawl | 1 | How a ratchet works | In this chapter we discuss the ratchet and pawl, a very simple device which allows a shaft to turn only one way. The possibility of having something turn only one way requires some detailed and careful analysis, and there are some very interesting consequences. The plan of the discussion came about in attempting to devise an elementary explanation, from the molecular or kinetic point of view, for the fact that there is a maximum amount of work which can be extracted from a heat engine. Of course we have seen the essence of Carnot’s argument, but it would be nice to find an explanation which is elementary in the sense that we can see what is happening physically. Now, there are complicated mathematical demonstrations which follow from Newton’s laws to demonstrate that we can get only a certain amount of work out when heat flows from one place to another, but there is great difficulty in converting this into an elementary demonstration. In short, we do not understand it, although we can follow the mathematics. In Carnot’s argument, the fact that more than a certain amount of work cannot be extracted in going from one temperature to another is deduced from another axiom, which is that if everything is at the same temperature, heat cannot be converted to work by means of a cyclic process. First, let us back up and try to see, in at least one elementary example, why this simpler statement is true. Let us try to invent a device which will violate the Second Law of Thermodynamics, that is, a gadget which will generate work from a heat reservoir with everything at the same temperature. Let us say we have a box of gas at a certain temperature, and inside there is an axle with vanes in it. (See Fig. 46–1 but take $T_1 =$ $T_2 =$ $T$, say.) Because of the bombardments of gas molecules on the vane, the vane oscillates and jiggles. 
All we have to do is to hook onto the other end of the axle a wheel which can turn only one way—the ratchet and pawl. Then when the shaft tries to jiggle one way, it will not turn, and when it jiggles the other, it will turn. Then the wheel will slowly turn, and perhaps we might even tie a flea onto a string hanging from a drum on the shaft, and lift the flea! Now let us ask if this is possible. According to Carnot’s hypothesis, it is impossible. But if we just look at it, we see, prima facie, that it seems quite possible. So we must look more closely. Indeed, if we look at the ratchet and pawl, we see a number of complications. First, our idealized ratchet is as simple as possible, but even so, there is a pawl, and there must be a spring in the pawl. The pawl must return after coming off a tooth, so the spring is necessary. Another feature of this ratchet and pawl, not shown in the figure, is quite essential. Suppose the device were made of perfectly elastic parts. After the pawl is lifted off the end of the tooth and is pushed back by the spring, it will bounce against the wheel and continue to bounce. Then, when another fluctuation came, the wheel could turn the other way, because the tooth could get underneath during the moment when the pawl was up! Therefore an essential part of the irreversibility of our wheel is a damping or deadening mechanism which stops the bouncing. When the damping happens, of course, the energy that was in the pawl goes into the wheel and shows up as heat. So, as it turns, the wheel will get hotter and hotter. To make the thing simpler, we can put a gas around the wheel to take up some of the heat. Anyway, let us say the gas keeps rising in temperature, along with the wheel. Will it go on forever? No! The pawl and wheel, both at some temperature $T$, also have Brownian motion. 
This motion is such that, every once in a while, by accident, the pawl lifts itself up and over a tooth just at the moment when the Brownian motion on the vanes is trying to turn the axle backwards. And as things get hotter, this happens more often. So, this is the reason this device does not work in perpetual motion. When the vanes get kicked, sometimes the pawl lifts up and goes over the end. But sometimes, when it tries to turn the other way, the pawl has already lifted due to the fluctuations of the motions on the wheel side, and the wheel goes back the other way! The net result is nothing. It is not hard to demonstrate that when the temperature on both sides is equal, there will be no net average motion of the wheel. Of course the wheel will do a lot of jiggling this way and that way, but it will not do what we would like, which is to turn just one way. Let us look at the reason. It is necessary to do work against the spring in order to lift the pawl to the top of a tooth. Let us call this energy $\epsilon$, and let $\theta$ be the angle between the teeth. The chance that the system can accumulate enough energy, $\epsilon$, to get the pawl over the top of the tooth, is $e^{-\epsilon/kT}$. But the probability that the pawl will accidentally be up is also $e^{-\epsilon/kT}$. So the number of times that the pawl is up and the wheel can turn backwards freely is equal to the number of times that we have enough energy to turn it forward when the pawl is down. We thus get a “balance,” and the wheel will not go around. |
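The balance argument can be sketched as a small Monte Carlo simulation, in units with $k = 1$; the values of $\epsilon$, $T$, and the trial count are arbitrary illustration choices.

```python
import math, random
# At a single temperature the chance per attempt of a forward step and of
# a free backward slip are both exp(-eps/kT), so the wheel drifts nowhere.
random.seed(1)
eps, T = 2.0, 1.0
p = math.exp(-eps / T)    # same chance per attempt, either direction
net = 0
trials = 200_000
for _ in range(trials):
    if random.random() < p:   # vanes gathered energy eps: one tooth forward
        net += 1
    if random.random() < p:   # pawl happened to be up: one tooth backward
        net -= 1
print(net / trials)   # averages to zero: lots of jiggling, no net turning
```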
|
1 | 46 | Ratchet and pawl | 2 | The ratchet as an engine | Let us now go further. Take the example where the temperature of the vanes is $T_1$ and the temperature of the wheel, or ratchet, is $T_2$, and $T_2$ is less than $T_1$. Because the wheel is cold and the fluctuations of the pawl are relatively infrequent, it will be very hard for the pawl to attain an energy $\epsilon$. Because of the high temperature $T_1$, the vanes will often attain the energy $\epsilon$, so our gadget will go in one direction, as designed. We would now like to see if it can lift weights. Onto the drum in the middle we tie a string, and put a weight, such as our flea, on the string. We let $L$ be the torque due to the weight. If $L$ is not too great, our machine will lift the weight because the Brownian fluctuations make it more likely to move in one direction than the other. We want to find how much weight it can lift, how fast it goes around, and so on. First we consider a forward motion, the usual way one designs a ratchet to run. In order to make one step forward, how much energy has to be borrowed from the vane end? We must borrow an energy $\epsilon$ to lift the pawl. The wheel turns through an angle $\theta$ against a torque $L$, so we also need the energy $L\theta$. The total amount of energy that we have to borrow is thus $\epsilon + L\theta$. The probability that we get this energy is proportional to $e^{-(\epsilon + L\theta)/kT_1}$. Actually, it is not only a question of getting the energy, but we also would like to know the number of times per second it has this energy. The probability per second is proportional to $e^{-(\epsilon + L\theta)/kT_1}$, and we shall call the proportionality constant $1/\tau$. It will cancel out in the end anyway. When a forward step happens, the work done on the weight is $L\theta$. The energy taken from the vane is $\epsilon + L\theta$. 
The spring gets wound up with energy $\epsilon$, then it goes clatter, clatter, bang, and this energy goes into heat. All the energy taken out goes to lift the weight and to drive the pawl, which then falls back and gives heat to the other side. Now we look at the opposite case, which is backward motion. What happens here? To get the wheel to go backwards all we have to do is supply the energy to lift the pawl high enough so that the ratchet will slip. This is still energy $\epsilon$. Our probability per second for the pawl to lift this high is now $(1/\tau)e^{-\epsilon/kT_2}$. Our proportionality constant is the same, but this time $kT_2$ shows up because of the different temperature. When this happens, the work is released because the wheel slips backward. It loses one notch, so it releases work $L\theta$. The energy taken from the ratchet system is $\epsilon$, and the energy given to the gas at $T_1$ on the vane side is $L\theta + \epsilon$. It takes a little thinking to see the reason for that. Suppose the pawl has lifted itself up accidentally by a fluctuation. Then when it falls back and the spring pushes it down against the tooth, there is a force trying to turn the wheel, because the tooth is pushing on an inclined plane. This force is doing work, and so is the force due to the weights. So both together make up the total force, and all the energy which is slowly released appears at the vane end as heat. (Of course it must, by conservation of energy, but one must be careful to think the thing through!) We notice that all these energies are exactly the same, but reversed. So, depending upon which of these two rates is greater, the weight is either slowly lifted or slowly released. Of course, it is constantly jiggling around, going up for a while and down for a while, but we are talking about the average behavior. Suppose that for a particular weight the rates happen to be equal. Then we add an infinitesimal weight to the string. 
The weight will slowly go down, and work will be done on the machine. Energy will be taken from the wheel and given to the vanes. If instead we take off a little bit of weight, then the imbalance is the other way. The weight is lifted, and heat is taken from the vane and put into the wheel. So we have the conditions of Carnot’s reversible cycle, provided that the weight is just such that these two are equal. This condition is evidently that $(\epsilon + L\theta)/T_1 = \epsilon/T_2$. Let us say that the machine is slowly lifting the weight. Energy $Q_1$ is taken from the vanes and energy $Q_2$ is delivered to the wheel, and these energies are in the ratio $(\epsilon + L\theta)/\epsilon$. If we are lowering the weight, we also have $Q_1/Q_2 =(\epsilon + L\theta)/\epsilon$. Thus (Table 46–1) we have \begin{equation*} Q_1/Q_2 = T_1/T_2. \end{equation*} Furthermore, the work we get out is to the energy taken from the vane as $L\theta$ is to $L\theta + \epsilon$, hence as $(T_1 - T_2)/T_1$. We see that our device cannot extract more work than this, operating reversibly. This is the result that we expected from Carnot’s argument, and the main result of this lecture. However, we can use our device to understand a few other phenomena, even out of equilibrium, and therefore beyond the range of thermodynamics. Let us now calculate how fast our one-way device would turn if everything were at the same temperature and we hung a weight on the drum. If we pull very, very hard, of course, there are all kinds of complications. The pawl slips over the ratchet, or the spring breaks, or something. But suppose we pull gently enough that everything works nicely. In those circumstances, the above analysis is right for the probability of the wheel going forward and backward, if we remember that the two temperatures are equal. In each step an angle $\theta$ is obtained, so the angular velocity is $\theta$ times the probability of one of these jumps per second. 
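Returning for a moment to the reversible operating point, the algebra above can be checked in a few lines; all the parameter values are arbitrary illustration choices.

```python
# The reversible balance (eps + L*theta)/T1 = eps/T2 fixes the load.
# Then Q1/Q2 = (eps + L*theta)/eps should equal T1/T2, and the work
# fraction L*theta/(eps + L*theta) should equal (T1 - T2)/T1.
eps = 1.0
T1, T2 = 400.0, 300.0
L_theta = eps * (T1 / T2 - 1.0)   # solve the balance condition for L*theta
Q1 = eps + L_theta                # heat taken from the vanes
Q2 = eps                          # heat delivered to the wheel
print(Q1 / Q2, T1 / T2)              # equal: the Carnot ratio
print(L_theta / Q1, (T1 - T2) / T1)  # equal: the Carnot efficiency
```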
It goes forward with probability $(1/\tau)e^{-(\epsilon + L\theta)/kT}$ and backward with probability $(1/\tau)e^{-\epsilon/kT}$, so that for the angular velocity we have \begin{align} \omega &= (\theta/\tau)(e^{-(\epsilon + L\theta)/kT} - e^{-\epsilon/kT})\notag\\[.5ex] \label{Eq:I:46:1} &= (\theta/\tau)e^{-\epsilon/kT}(e^{-L\theta/kT} - 1). \end{align} If we plot $\omega$ against $L$, we get the curve shown in Fig. 46–2. We see that it makes a great difference whether $L$ is positive or negative. If $L$ increases in the positive range, which happens when we try to drive the wheel backward, the backward velocity approaches a constant. As $L$ becomes negative, $\omega$ really “takes off” forward, since $e$ to a tremendous power is very great! The angular velocity that was obtained from different forces is thus very unsymmetrical. Going one way it is easy: we get a lot of angular velocity for a little force. Going the other way, we can put on a lot of force, and yet the wheel hardly goes around. We find the same thing in an electrical rectifier. Instead of the force, we have the electric field, and instead of the angular velocity, we have the electric current. In the case of a rectifier, the voltage is not proportional to resistance, and the situation is unsymmetrical. The same analysis that we made for the mechanical rectifier will also work for an electrical rectifier. In fact, the kind of formula we obtained above is typical of the current-carrying capacities of rectifiers as a function of their voltages. Now let us take all the weights away, and look at the original machine. If $T_2$ were less than $T_1$, the ratchet would go forward, as anybody would believe. But what is hard to believe, at first sight, is the opposite. If $T_2$ is greater than $T_1$, the ratchet goes around the opposite way! A dynamic ratchet with lots of heat in it runs itself backwards, because the ratchet pawl is bouncing. 
If the pawl, for a moment, is on the incline somewhere, it pushes the inclined plane sideways. But it is always pushing on an inclined plane, because if it happens to lift up high enough to get past the point of a tooth, then the inclined plane slides by, and it comes down again on an inclined plane. So a hot ratchet and pawl is ideally built to go around in a direction exactly opposite to that for which it was originally designed! In spite of all our cleverness of lopsided design, if the two temperatures are exactly equal there is no more propensity to turn one way than the other. The moment we look at it, it may be turning one way or the other, but in the long run, it gets nowhere. The fact that it gets nowhere is really the fundamental deep principle on which all of thermodynamics is based. |
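As a numerical sketch of the asymmetry of Eq. (46.1), we can evaluate it at equal and opposite loads, in units with $k = 1$; the parameter values are arbitrary.

```python
import math
# Eq. (46.1): omega = (theta/tau) e^{-eps/kT} (e^{-L theta/kT} - 1).
theta, tau, eps, T = 1.0, 1.0, 2.0, 1.0

def omega(L):
    return (theta / tau) * math.exp(-eps / T) * (math.exp(-L * theta / T) - 1.0)

print(omega(3.0))    # small and negative: the backward rate saturates
print(omega(-3.0))   # large and positive: the wheel "takes off" forward
```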
|
1 | 46 | Ratchet and pawl | 3 | Reversibility in mechanics | What deeper mechanical principle tells us that, in the long run, if the temperature is kept the same everywhere, our gadget will turn neither to the right nor to the left? We evidently have a fundamental proposition that there is no way to design a machine which, left to itself, will be more likely to be turning one way than the other after a long enough time. We must try to see how this follows from the laws of mechanics. The laws of mechanics go something like this: the mass times the acceleration is the force, and the force on each particle is some complicated function of the positions of all the other particles. There are other situations in which forces depend on velocity, such as in magnetism, but let us not consider that now. We take a simpler case, such as gravity, where forces depend only on position. Now suppose that we have solved our set of equations and we have a certain motion $x(t)$ for each particle. In a complicated enough system, the solutions are very complicated, and what happens with time turns out to be very surprising. If we write down any arrangement we please for the particles, we will see this arrangement actually occur if we wait long enough! If we follow our solution for a long enough time, it tries everything that it can do, so to speak. This is not absolutely necessary in the simplest devices, but when systems get complicated enough, with enough atoms, it happens. Now there is something else the solution can do. If we solve the equations of motion, we may get certain functions such as $t + t^2 + t^3$. We claim that another solution would be $-t + t^2 - t^3$. In other words, if we substitute $-t$ everywhere for $t$ throughout the entire solution, we will once again get a solution of the same equation. This follows from the fact that if we substitute $-t$ for $t$ in the original differential equation, nothing is changed, since only second derivatives with respect to $t$ appear. 
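The $t \to -t$ argument can be demonstrated with a short integration: run a particle forward under a position-dependent force, reverse its velocity, and run again; it retraces its path back to the start. The harmonic force and the velocity-Verlet scheme below are arbitrary illustration choices.

```python
# Time reversibility of F = ma when the force depends only on position.
def accel(x):
    return -x            # a simple position-dependent force (harmonic)

def run(x, v, h, n):
    a = accel(x)
    for _ in range(n):   # velocity-Verlet integration
        x += v * h + 0.5 * a * h * h
        a_new = accel(x)
        v += 0.5 * (a + a_new) * h
        a = a_new
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = run(x0, v0, 0.01, 500)     # forward in time
x2, v2 = run(x1, -v1, 0.01, 500)    # velocity reversed, run again
print(x2, -v2)   # back at the starting state, up to round-off
```

Velocity Verlet is itself a time-symmetric scheme, so the retracing here is exact apart from round-off; the physical point is that reversing the velocities turns any solution into another solution.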
This means that if we have a certain motion, then the exact opposite motion is also possible. In the complete confusion which comes if we wait long enough, it finds itself going one way sometimes, and it finds itself going the other way sometimes. There is nothing more beautiful about one of the motions than about the other. So it is impossible to design a machine which, in the long run, is more likely to be going one way than the other, if the machine is sufficiently complicated. One might think up an example for which this is obviously untrue. If we take a wheel, for instance, and spin it in empty space, it will go the same way forever. So there are some conditions, like the conservation of angular momentum, which violate the above argument. This just requires that the argument be made with a little more care. Perhaps the walls take up the angular momentum, or something like that, so that we have no special conservation laws. Then, if the system is complicated enough, the argument is true. It is based on the fact that the laws of mechanics are reversible. For historical interest, we would like to remark on a device invented by Maxwell, who first worked out the dynamical theory of gases. He supposed the following situation: We have two boxes of gas at the same temperature, with a little hole between them. At the hole sits a little demon (who may be a machine of course!). There is a door on the hole, which can be opened or closed by the demon. He watches the molecules coming from the left. Whenever he sees a fast molecule, he opens the door. When he sees a slow one, he leaves it closed. If we want him to be an extra special demon, he can have eyes at the back of his head, and do the opposite to the molecules from the other side. He lets the slow ones through to the left, and the fast through to the right. Pretty soon the left side will get cold and the right side hot. Then, are the ideas of thermodynamics violated because we could have such a demon? 
It turns out, if we build a finite-sized demon, that the demon himself gets so warm that he cannot see very well after a while. The simplest possible demon, as an example, would be a trap door held over the hole by a spring. A fast molecule comes through, because it is able to lift the trap door. The slow molecule cannot get through, and bounces back. But this thing is nothing but our ratchet and pawl in another form, and ultimately the mechanism will heat up. If we assume that the specific heat of the demon is not infinite, it must heat up. It has but a finite number of internal gears and wheels, so it cannot get rid of the extra heat that it gets from observing the molecules. Soon it is shaking from Brownian motion so much that it cannot tell whether it is coming or going, much less whether the molecules are coming or going, so it does not work. |
|
1 | 46 | Ratchet and pawl | 4 | Irreversibility | Are all the laws of physics reversible? Evidently not! Just try to unscramble an egg! Run a moving picture backwards, and it takes only a few minutes for everybody to start laughing. The most natural characteristic of all phenomena is their obvious irreversibility. Where does irreversibility come from? It does not come from Newton’s laws. If we claim that the behavior of everything is ultimately to be understood in terms of the laws of physics, and if it also turns out that all the equations have the fantastic property that if we put $t = -t$ we have another solution, then every phenomenon is reversible. How then does it come about in nature on a large scale that things are not reversible? Obviously there must be some law, some obscure but fundamental equation, perhaps in electricity, maybe in neutrino physics, in which it does matter which way time goes. Let us discuss that question now. We already know one of those laws, which says that the entropy is always increasing. If we have a hot thing and a cold thing, the heat goes from hot to cold. So the law of entropy is one such law. But we expect to understand the law of entropy from the point of view of mechanics. In fact, we have just been successful in obtaining all the consequences of the argument that heat cannot flow backwards by itself from just mechanical arguments, and we thereby obtained an understanding of the Second Law. Apparently we can get irreversibility from reversible equations. But was it only a mechanical argument that we used? Let us look into it more closely. Since our question has to do with the entropy, our problem is to try to find a microscopic description of entropy. If we say we have a certain amount of energy in something, like a gas, then we can get a microscopic picture of it, and say that every atom has a certain energy. All these energies added together give us the total energy. Similarly, maybe every atom has a certain entropy. 
If we add everything up, we would have the total entropy. It does not work so well, but let us see what happens. As an example, we calculate the entropy difference between a gas at a certain temperature at one volume, and a gas at the same temperature at another volume. We remember, from Chapter 44, that we had, for the change in entropy, \begin{equation*} \Delta S = \int\frac{dQ}{T}. \end{equation*} In the present case, the energy of the gas is the same before and after expansion, since the temperature does not change. So we have to add enough heat to equal the work done by the gas or, for each little change in volume, \begin{equation*} dQ = P\,dV. \end{equation*} Putting this in for $dQ$, we get \begin{align*} \Delta S &= \int_{V_1}^{V_2}P\,\frac{dV}{T} = \int_{V_1}^{V_2}\frac{NkT}{V}\,\frac{dV}{T}\\[.5ex] &= Nk\ln\frac{V_2}{V_1}, \end{align*} as we obtained in Chapter 44. For instance, if we expand the volume by a factor of $2$, the entropy change is $Nk\ln 2$. Let us now consider another interesting example. Suppose we have a box with a barrier in the middle. On one side is neon (“black” molecules), and on the other, argon (“white” molecules). Now we take out the barrier, and let them mix. How much has the entropy changed? It is possible to imagine that instead of the barrier we have a piston, with holes in it that let the whites through but not the blacks, and another kind of piston which is the other way around. If we move one piston to each end, we see that, for each gas, the problem is like the one we just solved. So we get an entropy change of $Nk\ln 2$, which means that the entropy has increased by $k\ln 2$ per molecule. The $2$ has to do with the extra room that the molecule has, which is rather peculiar. It is not a property of the molecule itself, but of how much room the molecule has to run around in. This is a strange situation, where entropy increases but where everything has the same temperature and the same energy! 
The only thing that is changed is that the molecules are distributed differently. We well know that if we just pull the barrier out, everything will get mixed up after a long time, due to the collisions, the jiggling, the banging, and so on. Every once in a while a white molecule goes toward a black, and a black one goes toward a white, and maybe they pass. Gradually the whites worm their way, by accident, across into the space of blacks, and the blacks worm their way, by accident, into the space of whites. If we wait long enough we get a mixture. Clearly, this is an irreversible process in the real world, and ought to involve an increase in the entropy. Here we have a simple example of an irreversible process which is completely composed of reversible events. Every time there is a collision between any two molecules, they go off in certain directions. If we took a moving picture of a collision in reverse, there would be nothing wrong with the picture. In fact, one kind of collision is just as likely as another. So the mixing is completely reversible, and yet it is irreversible. Everyone knows that if we started with white and with black, separated, we would get a mixture within a few minutes. If we sat and looked at it for several more minutes, it would not separate again but would stay mixed. So we have an irreversibility which is based on reversible situations. But we also see the reason now. We started with an arrangement which is, in some sense, ordered. Due to the chaos of the collisions, it becomes disordered. It is the change from an ordered arrangement to a disordered arrangement which is the source of the irreversibility. It is true that if we took a motion picture of this, and showed it backwards, we would see it gradually become ordered. Someone would say, “That is against the laws of physics!” So we would run the film over again, and we would look at every collision. Every one would be perfect, and every one would be obeying the laws of physics. 
The reason, of course, is that every molecule’s velocity is just right, so if the paths are all followed back, they get back to their original condition. But that is a very unlikely circumstance to have. If we start with the gas in no special arrangement, just whites and blacks, it will never get back.
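The expansion result derived above, $\Delta S = Nk\ln(V_2/V_1)$, can be checked with a few lines of arithmetic. This is only an illustrative sketch; Boltzmann's constant is the standard value, and one mole of molecules is chosen purely for the example:

```python
import math

k = 1.380649e-23   # Boltzmann's constant, J/K
N = 6.022e23       # number of molecules; one mole, chosen for illustration

def entropy_change(N, V1, V2):
    """Isothermal expansion: Delta S = integral of dQ/T = N k ln(V2/V1)."""
    return N * k * math.log(V2 / V1)

# Doubling the volume gives N k ln 2 in total, i.e. k ln 2 per molecule:
dS = entropy_change(N, 1.0, 2.0)
print(dS)              # about 5.76 J/K for the mole
print(dS / (N * k))    # ln 2 per molecule, about 0.693
```

The second printed number is the "extra room" factor of the text: each molecule gains $\ln 2$ of entropy (in units of $k$) because its available volume has doubled.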
46–5 Order and entropy

So we now have to talk about what we mean by disorder and what we mean by order. It is not a question of pleasant order or unpleasant disorder. What is different in our mixed and unmixed cases is the following. Suppose we divide the space into little volume elements. If we have white and black molecules, how many ways could we distribute them among the volume elements so that white is on one side, and black on the other? On the other hand, how many ways could we distribute them with no restriction on which goes where? Clearly, there are many more ways to arrange them in the latter case. We measure “disorder” by the number of ways that the insides can be arranged, so that from the outside it looks the same. The logarithm of that number of ways is the entropy. The number of ways in the separated case is less, so the entropy is less, or the “disorder” is less. So with the above technical definition of disorder we can understand the proposition. First, the entropy measures the disorder. Second, the universe always goes from “order” to “disorder,” so entropy always increases. Order is not order in the sense that we like the arrangement, but in the sense that the number of different ways we can hook it up, and still have it look the same from the outside, is relatively restricted. In the case where we reversed our motion picture of the gas mixing, there was not as much disorder as we thought. Every single atom had exactly the correct speed and direction to come out right! The entropy was not high after all, even though it appeared so. What about the reversibility of the other physical laws? When we talked about the electric field which comes from an accelerating charge, it was said that we must take the retarded field. At a time $t$ and at a distance $r$ from the charge, we take the field due to the acceleration at a time $t - r/c$, not $t + r/c$. So it looks, at first, as if the law of electricity is not reversible.
Very strangely, however, the laws we used come from a set of equations called Maxwell’s equations, which are, in fact, reversible. Furthermore, it is possible to argue that if we were to use only the advanced field, the field due to the state of affairs at $t + r/c$, and do it absolutely consistently in a completely enclosed space, everything happens exactly the same way as if we use retarded fields! This apparent irreversibility in electricity, at least in an enclosure, is thus not an irreversibility at all. We have some feeling for that already, because we know that when we have an oscillating charge which generates fields which are bounced from the walls of an enclosure we ultimately get to an equilibrium in which there is no one-sidedness. The retarded field approach is only a convenience in the method of solution. So far as we know, all the fundamental laws of physics, like Newton’s equations, are reversible. Then where does irreversibility come from? It comes from order going to disorder, but we do not understand this until we know the origin of the order. Why is it that the situations we find ourselves in every day are always out of equilibrium? One possible explanation is the following. Look again at our box of mixed white and black molecules. Now it is possible, if we wait long enough, by sheer, grossly improbable, but possible, accident, that the distribution of molecules gets to be mostly white on one side and mostly black on the other. After that, as time goes on and accidents continue, they get more mixed up again. Thus one possible explanation of the high degree of order in the present-day world is that it is just a question of luck. Perhaps our universe happened to have had a fluctuation of some kind in the past, in which things got somewhat separated, and now they are running back together again. This kind of theory is not unsymmetrical, because we can ask what the separated gas looks like either a little in the future or a little in the past.
In either case, we see a grey smear at the interface, because the molecules are mixing again. No matter which way we run time, the gas mixes. So this theory would say the irreversibility is just one of the accidents of life. We would like to argue that this is not the case. Suppose we do not look at the whole box at once, but only at a piece of the box. Then, at a certain moment, suppose we discover a certain amount of order. In this little piece, white and black are separate. What should we deduce about the condition in places where we have not yet looked? If we really believe that the order arose from complete disorder by a fluctuation, we must surely take the most likely fluctuation which could produce it, and the most likely condition is not that the rest of it has also become disentangled! Therefore, from the hypothesis that the world is a fluctuation, all of the predictions are that if we look at a part of the world we have never seen before, we will find it mixed up, and not like the piece we just looked at. If our order were due to a fluctuation, we would not expect order anywhere but where we have just noticed it. Now we assume the separation is because the past of the universe was really ordered. It is not due to a fluctuation, but the whole thing used to be white and black. This theory now predicts that there will be order in other places—the order is not due to a fluctuation, but due to a much higher ordering at the beginning of time. Then we would expect to find order in places where we have not yet looked. The astronomers, for example, have only looked at some of the stars. Every day they turn their telescopes to other stars, and the new stars are doing the same thing as the other stars. We therefore conclude that the universe is not a fluctuation, and that the order is a memory of conditions when things started. This is not to say that we understand the logic of it. 
For some reason, the universe at one time had a very low entropy for its energy content, and since then the entropy has increased. So that is the way toward the future. That is the origin of all irreversibility, that is what makes the processes of growth and decay, that makes us remember the past and not the future, remember the things which are closer to that moment in the history of the universe when the order was higher than now, and why we are not able to remember things where the disorder is higher than now, which we call the future. So, as we commented in an earlier chapter, the entire universe is in a glass of wine, if we look at it closely enough. In this case the glass of wine is complex, because there is water and glass and light and everything else. Another delight of our subject of physics is that even simple and idealized things, like the ratchet and pawl, work only because they are part of the universe. The ratchet and pawl works in only one direction because it has some ultimate contact with the rest of the universe. If the ratchet and pawl were in a box and isolated for some sufficient time, the wheel would be no more likely to go one way than the other. But because we pull up the shades and let the light out, because we cool off on the earth and get heat from the sun, the ratchets and pawls that we make can turn one way. This one-wayness is interrelated with the fact that the ratchet is part of the universe. It is part of the universe not only in the sense that it obeys the physical laws of the universe, but its one-way behavior is tied to the one-way behavior of the entire universe. It cannot be completely understood until the mystery of the beginnings of the history of the universe is reduced still further from speculation to scientific understanding.
47–1 Waves

In this chapter we shall discuss the phenomenon of waves. This is a phenomenon which appears in many contexts throughout physics, and therefore our attention should be concentrated on it not only because of the particular example considered here, which is sound, but also because of the much wider application of the ideas in all branches of physics. It was pointed out when we studied the harmonic oscillator that there are not only mechanical examples of oscillating systems but electrical ones as well. Waves are related to oscillating systems, except that wave oscillations appear not only as time-oscillations at one place, but propagate in space as well. We have really already studied waves. When we studied light, in learning about the properties of waves in that subject, we paid particular attention to the interference in space of waves from several sources at different locations and all at the same frequency. There are two important wave phenomena that we have not yet discussed which occur in light, i.e., electromagnetic waves, as well as in any other form of waves. The first of these is the phenomenon of interference in time rather than interference in space. If we have two sources of sound which have slightly different frequencies and if we listen to both at the same time, then sometimes the waves come with the crests together and sometimes with the crest and trough together (see Fig. 47–1). The rising and falling of the sound that results is the phenomenon of beats or, in other words, of interference in time. The second phenomenon involves the wave patterns which result when the waves are confined within a given volume and reflect back and forth from walls. These effects could have been discussed, of course, for the case of electromagnetic waves.
The reason for not having done this is that by using one example we would not generate the feeling that we are actually learning about many different subjects at the same time. In order to emphasize the general applicability of waves beyond electrodynamics, we consider here a different example, in particular sound waves. Other examples of waves are water waves consisting of long swells that we see coming in to the shore, or the smaller water waves consisting of surface tension ripples. As another example, there are two kinds of elastic waves in solids; a compressional (or longitudinal) wave in which the particles of the solid oscillate back and forth along the direction of propagation of the wave (sound waves in a gas are of this kind), and a transverse wave in which the particles of the solid oscillate in a direction perpendicular to the direction of propagation. Earthquake waves contain elastic waves of both kinds, generated by a motion at some place in the earth’s crust. Still another example of waves is found in modern physics. These are waves which give the probability amplitude of finding a particle at a given place—the “matter waves” which we have already discussed. Their frequency is proportional to the energy and their wave number is proportional to the momentum. They are the waves of quantum mechanics. In this chapter we shall consider only waves for which the velocity is independent of the wavelength. This is, for example, the case for light in a vacuum. The speed of light is then the same for radiowaves, blue light, green light, or for any other wavelength. Because of this behavior, when we began to describe the wave phenomenon we did not notice at first that we had wave propagation. Instead, we said that if a charge is moved at one place, the electric field at a distance $x$ was proportional to the acceleration, not at the time $t$, but at the earlier time $t - x/c$. 
Therefore if we were to picture the electric field in space at some instant of time, as in Fig. 47–2, the electric field at a time $t$ later would have moved the distance $ct$, as indicated in the figure. Mathematically, we can say that in the one-dimensional example we are taking, the electric field is a function of $x - ct$. We see that at $t = 0$, it is some function of $x$. If we consider a later time, we need only to increase $x$ somewhat to get the same value of the electric field. For example, if the maximum field occurred at $x = 3$ at time zero, then to find the new position of the maximum field at time $t$ we need \begin{equation*} x - ct = 3\quad \text{or}\quad x = 3 + ct. \end{equation*} We see that this kind of function represents the propagation of a wave. Such a function, $f(x - ct)$, then represents a wave. We may summarize this description of a wave by saying simply that \begin{equation*} f(x - ct) = f(x + \Delta x - c(t + \Delta t)), \end{equation*} when $\Delta x = c\,\Delta t$. There is, of course, another possibility, i.e., that instead of a source to the left as indicated in Fig. 47–2, we have a source on the right, so that the wave propagates toward negative $x$. Then the wave would be described by $g(x + ct)$. There is the additional possibility that more than one wave exists in space at the same time, and so the electric field is the sum of the two fields, each one propagating independently. This behavior of electric fields may be described by saying that if $f_1(x - ct)$ is a wave, and if $f_2(x - ct)$ is another wave, then their sum is also a wave. This is called the principle of superposition. The same principle is valid in sound. We are familiar with the fact that if a sound is produced, we hear with complete fidelity the same sequence of sounds as was generated. If we had high frequencies travelling faster than low frequencies, a short, sharp noise would be heard as a succession of musical sounds. 
Similarly, if red light travelled faster than blue light, a flash of white light would be seen first as red, then as white, and finally as blue. We are familiar with the fact that this is not the case. Both sound and light travel with a speed in air which is very nearly independent of frequency. Examples of wave propagation for which this independence is not true will be considered in Chapter 48. In the case of light (electromagnetic waves) we gave a rule which determined the electric field at a point as a result of the acceleration of a charge. One might expect now that what we should do is give a rule whereby some quality of the air, say the pressure, is determined at a given distance from a source in terms of the source motion, delayed by the travel time of the sound. In the case of light this procedure was acceptable because all that we knew was that a charge at one place exerts a force on another charge at another place. The details of propagation from the one place to the other were not absolutely essential. In the case of sound, however, we know that it propagates through the air between the source and the hearer, and it is certainly a natural question to ask what, at any given moment, the pressure of the air is. We would like, in addition, to know exactly how the air moves. In the case of electricity we could accept a rule, since we could say that we do not yet know the laws of electricity, but we cannot make the same remark with regard to sound. We would not be satisfied with a rule stating how the sound pressure moves through the air, because the process ought to be understandable as a consequence of the laws of mechanics. In short, sound is a branch of mechanics, and so it is to be understood in terms of Newton’s laws. The propagation of sound from one place to another is merely a consequence of mechanics and the properties of gases, if it propagates in a gas, or of the properties of liquids or solids, if it propagates through such mediums. 
Later we shall derive the properties of light and its wave propagation in a similar way from the laws of electrodynamics.
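The statement earlier in this section that the field is a function of $x - ct$, so that $f(x - ct) = f(x + \Delta x - c(t + \Delta t))$ whenever $\Delta x = c\,\Delta t$, is easy to check numerically. The Gaussian profile below is an arbitrary choice made only for this illustration:

```python
import math

c = 1.0                              # wave speed, arbitrary units

def f(u):
    return math.exp(-u ** 2)         # an arbitrary pulse shape

def field(x, t):
    return f(x - c * t)              # a rightward-moving wave

# The value found at x = 3 when t = 0 reappears at x = 3 + c*t later:
dt = 2.0
assert field(3.0, 0.0) == field(3.0 + c * dt, dt)

# A leftward wave g(x + ct) moves the other way:
g = lambda x, t: f(x + c * t)
assert g(3.0, 0.0) == g(3.0 - c * dt, dt)
print("pulse translated without change of shape")
```

Whatever shape `f` has, the whole pattern simply slides by $c\,\Delta t$; that is all the functional form $f(x - ct)$ says.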
47–2 The propagation of sound

We shall give a derivation of the properties of the propagation of sound between the source and the receiver as a consequence of Newton’s laws, and we shall not consider the interaction with the source and the receiver. Ordinarily we emphasize a result rather than a particular derivation of it. In this chapter we take the opposite view. The point here, in a certain sense, is the derivation itself. This problem of explaining new phenomena in terms of old ones, when we know the laws of the old ones, is perhaps the greatest art of mathematical physics. The mathematical physicist has two problems: one is to find solutions, given the equations, and the other is to find the equations which describe a new phenomenon. The derivation here is an example of the second kind of problem. We shall take the simplest example here—the propagation of sound in one dimension. To carry out such a derivation it is necessary first to have some kind of understanding of what is going on. Fundamentally what is involved is that if an object is moved at one place in the air, we observe that there is a disturbance which travels through the air. If we ask what kind of disturbance, we would say that we would expect that the motion of the object produces a change of pressure. Of course, if the object is moved gently, the air merely flows around it, but what we are concerned with is a rapid motion, so that there is not sufficient time for such a flow. Then, with the motion, the air is compressed and a change of pressure is produced which pushes on additional air. This air is in turn compressed, which leads again to an extra pressure, and a wave is propagated. We now want to formulate such a process. We have to decide what variables we need. In our particular problem we would need to know how much the air has moved, so that the air displacement in the sound wave is certainly one relevant variable.
In addition we would like to describe how the air density changes as it is displaced. The air pressure also changes, so this is another variable of interest. Then, of course, the air has a velocity, so that we shall have to describe the velocity of the air particles. The air particles also have accelerations—but as we list these many variables we soon realize that the velocity and acceleration would be known if we knew how the air displacement varies with time. As we said, we shall consider the wave in one dimension. We can do this if we are sufficiently far from the source that what we call the wavefronts are very nearly planes. We thus make our argument simpler by taking the least complicated example. We shall then be able to say that the displacement, $\chi$, depends only on $x$ and $t$, and not on $y$ and $z$. Therefore the description of the air is given by $\chi(x,t)$. Is this description complete? It would appear to be far from complete, for we know none of the details of how the air molecules are moving. They are moving in all directions, and this state of affairs is certainly not described by means of this function $\chi(x,t)$. From the point of view of kinetic theory, if we have a higher density of molecules at one place and a lower density adjacent to that place, the molecules would move away from the region of higher density to the one of lower density, so as to equalize this difference. Apparently we would not get an oscillation and there would be no sound. What is necessary to get the sound wave is this situation: as the molecules rush out of the region of higher density and higher pressure, they give momentum to the molecules in the adjacent region of lower density. For sound to be generated, the regions over which the density and pressure change must be much larger than the distance the molecules travel before colliding with other molecules. 
This distance is the mean free path and the distance between pressure crests and troughs must be much larger than this. Otherwise the molecules would move freely from the crest to the trough and immediately smear out the wave. It is clear that we are going to describe the gas behavior on a scale large compared with the mean free path, and so the properties of the gas will not be described in terms of the individual molecules. The displacement, for example, will be the displacement of the center of mass of a small element of the gas, and the pressure or density will be the pressure or density in this region. We shall call the pressure $P$ and the density $\rho$, and they will be functions of $x$ and $t$. We must keep in mind that this description is an approximation which is valid only when these gas properties do not vary too rapidly with distance.
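The condition just stated, that the wavelength must be much larger than the mean free path, can be put in rough numbers. The mean free path and speed of sound below are approximate handbook values for air at atmospheric pressure, not figures from the text:

```python
# Compare audible wavelengths with the molecular mean free path in air.
mean_free_path = 7e-8    # meters, air at 1 atm (approximate handbook value)
c_sound = 343.0          # m/s at room temperature (approximate)

for freq in (20.0, 1000.0, 20000.0):        # the audible range, in Hz
    wavelength = c_sound / freq
    print(freq, wavelength, wavelength / mean_free_path)

# Even at 20 kHz the wavelength (~17 mm) exceeds the mean free path
# by a factor of order 10^5, so the smoothed description is safe.
```

This is why the continuum variables $\chi$, $P$, and $\rho$ suffice for ordinary sound.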
47–3 The wave equation

The physics of the phenomenon of sound waves thus involves three features:

I. The gas moves and the density changes.

II. The change in density corresponds to a change in pressure.

III. Pressure inequalities generate gas motion.

Let us consider II first. For a gas, a liquid, or a solid, the pressure is some function of the density. Before the sound wave arrives, we have equilibrium, with a pressure $P_0$ and a corresponding density $\rho_0$. A pressure $P$ in the medium is connected to the density by some characteristic relation $P = f(\rho)$ and, in particular, the equilibrium pressure $P_0$ is given by $P_0 = f(\rho_0)$. The changes of pressure in sound from the equilibrium value are extremely small. A convenient unit for measuring pressure is the bar, where $1$ bar${}= 10^5$ N/m². The pressure of $1$ standard atmosphere is very nearly $1$ bar: $1$ atm${} = 1.0133$ bars. In sound we use a logarithmic scale of intensities since the sensitivity of the ear is roughly logarithmic. This scale is the decibel scale, in which the acoustic pressure level for the pressure amplitude $P$ is defined as \begin{equation} \label{Eq:I:47:1} I\text{ (acoustic pressure level)} = 20\log_{10}(P/P_{\text{ref}})\text{ in dB}, \end{equation}
where the reference pressure $P_{\text{ref}} = 2\times10^{-10}$ bar. A pressure amplitude of $P =$ $10^3P_{\text{ref}} =$ $2\times10^{-7}$ bar corresponds to a moderately intense sound of $60$ decibels. We see that the pressure changes in sound are extremely small compared with the equilibrium, or mean, pressure of $1$ atm. The displacements and the density changes are correspondingly extremely small. In explosions we do not have such small changes; the excess pressures produced can be greater than $1$ atm. These large pressure changes lead to new effects which we shall consider later. In sound we do not often consider acoustic intensity levels over $100$ dB; $120$ dB is a level which is painful to the ear. Therefore, for sound, if we write \begin{equation} \label{Eq:I:47:2} P = P_0 + P_e,\quad \rho = \rho_0 + \rho_e, \end{equation} we shall always have the pressure change $P_e$ very small compared with $P_0$ and the density change $\rho_e$ very small compared with $\rho_0$. Then \begin{equation} \label{Eq:I:47:3} P_0 + P_e = f(\rho_0 + \rho_e) = f(\rho_0) +\rho_ef'(\rho_0), \end{equation} where $P_0 = f(\rho_0)$ and $f'(\rho_0)$ stands for the derivative of $f(\rho)$ evaluated at $\rho = \rho_0$. We can take the second step in this equality only because $\rho_e$ is very small. We find in this way that the excess pressure $P_e$ is proportional to the excess density $\rho_e$, and we may call the proportionality factor $\kappa$: \begin{equation} \label{Eq:I:47:4} P_e = \kappa\rho_e, \,\text{where }\kappa = f'(\rho_0) = (dP/d\rho)_0.\quad\text{(II)} \end{equation}
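Before going on, the decibel scale defined in Eq. (47.1) can be exercised numerically, using the reference pressure quoted above:

```python
import math

P_ref = 2e-10     # reference pressure amplitude, in bars

def acoustic_pressure_level(P):
    """Acoustic pressure level in decibels: 20 log10(P / P_ref)."""
    return 20 * math.log10(P / P_ref)

# Each factor of 10 in pressure amplitude adds 20 dB, so a pressure
# amplitude of 10^3 * P_ref = 2e-7 bar gives a moderately intense sound:
print(acoustic_pressure_level(1e3 * P_ref))    # ≈ 60 dB
print(acoustic_pressure_level(2e-7))           # ≈ 60 dB as well
```

Note how small these pressures are: even $120$ dB, painful to the ear, is only $10^6 P_{\text{ref}} = 2\times10^{-4}$ bar of pressure amplitude.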
The relation we needed for II is this very simple one. Let us now consider I. We shall suppose that the position of a portion of air undisturbed by the sound wave is $x$ and the displacement at the time $t$ due to the sound is $\chi(x,t)$, so that its new position is $x + \chi(x,t)$, as in Fig. 47–3. Now the undisturbed position of a nearby portion of air is $x + \Delta x$, and its new position is $x + \Delta x + \chi(x + \Delta x,t)$. We can now find the density changes in the following way. Since we are limiting ourselves to plane waves, we can take a unit area perpendicular to the $x$-direction, which is the direction of propagation of the sound wave. The amount of air, per unit area, in $\Delta x$ is then $\rho_0\,\Delta x$, where $\rho_0$ is the undisturbed, or equilibrium, air density. This air, when displaced by the sound wave, now lies between $x + \chi(x,t)$ and $x + \Delta x + \chi(x + \Delta x,t)$, so that we have the same matter in this interval that was in $\Delta x$ when undisturbed. If $\rho$ is the new density, then \begin{equation} \label{Eq:I:47:5} \rho_0\,\Delta x = \rho[x + \Delta x + \chi(x + \Delta x,t) - x - \chi(x,t)]. \end{equation}
Since $\Delta x$ is small, we can write $\chi(x + \Delta x,t) - \chi(x,t) = (\ddpl{\chi}{x})\,\Delta x$. This derivative is a partial derivative, since $\chi$ depends on the time as well as on $x$. Our equation then is \begin{equation} \label{Eq:I:47:6} \rho_0\,\Delta x = \rho\biggl(\ddp{\chi}{x}\,\Delta x + \Delta x\biggr) \end{equation} or \begin{equation} \label{Eq:I:47:7} \rho_0 = (\rho_0 + \rho_e)\ddp{\chi}{x} + \rho_0 + \rho_e. \end{equation} Now in sound waves all changes are small so that $\rho_e$ is small, $\chi$ is small, and $\ddpl{\chi}{x}$ is also small. Therefore in the relation that we have just found, \begin{equation} \label{Eq:I:47:8} \rho_e = -\rho_0\,\ddp{\chi}{x} -\rho_e\,\ddp{\chi}{x}, \end{equation} we can neglect $\rho_e\,\ddpl{\chi}{x}$ compared with $\rho_0\,\ddpl{\chi}{x}$. Thus we get the relation we needed for I: \begin{equation} \label{Eq:I:47:9} \rho_e = -\rho_0\,\ddp{\chi}{x}.\quad\text{(I)} \end{equation} This equation is what we would expect physically. If the displacements vary with $x$, then there will be density changes. The sign is also right: if the displacement $\chi$ increases with $x$, so that the air is stretched out, the density must go down. We now need the third equation, which is the equation of the motion produced by the pressure. If we know the relation between the force and the pressure, we can then get the equation of motion. If we take a thin slab of air of length $\Delta x$ and of unit area perpendicular to $x$, then the mass of air in this slab is $\rho_0\,\Delta x$ and it has the acceleration $\partial^2\chi/\partial t^2$, so the mass times the acceleration for this slab of matter is $\rho_0\,\Delta x(\partial^2\chi/\partial t^2)$.
(It makes no difference for small $\Delta x$ whether the acceleration $\partial^2\chi/\partial t^2$ is evaluated at an edge of the slab or at some intermediate position.) If now we find the force on this matter for a unit area perpendicular to $x$, it will then be equal to $\rho_0\,\Delta x(\partial^2\chi/\partial t^2)$. We have the force in the $+x$-direction, at $x$, of amount $P(x,t)$ per unit area, and we have the force in the opposite direction, at $x + \Delta x$, of amount $P(x + \Delta x,t)$ per unit area (Fig. 47–4): \begin{equation} \label{Eq:I:47:10} P(x,t)\!-\!P(x + \Delta x,t) = -\ddp{P}{x}\,\Delta x = -\ddp{P_e}{x}\,\Delta x, \end{equation}
since $\Delta x$ is small and since the only part of $P$ which changes is the excess pressure $P_e$. We now have III: \begin{equation} \label{Eq:I:47:11} \rho_0\,\frac{\partial^2\chi}{\partial t^2} = -\ddp{P_e}{x},\quad\text{(III)} \end{equation} and so we have enough equations to interconnect things and reduce down to one variable, say to $\chi$. We can eliminate $P_e$ from III by using II, so that we get \begin{equation} \label{Eq:I:47:12} \rho_0\,\frac{\partial^2\chi}{\partial t^2} = -\kappa\,\ddp{\rho_e}{x}, \end{equation} and then we can use I to eliminate $\rho_e$. In this way we find that $\rho_0$ cancels out and that we are left with \begin{equation} \label{Eq:I:47:13} \frac{\partial^2\chi}{\partial t^2} = \kappa\,\frac{\partial^2\chi}{\partial x^2}. \end{equation} We shall call $c_s^2 = \kappa$, so that we can write \begin{equation} \label{Eq:I:47:14} \frac{\partial^2\chi}{\partial x^2} = \frac{1}{c_s^2}\,\frac{\partial^2\chi}{\partial t^2}. \end{equation} This is the wave equation which describes the behavior of sound in matter.
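As a sanity check on Eq. (47.14), the wave equation can also be integrated numerically and a pulse watched as it moves. Everything below (the grid, the Gaussian pulse, the leapfrog time-stepping scheme) is a choice made for this sketch, not anything from the text:

```python
import math

cs, dx = 1.0, 0.01              # wave speed and grid spacing (illustrative)
dt = 0.5 * dx / cs              # time step; obeys the stability limit dt <= dx/cs
nx = 800                        # periodic grid covering 0 <= x < 8
x = [i * dx for i in range(nx)]

def pulse(u):
    return math.exp(-((u - 2.0) / 0.1) ** 2)   # Gaussian centered at x = 2

# Start a rightward-moving wave, chi(x, t) = pulse(x - cs*t):
chi_old = [pulse(xi + cs * dt) for xi in x]    # chi at t = -dt
chi = [pulse(xi) for xi in x]                  # chi at t = 0

r2 = (cs * dt / dx) ** 2
for _ in range(400):                           # advance to t = 400*dt = 2.0
    # Leapfrog update of d2chi/dt2 = cs^2 d2chi/dx2 on the periodic grid:
    chi_new = [2 * chi[i] - chi_old[i]
               + r2 * (chi[(i + 1) % nx] - 2 * chi[i] + chi[(i - 1) % nx])
               for i in range(nx)]
    chi_old, chi = chi, chi_new

# The peak started at x = 2 and should now sit near x = 2 + cs*(2.0) = 4:
peak_x = x[max(range(nx), key=lambda i: chi[i])]
print(peak_x)    # ≈ 4.0
```

The pulse indeed travels at the constant speed $c_s$ and keeps its shape, which is exactly what the next section deduces analytically.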
47–4 Solutions of the wave equation

We now can see whether this equation really does describe the essential properties of sound waves in matter. We want to deduce that a sound pulse, or disturbance, will move with a constant speed. We want to verify that two different pulses can move through each other—the principle of superposition. We also want to verify that sound can go either to the right or to the left. All these properties should be contained in this one equation. We have remarked that any plane-wave disturbance which moves with a constant velocity $v$ has the form $f(x - vt)$. Now we have to see whether $\chi(x,t) = f(x - vt)$ is a solution of the wave equation. When we calculate $\ddpl{\chi}{x}$, we get the derivative of the function, $\ddpl{\chi}{x} = f'(x - vt)$. Differentiating once more, we find \begin{equation} \label{Eq:I:47:15} \frac{\partial^2\chi}{\partial x^2} = f''(x - vt). \end{equation} The differentiation of this same function with respect to $t$ gives $-v$ times the derivative of the function, or $\ddpl{\chi}{t} = -vf'(x - vt)$, and the second time derivative is \begin{equation} \label{Eq:I:47:16} \frac{\partial^2\chi}{\partial t^2} = v^2f''(x - vt). \end{equation} It is evident that $f(x - vt)$ will satisfy the wave equation provided the wave velocity $v$ is equal to $c_s$. We find, therefore, from the laws of mechanics that any sound disturbance propagates with the velocity $c_s$, and in addition we find that \begin{equation*} c_s = \kappa^{1/2} = (dP/d\rho)_0^{1/2}, \end{equation*} and so we have related the wave velocity to a property of the medium. If we consider a wave travelling in the opposite direction, so that $\chi(x,t) = g(x + vt)$, it is easy to see that such a disturbance also satisfies the wave equation.
The only difference between such a wave and one travelling from left to right is in the sign of $v$, but whether we have $x + vt$ or $x - vt$ as the variable in the function does not affect the sign of $\partial^2\chi/\partial t^2$, since it involves only $v^2$. It follows that we have a solution for waves propagating in either direction with speed $c_s$. An extremely interesting question is that of superposition. Suppose one solution of the wave equation has been found, say $\chi_1$. This means that the second derivative of $\chi_1$ with respect to $x$ is equal to $1/c_s^2$ times the second derivative of $\chi_1$ with respect to $t$. Now any other solution $\chi_2$ has this same property. If we superpose these two solutions, we have \begin{equation} \label{Eq:I:47:17} \chi(x,t) = \chi_1(x,t) + \chi_2(x,t), \end{equation} and we wish to verify that $\chi(x,t)$ is also a wave, i.e., that $\chi$ satisfies the wave equation. We can easily prove this result, since we have \begin{equation} \label{Eq:I:47:18} \frac{\partial^2\chi}{\partial x^2} = \frac{\partial^2\chi_1}{\partial x^2} + \frac{\partial^2\chi_2}{\partial x^2} \end{equation} and, in addition, \begin{equation} \label{Eq:I:47:19} \frac{\partial^2\chi}{\partial t^2} = \frac{\partial^2\chi_1}{\partial t^2} + \frac{\partial^2\chi_2}{\partial t^2}. \end{equation} It follows that $\partial^2\chi/\partial x^2 = (1/c_s^2)\,\partial^2\chi/\partial t^2$, so we have verified the principle of superposition. The proof of the principle of superposition follows from the fact that the wave equation is linear in $\chi$. We can now expect that a plane light wave propagating in the $x$-direction, polarized so that the electric field is in the $y$-direction, will satisfy the wave equation \begin{equation} \label{Eq:I:47:20} \frac{\partial^2E_y}{\partial x^2} = \frac{1}{c^2}\,\frac{\partial^2E_y}{\partial t^2}, \end{equation} where $c$ is the speed of light. This wave equation is one of the consequences of Maxwell’s equations. 
The equations of electrodynamics will lead to the wave equation for light just as the equations of mechanics lead to the wave equation for sound. |
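These deduced properties of the wave equation (propagation at a fixed speed in either direction, and superposition) can also be checked numerically. Here is a small sketch in which the Gaussian pulse shapes, the unit wave speed, and the finite-difference step are arbitrary choices for illustration, not values from the text:

```python
import math

v = 1.0          # wave speed (arbitrary units)
h = 1e-3         # finite-difference step

def f(u):        # a right-moving pulse (shape chosen for illustration)
    return math.exp(-(u + 5.0) ** 2)

def g(u):        # a left-moving pulse, half as tall
    return 0.5 * math.exp(-(u - 5.0) ** 2)

def chi(x, t):
    # Superposition of the two solutions f(x - vt) and g(x + vt).
    return f(x - v * t) + g(x + v * t)

# The sum still satisfies chi_tt = v^2 chi_xx (checked by central differences).
def d2x(x, t):
    return (chi(x + h, t) - 2 * chi(x, t) + chi(x - h, t)) / h**2

def d2t(x, t):
    return (chi(x, t + h) - 2 * chi(x, t) + chi(x, t - h)) / h**2

for x, t in [(0.0, 0.0), (4.5, 9.8), (-3.0, 2.0)]:
    assert abs(d2t(x, t) - v**2 * d2x(x, t)) < 1e-4

# The pulses pass through each other: at t = 10 the tall pulse, launched
# from x = -5, is centered at x = +5 with its shape intact.
assert abs(chi(5.0, 10.0) - 1.0) < 1e-6
```

Because the equation is linear, the two pulses simply add wherever they overlap and emerge unchanged, which is exactly the principle of superposition verified above.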
|
1 | 47 | Sound. The wave equation | 5 | The speed of sound | Our deduction of the wave equation for sound has given us a formula which connects the wave speed with the rate of change of pressure with the density at the normal pressure: \begin{equation} \label{Eq:I:47:21} c_s^2 = \biggl(\ddt{P}{\rho}\biggr)_0. \end{equation} In evaluating this rate of change, it is essential to know how the temperature varies. In a sound wave, we would expect that in the region of compression the temperature would be raised, and that in the region of rarefaction the temperature would be lowered. Newton was the first to calculate the rate of change of pressure with density, and he supposed that the temperature remained unchanged. He argued that the heat was conducted from one region to the other so rapidly that the temperature could not rise or fall. This argument gives the isothermal speed of sound, and it is wrong. The correct deduction was given later by Laplace, who put forward the opposite idea—that the pressure and temperature change adiabatically in a sound wave. The heat flow from the compressed region to the rarefied region is negligible so long as the wavelength is long compared with the mean free path. Under this condition the slight amount of heat flow in a sound wave does not affect the speed, although it gives a small absorption of the sound energy. We can expect correctly that this absorption increases as the wavelength approaches the mean free path, but these wavelengths are smaller by factors of about a million than the wavelengths of audible sound. The actual variation of pressure with density in a sound wave is the one that allows no heat flow. This corresponds to the adiabatic variation, which we found to be $PV^\gamma = \text{const}$, where $V$ was the volume. 
Since the density $\rho$ varies inversely with $V$, the adiabatic connection between $P$ and $\rho$ is \begin{equation} \label{Eq:I:47:22} P = \text{const}\,\rho^\gamma, \end{equation} from which we get $dP/d\rho = \gamma P/\rho$. We then have for the speed of sound the relation \begin{equation} \label{Eq:I:47:23} c_s^2 = \frac{\gamma P}{\rho}. \end{equation} We can also write $c_s^2 = \gamma PV/\rho V$ and make use of the relation $PV = NkT$. Further, we see that $\rho V$ is the mass of gas, which can also be expressed as $Nm$, or as $\mu$ per mole, where $m$ is the mass of a molecule and $\mu$ is the molecular weight. In this way we find that \begin{equation} \label{Eq:I:47:24} c_s^2 = \frac{\gamma kT}{m} = \frac{\gamma RT}{\mu}, \end{equation} from which it is evident that the speed of sound depends only on the gas temperature and not on the pressure or the density. We also have observed that \begin{equation} \label{Eq:I:47:25} kT = \tfrac{1}{3}m\avg{v^2}, \end{equation} where $\avg{v^2}$ is the mean square of the speed of the molecules. It follows that $c_s^2 = (\gamma/3)\avg{v^2}$, or \begin{equation} \label{Eq:I:47:26} c_s = \biggl(\frac{\gamma}{3}\biggr)^{1/2}v_{\text{av}}. \end{equation} This equation states that the speed of sound is some number which is roughly $1/(3)^{1/2}$ times some average speed, $v_{\text{av}}$, of the molecules (the square root of the mean square velocity). In other words, the speed of sound is of the same order of magnitude as the speed of the molecules, and is actually somewhat less than this average speed. Of course we could expect such a result, because a disturbance like a change in pressure is, after all, propagated by the motion of the molecules. However, such an argument does not tell us the precise propagation speed; it could have turned out that sound was carried primarily by the fastest molecules, or by the slowest molecules. 
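Putting numbers into these formulas makes the comparison concrete. The following sketch uses standard handbook values for air, which are not given in the text; it shows that Laplace's adiabatic formula reproduces the measured speed of sound, that Newton's isothermal assumption falls about $15$ percent short, and that $c_s$ is about $0.68$ of the rms molecular speed:

```python
import math

gamma = 1.40        # ratio of specific heats for air (handbook value)
R = 8.314           # gas constant, J/(mole K)
mu = 0.029          # molecular weight of air, kg/mole (handbook value)
T = 293.0           # room temperature, K

# Laplace's adiabatic result, Eq. (47.24):
c_adiabatic = math.sqrt(gamma * R * T / mu)

# Newton's isothermal assumption would drop the factor gamma:
c_isothermal = math.sqrt(R * T / mu)

print(round(c_adiabatic))    # 343 m/s, the measured speed of sound in air
print(round(c_isothermal))   # 290 m/s, too low by about 15 percent

# Eq. (47.26): c_s / v_av = sqrt(gamma/3), about 0.68 for air.
print(round(math.sqrt(gamma / 3), 2))
```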
It is reasonable and satisfying that the speed of sound is roughly $\tfrac{2}{3}$ of the average molecular speed $v_{\text{av}}$, since $(\gamma/3)^{1/2}\approx0.68$ for air. |
|
1 | 48 | Beats | 1 | Adding two waves | Some time ago we discussed in considerable detail the properties of light waves and their interference—that is, the effects of the superposition of two waves from different sources. In all these analyses we assumed that the frequencies of the sources were all the same. In this chapter we shall discuss some of the phenomena which result from the interference of two sources which have different frequencies. It is easy to guess what is going to happen. Proceeding in the same way as we have done previously, suppose we have two equal oscillating sources of the same frequency whose phases are so adjusted, say, that the signals arrive in phase at some point $P$. At that point, if it is light, the light is very strong; if it is sound, it is very loud; or if it is electrons, many of them arrive. On the other hand, if the arriving signals were $180^\circ$ out of phase, we would get no signal at $P$, because the net amplitude there is then a minimum. Now suppose that someone twists the “phase knob” of one of the sources and changes the phase at $P$ back and forth, say, first making it $0^\circ$ and then $180^\circ$, and so on. Of course, we would then find variations in the net signal strength. Now we also see that if the phase of one source is slowly changing relative to that of the other in a gradual, uniform manner, starting at zero, going up to ten, twenty, thirty, forty degrees, and so on, then what we would measure at $P$ would be a series of strong and weak “pulsations,” because when the phase shifts through $360^\circ$ the amplitude returns to a maximum. Of course, to say that one source is shifting its phase relative to another at a uniform rate is the same as saying that the number of oscillations per second is slightly different for the two. So we know the answer: if we have two sources at slightly different frequencies we should find, as a net result, an oscillation with a slowly pulsating intensity. 
That is all there really is to the subject! It is very easy to formulate this result mathematically also. Suppose, for example, that we have two waves, and that we do not worry for the moment about all the spatial relations, but simply analyze what arrives at $P$. From one source, let us say, we would have $\cos\omega_1t$, and from the other source, $\cos\omega_2t$, where the two $\omega$’s are not exactly the same. Of course the amplitudes may not be the same, either, but we can solve the general problem later; let us first take the case where the amplitudes are equal. Then the total amplitude at $P$ is the sum of these two cosines. If we plot the amplitudes of the waves against the time, as in Fig. 48–1, we see that where the crests coincide we get a strong wave, and where a trough and crest coincide we get practically zero, and then when the crests coincide again we get a strong wave again. Mathematically, we need only to add two cosines and rearrange the result somehow. There exist a number of useful relations among cosines which are not difficult to derive. Of course we know that \begin{equation} \label{Eq:I:48:1} e^{i(a + b)} = e^{ia}e^{ib}, \end{equation} and that $e^{ia}$ has a real part, $\cos a$, and an imaginary part, $\sin a$. If we take the real part of $e^{i(a + b)}$, we get $\cos\,(a + b)$. If we multiply out: \begin{equation*} e^{ia}e^{ib} = (\cos a + i\sin a)(\cos b + i\sin b), \end{equation*} we get $\cos a\cos b - \sin a\sin b$, plus some imaginary parts. But we now need only the real part, so we have \begin{equation} \label{Eq:I:48:2} \cos\,(a + b) = \cos a\cos b - \sin a\sin b. \end{equation} Now if we change the sign of $b$, since the cosine does not change sign while the sine does, the same equation, for negative $b$, is \begin{equation} \label{Eq:I:48:3} \cos\,(a - b) = \cos a\cos b + \sin a\sin b. 
\end{equation} If we add these two equations together, we lose the sines and we learn that the product of two cosines is half the cosine of the sum, plus half the cosine of the difference: \begin{equation} \label{Eq:I:48:4} \cos a\cos b = \tfrac{1}{2}\cos\,(a + b) + \tfrac{1}{2}\cos\,(a - b). \end{equation} Now we can also reverse the formula and find a formula for $\cos\alpha + \cos\beta$ if we simply let $\alpha = a + b$ and $\beta = a - b$. That is, $a = \tfrac{1}{2}(\alpha + \beta)$ and $b = \tfrac{1}{2}(\alpha - \beta)$, so that \begin{equation} \label{Eq:I:48:5} \cos\alpha + \cos\beta = 2\cos\tfrac{1}{2}(\alpha + \beta) \cos\tfrac{1}{2}(\alpha - \beta). \end{equation} Now we can analyze our problem. The sum of $\cos\omega_1t$ and $\cos\omega_2t$ is
\begin{align} \cos\omega_1t &+ \cos\omega_2t =\notag\\[.5ex] \label{Eq:I:48:6} &~2\cos\tfrac{1}{2}(\omega_1 + \omega_2)t \cos\tfrac{1}{2}(\omega_1 - \omega_2)t. \end{align} Now let us suppose that the two frequencies are nearly the same, so that $\tfrac{1}{2}(\omega_1 + \omega_2)$ is the average frequency, and is more or less the same as either. But $\omega_1 - \omega_2$ is much smaller than $\omega_1$ or $\omega_2$ because, as we suppose, $\omega_1$ and $\omega_2$ are nearly equal. That means that we can represent the solution by saying that there is a high-frequency cosine wave more or less like the ones we started with, but that its “size” is slowly changing—its “size” is pulsating with a frequency which appears to be $\tfrac{1}{2}(\omega_1 - \omega_2)$. But is this the frequency at which the beats are heard? Although (48.6) says that the amplitude goes as $\cos\tfrac{1}{2}(\omega_1 - \omega_2)t$, what it is really telling us is that the high-frequency oscillations are contained between two opposed cosine curves (shown dotted in Fig. 48–1). On this basis one could say that the amplitude varies at the frequency $\tfrac{1}{2}(\omega_1 - \omega_2)$, but if we are talking about the intensity of the wave we must think of it as having twice this frequency. That is, the modulation of the amplitude, in the sense of the strength of its intensity, is at frequency $\omega_1 - \omega_2$, although the formula tells us that we multiply by a cosine wave at half that frequency. The technical basis for the difference is that the high-frequency wave has a little different phase relationship in the second half-cycle. Ignoring this small complication, we may conclude that if we add two waves of frequency $\omega_1$ and $\omega_2$, we will get a net resulting wave of average frequency $\tfrac{1}{2}(\omega_1 + \omega_2)$ which oscillates in strength with a frequency $\omega_1 - \omega_2$.
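The identity (48.6), and the fact that the envelope's nulls repeat at the difference frequency rather than at half of it, are easy to confirm numerically. In this sketch the two nearby frequencies are arbitrary choices:

```python
import math

w1, w2 = 10.0, 9.0   # two nearby frequencies (arbitrary choices)

def signal(t):
    # The sum of the two pure tones.
    return math.cos(w1 * t) + math.cos(w2 * t)

def product_form(t):
    # Eq. (48.6): a fast cosine at the average frequency times a
    # slow envelope at half the difference frequency.
    return 2 * math.cos(0.5 * (w1 + w2) * t) * math.cos(0.5 * (w1 - w2) * t)

# The two forms agree everywhere:
for i in range(1000):
    t = i * 0.01
    assert abs(signal(t) - product_form(t)) < 1e-12

# The envelope cos((w1 - w2)t/2) vanishes twice per period, so the nulls
# of the sound (the beats) repeat at the full difference frequency w1 - w2.
assert abs(signal(math.pi / (w1 - w2))) < 1e-9   # first null of the envelope
```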
If the two amplitudes are different, we can do it all over again by multiplying the cosines by different amplitudes $A_1$ and $A_2$, and do a lot of mathematics, rearranging, and so on, using equations like (48.2)–(48.5). However, there are other, easier ways of doing the same analysis. For example, we know that it is much easier to work with exponentials than with sines and cosines and that we can represent $A_1\cos\omega_1t$ as the real part of $A_1e^{i\omega_1t}$. The other wave would similarly be the real part of $A_2e^{i\omega_2t}$. If we add the two, we get $A_1e^{i\omega_1t} + A_2e^{i\omega_2t}$. If we then factor out the average frequency, we have \begin{equation} \label{Eq:I:48:7} A_1e^{i\omega_1t} + A_2e^{i\omega_2t} = e^{i(\omega_1 + \omega _2)t/2}[ A_1e^{i(\omega_1 - \omega _2)t/2} + A_2e^{-i(\omega_1 - \omega_2)t/2}]. \end{equation}
Again we have the high-frequency wave with a modulation at the lower frequency. |
|
1 | 48 | Beats | 2 | Beat notes and modulation | If we are now asked for the intensity of the wave of Eq. (48.7), we can either take the absolute square of the left side, or of the right side. Let us take the left side. The intensity then is \begin{equation} \label{Eq:I:48:8} I = A_1^2 + A_2^2 + 2A_1A_2\cos\,(\omega_1 - \omega_2)t. \end{equation} We see that the intensity swells and falls at a frequency $\omega_1 - \omega_2$, varying between the limits $(A_1 + A_2)^2$ and $(A_1 - A_2)^2$. If $A_1 \neq A_2$, the minimum intensity is not zero. One more way to represent this idea is by means of a drawing, like Fig. 48–2. We draw a vector of length $A_1$, rotating at a frequency $\omega_1$, to represent one of the waves in the complex plane. We draw another vector of length $A_2$, going around at a frequency $\omega_2$, to represent the second wave. If the two frequencies are exactly equal, their resultant is of fixed length as it keeps revolving, and we get a definite, fixed intensity from the two. But if the frequencies are slightly different, the two complex vectors go around at different speeds. Figure 48–3 shows what the situation looks like relative to the vector $A_1e^{i\omega_1t}$. We see that $A_2$ is turning slowly away from $A_1$, and so the amplitude that we get by adding the two is first strong, and then, as it opens out, when it gets to the $180^\circ$ relative position the resultant gets particularly weak, and so on. As the vectors go around, the amplitude of the sum vector gets bigger and smaller, and the intensity thus pulsates. It is a relatively simple idea, and there are many different ways of representing the same thing. The effect is very easy to observe experimentally. In the case of acoustics, we may arrange two loudspeakers driven by two separate oscillators, one for each loudspeaker, so that they each make a tone. We thus receive one note from one source and a different note from the other source. 
If we make the frequencies exactly the same, the resulting effect will have a definite strength at a given space location. If we then de-tune them a little bit, we hear some variations in the intensity. The farther they are de-tuned, the more rapid are the variations of sound. The ear has some trouble following variations more rapid than ten or so per second. We may also see the effect on an oscilloscope which simply displays the sum of the currents to the two speakers. If the frequency of pulsing is relatively low, we simply see a sinusoidal wave train whose amplitude pulsates, but as we make the pulsations more rapid we see the kind of wave shown in Fig. 48–1. As we go to greater frequency differences, the “bumps” move closer together. Also, if the amplitudes are not equal and we make one signal stronger than the other, then we get a wave whose amplitude does not ever become zero, just as we expect. Everything works the way it should, both acoustically and electrically. The opposite phenomenon occurs too! In radio transmission using so-called amplitude modulation (am), the sound is broadcast by the radio station as follows: the radio transmitter has an ac electric oscillation which is at a very high frequency, for example $800$ kilocycles per second, in the broadcast band. If this carrier signal is turned on, the radio station emits a wave which is of uniform amplitude at $800{,}000$ oscillations a second. The way the “information” is transmitted, the useless kind of information about what kind of car to buy, is that when somebody talks into a microphone the amplitude of the carrier signal is changed in step with the vibrations of sound entering the microphone. If we take as the simplest mathematical case the situation where a soprano is singing a perfect note, with perfect sinusoidal oscillations of her vocal cords, then we get a signal whose strength is alternating as shown in Fig. 48–4. 
The audiofrequency alternation is then recovered in the receiver; we get rid of the carrier wave and just look at the envelope which represents the oscillations of the vocal cords, or the sound of the singer. The loudspeaker then makes corresponding vibrations at the same frequency in the air, and the listener is then essentially unable to tell the difference, so they say. Because of a number of distortions and other subtle effects, it is, in fact, possible to tell whether we are listening to a radio or to a real soprano; otherwise the idea is as indicated above. |
|
1 | 48 | Beats | 3 | Side bands | Mathematically, the modulated wave described above would be expressed as \begin{equation} \label{Eq:I:48:9} S = (1 + b\cos\omega_mt)\cos\omega_ct, \end{equation} where $\omega_c$ represents the frequency of the carrier and $\omega_m$ is the frequency of the audio tone. Again we use all those theorems about the cosines, or we can use $e^{i\theta}$; it makes no difference—it is easier with $e^{i\theta}$, but it is the same thing. We then get
\begin{align} \label{Eq:I:48:10} S = \cos\omega_ct &+ \tfrac{1}{2}b\cos\,(\omega_c + \omega_m)t\notag\\[.5ex] &+ \tfrac{1}{2}b\cos\,(\omega_c - \omega_m)t. \end{align} So, from another point of view, we can say that the output wave of the system consists of three waves added in superposition: first, the regular wave at the frequency $\omega_c$, that is, at the carrier frequency, and then two new waves at two new frequencies. One is the carrier frequency plus the modulation frequency, and the other is the carrier frequency minus the modulation frequency. If, therefore, we make some kind of plot of the intensity being generated by the generator as a function of frequency, we would find a lot of intensity at the frequency of the carrier, naturally, but when a singer started to sing, we would suddenly also find intensity proportional to the strength of the singer, $b^2$, at frequency $\omega_c + \omega_m$ and $\omega_c - \omega_m$, as shown in Fig. 48–5. These are called side bands; when there is a modulated signal from the transmitter, there are side bands. If there is more than one note at the same time, say $\omega_m$ and $\omega_{m'}$, there are two instruments playing; or if there is any other complicated cosine wave, then, of course, we can see from the mathematics that we get some more waves that correspond to the frequencies $\omega_c \pm \omega_{m'}$. Therefore, when there is a complicated modulation that can be represented as the sum of many cosines,1 we find that the actual transmitter is transmitting over a range of frequencies, namely the carrier frequency plus or minus the maximum frequency that the modulation signal contains. 
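That the modulated carrier (48.9) really is the sum of the three waves in (48.10) can be confirmed numerically. In this sketch the carrier frequency, the tone frequency, and the modulation depth are all arbitrary choices:

```python
import math

wc = 800.0    # "carrier" frequency (arbitrary units; think kilocycles)
wm = 10.0     # modulation tone
b = 0.4       # modulation depth (arbitrary)

def modulated(t):
    # Eq. (48.9): the amplitude-modulated carrier.
    return (1 + b * math.cos(wm * t)) * math.cos(wc * t)

def three_waves(t):
    # Eq. (48.10): the carrier plus upper and lower side bands.
    return (math.cos(wc * t)
            + 0.5 * b * math.cos((wc + wm) * t)
            + 0.5 * b * math.cos((wc - wm) * t))

# The two expressions are identical for all t:
for i in range(2000):
    t = i * 0.0003
    assert abs(modulated(t) - three_waves(t)) < 1e-9
```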
Although at first we might believe that a radio transmitter transmits only at the nominal frequency of the carrier, since there are big, superstable crystal oscillators in there, and everything is adjusted to be at precisely $800$ kilocycles, the moment someone announces that they are at $800$ kilocycles, he modulates the $800$ kilocycles, and so they are no longer precisely at $800$ kilocycles! Suppose that the amplifiers are so built that they are able to transmit over a good range of the ear’s sensitivity (the ear can hear up to $20{,}000$ cycles per second, but usually radio transmitters and receivers do not work beyond $10{,}000$, so we do not hear the highest parts), then, when the man speaks, his voice may contain frequencies ranging up, say, to $10{,}000$ cycles, so the transmitter is transmitting frequencies which may range from $790$ to $810$ kilocycles per second. Now if there were another station at $795$ kc/sec, there would be a lot of confusion. Also, if we made our receiver so sensitive that it picked up only $800$, and did not pick up the $10$ kilocycles on either side, we would not hear what the man was saying, because the information would be on these other frequencies! Therefore it is absolutely essential to keep the stations a certain distance apart, so that their side bands do not overlap and, also, the receiver must not be so selective that it does not permit reception of the side bands as well as of the main nominal frequency. In the case of sound, this problem does not really cause much trouble. We can hear over a $\pm20$ kc/sec range, and we have usually from $500$ to $1500$ kc/sec in the broadcast band, so there is plenty of room for lots of stations. The television problem is more difficult. As the electron beam goes across the face of the picture tube, there are various little spots of light and dark. 
That “light” and “dark” is the “signal.” Now ordinarily the beam scans over the whole picture, $500$ lines, approximately, in a thirtieth of a second. Let us consider that the resolution of the picture vertically and horizontally is more or less the same, so that there are the same number of spots per inch along a scan line. We want to be able to distinguish dark from light, dark from light, dark from light, over, say, $500$ lines. In order to be able to do this with cosine waves, the shortest wavelength needed thus corresponds to a wavelength, from maximum to maximum, of one $250$th of the screen size. So we have $250\times500\times30$ pieces of information per second. The highest frequency that we are going to carry, therefore, is close to $4$ megacycles per second. Actually, to keep the television stations apart, we have to use a little bit more than this, about $6$ mc/sec; part of it is used to carry the sound signal, and other information. So, television channels are $6$ megacycles per second wide. It certainly would not be possible to transmit tv on an $800$ kc/sec carrier, since we cannot modulate at a higher frequency than the carrier. At any rate, the television band starts at $54$ megacycles. The first transmission channel, which is channel $2$ (!), has a frequency range from $54$ to $60$ mc/sec, which is $6$ mc/sec wide. “But,” one might say, “we have just proved that there were side bands on both sides, and therefore it should be twice that wide.” It turns out that the radio engineers are rather clever. If we analyze the modulation signal using not just cosine terms, but cosine and sine terms, to allow for phase differences, we then see that there is a definite, invariant relationship between the side band on the high-frequency side and the side band on the low-frequency side. What we mean is that there is no new information on that other side band. 
So what is done is to suppress one side band, and the receiver is wired inside such that the information which is missing is reconstituted by looking at the single side band and the carrier. Single side-band transmission is a clever scheme for decreasing the band widths needed to transmit information. |
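The television bandwidth counting above is simple arithmetic; spelled out with the numbers from the text:

```python
# Rough television-bandwidth estimate from the text's counting argument.
cycles_per_line = 250        # shortest wavelength = 1/250 of the screen size
lines_per_picture = 500      # scan lines per picture
pictures_per_second = 30     # whole pictures per second

highest_frequency = cycles_per_line * lines_per_picture * pictures_per_second
print(highest_frequency)     # 3750000, i.e. close to 4 megacycles per second
```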
|
1 | 48 | Beats | 4 | Localized wave trains | The next subject we shall discuss is the interference of waves in both space and time. Suppose that we have two waves travelling in space. We know, of course, that we can represent a wave travelling in space by $e^{i(\omega t - kx)}$. This might be, for example, the displacement in a sound wave. This is a solution of the wave equation provided that $\omega^2 = k^2c^2$, where $c$ is the speed of propagation of the wave. In this case we can write it as $e^{-ik(x - ct)}$, which is of the general form $f(x - ct)$. Therefore this must be a wave which is travelling at this velocity, $\omega/k$, and that is $c$ and everything is all right. Now we want to add two such waves together. Suppose we have a wave that is travelling with one frequency, and another wave travelling with another frequency. We leave to the reader to consider the case where the amplitudes are different; it makes no real difference. Thus we want to add $e^{i(\omega_1t - k_1x)} + e^{i(\omega_2t - k_2x)}$. We can add these by the same kind of mathematics we used when we added signal waves. Of course, if $c$ is the same for both, this is easy, since it is the same as what we did before: \begin{equation} \label{Eq:I:48:11} e^{i\omega_1(t - x/c)} + e^{i\omega_2(t - x/c)} = e^{i\omega_1t'} + e^{i\omega_2t'}, \end{equation} except that $t' = t - x/c$ is the variable instead of $t$. So we get the same kind of modulations, naturally, but we see, of course, that those modulations are moving along with the wave. In other words, if we added two waves, but these waves were not just oscillating, but also moving in space, then the resultant wave would move along also, at the same speed. Now we would like to generalize this to the case of waves in which the relationship between the frequency and the wave number $k$ is not so simple. Example: material having an index of refraction. 
We have already studied the theory of the index of refraction in Chapter 31, where we found that we could write $k = n\omega/c$, where $n$ is the index of refraction. As an interesting example, for x-rays we found that the index $n$ is \begin{equation} \label{Eq:I:48:12} n = 1 - \frac{Nq_e^2}{2\epsO m\omega^2}. \end{equation} We actually derived a more complicated formula in Chapter 31, but this one is as good as any, as an example. Incidentally, we know that even when $\omega$ and $k$ are not linearly proportional, the ratio $\omega/k$ is certainly the speed of propagation for the particular frequency and wave number. We call this ratio the phase velocity; it is the speed at which the phase, or the nodes of a single wave, would move along: \begin{equation} \label{Eq:I:48:13} v_p = \frac{\omega}{k}. \end{equation} This phase velocity, for the case of x-rays in glass, is greater than the speed of light in vacuum (since $n$ in 48.12 is less than $1$), and that is a bit bothersome, because we do not think we can send signals faster than the speed of light! What we are going to discuss now is the interference of two waves in which $\omega$ and $k$ have a definite formula relating them. The above formula for $n$ says that $k$ is given as a definite function of $\omega$. To be specific, in this particular problem, the formula for $k$ in terms of $\omega$ is \begin{equation} \label{Eq:I:48:14} k = \frac{\omega}{c} - \frac{a}{\omega c}, \end{equation} where $a = Nq_e^2/2\epsO m$, a constant. At any rate, for each frequency there is a definite wave number, and we want to add two such waves together. Let us do it just as we did in Eq. (48.7): \begin{align} \label{Eq:I:48:15} e^{i(\omega_1t - k_1x)} &+ e^{i(\omega_2t - k_2x)} = e^{i[(\omega_1 + \omega_2)t - (k_1 + k_2)x]/2}\\[1ex] &\times\bigl[ e^{i[(\omega_1 - \omega_2)t - (k_1 - k_2)x]/2} + e^{-i[(\omega_1 - \omega_2)t - (k_1 - k_2)x]/2}\bigr].\notag \end{align}
So we have a modulated wave again, a wave which travels with the mean frequency and the mean wave number, but whose strength is varying with a form which depends on the difference frequency and the difference wave number. Now let us take the case that the difference between the two waves is relatively small. Let us suppose that we are adding two waves whose frequencies are nearly equal; then $(\omega_1 + \omega_2)/2$ is practically the same as either one of the $\omega$’s, and similarly for $(k_1 + k_2)/2$. Thus the speed of the wave, the fast oscillations, the nodes, is still essentially $\omega/k$. But look, the speed of propagation of the modulation is not the same! How much do we have to change $x$ to account for a certain amount of $t$? The speed of this modulation wave is the ratio \begin{equation} \label{Eq:I:48:16} v_M = \frac{\omega_1 - \omega_2}{k_1 - k_2}. \end{equation} The speed of modulation is sometimes called the group velocity. If we take the case that the difference in frequency is relatively small, and the difference in wave number is then also relatively small, then this expression approaches, in the limit, \begin{equation} \label{Eq:I:48:17} v_g = \ddt{\omega}{k}. \end{equation} In other words, for the slowest modulation, the slowest beats, there is a definite speed at which they travel which is not the same as the phase speed of the waves—what a mysterious thing!
The group velocity is the derivative of $\omega$ with respect to $k$, and the phase velocity is $\omega/k$. Let us see if we can understand why. Consider two waves, again of slightly different wavelength, as in Fig. 48–1. They are out of phase, in phase, out of phase, and so on. Now these waves represent, really, the waves in space travelling with slightly different frequencies also. Now because the phase velocity, the velocity of the nodes of these two waves, is not precisely the same, something new happens. Suppose we ride along with one of the waves and look at the other one; if they both went at the same speed, then the other wave would stay right where it was relative to us, as we ride along on this crest. We ride on that crest and right opposite us we see a crest; if the two velocities are equal the crests stay on top of each other. But it is not so that the two velocities are really equal. There is only a small difference in frequency and therefore only a small difference in velocity, but because of that difference in velocity, as we ride along the other wave moves slowly forward, say, or behind, relative to our wave. So as time goes on, what happens to the node? If we move one wave train just a shade forward, the node moves forward (or backward) a considerable distance. That is, the sum of these two waves has an envelope, and as the waves travel along, the envelope rides on them at a different speed. The group velocity is the speed at which modulated signals would be transmitted. If we made a signal, i.e., some kind of change in the wave that one could recognize when he listened to it, a kind of modulation, then that modulation would travel at the group velocity, provided that the modulations were relatively slow. (When they are fast, it is much more difficult to analyze.) 
Now we may show (at long last), that the speed of propagation of x-rays in a block of carbon is not greater than the speed of light, although the phase velocity is greater than the speed of light. In order to do that, we must find $d\omega/dk$, which we get by differentiating (48.14): $dk/d\omega = 1/c + a/\omega^2c$. The group velocity, therefore, is the reciprocal of this, namely, \begin{equation} \label{Eq:I:48:18} v_g = \frac{c}{1 + a/\omega^2}, \end{equation} which is smaller than $c$! So although the phases can travel faster than the speed of light, the modulation signals travel slower, and that is the resolution of the apparent paradox! Of course, if we have the simple case that $\omega= kc$, then $d\omega/dk$ is also $c$. So when all the phases have the same velocity, naturally the group has the same velocity. |
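For the dispersion relation (48.14) this can all be checked numerically. A small sketch, with $c = 1$ and an arbitrarily chosen value for the constant $a$:

```python
import math

c = 1.0        # speed of light in chosen units
a = 0.2        # a = N q_e^2 / (2 eps0 m); an arbitrary positive value here

def k_of_w(w):
    # Eq. (48.14), the dispersion relation for x-rays.
    return w / c - a / (w * c)

w = 2.0                     # an arbitrary frequency
k = k_of_w(w)

v_phase = w / k
assert v_phase > c          # phase velocity exceeds c, since n < 1

# Group velocity: invert dk/dw numerically and compare with Eq. (48.18).
h = 1e-6
dk_dw = (k_of_w(w + h) - k_of_w(w - h)) / (2 * h)
v_group = 1.0 / dk_dw
assert abs(v_group - c / (1 + a / w**2)) < 1e-8
assert v_group < c          # modulation signals travel slower than light
```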
|
1 | 48 | Beats | 5 | Probability amplitudes for particles | Let us now consider one more example of the phase velocity which is extremely interesting. It has to do with quantum mechanics. We know that the amplitude to find a particle at a place can, in some circumstances, vary in space and time, let us say in one dimension, in this manner: \begin{equation} \label{Eq:I:48:19} \psi = Ae^{i(\omega t -kx)}, \end{equation} where $\omega$ is the frequency, which is related to the classical idea of the energy through $E = \hbar\omega$, and $k$ is the wave number, which is related to the momentum through $p = \hbar k$. We would say the particle had a definite momentum $p$ if the wave number were exactly $k$, that is, a perfect wave which goes on with the same amplitude everywhere. Equation (48.19) gives the amplitude, and if we take the absolute square, we get the relative probability for finding the particle as a function of position and time. This is a constant, which means that the probability is the same to find a particle anywhere. Now suppose, instead, that we have a situation where we know that the particle is more likely to be at one place than at another. We would represent such a situation by a wave which has a maximum and dies out on either side (Fig. 48–6). (It is not quite the same as a wave like (48.1) which has a series of maxima, but it is possible, by adding several waves of nearly the same $\omega$ and $k$ together, to get rid of all but one maximum.) Now in those circumstances, since the square of (48.19) represents the chance of finding a particle somewhere, we know that at a given instant the particle is most likely to be near the center of the “lump,” where the amplitude of the wave is maximum. If now we wait a few moments, the waves will move, and after some time the “lump” will be somewhere else. 
If we knew that the particle originally was situated somewhere, classically, we would expect that it would later be elsewhere as a matter of fact, because it has a speed, after all, and a momentum. The quantum theory, then, will go into the correct classical theory for the relationship of momentum, energy, and velocity only if the group velocity, the velocity of the modulation, is equal to the velocity that we would obtain classically for a particle of the same momentum. It is now necessary to demonstrate that this is, or is not, the case. According to the classical theory, the energy is related to the velocity through an equation like \begin{equation} \label{Eq:I:48:20} E = \frac{mc^2}{\sqrt{1 - v^2/c^2}}. \end{equation} Similarly, the momentum is \begin{equation} \label{Eq:I:48:21} p = \frac{mv}{\sqrt{1 - v^2/c^2}}. \end{equation} That is the classical theory, and as a consequence of the classical theory, by eliminating $v$, we can show that \begin{equation*} E^2 - p^2c^2 = m^2c^4. \end{equation*} That is the four-dimensional grand result that we have talked and talked about, that $p_\mu p_\mu = m^2$; that is the relation between energy and momentum in the classical theory. Now that means, since these $E$’s and $p$’s are going to become $\omega$’s and $k$’s, by substitution of $E = \hbar\omega$ and $p = \hbar k$, that for quantum mechanics it is necessary that \begin{equation} \label{Eq:I:48:22} \frac{\hbar^2\omega^2}{c^2} - \hbar^2k^2 = m^2c^2. \end{equation} This, then, is the relationship between the frequency and the wave number of a quantum-mechanical amplitude wave representing a particle of mass $m$. From this equation we can deduce that $\omega$ is \begin{equation*} \omega = c\sqrt{k^2 + m^2c^2/\hbar^2}. \end{equation*} The phase velocity, $\omega/k$, is here again faster than the speed of light! Now let us look at the group velocity. The group velocity should be $d\omega/dk$, the speed at which the modulations move. 
We have to differentiate a square root, which is not very difficult. The derivative is \begin{equation*} \frac{d\omega}{dk} = \frac{kc}{\sqrt{k^2 + m^2c^2/\hbar^2}}. \end{equation*} Now the square root is, after all, $\omega/c$, so we could write this as $d\omega/dk = c^2k/\omega$. Further, $k/\omega$ is $p/E$, so \begin{equation*} v_g = \frac{c^2p}{E}. \end{equation*} But from (48.20) and (48.21), $c^2p/E = v$, the velocity of the particle, according to classical mechanics. So we see that whereas the fundamental quantum-mechanical relationship $E = \hbar\omega$ and $p = \hbar k$, for the identification of $\omega$ and $k$ with the classical $E$ and $p$, only produces the equation $\omega^2 - k^2c^2 = m^2c^4/\hbar^2$, now we also understand the relationships (48.20) and (48.21) which connected $E$ and $p$ to the velocity. Of course the group velocity must be the velocity of the particle if the interpretation is going to make any sense. If we think the particle is over here at one time, and then ten minutes later we think it is over there, as the quantum mechanics said, the distance traversed by the “lump,” divided by the time interval, must be, classically, the velocity of the particle.
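All of these relations can be checked at once with a short numerical sketch (Python; the natural units $\hbar = m = c = 1$ and the sample wave number are assumptions made only for the illustration): the dispersion (48.22) is solved for $\omega$, the invariant $E^2 - p^2c^2 = m^2c^4$ comes out automatically, and a finite-difference $d\omega/dk$ reproduces the classical velocity $c^2p/E$.

```python
import math

hbar, m, c = 1.0, 1.0, 1.0   # natural units (an assumption for illustration)

def omega(k):
    # dispersion from Eq. (48.22): omega = c*sqrt(k^2 + m^2 c^2 / hbar^2)
    return c * math.sqrt(k**2 + (m * c / hbar)**2)

k = 0.75                     # sample wave number, so p = hbar*k
E, p = hbar * omega(k), hbar * k
print(E**2 - p**2 * c**2)    # 1.0: the invariant m^2 c^4 in these units

h = 1e-6
v_g = (omega(k + h) - omega(k - h)) / (2 * h)  # numerical group velocity
print(omega(k) / k)          # phase velocity: greater than c
print(v_g, c**2 * p / E)     # group velocity = classical particle velocity
```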
48–6 Waves in three dimensions

We shall now bring our discussion of waves to a close with a few general remarks about the wave equation. These remarks are intended to give some view of the future—not that we can understand everything exactly just now, but rather to see what things are going to look like when we study waves a little more. First of all, the wave equation for sound in one dimension was \begin{equation*} \frac{\partial^2\chi}{\partial x^2} = \frac{1}{c^2}\,\frac{\partial^2\chi}{\partial t^2}, \end{equation*} where $c$ is the speed of whatever the wave is—in the case of sound, it is the sound speed; in the case of light, it is the speed of light. We showed that for a sound wave the displacements would propagate themselves at a certain speed. But the excess pressure also propagates at a certain speed, and so does the excess density. So we should expect that the pressure would satisfy the same equation, as indeed it does. We shall leave it to the reader to prove that it does. Hint: $\rho_e$ is proportional to the rate of change of $\chi$ with respect to $x$. Therefore if we differentiate the wave equation with respect to $x$, we will immediately discover that $\partial\chi/\partial x$ satisfies the same equation. That is to say, $\rho_e$ satisfies the same equation. But $P_e$ is proportional to $\rho_e$, and therefore $P_e$ does too. So the pressure, the displacements, everything, satisfy the same wave equation. Usually one sees the wave equation for sound written in terms of pressure instead of in terms of displacement, because the pressure is a scalar and has no direction. But the displacement is a vector and has direction, and it is thus easier to analyze the pressure. The next matter we discuss has to do with the wave equation in three dimensions.
We know that the sound wave solution in one dimension is $e^{i(\omega t - kx)}$, with $\omega = kc_s$, but we also know that in three dimensions a wave would be represented by $e^{i(\omega t - k_xx - k_yy - k_zz)}$, where, in this case, $\omega^2 = k^2c_s^2$, which is, of course, $(k_x^2 + k_y^2 + k_z^2)c_s^2$. Now what we want to do is to guess what the correct wave equation in three dimensions is. Naturally, for the case of sound this can be deduced by going through the same dynamic argument in three dimensions that we made in one dimension. But we shall not do that; instead we just write down what comes out: the equation for the pressure (or displacement, or anything) is \begin{equation} \label{Eq:I:48:23} \frac{\partial^2P_e}{\partial x^2} + \frac{\partial^2P_e}{\partial y^2} + \frac{\partial^2P_e}{\partial z^2} = \frac{1}{c_s^2}\, \frac{\partial^2P_e}{\partial t^2}. \end{equation} That this is true can be verified by substituting in $e^{i(\omega t - \mathbf{k}\cdot\mathbf{r})}$. Clearly, every time we differentiate with respect to $x$, we multiply by $-ik_x$. If we differentiate twice, it is equivalent to multiplying by $-k_x^2$, so the first term would become $-k_x^2P_e$, for that wave. Similarly, the second term becomes $-k_y^2P_e$, and the third term becomes $-k_z^2P_e$. On the right, we get $-(\omega^2/c_s^2)P_e$. Then, if we take away the $P_e$’s and change the sign, we see that the relationship between $k$ and $\omega$ is the one that we want. Working backwards again, we cannot resist writing down the grand equation which corresponds to the dispersion equation (48.22) for quantum-mechanical waves.
If $\phi$ represents the amplitude for finding a particle at position $x,y,z$, at the time $t$, then the great equation of quantum mechanics for free particles is this: \begin{equation} \label{Eq:I:48:24} \frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2} + \frac{\partial^2\phi}{\partial z^2} - \frac{1}{c^2}\, \frac{\partial^2\phi}{\partial t^2} = \frac{m^2c^2}{\hbar^2}\,\phi. \end{equation} First of all, the relativity character of this expression is suggested by the appearance of $x$, $y$, $z$ and $t$ in the nice combination relativity usually involves. Second, it is a wave equation which, if we try a plane wave, would produce as a consequence that $-k^2 + \omega^2/c^2 = m^2c^2/\hbar^2$, which is the right relationship for quantum mechanics. There is still another great thing contained in the wave equation: the fact that any superposition of waves is also a solution. So this equation contains all of the quantum mechanics and the relativity that we have been discussing so far, at least so long as it deals with a single particle in empty space with no external potentials or forces on it!
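As a check that Eq. (48.24) really is solved by a plane wave obeying the dispersion (48.22), the sketch below (Python; the natural units $\hbar = m = c = 1$ and the particular wave numbers are arbitrary choices for the illustration) forms $\phi = e^{i(\omega t - \mathbf{k}\cdot\mathbf{r})}$ and evaluates both sides of the equation by second central differences.

```python
import cmath

c, m, hbar = 1.0, 1.0, 1.0          # natural units (an assumption)
kx, ky, kz = 0.3, 0.4, 1.2          # arbitrary sample wave numbers
k2 = kx**2 + ky**2 + kz**2
w = c * (k2 + (m * c / hbar)**2) ** 0.5   # Eq. (48.22) solved for omega

def phi(x, y, z, t):
    return cmath.exp(1j * (w * t - kx * x - ky * y - kz * z))

def d2(f, h=1e-3):                  # second central difference at 0
    return (f(h) - 2 * f(0.0) + f(-h)) / h**2

x, y, z, t = 0.2, -0.1, 0.5, 0.7    # any sample point will do
lhs = (d2(lambda e: phi(x + e, y, z, t))
       + d2(lambda e: phi(x, y + e, z, t))
       + d2(lambda e: phi(x, y, z + e, t))
       - d2(lambda e: phi(x, y, z, t + e)) / c**2)
rhs = (m * c / hbar)**2 * phi(x, y, z, t)
print(abs(lhs - rhs))   # close to zero (finite-difference error only)
```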
48–7 Normal modes

Now we turn to another example of the phenomenon of beats which is rather curious and a little different. Imagine two equal pendulums which have, between them, a rather weak spring connection. They are made as nearly as possible the same length. If we pull one aside and let go, it moves back and forth, and it pulls on the connecting spring as it moves back and forth, and so it really is a machine for generating a force which has the natural frequency of the other pendulum. Therefore, as a consequence of the theory of resonance, which we studied before, when we put a force on something at just the right frequency, it will drive it. So, sure enough, one pendulum moving back and forth drives the other. However, in this circumstance there is a new thing happening, because the total energy of the system is finite, so when one pendulum pours its energy into the other to drive it, it finds itself gradually losing energy, until, if the timing is just right along with the speed, it loses all its energy and is reduced to a stationary condition! Then, of course, it is the other pendulum ball that has all the energy and the first one which has none, and as time goes on we see that it works also in the opposite direction, and that the energy is passed back into the first ball; this is a very interesting and amusing phenomenon. We said, however, that this is related to the theory of beats, and we must now explain how we can analyze this motion from the point of view of the theory of beats. We note that the motion of either of the two balls is an oscillation which has an amplitude which changes cyclically. Therefore the motion of one of the balls is presumably analyzable in a different way, in that it is the sum of two oscillations, present at the same time but having two slightly different frequencies.
Therefore it ought to be possible to find two other motions in this system, and to claim that what we saw was a superposition of the two solutions, because this is of course a linear system. Indeed, it is easy to find two ways that we could start the motion, each one of which is a perfect, single-frequency motion—absolutely periodic. The motion that we started with before was not strictly periodic, since it did not last; soon one ball was passing energy to the other and so changing its amplitude; but there are ways of starting the motion so that nothing changes and, of course, as soon as we see it we understand why. For example, if we made both pendulums go together, then, since they are of the same length and the spring is not then doing anything, they will of course continue to swing like that for all time, assuming no friction and that everything is perfect. On the other hand, there is another possible motion which also has a definite frequency: that is, if we move the pendulums oppositely, pulling them aside exactly equal distances, then again they would be in absolutely periodic motion. We can appreciate that the spring just adds a little to the restoring force that the gravity supplies, that is all, and the system just keeps oscillating at a slightly higher frequency than in the first case. Why higher? Because the spring is pulling, in addition to the gravitation, and it makes the system a little “stiffer,” so that the frequency of this motion is just a shade higher than that of the other. Thus this system has two ways in which it can oscillate with unchanging amplitude: it can either oscillate in a manner in which both pendulums go the same way and oscillate all the time at one frequency, or they could go in opposite directions at a slightly higher frequency. Now the actual motion of the thing, because the system is linear, can be represented as a superposition of the two. 
(The subject of this chapter, remember, is the effects of adding two motions with different frequencies.) So think what would happen if we combined these two solutions. If at $t = 0$ the two motions are started with equal amplitude and in the same phase, the sum of the two motions means that one ball, having been impressed one way by the first motion and the other way by the second motion, is at zero, while the other ball, having been displaced the same way in both motions, has a large amplitude. As time goes on, however, the two basic motions proceed independently, so the phase of one relative to the other is slowly shifting. That means, then, that after a sufficiently long time, when the time is enough that one motion could have gone “$900\tfrac{1}{2}$” oscillations, while the other went only “$900$,” the relative phase would be just reversed with respect to what it was before. That is, the large-amplitude motion will have fallen to zero, and in the meantime, of course, the initially motionless ball will have attained full strength! So we see that we could analyze this complicated motion either by the idea that there is a resonance and that one passes energy to the other, or else by the superposition of two constant-amplitude motions at two different frequencies.
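The two descriptions can be compared in a few lines (Python; the mode frequencies $1.00$ and $1.05$ are arbitrary sample values, not computed from any real pendulum): each ball's motion is written as a sum or difference of the two constant-amplitude modes, and half a beat period later the amplitudes have traded places.

```python
import math

w1, w2 = 1.00, 1.05          # the two mode frequencies (arbitrary sample values)

def x1(t):                   # ball started with full amplitude: sum of the modes
    return 0.5 * (math.cos(w1 * t) + math.cos(w2 * t))

def x2(t):                   # ball started at rest: difference of the modes
    return 0.5 * (math.cos(w1 * t) - math.cos(w2 * t))

print(x1(0.0), x2(0.0))        # 1.0 0.0 -- all the motion is in the first ball
T = math.pi / (w2 - w1)        # half a beat period later...
print(abs(x1(T)), abs(x2(T)))  # ...the amplitudes have traded places
```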
49–1 The reflection of waves

This chapter will consider some of the remarkable phenomena which are a result of confining waves in some finite region. We will be led first to discover a few particular facts about vibrating strings, for example, and then the generalization of these facts will give us a principle which is probably the most far-reaching principle of mathematical physics. Our first example of confining waves will be to confine a wave at one boundary. Let us take the simple example of a one-dimensional wave on a string. One could equally well consider sound in one dimension against a wall, or other situations of a similar nature, but the example of a string will be sufficient for our present purposes. Suppose that the string is held at one end, for example by fastening it to an “infinitely solid” wall. This can be expressed mathematically by saying that the displacement $y$ of the string at the position $x = 0$ must be zero, because the end does not move. Now if it were not for the wall, we know that the general solution for the motion is the sum of two functions, $F(x - ct)$ and $G(x + ct)$, the first representing a wave travelling one way in the string, and the second a wave travelling the other way in the string: \begin{equation} \label{Eq:I:49:1} y = F(x - ct) + G(x + ct) \end{equation} is the general solution for any string. But we have next to satisfy the condition that the string does not move at one end. If we put $x = 0$ in Eq. (49.1) and examine $y$ for any value of $t$, we get $y = F(-ct) + G(+ct)$. Now if this is to be zero for all times, it means that the function $G(ct)$ must be $-F(-ct)$. In other words, $G$ of anything must be $-F$ of minus that same thing. If this result is put back into Eq. (49.1), we find that the solution for the problem is \begin{equation} \label{Eq:I:49:2} y = F(x - ct) - F(-x - ct). \end{equation} It is easy to check that we will get $y = 0$ if we set $x = 0$.
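That check can be carried out concretely (Python; the Gaussian pulse shape and its starting position are arbitrary choices for the illustration): the displacement (49.2) vanishes at the wall at every time, and the pulse comes back with its sign reversed.

```python
import math

c = 1.0                               # wave speed (illustrative value)

def F(u):                             # an arbitrary pulse shape
    return math.exp(-(u + 5.0)**2)

def y(x, t):                          # Eq. (49.2): wave plus inverted mirror image
    return F(x - c * t) - F(-x - c * t)

for t in (0.0, 1.3, 7.0):
    print(y(0.0, t))                  # 0.0 at the clamped end, at every time

print(y(5.0, 0.0))                    # about -1: pulse heading toward the wall
print(y(5.0, 10.0))                   # about +1: it returns with sign reversed
```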
Figure 49–1 shows a wave travelling in the negative $x$-direction near $x = 0$, and a hypothetical wave travelling in the other direction reversed in sign and on the other side of the origin. We say hypothetical because, of course, there is no string to vibrate on that side of the origin. The total motion of the string is to be regarded as the sum of these two waves in the region of positive $x$. As they reach the origin, they will always cancel at $x = 0$, and finally the second (reflected) wave will be the only one to exist for positive $x$ and it will, of course, be travelling in the opposite direction. These results are equivalent to the following statement: if a wave reaches the clamped end of a string, it will be reflected with a change in sign. Such a reflection can always be understood by imagining that what is coming to the end of the string comes out upside down from behind the wall. In short, if we assume that the string is infinite and that whenever we have a wave going one way we have another one going the other way with the stated symmetry, the displacement at $x = 0$ will always be zero and it would make no difference if we clamped the string there. The next point to be discussed is the reflection of a periodic wave. Suppose that the wave represented by $F(x - ct)$ is a sine wave and has been reflected; then the reflected wave $-F(-x - ct)$ is also a sine wave of the same frequency, but travelling in the opposite direction. This situation can be most simply described by using the complex function notation: $F(x - ct) = e^{i\omega(t - x/c)}$ and $F(-x - ct) = e^{i\omega(t + x/c)}$. It can be seen that if these are substituted in (49.2) and if $x$ is set equal to $0$, then $y = 0$ for all values of $t$, so it satisfies the necessary condition. Because of the properties of exponentials, this can be written in a simpler form: \begin{equation} \label{Eq:I:49:3} y = e^{i\omega t}(e^{-i\omega x/c} \!- e^{i\omega x/c}) = -2ie^{i\omega t}\sin\,(\omega x/c). 
\end{equation} There is something interesting and new here, in that this solution tells us that if we look at any fixed $x$, the string oscillates at frequency $\omega$. No matter where this point is, the frequency is the same! But there are some places, in particular wherever $\sin\,(\omega x/c) = 0$, where there is no displacement at all. Furthermore, if at any time $t$ we take a snapshot of the vibrating string, the picture will be a sine wave. However, the displacement of this sine wave will depend upon the time $t$. From inspection of Eq. (49.3) we can see that the length of one cycle of the sine wave is equal to the wavelength of either of the superimposed waves: \begin{equation} \label{Eq:I:49:4} \lambda = 2\pi c/\omega. \end{equation} The points where there is no motion satisfy the condition $\sin\,(\omega x/c) = 0$, which means that $(\omega x/c) = 0$, $\pi$, $2\pi$, …, $n\pi$, … These points are called nodes. Between any two successive nodes, every point moves up and down sinusoidally, but the pattern of motion stays fixed in space. This is the fundamental characteristic of what we call a mode. If one can find a pattern of motion which has the property that at any point the object moves perfectly sinusoidally, and that all points move at the same frequency (though some will move more than others), then we have what is called a mode.
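A short sketch (Python; the values of $\omega$ and $c$ are arbitrary sample choices) makes the point about Eq. (49.3) explicit: the nodes at $\omega x/c = n\pi$ never move, while a quarter wavelength away the amplitude stays at its full value for all time, so the spatial pattern is fixed in place.

```python
import cmath, math

w, c = 2.0, 1.0                       # frequency and wave speed (illustrative)

def y(x, t):                          # Eq. (49.3), with the factor -2i included
    return -2j * cmath.exp(1j * w * t) * math.sin(w * x / c)

node = math.pi * c / w                # sin(w x/c) = 0 there: a node
antinode = math.pi * c / (2 * w)      # a quarter wavelength from the wall
for t in (0.0, 0.4, 1.1):
    print(abs(y(node, t)), abs(y(antinode, t)))
# the node stays at (numerically) zero for all t; the antinode amplitude stays at 2
```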
49–2 Confined waves, with natural frequencies

The next interesting problem is to consider what happens if the string is held at both ends, say at $x = 0$ and $x = L$. We can begin with the idea of the reflection of waves, starting with some kind of a bump moving in one direction. As time goes on, we would expect the bump to get near one end, and as time goes still further it will become a kind of little wobble, because it is combining with the reversed-image bump which is coming from the other side. Finally the original bump will disappear and the image bump will move in the other direction to repeat the process at the other end. This problem has an easy solution, but an interesting question is whether we can have a sinusoidal motion (the solution just described is periodic, but of course it is not sinusoidally periodic). Let us try to put a sinusoidally periodic wave on a string. If the string is tied at one end, we know it must look like our earlier solution (49.3). If it is tied at the other end, it has to look the same at the other end. So the only possibility for periodic sinusoidal motion is that the sine wave must neatly fit into the string length. If it does not fit into the string length, then it is not a natural frequency at which the string can continue to oscillate. In short, if the string is started with a sine wave shape that just fits in, then it will continue to keep that perfect shape of a sine wave and will oscillate harmonically at some frequency. Mathematically, we can write $\sin kx$ for the shape, where $k$ is equal to the factor $(\omega/c)$ in Eqs. (49.3) and (49.4), and this function will be zero at $x = 0$. However, it must also be zero at the other end. The significance of this is that $k$ is no longer arbitrary, as was the case for the half-open string. With the string closed at both ends, the only possibility is that $\sin\,(kL) = 0$, because this is the only condition that will keep both ends fixed.
Now in order for a sine to be zero, the angle must be either $0$, $\pi$, $2\pi$, or some other integral multiple of $\pi$. The equation \begin{equation} \label{Eq:I:49:5} kL = n\pi \end{equation} will, therefore, give any one of the possible $k$’s, depending on what integer is put in. For each of the $k$’s there is a certain frequency $\omega$, which, according to (49.3), is simply \begin{equation} \label{Eq:I:49:6} \omega = kc = n\pi c/L. \end{equation} So we have found the following: that a string has a property that it can have sinusoidal motions, but only at certain frequencies. This is the most important characteristic of confined waves. No matter how complicated the system is, it always turns out that there are some patterns of motion which have a perfect sinusoidal time dependence, but with frequencies that are a property of the particular system and the nature of its boundaries. In the case of the string we have many different possible frequencies, each one, by definition, corresponding to a mode, because a mode is a pattern of motion which repeats itself sinusoidally. Figure 49–2 shows the first three modes for a string. For the first mode the wavelength $\lambda$ is $2L$. This can be seen if one continues the wave out to $x = 2L$ to obtain one complete cycle of the sine wave. The angular frequency $\omega$ is $2\pi c$ divided by the wavelength, in general, and in this case, since $\lambda$ is $2L$, the frequency is $\pi c/L$, which is in agreement with (49.6) with $n = 1$. Let us call the first mode frequency $\omega_1$. Now the next mode shows two loops with one node in the middle. For this mode the wavelength, then, is simply $L$. The corresponding value of $k$ is twice as great and the frequency is twice as large; it is $2\omega_1$. For the third mode it is $3\omega_1$, and so on. So all the different frequencies of the string are multiples, $1$, $2$, $3$, $4$, and so on, of the lowest frequency $\omega_1$. 
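Equations (49.5) and (49.6) are simple enough to tabulate directly (Python; the wave speed and string length are arbitrary sample values): every allowed frequency comes out an integer multiple of $\omega_1 = \pi c/L$.

```python
import math

c, L = 340.0, 1.0        # wave speed and string length (illustrative values)

# Eq. (49.5): kL = n*pi, so Eq. (49.6) gives omega_n = n*pi*c/L
omegas = [n * math.pi * c / L for n in range(1, 5)]
ratios = [w / omegas[0] for w in omegas]
print(ratios)            # integer multiples of the lowest frequency omega_1
```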
Returning now to the general motion of the string, it turns out that any possible motion can always be analyzed by asserting that more than one mode is operating at the same time. In fact, for general motion an infinite number of modes must be excited at the same time. To get some idea of this, let us illustrate what happens when there are two modes oscillating at the same time: Suppose that we have the first mode oscillating as shown by the sequence of pictures in Fig. 49–3, which illustrates the deflection of the string for equally spaced time intervals extending through half a cycle of the lowest frequency. Now, at the same time, we suppose that there is an oscillation of the second mode also. Figure 49–3 also shows a sequence of pictures of this mode, which at the start is $90^\circ$ out of phase with the first mode. This means that at the start it has no displacement, but the two halves of the string have oppositely directed velocities. Now we recall a general principle relating to linear systems: if there are any two solutions, then their sum is also a solution. Therefore a third possible motion of the string would be a displacement obtained by adding the two solutions shown in Fig. 49–3. The result, also shown in the figure, begins to suggest the idea of a bump running back and forth between the ends of the string, although with only two modes we cannot make a very good picture of it; more modes are needed. This result is, in fact, a special case of a great principle for linear systems: Any motion at all can be analyzed by assuming that it is the sum of the motions of all the different modes, combined with appropriate amplitudes and phases. The importance of the principle derives from the fact that each mode is very simple—it is nothing but a sinusoidal motion in time. 
It is true that even the general motion of a string is not really very complicated, but there are other systems, for example the whipping of an airplane wing, in which the motion is much more complicated. Nevertheless, even with an airplane wing, we find there is a certain particular way of twisting which has one frequency and other ways of twisting that have other frequencies. If these modes can be found, then the complete motion can always be analyzed as a superposition of harmonic oscillations (except when the whipping is of such degree that the system can no longer be considered as linear).
49–3 Modes in two dimensions

The next example to be considered is the interesting situation of modes in two dimensions. Up to this point we have talked only about one-dimensional situations—a stretched string or sound waves in a tube. Ultimately we should consider three dimensions, but an easier step will be that to two dimensions. Consider for definiteness a rectangular rubber drumhead which is confined so as to have no displacement anywhere on the rectangular edge, and let the dimensions of the rectangle be $a$ and $b$, as shown in Fig. 49–4. Now the question is, what are the characteristics of the possible motion? We can start with the same procedure used for the string. If we had no confinement at all, we would expect waves travelling along with some kind of wave motion. For example, $(e^{i\omega t})(e^{-ik_xx + ik_yy})$ would represent a sine wave travelling in some direction which depends on the relative values of $k_x$ and $k_y$. Now how can we make the $x$-axis, that is, the line $y = 0$, a node? Using the ideas developed for the one-dimensional string, we can imagine another wave represented by the complex function $(-e^{i\omega t})(e^{-ik_xx - ik_yy})$. The superposition of these waves will give zero displacement at $y = 0$ regardless of the values of $x$ and $t$. (Although these functions are defined for negative $y$ where there is no drumhead to vibrate, this can be ignored, since the displacement is truly zero at $y = 0$.) In this case we can look upon the second function as the reflected wave. However, we want a nodal line at $y = b$ as well as at $y = 0$. How do we do that? The solution is related to something we did when studying reflection from crystals. These waves which cancel each other at $y = 0$ will do the same at $y = b$ only if $2b\sin\theta$ is an integral multiple of $\lambda$, where $\theta$ is the angle shown in Fig.
49–4: \begin{equation} \label{Eq:I:49:7} m\lambda = 2b\sin\theta,\quad \text{$m = 0$, $1$, $2$, $\ldots$} \end{equation} Now in the same way we can make the $y$-axis a nodal line by adding two more functions $-(e^{i\omega t})(e^{+ik_xx + ik_yy})$ and $+(e^{i\omega t})(e^{+ik_xx - ik_yy})$, each representing a reflection of one of the other two waves from the $x = 0$ line. The condition for a nodal line at $x = a$ is similar to the one for $y = b$. It is that $2a\cos\theta$ must also be an integral multiple of $\lambda$: \begin{equation} \label{Eq:I:49:8} n\lambda = 2a\cos\theta. \end{equation} Then the final result is that the waves bouncing about in the box produce a standing-wave pattern, that is, a definite mode. So we must satisfy the above two conditions if we are to have a mode. Let us first find the wavelength. This can be obtained by eliminating the angle $\theta$ from (49.7) and (49.8) to obtain the wavelength in terms of $a$, $b$, $n$ and $m$. The easiest way to do that is to divide both sides of the respective equations by $2b$ and $2a$, square them, and add the two equations together. The result is $\sin^2\theta + \cos^2\theta = 1 = (n\lambda/2a)^2 + (m\lambda/2b)^2$, which can be solved for $\lambda$: \begin{equation} \label{Eq:I:49:9} \frac{1}{\lambda^2} = \frac{n^2}{4a^2} + \frac{m^2}{4b^2}. \end{equation} In this way we have determined the wavelength in terms of two integers, and from the wavelength we immediately get the frequency $\omega$, because, as we know, the frequency is equal to $2\pi c$ divided by the wavelength. This result is interesting and important enough that we should deduce it by a purely mathematical analysis instead of by an argument about the reflections. Let us represent the vibration by a superposition of four waves chosen so that the four lines $x = 0$, $x = a$, $y = 0$, and $y = b$ are all nodes. In addition we shall require that all waves have the same frequency, so that the resulting motion will represent a mode.
From our earlier treatment of light reflection we know that $(e^{i\omega t})(e^{-ik_xx + ik_yy})$ represents a wave travelling in the direction indicated in Fig. 49–4. Equation (49.6), that is, $k = \omega/c$, still holds, provided \begin{equation} \label{Eq:I:49:10} k^2 = k_x^2 + k_y^2. \end{equation} It is clear from the figure that $k_x = k\cos\theta$ and $k_y = k\sin\theta$. Now our equation for the displacement, say $\phi$, of the rectangular drumhead takes on the grand form
\begin{align*} \label{Eq:I:49:11a} \tag{49.11a} \phi = [e^{i\omega t}]\bigl[&e^{(-ik_xx + ik_yy)} - e^{(+ik_xx + ik_yy)}\\ &- e^{(-ik_xx - ik_yy)} + e^{(+ik_xx - ik_yy)}\bigr]. \end{align*} Although this looks rather a mess, the sum of these things now is not very hard. The exponentials can be combined to give sine functions, so that the displacement turns out to be \begin{equation*} \label{Eq:I:49:11b} \phi = [4\sin k_xx\sin k_yy][e^{i\omega t}]. \tag{49.11b} \end{equation*} In other words, it is a sinusoidal oscillation, all right, with a pattern that is also sinusoidal in both the $x$- and the $y$-direction. Our boundary conditions are of course satisfied at $x = 0$ and $y = 0$. We also want $\phi$ to be zero when $x = a$ and when $y = b$. Therefore we have to put in two other conditions: $k_xa$ must be an integral multiple of $\pi$, and $k_yb$ must be another integral multiple of $\pi$. Since we have seen that $k_x = k\cos\theta$ and $k_y = k\sin\theta$, we immediately get equations (49.7) and (49.8) and from these the final result (49.9). Now let us take as an example a rectangle whose width is twice the height. If we take $a = 2b$ and use Eqs. (49.4) and (49.9), we can calculate the frequencies of all of the modes: \begin{equation} \label{Eq:I:49:12} \omega^2 = \biggl(\frac{\pi c}{b}\biggr)^2 \frac{4m^2 + n^2}{4}. \end{equation} Table 49–1 lists a few of the simple modes and also shows their shape in a qualitative way. The most important point to be emphasized about this particular case is that the frequencies are not multiples of each other, nor are they multiples of any number. The idea that the natural frequencies are harmonically related is not generally true. It is not true for a system of more than one dimension, nor is it true for one-dimensional systems which are more complicated than a string with uniform density and tension. A simple example of the latter is a hanging chain in which the tension is higher at the top than at the bottom. 
If such a chain is set in harmonic oscillation, there are various modes and frequencies, but the frequencies are not simple multiples of any number, nor are the mode shapes sinusoidal. The modes of more complicated systems are still more elaborate. For example, inside the mouth we have a cavity above the vocal cords, and by moving the tongue and the lips, and so forth, we make an open-ended pipe or a closed-ended pipe of different diameters and shapes; it is a terribly complicated resonator, but it is a resonator nevertheless. Now when one talks with the vocal cords, they are made to produce some kind of tone. The tone is rather complicated and there are many sounds coming out, but the cavity of the mouth further modifies that tone because of the various resonant frequencies of the cavity. For instance, a singer can sing various vowels, a, or o, or oo, and so forth, at the same pitch, but they sound different because the various harmonics are in resonance in this cavity to different degrees. The very great importance of the resonant frequencies of a cavity in modifying the voice sounds can be demonstrated by a simple experiment. Since the speed of sound goes as the reciprocal of the square root of the density, the speed of sound may be varied by using different gases. If one uses helium instead of air, so that the density is lower, the speed of sound is much higher, and all the frequencies of a cavity will be raised. Consequently if one fills one’s lungs with helium before speaking, the character of his voice will be drastically altered even though the vocal cords may still be vibrating at the same frequency.
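As a numerical illustration of Eq. (49.12), the mode frequencies of the $a = 2b$ rectangle can be tabulated in a few lines of Python and compared with the fundamental. This is only a sketch; the range of mode indices shown is an arbitrary choice, not something specified in the text.

```python
import numpy as np

# Mode frequencies of a rectangular membrane with a = 2b, from Eq. (49.12):
#   omega^2 = (pi*c/b)^2 * (4*m^2 + n^2) / 4.
# We work in units of pi*c/b, so omega = sqrt(4*m^2 + n^2) / 2.
freqs = {(m, n): np.sqrt(4*m**2 + n**2) / 2
         for m in range(1, 4) for n in range(1, 4)}

fundamental = freqs[(1, 1)]
for (m, n), w in sorted(freqs.items(), key=lambda kv: kv[1]):
    # The ratios to the fundamental are irrational in general --
    # the mode frequencies are NOT harmonics of one another.
    print(f"mode ({m},{n}): omega = {w:.4f}, omega/omega_11 = {w / fundamental:.4f}")
```

Running this shows, for example, that the (1,2) mode lies at $\sqrt{8/5} \approx 1.265$ times the fundamental, nowhere near an integer multiple, which is the point emphasized in the text.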
49–4 Coupled pendulums

Finally we should emphasize that not only do modes exist for complicated continuous systems, but also for very simple mechanical systems. A good example is the system of two coupled pendulums discussed in the preceding chapter. In that chapter it was shown that the motion could be analyzed as a superposition of two harmonic motions with different frequencies. So even this system can be analyzed in terms of harmonic motions or modes. The string has an infinite number of modes and the two-dimensional surface also has an infinite number of modes. In a sense it is a double infinity, if we know how to count infinities. But a simple mechanical thing which has only two degrees of freedom, and requires only two variables to describe it, has only two modes. Let us make a mathematical analysis of these two modes for the case where the pendulums are of equal length. Let the displacement of one be $x$, and the displacement of the other be $y$, as shown in Fig. 49–5. Without a spring, the force on the first mass is proportional to the displacement of that mass, because of gravity. There would be, if there were no spring, a certain natural frequency $\omega_0$ for this one alone. The equation of motion without a spring would be \begin{equation} \label{Eq:I:49:13} m\,\frac{d^2x}{dt^2} = -m\omega_0^2x. \end{equation} The other pendulum would swing in the same way if there were no spring. In addition to the force of restoration due to gravitation, there is an additional force pulling the first mass. That force depends upon the excess distance of $x$ over $y$ and is proportional to that difference, so it is some constant which depends on the geometry, times $(x - y)$. The same force in reverse sense acts on the second mass. The equations of motion that have to be solved are therefore
\begin{equation} \begin{aligned} m\,\frac{d^2x}{dt^2} = -m\omega_0^2x - k(x - y),\\[1.5ex] m\,\frac{d^2y}{dt^2} = -m\omega_0^2y - k(y - x). \end{aligned} \label{Eq:I:49:14} \end{equation}
In order to find a motion in which both of the masses move at the same frequency, we must determine how much each mass moves. In other words, pendulum $x$ and pendulum $y$ will oscillate at the same frequency, but their amplitudes must have certain values, $A$ and $B$, whose relation is fixed. Let us try this solution: \begin{equation} \label{Eq:I:49:15} x = Ae^{i\omega t},\quad y = Be^{i\omega t}. \end{equation} If these are substituted in Eqs. (49.14) and similar terms are collected, the results are \begin{equation} \begin{aligned} \biggl(\omega^2 - \omega_0^2 - \frac{k}{m}\biggr)A &= -\frac{k}{m}\,B,\\[1ex] \biggl(\omega^2 - \omega_0^2 - \frac{k}{m}\biggr)B &= -\frac{k}{m}\,A. \end{aligned} \label{Eq:I:49:16} \end{equation} The equations as written have had the common factor $e^{i\omega t}$ removed and have been divided by $m$. Now we see that we have two equations for what looks like two unknowns. But there really are not two unknowns, because the whole size of the motion is something that we cannot determine from these equations. The above equations can determine only the ratio of $A$ to $B$, but they must both give the same ratio. The necessity for both of these equations to be consistent is a requirement that the frequency be something very special. In this particular case this can be worked out rather easily. If the two equations are multiplied together, the result is \begin{equation} \label{Eq:I:49:17} \biggl(\omega^2 - \omega_0^2 - \frac{k}{m}\biggr)^2AB = \biggl(\frac{k}{m}\biggr)^2AB. \end{equation} The term $AB$ can be removed from both sides unless $A$ and $B$ are zero, which means there is no motion at all. If there is motion, then the other terms must be equal, giving a quadratic equation to solve. The result is that there are two possible frequencies: \begin{equation} \label{Eq:I:49:18} \omega_1^2 = \omega_0^2,\quad \omega_2^2 = \omega_0^2 + \frac{2k}{m}. \end{equation} Furthermore, if these values of frequency are substituted back into Eq. 
(49.16), we find that for the first frequency $A = B$, and for the second frequency $A = -B$. These are the “mode shapes,” as can be readily verified by experiment. It is clear that in the first mode, where $A = B$, the spring is never stretched, and both masses oscillate at the frequency $\omega_0$, as though the spring were absent. In the other solution, where $A = -B$, the spring contributes a restoring force and raises the frequency. A more interesting case results if the pendulums have different lengths. The analysis is very similar to that given above, and is left as an exercise for the reader.
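The algebra above can be checked numerically. Written in matrix form, Eqs. (49.16) say that $\omega^2 v = Mv$ with $v = (A, B)$, so the allowed values of $\omega^2$ are the eigenvalues of $M$ and the mode shapes are its eigenvectors. A sketch with arbitrary illustrative values for $m$, $k$, and $\omega_0$ (not values from the text):

```python
import numpy as np

# Two equal pendulums coupled by a spring, Eqs. (49.14)-(49.18).
# From (49.16): omega^2 * (A, B) = M @ (A, B), with M as below.
m, k, w0 = 1.0, 0.3, 2.0     # mass, spring constant, single-pendulum frequency (arbitrary)
M = np.array([[w0**2 + k/m, -k/m],
              [-k/m,        w0**2 + k/m]])

# eigh returns eigenvalues in ascending order; columns of eigvecs are the modes.
eigvals, eigvecs = np.linalg.eigh(M)
print(eigvals)          # the two allowed omega^2: w0^2 and w0^2 + 2k/m
print(eigvecs[:, 0])    # lower mode: A = B (the spring is never stretched)
print(eigvecs[:, 1])    # upper mode: A = -B (the spring adds restoring force)
```

The eigenvalues come out as $\omega_0^2$ and $\omega_0^2 + 2k/m$, and the eigenvectors are proportional to $(1, 1)$ and $(1, -1)$, in agreement with Eq. (49.18) and the mode shapes found above.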
49–5 Linear systems

Now let us summarize the ideas discussed above, which are all aspects of what is probably the most general and wonderful principle of mathematical physics. If we have a linear system whose character is independent of the time, then the motion does not have to have any particular simplicity, and in fact may be exceedingly complex, but there are very special motions, usually a series of special motions, in which the whole pattern of motion varies exponentially with the time. For the vibrating systems that we are talking about now, the exponential is imaginary, and instead of saying “exponentially” we might prefer to say “sinusoidally” with time. However, one can be more general and say that the motions will vary exponentially with the time in very special modes, with very special shapes. The most general motion of the system can always be represented as a superposition of motions involving each of the different exponentials. This is worth stating again for the case of sinusoidal motion: a linear system need not be moving in a purely sinusoidal motion, i.e., at a definite single frequency, but no matter how it does move, this motion can be represented as a superposition of pure sinusoidal motions. The frequency of each of these motions is a characteristic of the system, and the pattern or waveform of each motion is also a characteristic of the system. The general motion in any such system can be characterized by giving the strength and the phase of each of these modes, and adding them all together. Another way of saying this is that any linear vibrating system is equivalent to a set of independent harmonic oscillators, with the natural frequencies corresponding to the modes. We conclude this chapter by remarking on the connection of modes with quantum mechanics.
In quantum mechanics the vibrating object, or the thing that varies in space, is the amplitude of a probability function that gives the probability of finding an electron, or system of electrons, in a given configuration. This amplitude function can vary in space and time, and satisfies, in fact, a linear equation. But in quantum mechanics there is a transformation, in that what we call frequency of the probability amplitude is equal, in the classical idea, to energy. Therefore we can translate the principle stated above to this case by taking the word frequency and replacing it with energy. It becomes something like this: a quantum-mechanical system, for example an atom, need not have a definite energy, just as a simple mechanical system does not have to have a definite frequency; but no matter how the system behaves, its behavior can always be represented as a superposition of states of definite energy. The energy of each state is a characteristic of the atom, and so is the pattern of amplitude which determines the probability of finding particles in different places. The general motion can be described by giving the amplitude of each of these different energy states. This is the origin of energy levels in quantum mechanics. Since quantum mechanics is represented by waves, in the circumstance in which the electron does not have enough energy ultimately to escape from the proton, the waves are confined. Like the confined waves of a string, there are definite frequencies for the solution of the wave equation for quantum mechanics. The quantum-mechanical interpretation is that these are definite energies. Therefore a quantum-mechanical system, because it is represented by waves, can have definite states of fixed energy; examples are the energy levels of various atoms.
50–1 Musical tones

Pythagoras is said to have discovered the fact that two similar strings under the same tension and differing only in length, when sounded together give an effect that is pleasant to the ear if the lengths of the strings are in the ratio of two small integers. If the lengths are as one is to two, they then correspond to the octave in music. If the lengths are as two is to three, they correspond to the interval between $C$ and $G$, which is called a fifth. These intervals are generally accepted as “pleasant” sounding chords. Pythagoras was so impressed by this discovery that he made it the basis of a school—Pythagoreans they were called—which held mystic beliefs in the great powers of numbers. It was believed that something similar would be found out about the planets—or “spheres.” We sometimes hear the expression: “the music of the spheres.” The idea was that there would be some numerical relationships between the orbits of the planets or between other things in nature. People usually think that this is just a kind of superstition held by the Greeks. But is it so different from our own scientific interest in quantitative relationships? Pythagoras’ discovery was the first example, outside geometry, of any numerical relationship in nature. It must have been very surprising to suddenly discover that there was a fact of nature that involved a simple numerical relationship. Simple measurements of lengths gave a prediction about something which had no apparent connection to geometry—the production of pleasant sounds. This discovery led to the extension that perhaps a good tool for understanding nature would be arithmetic and mathematical analysis. The results of modern science justify that point of view. Pythagoras could only have made his discovery by making an experimental observation. Yet this important aspect does not seem to have impressed him. If it had, physics might have had a much earlier start.
(It is always easy to look back at what someone else has done and to decide what he should have done!) We might remark on a third aspect of this very interesting discovery: that the discovery had to do with two notes that sound pleasant to the ear. We may question whether we are any better off than Pythagoras in understanding why only certain sounds are pleasant to our ear. The general theory of aesthetics is probably no further advanced now than in the time of Pythagoras. In this one discovery of the Greeks, there are the three aspects: experiment, mathematical relationships, and aesthetics. Physics has made great progress on only the first two parts. This chapter will deal with our present-day understanding of the discovery of Pythagoras. Among the sounds that we hear, there is one kind that we call noise. Noise corresponds to a sort of irregular vibration of the eardrum that is produced by the irregular vibration of some object in the neighborhood. If we make a diagram to indicate the pressure of the air on the eardrum (and, therefore, the displacement of the drum) as a function of time, the graph which corresponds to a noise might look like that shown in Fig. 50–1(a). (Such a noise might correspond roughly to the sound of a stamped foot.) The sound of music has a different character. Music is characterized by the presence of more-or-less sustained tones—or musical “notes.” (Musical instruments may make noises as well!) The tone may last for a relatively short time, as when a key is pressed on a piano, or it may be sustained almost indefinitely, as when a flute player holds a long note. What is the special character of a musical note from the point of view of the pressure in the air? A musical note differs from a noise in that there is a periodicity in its graph. There is some uneven shape to the variation of the air pressure with time, and the shape repeats itself over and over again. 
An example of a pressure-time function that would correspond to a musical note is shown in Fig. 50–1(b). Musicians will usually speak of a musical tone in terms of three characteristics: the loudness, the pitch, and the “quality.” The “loudness” is found to correspond to the magnitude of the pressure changes. The “pitch” corresponds to the period of time for one repetition of the basic pressure function. (“Low” notes have longer periods than “high” notes.) The “quality” of a tone has to do with the differences we may still be able to hear between two notes of the same loudness and pitch. An oboe, a violin, or a soprano are still distinguishable even when they sound notes of the same pitch. The quality has to do with the structure of the repeating pattern. Let us consider, for a moment, the sound produced by a vibrating string. If we pluck the string, by pulling it to one side and releasing it, the subsequent motion will be determined by the motions of the waves we have produced. We know that these waves will travel in both directions, and will be reflected at the ends. They will slosh back and forth for a long time. No matter how complicated the wave is, however, it will repeat itself. The period of repetition is just the time $T$ required for the wave to travel two full lengths of the string. For that is just the time required for any wave, once started, to reflect off each end and return to its starting position, and be proceeding in the original direction. The time is the same for waves which start out in either direction. Each point on the string will, then, return to its starting position after one period, and again one period later, etc. The sound wave produced must also have the same repetition. We see why a plucked string produces a musical tone.
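The round-trip argument above gives the repetition time directly: $T = 2L/v$, so the fundamental frequency is $f = v/2L$. A one-line numerical sketch, using a hypothetical string (the wave speed and length below are illustration values, not taken from the text):

```python
# Repetition time of a plucked string: the wave must travel two full lengths
# of the string, so T = 2L/v and the fundamental frequency is f = v/(2L).
v = 280.0   # wave speed on the string, m/s (hypothetical value)
L = 0.64    # string length, m (hypothetical value)

T = 2 * L / v   # period of the repeating pattern, s
f = 1 / T       # fundamental frequency, Hz
print(f"T = {T * 1000:.2f} ms, f = {f:.2f} Hz")
```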
50–2 The Fourier series

We have discussed in the preceding chapter another way of looking at the motion of a vibrating system. We have seen that a string has various natural modes of oscillation, and that any particular kind of vibration that may be set up by the starting conditions can be thought of as a combination—in suitable proportions—of several of the natural modes, oscillating together. For a string we found that the normal modes of oscillation had the frequencies $\omega_0$, $2\omega_0$, $3\omega_0$, … The most general motion of a plucked string, therefore, is composed of the sum of a sinusoidal oscillation at the fundamental frequency $\omega_0$, another at the second harmonic frequency $2\omega_0$, another at the third harmonic $3\omega_0$, etc. Now the fundamental mode repeats itself every period $T_1 = 2\pi/\omega_0$. The second harmonic mode repeats itself every $T_2 = 2\pi/2\omega_0$. It also repeats itself every $T_1 = 2T_2$, after two of its periods. Similarly, the third harmonic mode repeats itself after the time $T_1 = 3T_3$, which is three of its periods. We see again why a plucked string repeats its whole pattern with a periodicity of $T_1$. It produces a musical tone. We have been talking about the motion of the string. But the sound, which is the motion of the air, is produced by the motion of the string, so its vibrations too must be composed of the same harmonics—though we are no longer thinking about the normal modes of the air. Also, the relative strength of the harmonics may be different in the air than in the string, particularly if the string is “coupled” to the air via a sounding board. The efficiency of the coupling to the air is different for different harmonics. If we let $f(t)$ represent the air pressure as a function of time for a musical tone [such as that in Fig.
50–1(b)], then we expect that $f(t)$ can be written as the sum of a number of simple harmonic functions of time—like $\cos\omega t$—for each of the various harmonic frequencies. If the period of the vibration is $T$, the fundamental angular frequency will be $\omega = 2\pi/T$, and the harmonics will be $2\omega$, $3\omega$, etc. There is one slight complication. For each frequency we may expect that the starting phases will not necessarily be the same for all frequencies. We should, therefore, use functions like $\cos\,(\omega t + \phi)$. It is, however, simpler to use instead both the sine and cosine functions for each frequency. We recall that \begin{equation} \label{Eq:I:50:1} \cos\,(\omega t + \phi) = (\cos\phi\cos\omega t - \sin\phi\sin\omega t) \end{equation} and since $\phi$ is a constant, any sinusoidal oscillation at the frequency $\omega$ can be written as the sum of a term with $\cos\omega t$ and another term with $\sin\omega t$. We conclude, then, that any function $f(t)$ that is periodic with the period $T$ can be written mathematically as \begin{alignat}{4} f(t) &= a_0\notag\\[.5ex] &\quad\;+\;a_1\cos&&\omega t &&\;+\;b_1\sin&&\omega t\notag\\[.65ex] &\quad\;+\;a_2\cos2&&\omega t &&\;+\;b_2\sin2&&\omega t\notag\\[.65ex] &\quad\;+\;a_3\cos3&&\omega t &&\;+\;b_3\sin3&&\omega t\notag\\[.5ex] \label{Eq:I:50:2} &\quad\;+\;\dotsb && &&\;+\;\dotsb \end{alignat} where $\omega = 2\pi/T$ and the $a$’s and $b$’s are numerical constants which tell us how much of each component oscillation is present in the oscillation $f(t)$. We have added the “zero-frequency” term $a_0$ so that our formula will be completely general, although it is usually zero for a musical tone. It represents a shift of the average value (that is, the “zero” level) of the sound pressure. With it our formula can take care of any case. The equality of Eq. (50.2) is represented schematically in Fig. 50–2. (The amplitudes, $a_n$ and $b_n$, of the harmonic functions must be suitably chosen. 
They are shown schematically and without any particular scale in the figure.) The series (50.2) is called the Fourier series for $f(t)$. We have said that any periodic function can be made up in this way. We should correct that and say that any sound wave, or any function we ordinarily encounter in physics, can be made up of such a sum. The mathematicians can invent functions which cannot be made up of simple harmonic functions—for instance, a function that has a “reverse twist” so that it has two values for some values of $t$! We need not worry about such functions here.
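The synthesis side of Eq. (50.2) is easy to sketch in code: given the $a$'s and $b$'s, sum the harmonics and check that the result is periodic with period $T$. The particular coefficient values below are arbitrary illustration choices, not from the text.

```python
import numpy as np

# Synthesize a periodic f(t) from chosen Fourier coefficients, Eq. (50.2).
T = 1.0
w = 2 * np.pi / T
a0 = 0.0
a = [1.0, 0.0, 0.3]   # cosine amplitudes for harmonics 1, 2, 3 (arbitrary)
b = [0.0, 0.5, 0.0]   # sine amplitudes for harmonics 1, 2, 3 (arbitrary)

def f(t):
    """Sum a0 + sum_n (a_n cos(n w t) + b_n sin(n w t))."""
    total = a0
    for n, (an, bn) in enumerate(zip(a, b), start=1):
        total += an * np.cos(n * w * t) + bn * np.sin(n * w * t)
    return total

t = np.linspace(0.0, 2 * T, 9)
print(np.allclose(f(t), f(t + T)))   # True: the sum repeats with period T
```

Since every term repeats after the time $T$ (the $n$th harmonic after $n$ of its own periods), the whole sum necessarily does too, which is the content of the argument in the text.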
50–3 Quality and consonance

Now we are able to describe what it is that determines the “quality” of a musical tone. It is the relative amounts of the various harmonics—the values of the $a$’s and $b$’s. A tone with only the first harmonic is a “pure” tone. A tone with many strong harmonics is a “rich” tone. A violin produces a different proportion of harmonics than does an oboe. We can “manufacture” various musical tones if we connect several “oscillators” to a loudspeaker. (An oscillator usually produces a nearly pure simple harmonic function.) We should choose the frequencies of the oscillators to be $\omega$, $2\omega$, $3\omega$, etc. Then by adjusting the volume control on each oscillator, we can add in any amount we wish of each harmonic—thereby producing tones of different quality. An electric organ works in much this way. The “keys” select the frequency of the fundamental oscillator and the “stops” are switches that control the relative proportions of the harmonics. By throwing these switches, the organ can be made to sound like a flute, or an oboe, or a violin. It is interesting that to produce such “artificial” tones we need only one oscillator for each frequency—we do not need separate oscillators for the sine and cosine components. The ear is not very sensitive to the relative phases of the harmonics. It pays attention mainly to the total of the sine and cosine parts of each frequency. Our analysis is more accurate than is necessary to explain the subjective aspect of music. The response of a microphone or other physical instrument does depend on the phases, however, and our complete analysis may be needed to treat such cases. The “quality” of a spoken sound also determines the vowel sounds that we recognize in speech. The shape of the mouth determines the frequencies of the natural modes of vibration of the air in the mouth. Some of these modes are set into vibration by the sound waves from the vocal cords.
In this way, the amplitudes of some of the harmonics of the sound are increased with respect to others. When we change the shape of our mouth, harmonics of different frequencies are given preference. These effects account for the difference between an “e–e–e” sound and an “a–a–a” sound. We all know that a particular vowel sound—say “e–e–e”—still “sounds like” the same vowel whether we say (or sing) it at a high or a low pitch. From the mechanism we describe, we would expect that particular frequencies are emphasized when we shape our mouth for an “e–e–e,” and that they do not change as we change the pitch of our voice. So the relation of the important harmonics to the fundamental—that is, the “quality”—changes as we change pitch. Apparently the mechanism by which we recognize speech is not based on specific harmonic relationships. What should we say now about Pythagoras’ discovery? We understand that two similar strings with lengths in the ratio of $2$ to $3$ will have fundamental frequencies in the ratio $3$ to $2$. But why should they “sound pleasant” together? Perhaps we should take our clue from the frequencies of the harmonics. The second harmonic of the shorter string will have the same frequency as the third harmonic of the longer string. (It is easy to show—or to believe—that a plucked string produces strongly the several lowest harmonics.) Perhaps we should make the following rules. Notes sound consonant when they have harmonics with the same frequency. Notes sound dissonant if their upper harmonics have frequencies near to each other but far enough apart that there are rapid beats between the two. Why beats do not sound pleasant, and why unison of the upper harmonics does sound pleasant, is something that we do not know how to define or describe. We cannot say from this knowledge of what sounds good, what ought, for example, to smell good.
In other words, our understanding of it is not anything more general than the statement that when they are in unison they sound good. It does not permit us to deduce anything more than the properties of concordance in music. It is easy to check on the harmonic relationships we have described by some simple experiments with a piano. Let us label the $3$ successive C’s near the middle of the keyboard by C, C$'$, and C$''$, and the G’s just above by G, G$'$, and G$''$. Then the fundamentals will have relative frequencies as follows: \begin{alignat*}{4} &\text{C}&&–2&&\quad \text{G}&&–\phantom{1}3\\[1ex] &\text{C}'&&–4&&\quad \text{G}'&&–\phantom{1}6\\[1ex] &\text{C}''&&–8&&\quad \text{G}''&&–12 \end{alignat*} These harmonic relationships can be demonstrated in the following way: Suppose we press C$'$ slowly—so that it does not sound but we cause the damper to be lifted. If we then sound C, it will produce its own fundamental and some second harmonic. The second harmonic will set the strings of C$'$ into vibration. If we now release C (keeping C$'$ pressed) the damper will stop the vibration of the C strings, and we can hear (softly) the note C$'$ as it dies away. In a similar way, the third harmonic of C can cause a vibration of G$'$. Or the sixth of C (now getting much weaker) can set up a vibration in the fundamental of G$''$. A somewhat different result is obtained if we press G quietly and then sound C$'$. The third harmonic of C$'$ will correspond to the fourth harmonic of G, so only the fourth harmonic of G will be excited. We can hear (if we listen closely) the sound of G$''$, which is two octaves above the G we have pressed! It is easy to think up many more combinations for this game. We may remark in passing that the major scale can be defined just by the condition that the three major chords (F–A–C); (C–E–G); and (G–B–D) each represent tone sequences with the frequency ratio $(4:5:6)$. These ratios—plus the fact that an octave (C–C$'$, B–B$'$, etc.) 
has the ratio $1:2$—determine the whole scale for the “ideal” case, or for what is called “just intonation.” Keyboard instruments like the piano are not usually tuned in this manner, but a little “fudging” is done so that the frequencies are approximately correct for all possible starting tones. For this tuning, which is called “tempered,” the octave (still $1:2$) is divided into $12$ equal intervals for which the frequency ratio is $(2)^{1/12}$. A fifth no longer has the frequency ratio $3/2$, but $2^{7/12} = 1.4983$, which is apparently close enough for most ears. We have stated a rule for consonance in terms of the coincidence of harmonics. Is this coincidence perhaps the reason that two notes are consonant? One worker has claimed that two pure tones—tones carefully manufactured to be free of harmonics—do not give the sensations of consonance or dissonance as the relative frequencies are placed at or near the expected ratios. (Such experiments are difficult because it is difficult to manufacture pure tones, for reasons that we shall see later.) We still cannot be certain whether the ear is matching harmonics or doing arithmetic when we decide that we like a sound.
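The size of the “fudging” in tempered tuning is easy to compute: a tempered fifth spans $7$ of the $12$ equal steps of $2^{1/12}$, so its ratio is $2^{7/12}$, slightly flat of the just ratio $3/2$.

```python
# Tempered vs. just fifth.  Equal temperament divides the octave (ratio 2)
# into 12 equal steps of 2**(1/12); a fifth spans 7 of them.
just_fifth = 3 / 2
tempered_fifth = 2 ** (7 / 12)

print(tempered_fifth)                               # 1.4983..., just short of 3/2
print(100 * (tempered_fifth / just_fifth - 1))      # error in percent, about -0.11%
```

An error of roughly a tenth of a percent is, as the text says, apparently close enough for most ears.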
50–4 The Fourier coefficients

Let us return now to the idea that any note—that is, a periodic sound—can be represented by a suitable combination of harmonics. We would like to show how we can find out what amount of each harmonic is required. It is, of course, easy to compute $f(t)$, using Eq. (50.2), if we are given all the coefficients $a$ and $b$. The question now is, if we are given $f(t)$ how can we know what the coefficients of the various harmonic terms should be? (It is easy to make a cake from a recipe; but can we write down the recipe if we are given a cake?) Fourier discovered that it was not really very difficult. The term $a_0$ is certainly easy. We have already said that it is just the average value of $f(t)$ over one period (from $t = 0$ to $t = T$). We can easily see that this is indeed so. The average value of a sine or cosine function over one period is zero. Over two, or three, or any whole number of periods, it is also zero. So the average value of all of the terms on the right-hand side of Eq. (50.2) is zero, except for $a_0$. (Recall that we must choose $\omega = 2\pi/T$.) Now the average of a sum is the sum of the averages. So the average of $f(t)$ is just the average of $a_0$. But $a_0$ is a constant, so its average is just the same as its value. Recalling the definition of an average, we have \begin{equation} \label{Eq:I:50:3} a_0 = \frac{1}{T}\int_0^Tf(t)\,dt. \end{equation} The other coefficients are only a little more difficult. To find them we can use a trick discovered by Fourier. Suppose we multiply both sides of Eq. (50.2) by some harmonic function—say by $\cos7\omega t$.
We have then \begin{alignat}{2} f(t)\cdot\cos7\omega t &= a_0\cdot\cos7\omega t\notag\\[.5ex] &\quad+\;a_1\cos\hphantom{1}\omega t\cdot\cos7\omega t &&\;+\; b_1\sin\hphantom{1}\omega t\cdot\cos7\omega t\notag\\[.65ex] &\quad+\;a_2\cos2\omega t\cdot\cos7\omega t &&\;+\; b_2\sin2\omega t\cdot\cos7\omega t\notag\\[.65ex] &\quad+\;\dotsb &&\;+\; \dotsb\notag\\[.65ex] &\quad+\;a_7\cos7\omega t\cdot\cos7\omega t &&\;+\; b_7\sin7\omega t\cdot\cos7\omega t\notag\\[.5ex] \label{Eq:I:50:4} &\quad+\;\dotsb &&\;+\; \dotsb \end{alignat}
Now let us average both sides. The average of $a_0\cos7\omega t$ over the time $T$ is proportional to the average of a cosine over $7$ whole periods. But that is just zero. The average of almost all of the rest of the terms is also zero. Let us look at the $a_1$ term. We know, in general, that \begin{equation} \label{Eq:I:50:5} \cos A\cos B = \tfrac{1}{2}\cos\,(A + B) + \tfrac{1}{2}\cos\,(A - B). \end{equation} The $a_1$ term becomes \begin{equation} \label{Eq:I:50:6} \tfrac{1}{2}a_1(\cos8\omega t + \cos6\omega t). \end{equation} We thus have two cosine terms, one with $8$ full periods in $T$ and the other with $6$. They both average to zero. The average of the $a_1$ term is therefore zero. For the $a_2$ term, we would find $a_2\cos9\omega t$ and $a_2\cos5\omega t$, each of which also averages to zero. For the $a_9$ term, we would find $\cos16\omega t$ and $\cos\,(-2\omega t)$. But $\cos\,(-2\omega t)$ is the same as $\cos2\omega t$, so both of these have zero averages. It is clear that all of the $a$ terms will have a zero average except one. And that one is the $a_7$ term. For this one we have \begin{equation} \label{Eq:I:50:7} \tfrac{1}{2}a_7(\cos14\omega t + \cos0). \end{equation} The cosine of zero is one, and its average, of course, is one. So we have the result that the average of all of the $a$ terms of Eq. (50.4) equals $\tfrac{1}{2}a_7$. The $b$ terms are even easier. When we multiply by any cosine term like $\cos n\omega t$, we can show by the same method that all of the $b$ terms have the average value zero. We see that Fourier’s “trick” has acted like a sieve. When we multiply by $\cos7\omega t$ and average, all terms drop out except $a_7$, and we find that \begin{equation} \label{Eq:I:50:8} \operatorname{Average}\,[f(t)\cdot\cos7\omega t]=a_7/2, \end{equation} or \begin{equation} \label{Eq:I:50:9} a_7 = \frac{2}{T}\int_0^Tf(t)\cdot\cos7\omega t\,dt. 
\end{equation} We shall leave it for the reader to show that the coefficient $b_7$ can be obtained by multiplying Eq. (50.2) by $\sin7\omega t$ and averaging both sides. The result is \begin{equation} \label{Eq:I:50:10} b_7 = \frac{2}{T}\int_0^Tf(t)\cdot\sin7\omega t\,dt. \end{equation} Now what is true for $7$ we expect is true for any integer. So we can summarize our proof and result in the following more elegant mathematical form. If $m$ and $n$ are integers other than zero, and if $\omega = 2\pi/T$, then \begin{align} \label{Eq:I:50:11} &\text{I.}\quad \int_0^T\sin n\omega t\cos m\omega t\,dt = 0.\\[1ex] % ebook break &\left.\hspace{-2mm} \begin{alignedat}{3} &\text{II.}\quad \int_0^T\cos n\omega t \cos m\omega t\,dt ={}\\[1ex] &\text{III.}\quad \int_0^T\sin n\omega t \sin m\omega t\,dt ={} \end{alignedat} \label{Eq:I:50:12} \right\}\; \begin{cases} 0 & \kern{-1ex}\text{if $n \neq m$}.\\[1ex] T/2 & \kern{-1ex}\text{if $n = m$}. \end{cases}\\[1ex] % ebook break \label{Eq:I:50:13} &\text{IV.}\quad f(t) = a_0 + \sum_{n = 1}^\infty a_n\cos n\omega t + \sum_{n = 1}^\infty b_n\sin n\omega t.\\[1ex] \label{Eq:I:50:14} &\text{V.}\quad a_0 = \frac{1}{T}\int_0^Tf(t)\,dt.\\[1.5ex] \label{Eq:I:50:15} &\text{}\qquad a_n = \frac{2}{T}\int_0^Tf(t)\cdot\cos n\omega t\,dt.\\[1.5ex] \label{Eq:I:50:16} &\text{}\qquad b_n = \frac{2}{T}\int_0^Tf(t)\cdot\sin n\omega t\,dt. \end{align}
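Fourier's “sieve” translates directly into a numerical recipe: sample $f(t)$ over one period, multiply by $\cos n\omega t$ or $\sin n\omega t$, and average, as in Eqs. (50.15) and (50.16). A sketch with an arbitrary test signal whose coefficients we know in advance (the values are illustration choices, not from the text):

```python
import numpy as np

# Recover a_7 and b_7 from samples of f(t), using the averages of
# Eqs. (50.15)-(50.16): a_n = 2 * <f cos(n w t)>, b_n = 2 * <f sin(n w t)>.
T = 1.0
w = 2 * np.pi / T
N = 1000
t = np.arange(N) * T / N   # N equally spaced samples over one period

# Test signal with a_0 = 0.5, a_7 = 2.0, b_7 = -1.25, b_3 = 0.7 (arbitrary):
f = 0.5 + 2.0 * np.cos(7 * w * t) - 1.25 * np.sin(7 * w * t) + 0.7 * np.sin(3 * w * t)

# All cross terms average to zero; only the n = 7 terms survive the sieve.
a7 = 2 * np.mean(f * np.cos(7 * w * t))
b7 = 2 * np.mean(f * np.sin(7 * w * t))
print(a7, b7)   # recovers 2.0 and -1.25
```

For periodic integrands, the simple average over equally spaced samples of exactly one period reproduces the integrals essentially exactly, so the sieve picks out each coefficient to machine precision.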
In earlier chapters it was convenient to use the exponential notation for representing simple harmonic motion. Instead of $\cos\omega t$ we used $\FLPRe e^{i\omega t}$, the real part of the exponential function. We have used cosine and sine functions in this chapter because it made the derivations perhaps a little clearer. Our final result of Eq. (50.13) can, however, be written in the compact form \begin{equation} \label{Eq:I:50:17} f(t) = \FLPRe\sum_{n = 0}^\infty\hat{a}_ne^{in\omega t}, \end{equation} where $\hat{a}_n$ is the complex number $a_n - ib_n$ (with $b_0 = 0$). If we wish to use the same notation throughout, we can write also \begin{equation} \label{Eq:I:50:18} \hat{a}_n = \frac{2}{T}\int_0^Tf(t)e^{-in\omega t}\,dt\quad (n \geq 1). \end{equation} We now know how to “analyze” a periodic wave into its harmonic components. The procedure is called Fourier analysis, and the separate terms are called Fourier components. We have not shown, however, that once we find all of the Fourier components and add them together, we do indeed get back our $f(t)$. The mathematicians have shown, for a wide class of functions, in fact for all that are of interest to physicists, that if we can do the integrals we will get back $f(t)$. There is one minor exception. If the function $f(t)$ is discontinuous, i.e., if it jumps suddenly from one value to another, the Fourier sum will give a value at the breakpoint halfway between the upper and lower values at the discontinuity. So if we have the strange function $f(t) = 0$, $0 \leq t < t_0$, and $f(t) = 1$ for $t_0 \leq t \leq T$, the Fourier sum will give the right value everywhere except at $t_0$, where it will have the value $\tfrac{1}{2}$ instead of $1$. It is rather unphysical anyway to insist that a function should be zero up to $t_0$, but $1$ right at $t_0$. 
So perhaps we should make the “rule” for physicists that any discontinuous function (which can only be a simplification of a real physical function) should be defined with halfway values at the discontinuities. Then any such function—with any finite number of such jumps—as well as all other physically interesting functions, are given correctly by the Fourier sum. As an exercise, we suggest that the reader determine the Fourier series for the function shown in Fig. 50–3. Since the function cannot be written in an explicit algebraic form, you will not be able to do the integrals from zero to $T$ in the usual way. The integrals are easy, however, if we separate them into two parts: the integral from zero to $T/2$ (over which $f(t) = 1$) and the integral from $T/2$ to $T$ (over which $f(t) = -1$). The result should be \begin{equation} \label{Eq:I:50:19} f(t) = \frac{4}{\pi}(\sin\omega t + \tfrac{1}{3}\sin3\omega t + \tfrac{1}{5}\sin5\omega t + \dotsb), \end{equation} where $\omega = 2\pi/T$. We thus find that our square wave (with the particular phase chosen) has only odd harmonics, and their amplitudes are in inverse proportion to their frequencies. Let us check that Eq. (50.19) does indeed give us back $f(t)$ for some value of $t$. Let us choose $t = T/4$, or $\omega t = \pi/2$. We have \begin{align} \label{Eq:I:50:20} f(t) &= \frac{4}{\pi}\biggl(\sin\frac{\pi}{2} + \frac{1}{3}\sin\frac{3\pi}{2} + \frac{1}{5}\sin\frac{5\pi}{2} + \dotsb\biggr)\\[1.5ex] \label{Eq:I:50:21} &= \frac{4}{\pi}\biggl(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7}\pm\dotsb\biggr). \end{align} The series$^1$ has the value $\pi/4$, and we find that $f(t) = 1$.
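Both checks, the value of the series at $\omega t = \pi/2$ and the halfway rule at a jump, can be watched numerically by summing Eq. (50.19) term by term. A small sketch (the function name is mine):

```python
import math

def square_wave_partial_sum(omega_t, n_terms):
    """Partial sum of Eq. (50.19): (4/pi) * sum over odd k of sin(k*omega*t)/k,
    evaluated at the phase omega*t."""
    return (4 / math.pi) * sum(math.sin(k * omega_t) / k
                               for k in range(1, 2 * n_terms, 2))
```

At $\omega t = \pi/2$ the partial sums creep up on $1$; right at the jump, $\omega t = 0$, every term vanishes, so the sum gives $0$, halfway between the values $-1$ and $+1$ on either side, exactly as the halfway rule says.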
50–5 The energy theorem

The energy in a wave is proportional to the square of its amplitude. For a wave of complex shape, the energy in one period will be proportional to $\int_0^Tf^2(t)\,dt$. We can also relate this energy to the Fourier coefficients. We write \begin{equation} \label{Eq:I:50:22} \int_0^Tf^2(t)\,dt = \int_0^T\biggl[a_0 + \sum_{n = 1}^\infty a_n\cos n\omega t + \sum_{n = 1}^\infty b_n\sin n\omega t\biggr]^2\,dt. \end{equation}
When we expand the square of the bracketed term we will get all possible cross terms, such as $a_5\cos5\omega t\cdot a_7\cos7\omega t$ and $a_5\cos5\omega t\cdot b_7\sin7\omega t$. We have shown above, however, [Eqs. (50.11) and (50.12)] that the integrals of all such terms over one period are zero. We have left only the square terms like $a_5^2\cos^2 5\omega t$. The integral of any cosine squared or sine squared over one period is equal to $T/2$, so we get \begin{align} \int_0^Tf^2(t)\,dt &= Ta_0^2 + \frac{T}{2}\, (a_1^2 + a_2^2 + \dotsb + b_1^2 + b_2^2 + \dotsb)\notag\\[.5ex] \label{Eq:I:50:23} &= Ta_0^2 + \frac{T}{2}\sum_{n = 1}^\infty(a_n^2 + b_n^2). \end{align}
This equation is called the “energy theorem,” and says that the total energy in a wave is just the sum of the energies in all of the Fourier components. For example, applying this theorem to the series (50.19), since $[f(t)]^2 = 1$ we get \begin{equation*} T = \frac{T}{2}\cdot\biggl(\frac{4}{\pi}\biggr)^2\biggl( 1 + \frac{1}{3^2} + \frac{1}{5^2} + \frac{1}{7^2} + \dotsb\biggr), \end{equation*} so we learn that the sum of the squares of the reciprocals of the odd integers is $\pi^2/8$. In a similar way, by first obtaining the Fourier series for the function $f(t)=(t-T/2)^2$ and using the energy theorem, we can prove that $1 + 1/2^4 + 1/3^4 + \dotsb$ is $\pi^4/90$, a result we needed in Chapter 45.
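Both of the sums just obtained from the energy theorem are easy to confirm with partial sums; a quick sketch:

```python
import math

# Partial sums of the two series obtained above from the energy theorem.
odd_inverse_squares = sum(1 / k**2 for k in range(1, 200001, 2))   # approaches pi^2/8
inverse_fourth_powers = sum(1 / n**4 for n in range(1, 100001))    # approaches pi^4/90
```

The first sum has a tail of order $1/(2K)$ after cutting off at $k = K$, so a couple of hundred thousand terms already give five or six figures; the second series converges much faster.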
50–6 Nonlinear responses

Finally, in the theory of harmonics there is an important phenomenon which should be remarked upon because of its practical importance—that of nonlinear effects. In all the systems that we have been considering so far, we have supposed that everything was linear, that the responses to forces, say the displacements or the accelerations, were always proportional to the forces, or that the currents in the circuits were proportional to the voltages, and so on. We now wish to consider cases where there is not a strict proportionality. We think, at the moment, of some device in which the response, which we will call $x_{\text{out}}$ at the time $t$, is determined by the input $x_{\text{in}}$ at the time $t$. For example, $x_{\text{in}}$ might be the force and $x_{\text{out}}$ might be the displacement. Or $x_{\text{in}}$ might be the current and $x_{\text{out}}$ the voltage. If the device is linear, we would have \begin{equation} \label{Eq:I:50:24} x_{\text{out}}(t) = Kx_{\text{in}}(t), \end{equation} where $K$ is a constant independent of $t$ and of $x_{\text{in}}$. Suppose, however, that the device is nearly, but not exactly, linear, so that we can write \begin{equation} \label{Eq:I:50:25} x_{\text{out}}(t) = K[x_{\text{in}}(t) + \epsilon x_{\text{in}}^2(t)], \end{equation} where $\epsilon$ is small in comparison with unity. Such linear and nonlinear responses are shown in the graphs of Fig. 50–4. Nonlinear responses have several important practical consequences. We shall discuss some of them now. First we consider what happens if we apply a pure tone at the input. We let $x_{\text{in}} = \cos\omega t$. If we plot $x_{\text{out}}$ as a function of time we get the solid curve shown in Fig. 50–5. The dashed curve gives, for comparison, the response of a linear system. We see that the output is no longer a cosine function. It is more peaked at the top and flatter at the bottom. We say that the output is distorted.
We know, however, that such a wave is no longer a pure tone, that it will have harmonics. We can find what the harmonics are. Using $x_{\text{in}} = \cos\omega t$ with Eq. (50.25), we have \begin{equation} \label{Eq:I:50:26} x_{\text{out}}(t) = K(\cos\omega t + \epsilon\cos^2\omega t). \end{equation} From the equality $\cos^2\theta = \tfrac{1}{2}(1 + \cos2\theta)$, we have \begin{equation} \label{Eq:I:50:27} x_{\text{out}}(t) = K\Bigl(\cos\omega t + \frac{\epsilon}{2} + \frac{\epsilon}{2}\cos2\omega t\Bigr). \end{equation} The output has not only a component at the fundamental frequency, that was present at the input, but also has some of its second harmonic. There has also appeared at the output a constant term $K(\epsilon/2)$, which corresponds to the shift of the average value, shown in Fig. 50–5. The process of producing a shift of the average value is called rectification. A nonlinear response will rectify and will produce harmonics of the frequencies at its input. Although the nonlinearity we assumed produced only second harmonics, nonlinearities of higher order—those which have terms like $x_{\text{in}}^3$ and $x_{\text{in}}^4$, for example—will produce harmonics higher than the second. Another effect which results from a nonlinear response is modulation. If our input function contains two (or more) pure tones, the output will have not only their harmonics, but still other frequency components. Let $x_{\text{in}} = A\cos\omega_1t + B\cos\omega_2t$, where now $\omega_1$ and $\omega_2$ are not intended to be in a harmonic relation. In addition to the linear term (which is $K$ times the input) we shall have a component in the output given by \begin{align} \label{Eq:I:50:28} x_{\text{out}} &= K\epsilon(A\cos\omega_1t + B\cos\omega_2t)^2\\[.5ex] \label{Eq:I:50:29} &= K\epsilon(A^2\cos^2\omega_1t + B^2\cos^2\omega_2t + 2AB\cos\omega_1t\cos\omega_2t). \end{align}
The first two terms in the parentheses of Eq. (50.29) are just those which gave the constant terms and second harmonic terms we found above. The last term is new. We can look at this new “cross term” $AB\cos\omega_1t\cos\omega_2t$ in two ways. First, if the two frequencies are widely different (for example, if $\omega_1$ is much greater than $\omega_2$) we can consider that the cross term represents a cosine oscillation of varying amplitude. That is, we can think of the factors in this way: \begin{equation} \label{Eq:I:50:30} AB\cos\omega_1t\cos\omega_2t = C(t)\cos\omega_1t, \end{equation} with \begin{equation} \label{Eq:I:50:31} C(t)=AB\cos\omega_2t. \end{equation} We say that the amplitude of $\cos\omega_1t$ is modulated with the frequency $\omega_2$. Alternatively, we can write the cross term in another way: \begin{equation} \label{Eq:I:50:32} AB\cos\omega_1t\cos\omega_2t = \frac{AB}{2}\,[\cos\,(\omega_1 + \omega_2)t + \cos\,(\omega_1 - \omega_2)t]. \end{equation}
We would now say that two new components have been produced, one at the sum frequency $(\omega_1 + \omega_2)$, another at the difference frequency $(\omega_1 - \omega_2)$. We have two different, but equivalent, ways of looking at the same result. In the special case that $\omega_1 \gg \omega_2$, we can relate these two different views by remarking that since $(\omega_1 + \omega_2)$ and $(\omega_1 - \omega_2)$ are near to each other we would expect to observe beats between them. But these beats have just the effect of modulating the amplitude of the average frequency $\omega_1$ by one-half the difference frequency $2\omega_2$. We see, then, why the two descriptions are equivalent. In summary, we have found that a nonlinear response produces several effects: rectification, generation of harmonics, and modulation, or the generation of components with sum and difference frequencies. We should notice that all these effects (Eq. 50.29) are proportional not only to the nonlinearity coefficient $\epsilon$, but also to the product of two amplitudes—either $A^2$, $B^2$, or $AB$. We expect these effects to be much more important for strong signals than for weak ones. The effects we have been describing have many practical applications. First, with regard to sound, it is believed that the ear is nonlinear. This is believed to account for the fact that with loud sounds we have the sensation that we hear harmonics and also sum and difference frequencies even if the sound waves contain only pure tones. The components which are used in sound-reproducing equipment—amplifiers, loudspeakers, etc.—always have some nonlinearity. They produce distortions in the sound—they generate harmonics, etc.—which were not present in the original sound. These new components are heard by the ear and are apparently objectionable.
It is for this reason that “Hi-Fi” equipment is designed to be as linear as possible. (Why the nonlinearities of the ear are not “objectionable” in the same way, or how we even know that the nonlinearity is in the loudspeaker rather than in the ear is not clear!) Nonlinearities are quite necessary, and are, in fact, intentionally made large in certain parts of radio transmitting and receiving equipment. In an AM transmitter the “voice” signal (with frequencies of some kilocycles per second) is combined with the “carrier” signal (with a frequency of some megacycles per second) in a nonlinear circuit called a modulator, to produce the modulated oscillation that is transmitted. In the receiver, the components of the received signal are fed to a nonlinear circuit which combines the sum and difference frequencies of the modulated carrier to generate again the voice signal. When we discussed the transmission of light, we assumed that the induced oscillations of charges were proportional to the electric field of the light—that the response was linear. That is indeed a very good approximation. It is only within the last few years that light sources have been devised (lasers) which produce an intensity of light strong enough so that nonlinear effects can be observed. It is now possible to generate harmonics of light frequencies. When a strong red light passes through a piece of glass, a little bit of blue light—second harmonic—comes out!
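The bookkeeping of Eqs. (50.27) and (50.29), rectification, second harmonic, and sum and difference frequencies, can all be verified numerically by projecting the output of the slightly nonlinear device onto cosines with Eq. (50.15). A sketch with illustrative numbers of my own (two tones at $5\omega$ and $2\omega$, chosen commensurate so that one period $T$ contains both):

```python
import math

def cosine_component(signal, n, T, samples=40000):
    """a_n of a sampled periodic signal, Eq. (50.15), by a midpoint sum."""
    w = 2 * math.pi / T
    dt = T / samples
    return (2 / T) * sum(signal((k + 0.5) * dt) * math.cos(n * w * (k + 0.5) * dt)
                         for k in range(samples)) * dt

T = 2 * math.pi          # omega = 1; the two tones sit at omega_1 = 5, omega_2 = 2
A, B, eps, K = 1.0, 0.7, 0.1, 1.0

def x_out(t):
    """The slightly nonlinear device of Eq. (50.25)."""
    x = A * math.cos(5 * t) + B * math.cos(2 * t)
    return K * (x + eps * x * x)
```

The projections come out as Eq. (50.29) predicts: $KA$ at $\omega_1$, $K\epsilon A^2/2$ at the second harmonic $2\omega_1$, and $K\epsilon AB$ at both the sum frequency $\omega_1 + \omega_2$ and the difference frequency $\omega_1 - \omega_2$, with nothing at frequencies the theory does not list.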
51–1 Bow waves

Although we have finished our quantitative analyses of waves, this added chapter on the subject is intended to give some appreciation, qualitatively, for various phenomena that are associated with waves, which are too complicated to analyze in detail here. Since we have been dealing with waves for several chapters, more properly the subject might be called “some of the more complex phenomena associated with waves.” The first topic to be discussed concerns the effects that are produced by a source of waves which is moving faster than the wave velocity, or the phase velocity. Let us first consider waves that have a definite velocity, like sound and light. If we have a source of sound which is moving faster than the speed of sound, then something like this happens: Suppose at a given moment a sound wave is generated from the source at point $x_1$ in Fig. 51–1; then, in the next moment, as the source moves to $x_2$, the wave from $x_1$ expands by a radius $r_1$ smaller than the distance that the source moves; and, of course, another wave starts from $x_2$. When the sound source has moved still farther, to $x_3$, and a wave is starting there, the wave from $x_2$ has now expanded to $r_2$, and the one from $x_1$ has expanded to $r_3$. Of course the thing is done continuously, not in steps, and therefore, we have a series of wave circles with a common tangent line which goes through the center of the source. We see that instead of a source generating spherical waves, as it would if it were standing still, it generates a wavefront which forms a cone in three dimensions, or a pair of lines in two dimensions. The angle of the cone is very easy to figure out. In a given amount of time the source moves a distance, say $x_3 - x_1$, proportional to $v$, the velocity of the source. In the meantime the wavefront has moved out a distance $r_3$, proportional to $c_w$, the speed of the wave.
Therefore it is clear that the half-angle of opening has a sine equal to the ratio of the speed of the waves, divided by the speed of the source, and this sine has a solution only if $c_w$ is less than $v$, or the speed of the object is faster than the speed of the wave: \begin{equation} \label{Eq:I:51:1} \sin\theta = \frac{c_w}{v}. \end{equation} Incidentally, although we implied that it is necessary to have a source of sound, it turns out, very interestingly, that once the object is moving faster than the speed of sound, it will make sound. That is, it is not necessary that it have a certain tone vibrational character. Any object moving through a medium faster than the speed at which the medium carries waves will generate waves on each side, automatically, just from the motion itself. This is simple in the case of sound, but it also occurs in the case of light. At first one might think nothing can move faster than the speed of light. However, light in glass has a phase velocity less than the speed of light in a vacuum, and it is possible to shoot a charged particle of very high energy through a block of glass such that the particle velocity is close to the speed of light in a vacuum, while the speed of light in the glass may be only $\tfrac{2}{3}$ the speed of light in the vacuum. A particle moving faster than the speed of light in the medium will produce a conical wave of light with its apex at the source, like the wave wake from a boat (which is from the same effect, as a matter of fact). By measuring the cone angle, we can determine the speed of the particle. This is used technically to determine the speeds of particles as one of the methods of determining their energy in high-energy research. The direction of the light is all that needs to be measured. This light is sometimes called Cherenkov radiation, because it was first observed by Cherenkov. How intense this light should be was analyzed theoretically by Frank and Tamm. 
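Equation (51.1) is easy to put to work. A small sketch (the refractive index $1.5$ and the $0.99c$ particle are illustrative numbers of mine, not values from the text):

```python
import math

C = 299_792_458.0   # speed of light in vacuum, m/s

def cone_half_angle(wave_speed, source_speed):
    """Half-angle of the conical wavefront, Eq. (51.1): sin(theta) = c_w / v.
    A real cone exists only when the source outruns the wave."""
    if source_speed <= wave_speed:
        raise ValueError("no cone: source is not faster than the wave")
    return math.asin(wave_speed / source_speed)

# Cherenkov-style example: light in glass of refractive index n = 1.5,
# and a charged particle moving at 0.99 of the vacuum speed of light.
theta = cone_half_angle(C / 1.5, 0.99 * C)
```

Measuring $\theta$ and inverting gives the particle speed, $v = c_w/\sin\theta$, which is the trick used in high-energy research.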
The 1958 Nobel Prize for physics was awarded jointly to all three for this work. The corresponding circumstances in the case of sound are illustrated in Fig. 51–2, which is a photograph of an object moving through a gas at a speed greater than the speed of sound. The changes in pressure produce a change in refractive index, and with a suitable optical system the edges of the waves can be made visible. We see that the object moving faster than the speed of sound does, indeed, produce a conical wave. But closer inspection reveals that the surface is actually curved. It is straight asymptotically, but it is curved near the apex, and we have now to discuss how that can be, which brings us to the second topic of this chapter. |
51–2 Shock waves

Wave speed often depends on the amplitude, and in the case of sound the speed depends upon the amplitude in the following way. An object moving through the air has to move the air out of the way, so the disturbance produced in this case is some kind of a pressure step, with the pressure higher behind the wavefront than in the undisturbed region not yet reached by the wave (running along at the normal speed, say). But the air that is left behind, after the wavefront passes, has been compressed adiabatically, and therefore the temperature is increased. Now the speed of sound increases with the temperature, so the speed in the region behind the jump is faster than in the air in front. That means that any other disturbance that is made behind this step, say by a continuous pushing of the body, or any other disturbance, will ride faster than the front, the speed increasing with higher pressure. Figure 51–3 illustrates the situation, with some little bumps of pressure added to the pressure contour to aid visualization. We see that the higher pressure regions at the rear overtake the front as time goes on, until ultimately the compressional wave develops a sharp front. If the strength is very high, “ultimately” means right away; if it is rather weak, it takes a long time; it may be, in fact, that the sound is spreading and dying out before it has time to do this. The sounds we make in talking are extremely weak relative to the atmospheric pressure—only $1$ part in a million or so. But for pressure changes of the order of $1$ atmosphere, the wave velocity increases by about twenty percent, and the wavefront sharpens up at a correspondingly high rate. In nature nothing happens infinitely rapidly, presumably, and what we call a “sharp” front has, actually, a very slight thickness; it is not infinitely steep.
The distances over which it is varying are of the order of one mean free path, in which the theory of the wave equation begins to fail because we did not consider the structure of the gas. Now, referring again to Fig. 51–2, we see that the curvature can be understood if we appreciate that the pressures near the apex are higher than they are farther back, and so the angle $\theta$ is greater. That is, the curve is the result of the fact that the speed depends upon the strength of the wave. Therefore the wave from an atomic bomb explosion travels much faster than the speed of sound for a while, until it gets so far out that it is weakened to such an extent from spreading that the pressure bump is small compared with atmospheric pressure. The speed of the bump then approaches the speed of sound in the gas into which it is going. (Incidentally, it always turns out that the speed of the shock is higher than the speed of sound in the gas ahead, but is lower than the speed of sound in the gas behind. That is, impulses from the back will arrive at the front, but the front rides into the medium in which it is going faster than the normal speed of signals. So one cannot tell, acoustically, that the shock is coming until it is too late. The light from the bomb arrives first, but one cannot tell that the shock is coming until it arrives, because there is no sound signal coming ahead of it.) This is a very interesting phenomenon, this piling up of waves, and the main point on which it depends is that after a wave is present, the speed of the resulting wave should be higher. Another example of the same phenomenon is the following. Consider water flowing in a long channel with finite width and finite depth. If a piston, or a wall across the channel, is moved along the channel fast enough, water piles up, like snow before a snow plow. Now suppose the situation is as shown in Fig. 51–4, with a sudden step in water height somewhere in the channel. 
It can be demonstrated that long waves in a channel travel faster in deeper water than they do in shallow water. Therefore any new bumps or irregularities in energy supplied by the piston run off forward and pile up at the front. Again, ultimately what we have is just water with a sharp front, theoretically. However, as Fig. 51–4 shows, there are complications. Pictured is a wave coming up a channel; the piston is at the far right end of the channel. At first it might have appeared like a well-behaved wave, as one might expect, but farther along the channel, it has become sharper and sharper until the events pictured occurred. There is a terrible churning at the surface, as the pieces of water fall down, but it is essentially a very sharp rise with no disturbance of the water ahead. Actually water is much more complicated than sound. However, just to illustrate a point, we will try to analyze the speed of such a so-called bore, in a channel. The point here is not that this is of any basic importance for our purposes—it is not a great generalization—it is only to illustrate that the laws of mechanics that we already know are capable of explaining the phenomenon. Imagine, for a moment, that the water does look something like Fig. 51–5(a), that water at the higher height $h_2$ is moving with a velocity $v$, and that the front is moving with velocity $u$ into undisturbed water which is at height $h_1$. We would like to determine the speed at which the front moves. In a time $\Delta t$ a vertical plane initially at $x_1$ moves a distance $v\,\Delta t$ to $x_2$, while the front of the wave has moved $u\,\Delta t$. Now we apply the equations of conservation of matter and momentum. First, the former: Per unit channel width, we see that the amount $h_2v\,\Delta t$ of matter that has moved past $x_1$ (shown shaded) is compensated by the other shaded region, which amounts to $(h_2 - h_1)u\,\Delta t$. So, dividing by $\Delta t$, $vh_2 = u(h_2 - h_1)$. 
That does not yet give us enough, because although we have $h_2$ and $h_1$, we do not know either $u$ or $v$; we are trying to get both of them. Now the next step is to use conservation of momentum. We have not discussed the problems of water pressure, or anything in hydrodynamics, but it is clear anyway that the pressure of water at a given depth is just enough to hold up the column of water above it. Therefore the pressure of water is equal to $\rho$, the density of water, times $g$, times the depth below the surface. Since the pressure increases linearly with depth, the average pressure over the plane at $x_1$, say, is $\tfrac{1}{2}\rho gh_2$, which is also the average force per unit width and per unit height pushing the plane toward $x_2$. So we multiply by another $h_2$ to get the total force which is acting on the water pushing from the left. On the other hand, there is pressure in the water on the right also, exerting an opposite force on the region in question, which is, by the same kind of analysis, $\tfrac{1}{2}\rho gh_1^2$. Now we must balance the forces against the rate of change of the momentum. Thus we have to figure out how much more momentum there is in situation (b) in Fig. 51–5 than there was in (a). We see that the additional mass that has acquired the speed $v$ is just $\rho h_2u\,\Delta t - \rho h_2v\,\Delta t$ (per unit width), and multiplying this by $v$ gives the additional momentum to be equated to the impulse $F\,\Delta t$: \begin{equation*} (\rho h_2u\,\Delta t - \rho h_2v\,\Delta t)v = (\tfrac{1}{2}\rho gh_2^2 - \tfrac{1}{2}\rho gh_1^2)\,\Delta t. \end{equation*} If we eliminate $v$ from this equation by substituting $vh_2 = u(h_2 - h_1)$, already found, and simplify, we get finally that $u^2 = gh_2(h_1 + h_2)/2h_1$. If the height difference is very small, so that $h_1$ and $h_2$ are nearly equal, this says that the velocity${} = \sqrt{gh}$. 
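The little derivation above is easy to exercise with numbers; a sketch (the heights and the value of $g$ are illustrative choices of mine):

```python
import math

g = 9.81  # m/s^2

def bore_speeds(h1, h2):
    """Front speed u and water speed v behind the bore, from the mass and
    momentum balances above: u^2 = g*h2*(h1 + h2)/(2*h1) and v*h2 = u*(h2 - h1)."""
    u = math.sqrt(g * h2 * (h1 + h2) / (2 * h1))
    v = u * (h2 - h1) / h2
    return u, v
```

One can check directly that the computed $u$ and $v$ satisfy the momentum balance $h_2(u - v)v = \tfrac{1}{2}g(h_2^2 - h_1^2)$, and that for $h_2$ barely above $h_1$ the front speed collapses to $\sqrt{gh}$, as stated.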
As we will see later, that is only true provided the wavelength of the wave is longer than the depth of the channel. We could also do the analogous thing for sound waves—including the conservation of internal energy, not the conservation of entropy, because the shock is irreversible. In fact, if one checks the conservation of energy in the bore problem, one finds that energy is not conserved. If the height difference is small, it is almost perfectly conserved, but as soon as the height difference becomes very appreciable, there is a net loss of energy. This is manifested as the falling water and the churning shown in Fig. 51–4. In shock waves there is a corresponding apparent loss of energy, from the point of view of adiabatic reactions. The energy in the sound wave, behind the shock, goes into heating of the gas after the shock passes, corresponding to churning of the water in the bore. In working it out, three equations for the sound case turn out to be necessary for solution, and the temperature behind the shock is not the same as the temperature in front, as we have seen. If we try to make a bore that is upside down ($h_2 < h_1$), then we find that the energy loss per second is negative. Since energy is not available from anywhere, that bore cannot then maintain itself; it is unstable. If we were to start a wave of that sort, it would flatten out, because the speed dependence on height that resulted in sharpening in the case we discussed would now have the opposite effect.
51–3 Waves in solids

The next kind of waves to be discussed are the more complicated waves in solids. We have already discussed sound waves in gas and in liquid, and there is a direct analog to a sound wave in a solid. If a sudden push is applied to a solid, it is compressed. It resists the compression, and a wave analogous to sound is started. However there is another kind of wave that is possible in a solid, and which is not possible in a fluid. If a solid is distorted by pushing it sideways (called shearing), then it tries to pull itself back. That is by definition what distinguishes a solid from a liquid: if we distort a liquid (internally), hold it a minute so that it calms down, and then let go, it will stay that way, but if we take a solid and push it, like shearing a piece of “Jello,” and let it go, it flies back and starts a shear wave, travelling in the same way the compressions travel. In all cases, the shear wave speed is less than the speed of longitudinal waves. The shear waves are somewhat more analogous, so far as their polarizations are concerned, to light waves. Sound has no polarization, it is just a pressure wave. Light has a characteristic orientation perpendicular to its direction of travel. In a solid, the waves are of both kinds. First, there is a compression wave, analogous to sound, that runs at one speed. If the solid is not crystalline, then a shear wave polarized in any direction will propagate at a characteristic speed. (Of course all solids are crystalline, but if we use a block made up of microcrystals of all orientations, the crystal anisotropies average out.) Another interesting question concerning sound waves is the following: What happens if the wavelength in a solid gets shorter, and shorter, and shorter? How short can it get?
It is interesting that it cannot get any shorter than the space between the atoms, because if there is supposed to be a wave in which one point goes up and the next down, etc., the shortest possible wavelength is clearly the atom spacing. In terms of the modes of oscillation, we say that there are longitudinal modes, and transverse modes, long wave modes, short wave modes. As we consider wavelengths comparable to the spacing between the atoms, then the speeds are no longer constant; there is a dispersion effect where the velocity is not independent of the wave number. But, ultimately, the highest mode of transverse waves would be that in which every atom is doing the opposite of neighboring atoms. Now from the point of view of atoms, the situation is like the two pendulums that we were talking about, for which there are two modes, one in which they both go together, and the other in which they go apart. It is possible to analyze the solid waves another way, in terms of a system of coupled harmonic oscillators, like an enormous number of pendulums, with the highest mode such that they oscillate oppositely, and lower modes with different relationships of the timing. The shortest wavelengths are so short that they are not usually available technically. However they are of great interest because, in the theory of thermodynamics of a solid, the heat properties of a solid, for example specific heats, can be analyzed in terms of the properties of the short sound waves. Going to the extreme of sound waves of ever shorter wavelength, one necessarily comes to the individual motions of the atoms; the two things are the same ultimately. A very interesting example of sound waves in a solid, both longitudinal and transverse, are the waves that are in the solid earth. Who makes the noises we do not know, but inside the earth, from time to time, there are earthquakes—some rock slides past some other rock. That is like a little noise. 
So waves like sound waves start out from such a source very much longer in wavelength than one usually considers in sound waves, but still they are sound waves, and they travel around in the earth. The earth is not homogeneous, however, and the properties, of pressure, density, compressibility, and so on, change with depth, and therefore the speed varies with depth. Then the waves do not travel in straight lines—there is a kind of index of refraction and they go in curves. The longitudinal waves and the transverse waves have different speeds, so there are different solutions for the different speeds. Therefore if we place a seismograph at some location and watch the way the thing jiggles after there has been an earthquake somewhere else, then we do not just get an irregular jiggling. We might get a jiggling, and a quieting down, and then another jiggling—what happens depends upon the location. If it were close enough, we would first receive longitudinal waves from the disturbance, and then, a few moments later, transverse waves, because they travel more slowly. By measuring the time difference between the two, we can tell how far away the earthquake is, if we know enough about the speeds and composition of the interior regions involved. An example of the behavior pattern of waves in the earth is shown in Fig. 51–6. The two kinds of waves are represented by different symbols. If there were an earthquake at the place marked “source,” the transverse waves and longitudinal waves would arrive at different times at the station by the most direct routes, and there would also be reflections at discontinuities, resulting in other paths and times. It turns out that there is a core in the earth which does not carry transverse waves. If the station is opposite the source, transverse waves still arrive, but the timing is not right. 
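The ranging trick just described, distance from the difference of the two arrival times, takes only a line of algebra; a sketch (the wave speeds are illustrative round numbers of mine, not values from the text):

```python
def quake_distance(sp_delay, v_p, v_s):
    """Distance to the source from the difference in arrival times of the
    faster longitudinal (P) and slower transverse (S) waves:
    sp_delay = d/v_s - d/v_p, so d = sp_delay / (1/v_s - 1/v_p)."""
    return sp_delay / (1.0 / v_s - 1.0 / v_p)

# Example with assumed speeds: longitudinal waves at 8 km/s, transverse at 4.5 km/s.
```

With these assumed speeds, a delay of about $97$ seconds between the longitudinal and transverse arrivals would put the source roughly $1000$ kilometers away.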
What happens is that the transverse wave comes to the core, and whenever the transverse waves come to a surface which is oblique, between two materials, two new waves are generated, one transverse and one longitudinal. But inside the core of the earth, a transverse wave is not propagated (or at least, there is no evidence for it, only for a longitudinal wave); it comes out again in both forms and comes to the station. It is from the behavior of these earthquake waves that it has been determined that transverse waves cannot be propagated within the inner circle. This means that the center of the earth is liquid in the sense that it cannot propagate transverse waves. The only way we know what is inside the earth is by studying earthquakes. So, by using a large number of observations of many earthquakes at different stations, the details have been worked out—the speed, the curves, etc. are all known. We know what the speeds of various kinds of waves are at every depth. Knowing that, therefore, it is possible to figure out what the normal modes of the earth are, because we know the speed of propagation of sound waves—in other words, the elastic properties of both kinds of waves at every depth. Suppose the earth were distorted into an ellipsoid and let go. It is just a matter of superposing waves travelling around in the ellipsoid to determine the period and shapes in a free mode. We have figured out that if there is a disturbance, there are a lot of modes, from the lowest, which is ellipsoidal, to higher modes with more structure. The Chilean earthquake of May 1960 made a loud enough “noise” that the signals went around the earth many times, and new seismographs of great delicacy were made just in time to determine the frequencies of the fundamental modes of the earth and to compare them with the values that were calculated from the theory of sound with the known velocities, as measured from the independent earthquakes. 
The result of this experiment is illustrated in Fig. 51–7, which is a plot of the strength of the signal versus the frequency of its oscillation (a Fourier analysis). Note that at certain particular frequencies there is much more being received than at other frequencies; there are very definite maxima. These are the natural frequencies of the earth, because these are the main frequencies at which the earth can oscillate. In other words, if the entire motion of the earth is made up of many different modes, we would expect to obtain, for each station, irregular bumpings which indicate a superposition of many frequencies. If we analyze this in terms of frequencies, we should be able to find the characteristic frequencies of the earth. The vertical dark lines in the figure are the calculated frequencies, and we find a remarkable agreement, an agreement due to the fact that the theory of sound is right for the inside of the earth. A very curious point is revealed in Fig. 51–8, which shows a very careful measurement, with better resolution of the lowest mode, the ellipsoidal mode of the earth. Note that it is not a single maximum, but a double one, $54.7$ minutes and $53.1$ minutes—slightly different. The reason for the two different frequencies was not known at the time that it was measured, although it may have been found in the meantime. There are at least two possible explanations: One would be that there may be asymmetry in the earth’s distribution, which would result in two similar modes. Another possibility, which is even more interesting, is this: Imagine the waves going around the earth in two directions from the source. The speeds will not be equal because of effects of the rotation of the earth in the equations of motion, which have not been taken into account in making the analysis. Motion in a rotating system is modified by Coriolis forces, and these may cause the observed splitting. 
Regarding the method by which these quakes have been analyzed, what is obtained on the seismograph is not a curve of amplitude as a function of frequency, but displacement as a function of time, always a very irregular tracing. To find the amount of all the different sine waves for all different frequencies, we know that the trick is to multiply the data by a sine wave of a given frequency and integrate, i.e., average it, and in the average all other frequencies disappear. The figures were thus plots of the integrals found when the data were multiplied by sine waves of different cycles per minute, and integrated.
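The multiply-and-integrate trick can be demonstrated on synthetic data (the record below is made up for illustration; it is not a real seismograph trace): averaging the data against a sine and a cosine of a trial frequency leaves only the component at that frequency.

```python
import math

# A synthetic "seismograph" record: two sine components and nothing else.
N = 4000
dt = 0.01
t = [i * dt for i in range(N)]
signal = [3.0 * math.sin(2 * math.pi * 0.9 * ti)
          + 1.5 * math.sin(2 * math.pi * 2.3 * ti) for ti in t]

def amplitude(f):
    """Average the data against sin and cos at trial frequency f;
    all other frequencies average away, and combining the two
    quadratures recovers the amplitude at f."""
    s = sum(x * math.sin(2 * math.pi * f * ti) for x, ti in zip(signal, t))
    c = sum(x * math.cos(2 * math.pi * f * ti) for x, ti in zip(signal, t))
    return 2.0 * math.hypot(s, c) / N

# The components present come out at full strength; an absent
# frequency averages to essentially zero.
print(amplitude(0.9), amplitude(2.3), amplitude(1.6))
```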
51–4 Surface waves

Now, the next waves of interest, that are easily seen by everyone and which are usually used as an example of waves in elementary courses, are water waves. As we shall soon see, they are the worst possible example, because they are in no respect like sound and light; they have all the complications that waves can have. Let us start with long water waves in deep water. If the ocean is considered infinitely deep and a disturbance is made on the surface, waves are generated. All kinds of irregular motions occur, but the sinusoidal type motion, with a very small disturbance, might look like the common smooth ocean waves coming in toward the shore. Now with such a wave, the water, of course, on the average, is standing still, but the wave moves. What is the motion, is it transverse or longitudinal? It must be neither; it is not transverse, nor is it longitudinal. Although the water at a given place is alternately trough or hill, it cannot simply be moving up and down, by the conservation of water. That is, if it goes down, where is the water going to go? The water is essentially incompressible. The speed of compressional waves—that is, sound—in the water is much, much higher, and we are not considering that now. Since water is incompressible on this scale, as a hill comes down the water must move away from the region. What actually happens is that particles of water near the surface move approximately in circles. When smooth swells are coming, a person floating in a tire can look at a nearby object and see it going in a circle. So it is a mixture of longitudinal and transverse, to add to the confusion. At greater depths in the water the motions are smaller circles until, reasonably far down, there is nothing left of the motion (Fig. 51–9).
To find the velocity of such waves is an interesting problem: it must be some combination of the density of the water, the acceleration of gravity, which is the restoring force that makes the waves, and possibly of the wavelength and of the depth. If we take the case where the depth goes to infinity, it will no longer depend on the depth. Whatever formula we are going to get for the velocity of the phases of the waves must combine the various factors to make the proper dimensions, and if we try this in various ways, we find only one way to combine the density, $g$, and $\lambda$ in order to make a velocity, namely, $\sqrt{g\lambda}$, which does not include the density at all. Actually, this formula for the phase velocity is not exactly right, but a complete analysis of the dynamics, which we will not go into, shows that the factors are as we have them, except for $\sqrt{2\pi}$: \begin{equation*} v_{\text{phase}} = \sqrt{g\lambda/2\pi}\text{ (for gravity waves)}. \end{equation*} It is interesting that the long waves go faster than the short waves. Thus if a boat makes waves far out, because there is some sports-car driver in a motorboat travelling by, then after a while the waves come to shore with slow sloshings at first and then more and more rapid sloshings, because the first waves that come are long. The waves get shorter and shorter as the time goes on, because the velocities go as the square root of the wavelength. One may object, “That is not right, we must look at the group velocity in order to figure it out!” Of course that is true. The formula for the phase velocity does not tell us what is going to arrive first; what tells us is the group velocity. So we have to work out the group velocity, and it is left as a problem to show it to be one-half of the phase velocity, assuming that the velocity goes as the square root of the wavelength, which is all that is needed. The group velocity also goes as the square root of the wavelength. 
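Both statements—the phase velocity $\sqrt{g\lambda/2\pi}$ and the group velocity being half of it—are easy to check numerically by writing the same relation as $\omega(k)=\sqrt{gk}$ with $k=2\pi/\lambda$ and differentiating:

```python
import math

g = 9.8  # m/s^2

def v_phase(lam):
    """Deep-water gravity waves: v_phase = sqrt(g*lambda/(2*pi))."""
    return math.sqrt(g * lam / (2.0 * math.pi))

def v_group(lam, h=1e-6):
    """Group velocity d(omega)/dk for omega(k) = sqrt(g*k),
    k = 2*pi/lambda, by a centered numerical derivative."""
    k = 2.0 * math.pi / lam
    omega = lambda q: math.sqrt(g * q)
    return (omega(k + h) - omega(k - h)) / (2.0 * h)

lam = 100.0                          # a 100-m swell
print(v_phase(lam))                  # about 12.5 m/s; longer waves go faster
print(v_group(lam) / v_phase(lam))   # about 0.5
```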
How can the group velocity go half as fast as the phase? If one looks at the bunch of waves that are made by a boat travelling along, following a particular crest, he finds that it moves forward in the group and gradually gets weaker and dies out in the front, and mystically and mysteriously a weak one in the back works its way forward and gets stronger. In short, the waves are moving through the group while the group is only moving at half the speed that the waves are moving. Because the group velocities and phase velocities are not equal, then the waves that are produced by an object moving through are no longer simply a cone, but it is much more interesting. We can see that in Fig. 51–10, which shows the waves produced by an object moving through the water. Note that it is quite different than what we would have for sound, in which the velocity is independent of wavelength, where we would have wavefronts only along the cone, travelling outward. Instead of that, we have waves in the back with fronts moving parallel to the motion of the boat, and then we have little waves on the sides at other angles. This entire pattern of waves can, with ingenuity, be analyzed by knowing only this: that the phase velocity is proportional to the square root of the wavelength. The trick is that the pattern of waves is stationary relative to the (constant-velocity) boat; any other pattern would get lost from the boat. The water waves that we have been considering so far were long waves in which the force of restoration is due to gravitation. But when waves get very short in the water, the main restoring force is capillary attraction, i.e., the energy of the surface, the surface tension. For surface tension waves, it turns out that the phase velocity is \begin{equation*} v_{\text{phase}} = \sqrt{2\pi T/\lambda\rho}\text{ (for ripples)}, \end{equation*} where $T$ is the surface tension and $\rho$ the density. 
It is the exact opposite: the phase velocity is higher, the shorter the wavelength, when the wavelength gets very small. When we have both gravity and capillary action, as we always do, we get the combination of these two together: \begin{equation*} v_{\text{phase}} = \sqrt{Tk/\rho + g/k}, \end{equation*} where $k = 2\pi/\lambda$ is the wave number. So the velocity of the waves of water is really quite complicated. The phase velocity as a function of the wavelength is shown in Fig. 51–11; for very short waves it is fast, for very long waves it is fast, and there is a minimum speed at which the waves can go. The group velocity can be calculated from the formula: it goes to $\tfrac{3}{2}$ the phase velocity for ripples and $\tfrac{1}{2}$ the phase velocity for gravity waves. To the left of the minimum the group velocity is higher than the phase velocity; to the right, the group velocity is less than the phase velocity. There are a number of interesting phenomena associated with these facts. In the first place, since the group velocity is increasing so rapidly as the wavelength goes down, if we make a disturbance there will be a slowest end of the disturbance going at the minimum speed with the corresponding wavelength, and then in front, going at higher speed, will be a short wave and a very long wave. It is very hard to see the long ones, but it is easy to see the short ones in a water tank. So we see that the ripples often used to illustrate simple waves are quite interesting and complicated; they do not have a sharp wavefront at all, as is the case for simple waves like sound and light. The main wave has little ripples which run out ahead. A sharp disturbance in the water does not produce a sharp wave because of the dispersion. First come the very fine waves. Incidentally, if an object moves through the water at a certain speed, a rather complicated pattern results, because all the different waves are going at different speeds. 
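The minimum of the combined formula falls where the two terms under the square root are equal, i.e., at $k=\sqrt{\rho g/T}$. With standard handbook values for clean water ($T\approx0.072$ N/m and $\rho=1000$ kg/m³, assumed here, not taken from the text), the slowest possible water wave comes out near 23 cm/s at a wavelength of about 1.7 cm:

```python
import math

g, T, rho = 9.8, 0.072, 1000.0  # SI units; T and rho for clean water (assumed)

def v_phase(k):
    """Combined capillary-gravity phase velocity: sqrt(T*k/rho + g/k)."""
    return math.sqrt(T * k / rho + g / k)

# The two terms balance at k = sqrt(rho*g/T); that is the minimum speed.
k_min = math.sqrt(rho * g / T)
lam_min = 2.0 * math.pi / k_min
v_min = v_phase(k_min)

# Either side of the minimum the waves travel faster, as in Fig. 51-11.
assert v_phase(k_min / 5.0) > v_min and v_phase(5.0 * k_min) > v_min

print(v_min, lam_min)            # about 0.23 m/s at about 0.017 m
```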
One can demonstrate this with a tray of water and see that the fastest ones are the fine capillary waves. There are slowest waves, of a certain kind, which go behind. By inclining the bottom, one sees that where the depth is lower, the speed is lower. If a wave comes in at an angle to the line of maximum slope, it bends and tends to follow that line. In this way one can show various things, and we conclude that waves are more complicated in water than in air. The speed of long waves in water with circulational motions is slower when the depth is less, faster in deep water. Thus as water comes toward a beach where the depth lessens, the waves go slower. But where the water is deeper, the waves are faster, so we get the effects of shock waves. This time, since the wave is not so simple, the shocks are much more contorted, and the wave over-curves itself, in the familiar way shown in Fig. 51–12. This is what happens when waves come into the shore, and the real complexities in nature are well revealed in such a circumstance. No one has yet been able to figure out what shape the wave should take as it breaks. It is easy enough when the waves are small, but when one gets large and breaks, then it is much more complicated. An interesting feature about capillary waves can be seen in the disturbances made by an object moving through the water. From the point of view of the object itself, the water is flowing past, and the waves which ultimately sit around it are always the waves which have just the right speed to stay still with the object in the water. Similarly, around an object in a stream, with the stream flowing by, the pattern of waves is stationary, and at just the right wavelengths to go at the same speed as the water going by. But if the group velocity is less than the phase velocity, then the disturbances propagate out backwards in the stream, because the group velocity is not quite enough to keep up with the stream. 
If the group velocity is faster than the velocity of the phase, the pattern of waves will appear in front of the object. If one looks closely at objects in a stream, one can see that there are little ripples in front and long “slurps” in the back. Another interesting feature of this sort can be observed in pouring liquids. If milk is poured fast enough out of a bottle, for instance, a large number of lines can be seen crossing both ways in the outgoing stream. They are waves starting from the disturbance at the edges and running out, much like the waves about an object in a stream. There are effects from both sides which produce the crossed pattern. We have investigated some of the interesting properties of waves and the various complications of dependence of phase velocity on wavelength, the speed of the waves on depth, and so forth, that produce the really complex, and therefore interesting, phenomena of nature.
52–1 Symmetry operations

The subject of this chapter is what we may call symmetry in physical laws. We have already discussed certain features of symmetry in physical laws in connection with vector analysis (Chapter 11), the theory of relativity (Chapter 16), and rotation (Chapter 20). Why should we be concerned with symmetry? In the first place, symmetry is fascinating to the human mind, and everyone likes objects or patterns that are in some way symmetrical. It is an interesting fact that nature often exhibits certain kinds of symmetry in the objects we find in the world around us. Perhaps the most symmetrical object imaginable is a sphere, and nature is full of spheres—stars, planets, water droplets in clouds. The crystals found in rocks exhibit many different kinds of symmetry, the study of which tells us some important things about the structure of solids. Even the animal and vegetable worlds show some degree of symmetry, although the symmetry of a flower or of a bee is not as perfect or as fundamental as is that of a crystal. But our main concern here is not with the fact that the objects of nature are often symmetrical. Rather, we wish to examine some of the even more remarkable symmetries of the universe—the symmetries that exist in the basic laws themselves which govern the operation of the physical world. First, what is symmetry? How can a physical law be “symmetrical”? The problem of defining symmetry is an interesting one and we have already noted that Weyl gave a good definition, the substance of which is that a thing is symmetrical if there is something we can do to it so that after we have done it, it looks the same as it did before. For example, a symmetrical vase is of such a kind that if we reflect or turn it, it will look the same as it did before. The question we wish to consider here is what we can do to physical phenomena, or to a physical situation in an experiment, and yet leave the result the same.
A list of the known operations under which various physical phenomena remain invariant is shown in Table 52–1.
52–2 Symmetry in space and time

The first thing we might try to do, for example, is to translate the phenomenon in space. If we do an experiment in a certain region, and then build another apparatus at another place in space (or move the original one over) then, whatever went on in one apparatus, in a certain order in time, will occur in the same way if we have arranged the same condition, with all due attention to the restrictions that we mentioned before: that all of those features of the environment which make it not behave the same way have also been moved over—we talked about how to define how much we should include in those circumstances, and we shall not go into those details again. In the same way, we also believe today that displacement in time will have no effect on physical laws. (That is, as far as we know today—all of these things are as far as we know today!) That means that if we build a certain apparatus and start it at a certain time, say on Thursday at 10:00 a.m., and then build the same apparatus and start it, say, three days later in the same condition, the two apparatuses will go through the same motions in exactly the same way as a function of time no matter what the starting time, provided again, of course, that the relevant features of the environment are also modified appropriately in time. That symmetry means, of course, that if one bought General Motors stock three months ago, the same thing would happen to it if he bought it now! We have to watch out for geographical differences too, for there are, of course, variations in the characteristics of the earth’s surface. So, for example, if we measure the magnetic field in a certain region and move the apparatus to some other region, it may not work in precisely the same way because the magnetic field is different, but we say that is because the magnetic field is associated with the earth.
We can imagine that if we move the whole earth and the equipment, it would make no difference in the operation of the apparatus. Another thing that we discussed in considerable detail was rotation in space: if we turn an apparatus at an angle it works just as well, provided we turn everything else that is relevant along with it. In fact, we discussed the problem of symmetry under rotation in space in some detail in Chapter 11, and we invented a mathematical system called vector analysis to handle it as neatly as possible. On a more advanced level we had another symmetry—the symmetry under uniform velocity in a straight line. That is to say—a rather remarkable effect—that if we have a piece of apparatus working a certain way and then take the same apparatus and put it in a car, and move the whole car, plus all the relevant surroundings, at a uniform velocity in a straight line, then so far as the phenomena inside the car are concerned there is no difference: all the laws of physics appear the same. We even know how to express this more technically, and that is that the mathematical equations of the physical laws must be unchanged under a Lorentz transformation. As a matter of fact, it was a study of the relativity problem that concentrated physicists’ attention most sharply on symmetry in physical laws. Now the above-mentioned symmetries have all been of a geometrical nature, time and space being more or less the same, but there are other symmetries of a different kind. For example, there is a symmetry which describes the fact that we can replace one atom by another of the same kind; to put it differently, there are atoms of the same kind. It is possible to find groups of atoms such that if we change a pair around, it makes no difference—the atoms are identical. Whatever one atom of oxygen of a certain type will do, another atom of oxygen of that type will do. 
One may say, “That is ridiculous, that is the definition of equal types!” That may be merely the definition, but then we still do not know whether there are any “atoms of the same type”; the fact is that there are many, many atoms of the same type. Thus it does mean something to say that it makes no difference if we replace one atom by another of the same type. The so-called elementary particles of which the atoms are made are also identical particles in the above sense—all electrons are the same; all protons are the same; all positive pions are the same; and so on. After such a long list of things that can be done without changing the phenomena, one might think we could do practically anything; so let us give some examples to the contrary, just to see the difference. Suppose that we ask: “Are the physical laws symmetrical under a change of scale?” Suppose we build a certain piece of apparatus, and then build another apparatus five times bigger in every part, will it work exactly the same way? The answer is, in this case, no! The wavelength of light emitted, for example, by the atoms inside one box of sodium atoms and the wavelength of light emitted by a gas of sodium atoms five times in volume is not five times longer, but is in fact exactly the same as the other. So the ratio of the wavelength to the size of the emitter will change. Another example: we see in the newspaper, every once in a while pictures of a great cathedral made with little matchsticks—a tremendous work of art by some retired fellow who keeps gluing matchsticks together. It is much more elaborate and wonderful than any real cathedral. If we imagine that this wooden cathedral were actually built on the scale of a real cathedral, we see where the trouble is; it would not last—the whole thing would collapse because of the fact that scaled-up matchsticks are just not strong enough. 
“Yes,” one might say, “but we also know that when there is an influence from the outside, it also must be changed in proportion!” We are talking about the ability of the object to withstand gravitation. So what we should do is first to take the model cathedral of real matchsticks and the real earth, and then we know it is stable. Then we should take the larger cathedral and take a bigger earth. But then it is even worse, because the gravitation is increased still more! Today, of course, we understand the fact that phenomena depend on the scale on the grounds that matter is atomic in nature, and certainly if we built an apparatus that was so small there were only five atoms in it, it would clearly be something we could not scale up and down arbitrarily. The scale of an individual atom is not at all arbitrary—it is quite definite. The fact that the laws of physics are not unchanged under a change of scale was discovered by Galileo. He realized that the strengths of materials were not in exactly the right proportion to their sizes, and he illustrated this property that we were just discussing, about the cathedral of matchsticks, by drawing two bones, the bone of one dog, in the right proportion for holding up his weight, and the imaginary bone of a “super dog” that would be, say, ten or a hundred times bigger—that bone was a big, solid thing with quite different proportions. We do not know whether he ever carried the argument quite to the conclusion that the laws of nature must have a definite scale, but he was so impressed with this discovery that he considered it to be as important as the discovery of the laws of motion, because he published them both in the same volume, called “On Two New Sciences.” Another example in which the laws are not symmetrical, that we know quite well, is this: a system in rotation at a uniform angular velocity does not give the same apparent laws as one that is not rotating. 
If we make an experiment and then put everything in a space ship and have the space ship spinning in empty space, all alone at a constant angular velocity, the apparatus will not work the same way because, as we know, things inside the equipment will be thrown to the outside, and so on, by the centrifugal or Coriolis forces, etc. In fact, we can tell that the earth is rotating by using a so-called Foucault pendulum, without looking outside. Next we mention a very interesting symmetry which is obviously false, i.e., reversibility in time. The physical laws apparently cannot be reversible in time, because, as we know, all obvious phenomena are irreversible on a large scale: “The moving finger writes, and having writ, moves on.” So far as we can tell, this irreversibility is due to the very large number of particles involved, and if we could see the individual molecules, we would not be able to discern whether the machinery was working forward or backwards. To make it more precise: we build a small apparatus in which we know what all the atoms are doing, in which we can watch them jiggling. Now we build another apparatus like it, but which starts its motion in the final condition of the other one, with all the velocities precisely reversed. It will then go through the same motions, but exactly in reverse. Putting it another way: if we take a motion picture, with sufficient detail, of all the inner works of a piece of material and shine it on a screen and run it backwards, no physicist will be able to say, “That is against the laws of physics, that is doing something wrong!” If we do not see all the details, of course, the situation will be perfectly clear. 
If we see the egg splattering on the sidewalk and the shell cracking open, and so on, then we will surely say, “That is irreversible, because if we run the moving picture backwards the egg will all collect together and the shell will go back together, and that is obviously ridiculous!” But if we look at the individual atoms themselves, the laws look completely reversible. This is, of course, a much harder discovery to have made, but apparently it is true that the fundamental physical laws, on a microscopic and fundamental level, are completely reversible in time!
52–3 Symmetry and conservation laws

The symmetries of the physical laws are very interesting at this level, but they turn out, in the end, to be even more interesting and exciting when we come to quantum mechanics. For a reason which we cannot make clear at the level of the present discussion—a fact that most physicists still find somewhat staggering, a most profound and beautiful thing, is that, in quantum mechanics, for each of the rules of symmetry there is a corresponding conservation law; there is a definite connection between the laws of conservation and the symmetries of physical laws. We can only state this at present, without any attempt at explanation. The fact, for example, that the laws are symmetrical for translation in space when we add the principles of quantum mechanics, turns out to mean that momentum is conserved. That the laws are symmetrical under translation in time means, in quantum mechanics, that energy is conserved. Invariance under rotation through a fixed angle in space corresponds to the conservation of angular momentum. These connections are very interesting and beautiful things, among the most beautiful and profound things in physics. Incidentally, there are a number of symmetries which appear in quantum mechanics which have no classical analog, which have no method of description in classical physics. One of these is as follows: If $\psi$ is the amplitude for some process or other, we know that the absolute square of $\psi$ is the probability that the process will occur. Now if someone else were to make his calculations, not with this $\psi$, but with a $\psi'$ which differs merely by a change in phase (let $\Delta$ be some constant, and multiply $e^{i\Delta}$ times the old $\psi$), the absolute square of $\psi'$, which is the probability of the event, is then equal to the absolute square of $\psi$: \begin{equation} \label{Eq:I:52:1} \psi' = \psi e^{i\Delta};\quad \abs{\psi'}^2 = \abs{\psi}^2.
\end{equation} Therefore the physical laws are unchanged if the phase of the wave function is shifted by an arbitrary constant. That is another symmetry. Physical laws must be of such a nature that a shift in the quantum-mechanical phase makes no difference. As we have just mentioned, in quantum mechanics there is a conservation law for every symmetry. The conservation law which is connected with the quantum-mechanical phase seems to be the conservation of electrical charge. This is altogether a very interesting business!
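The phase symmetry of Eq. (52.1) can be verified in two lines with complex numbers; the particular amplitude and phase below are arbitrary:

```python
import cmath

psi = 0.6 + 0.8j      # an arbitrary amplitude
delta = 1.234         # an arbitrary constant phase shift

psi_prime = psi * cmath.exp(1j * delta)

# |psi'|^2 = |psi|^2: the probability, hence the physics, is unchanged.
print(abs(psi) ** 2, abs(psi_prime) ** 2)
```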
52–4 Mirror reflections

Now the next question, which is going to concern us for most of the rest of this chapter, is the question of symmetry under reflection in space. The problem is this: Are the physical laws symmetrical under reflection? We may put it this way: Suppose we build a piece of equipment, let us say a clock, with lots of wheels and hands and numbers; it ticks, it works, and it has things wound up inside. We look at the clock in the mirror. How it looks in the mirror is not the question. But let us actually build another clock which is exactly the same as the first clock looks in the mirror—every time there is a screw with a right-hand thread in one, we use a screw with a left-hand thread in the corresponding place of the other; where one is marked “$2$” on the face, we mark a mirror-image “$2$” on the face of the other; each coiled spring is twisted one way in one clock and the other way in the mirror-image clock; when we are all finished, we have two clocks, both physical, which bear to each other the relation of an object and its mirror image, although they are both actual, material objects, we emphasize. Now the question is: If the two clocks are started in the same condition, the springs wound to corresponding tightnesses, will the two clocks tick and go around, forever after, as exact mirror images? (This is a physical question, not a philosophical question.) Our intuition about the laws of physics would suggest that they would. We would suspect that, at least in the case of these clocks, reflection in space is one of the symmetries of physical laws, that if we change everything from “right” to “left” and leave it otherwise the same, we cannot tell the difference. Let us, then, suppose for a moment that this is true. If it is true, then it would be impossible to distinguish “right” and “left” by any physical phenomenon, just as it is, for example, impossible to define a particular absolute velocity by a physical phenomenon.
So it should be impossible, by any physical phenomenon, to define absolutely what we mean by “right” as opposed to “left,” because the physical laws should be symmetrical. Of course, the world does not have to be symmetrical. For example, using what we may call “geography,” surely “right” can be defined. For instance, we stand in New Orleans and look at Chicago, and Florida is to our right (when our feet are on the ground!). So we can define “right” and “left” by geography. Of course, the actual situation in any system does not have to have the symmetry that we are talking about; it is a question of whether the laws are symmetrical—in other words, whether it is against the physical laws to have a sphere like the earth with “left-handed dirt” on it and a person like ourselves standing looking at a city like Chicago from a place like New Orleans, but with everything the other way around, so Florida is on the other side. It clearly seems not impossible, not against the physical laws, to have everything changed left for right. Another point is that our definition of “right” should not depend on history. An easy way to distinguish right from left is to go to a machine shop and pick up a screw at random. The odds are it has a right-hand thread—not necessarily, but it is much more likely to have a right-hand thread than a left-hand one. This is a question of history or convention, or the way things happen to be, and is again not a question of fundamental laws. As we can well appreciate, everyone could have started out making left-handed screws! So we must try to find some phenomenon in which “right hand” is involved fundamentally. The next possibility we discuss is the fact that polarized light rotates its plane of polarization as it goes through, say, sugar water. As we saw in Chapter 33, it rotates, let us say, to the right in a certain sugar solution. 
That is a way of defining “right-hand,” because we may dissolve some sugar in the water and then the polarization goes to the right. But sugar has come from living things, and if we try to make the sugar artificially, then we discover that it does not rotate the plane of polarization! But if we then take that same sugar which is made artificially and which does not rotate the plane of polarization, and put bacteria in it (they eat some of the sugar) and then filter out the bacteria, we find that we still have sugar left (almost half as much as we had before), and this time it does rotate the plane of polarization, but the other way! It seems very confusing, but is easily explained. Take another example: One of the substances which is common to all living creatures and that is fundamental to life is protein. Proteins consist of chains of amino acids. Figure 52–1 shows a model of an amino acid that comes out of a protein. This amino acid is called alanine, and the molecular arrangement would look like that in Fig. 52–1(a) if it came out of a protein of a real living thing. On the other hand, if we try to make alanine from carbon dioxide, ethane, and ammonia (and we can make it, it is not a complicated molecule), we discover that we are making equal amounts of this molecule and the one shown in Fig. 52–1(b)! The first molecule, the one that comes from the living thing, is called L-alanine. The other one, which is the same chemically, in that it has the same kinds of atoms and the same connections of the atoms, is a “right-hand” molecule, compared with the “left-hand” L-alanine, and it is called D-alanine. The interesting thing is that when we make alanine at home in a laboratory from simple gases, we get an equal mixture of both kinds. However, the only thing that life uses is L-alanine. (This is not exactly true. Here and there in living creatures there is a special use for D-alanine, but it is very rare. All proteins use L-alanine exclusively.) 
Now if we make both kinds, and we feed the mixture to some animal which likes to “eat,” or use up, alanine, it cannot use D-alanine, so it only uses the L-alanine; that is what happened to our sugar—after the bacteria eat the sugar that works well for them, only the “wrong” kind is left! (Left-handed sugar tastes sweet, but not the same as right-handed sugar.) So it looks as though the phenomena of life permit a distinction between “right” and “left,” or chemistry permits a distinction, because the two molecules are chemically different. But no, it does not! So far as physical measurements can be made, such as of energy, the rates of chemical reactions, and so on, the two kinds work exactly the same way if we make everything else in a mirror image too. One molecule will rotate light to the right, and the other will rotate it to the left in precisely the same amount, through the same amount of fluid. Thus, so far as physics is concerned, these two amino acids are equally satisfactory. So far as we understand things today, the fundamentals of the Schrödinger equation have it that the two molecules should behave in exactly corresponding ways, so that one is to the right as the other is to the left. Nevertheless, in life it is all one way! It is presumed that the reason for this is the following. Let us suppose, for example, that life is somehow at one moment in a certain condition in which all the proteins in some creatures have left-handed amino acids, and all the enzymes are lopsided—every substance in the living creature is lopsided—it is not symmetrical. So when the digestive enzymes try to change the chemicals in the food from one kind to another, one kind of chemical “fits” into the enzyme, but the other kind does not (like Cinderella and the slipper, except that it is a “left foot” that we are testing). 
So far as we know, in principle, we could build a frog, for example, in which every molecule is reversed, everything is like the “left-hand” mirror image of a real frog; we have a left-hand frog. This left-hand frog would go on all right for a while, but he would find nothing to eat, because if he swallows a fly, his enzymes are not built to digest it. The fly has the wrong “kind” of amino acids (unless we give him a left-hand fly). So as far as we know, the chemical and life processes would continue in the same manner if everything were reversed. If life is entirely a physical and chemical phenomenon, then we can understand that the proteins are all made in the same corkscrew only from the idea that at the very beginning some living molecules, by accident, got started and a few won. Somewhere, once, one organic molecule was lopsided in a certain way, and from this particular thing the “right” happened to evolve in our particular geography; a particular historical accident was one-sided, and ever since then the lopsidedness has propagated itself. Once having arrived at the state that it is in now, of course, it will always continue—all the enzymes digest the right things, manufacture the right things: when the carbon dioxide and the water vapor, and so on, go in the plant leaves, the enzymes that make the sugars make them lopsided because the enzymes are lopsided. If any new kind of virus or living thing were to originate at a later time, it would survive only if it could “eat” the kind of living matter already present. Thus it, too, must be of the same kind. There is no conservation of the number of right-handed molecules. Once started, we could keep increasing the number of right-handed molecules. So the presumption is, then, that the phenomena in the case of life do not show a lack of symmetry in physical laws, but do show, on the contrary, the universal nature and the commonness of ultimate origin of all creatures on earth, in the sense described above. |
|
1 | 52 | Symmetry in Physical Laws | 5 | Polar and axial vectors | Now we go further. We observe that in physics there are a lot of other places where we have “right” and “left” hand rules. As a matter of fact, when we learned about vector analysis we learned about the right-hand rules we have to use in order to get the angular momentum, torque, magnetic field, and so on, to come out right. The force on a charge moving in a magnetic field, for example, is $\FLPF = q\FLPv\times\FLPB$. In a given situation, in which we know $\FLPF$, $\FLPv$, and $\FLPB$, isn’t that equation enough to define right-handedness? As a matter of fact, if we go back and look at where the vectors came from, we know that the “right-hand rule” was merely a convention; it was a trick. The original quantities, like the angular momenta and the angular velocities, and things of this kind, were not really vectors at all! They are all somehow associated with a certain plane, and it is just because there are three dimensions in space that we can associate the quantity with a direction perpendicular to that plane. Of the two possible directions, we chose the “right-hand” direction. So if the laws of physics are symmetrical, we should find that if some demon were to sneak into all the physics laboratories and replace the word “right” for “left” in every book in which “right-hand rules” are given, and instead we were to use all “left-hand rules,” uniformly, then it should make no difference whatever in the physical laws. Let us give an illustration. There are two kinds of vectors. There are “honest” vectors, for example a step $\FLPr$ in space. If in our apparatus there is a piece here and something else there, then in a mirror apparatus there will be the image piece and the image something else, and if we draw a vector from the “piece” to the “something else,” one vector is the mirror image of the other (Fig. 52–2). 
The vector arrow changes its head, just as the whole space turns inside out; such a vector we call a polar vector. But the other kind of vector, which has to do with rotations, is of a different nature. For example, suppose that in three dimensions something is rotating as shown in Fig. 52–3. Then if we look at it in a mirror, it will be rotating as indicated, namely, as the mirror image of the original rotation. Now we have agreed to represent the mirror rotation by the same rule, it is a “vector” which, on reflection, does not change about as the polar vector does, but is reversed relative to the polar vectors and to the geometry of the space; such a vector is called an axial vector. Now if the law of reflection symmetry is right in physics, then it must be true that the equations must be so designed that if we change the sign of each axial vector and each cross-product of vectors, which would be what corresponds to reflection, nothing will happen. For instance, when we write a formula which says that the angular momentum is $\FLPL = \FLPr\times\FLPp$, that equation is all right, because if we change to a left-hand coordinate system, we change the sign of $\FLPL$, but $\FLPp$ and $\FLPr$ do not change; the cross-product sign is changed, since we must change from a right-hand rule to a left-hand rule. As another example, we know that the force on a charge moving in a magnetic field is $\FLPF = q\FLPv\times\FLPB$, but if we change from a right- to a left-handed system, since $\FLPF$ and $\FLPv$ are known to be polar vectors the sign change required by the cross-product must be cancelled by a sign change in $\FLPB$, which means that $\FLPB$ must be an axial vector. In other words, if we make such a reflection, $\FLPB$ must go to $-\FLPB$. So if we change our coordinates from right to left, we must also change the poles of magnets from north to south. Let us see how that works in an example. Suppose that we have two magnets, as in Fig. 52–4. 
One is a magnet with the coils going around a certain way, and with current in a given direction. The other magnet looks like the reflection of the first magnet in a mirror—the coil will wind the other way, everything that happens inside the coil is exactly reversed, and the current goes as shown. Now, from the laws for the production of magnetic fields, which we do not know yet officially, but which we most likely learned in high school, it turns out that the magnetic field is as shown in the figure. In one case the pole is a south magnetic pole, while in the other magnet the current is going the other way and the magnetic field is reversed—it is a north magnetic pole. So we see that when we go from right to left we must indeed change from north to south! Never mind changing north to south; these too are mere conventions. Let us talk about phenomena. Suppose, now, that we have an electron moving through one field, going into the page. Then, if we use the formula for the force, $\FLPv\times\FLPB$ (remember the charge is minus), we find that the electron will deviate in the indicated direction according to the physical law. So the phenomenon is that we have a coil with a current going in a specified sense and an electron curves in a certain way—that is the physics—never mind how we label everything. Now let us do the same experiment with a mirror: we send an electron through in a corresponding direction and now the force is reversed, if we calculate it from the same rule, and that is very good because the corresponding motions are then mirror images! |
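The distinction drawn above between polar and axial vectors can be made concrete with a short numerical sketch (my illustration, not part of the lecture; plain Python with a hand-written right-hand-rule cross product). Under a full spatial inversion every component of a polar vector such as $\FLPr$ or $\FLPp$ changes sign, but their cross product $\FLPL = \FLPr\times\FLPp$ comes out unchanged, because the two sign flips cancel:

```python
def cross(a, b):
    # right-hand-rule cross product of two 3-vectors
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def invert(v):
    # spatial inversion: every component of a polar vector changes sign
    return (-v[0], -v[1], -v[2])

r = (1.0, 2.0, 3.0)   # position: a polar vector
p = (4.0, 5.0, 6.0)   # momentum: a polar vector

L = cross(r, p)                           # angular momentum L = r x p
L_inverted = cross(invert(r), invert(p))  # same formula in the inverted world

# The two sign flips cancel: L is unchanged under inversion -- an axial vector.
print(L == L_inverted)  # True
```

The same cancellation is what forces $\FLPB$ to be axial in $\FLPF = q\FLPv\times\FLPB$: with $\FLPF$ and $\FLPv$ polar, the cross product's sign change must be absorbed by $\FLPB$.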
|
1 | 52 | Symmetry in Physical Laws | 6 | Which hand is right? | So the fact of the matter is that in studying any phenomenon there are always two right-hand rules, or an even number of them, and the net result is that the phenomena always look symmetrical. In short, therefore, we cannot tell right from left if we also are not able to tell north from south. However, it may seem that we can tell the north pole of a magnet. The north pole of a compass needle, for example, is one that points to the north. But of course that is again a local property that has to do with geography of the earth; that is just like talking about in which direction is Chicago, so it does not count. If we have seen compass needles, we may have noticed that the north-seeking pole is a sort of bluish color. But that is just due to the man who painted the magnet. These are all local, conventional criteria. However, if a magnet were to have the property that if we looked at it closely enough we would see small hairs growing on its north pole but not on its south pole, if that were the general rule, or if there were any unique way to distinguish the north from the south pole of a magnet, then we could tell which of the two cases we actually had, and that would be the end of the law of reflection symmetry. To illustrate the whole problem still more clearly, imagine that we were talking to a Martian, or someone very far away, by telephone. We are not allowed to send him any actual samples to inspect; for instance, if we could send light, we could send him right-hand circularly polarized light and say, “That is right-hand light—just watch the way it is going.” But we cannot give him anything, we can only talk to him. He is far away, or in some strange location, and he cannot see anything we can see. For instance, we cannot say, “Look at Ursa Major; now see how those stars are arranged. What we mean by ‘right’ is …” We are only allowed to telephone him. Now we want to tell him all about us. 
Of course, first we start defining numbers, and say, “Tick, tick, two, tick, tick, tick, three, …,” so that gradually he can understand a couple of words, and so on. After a while we may become very familiar with this fellow, and he says, “What do you guys look like?” We start to describe ourselves, and say, “Well, we are six feet tall.” He says, “Wait a minute, what is six feet?” Is it possible to tell him what six feet is? Certainly! We say, “You know about the diameter of hydrogen atoms—we are $17{,}000{,}000{,}000$ hydrogen atoms high!” That is possible because physical laws are not invariant under change of scale, and therefore we can define an absolute length. And so we define the size of the body, and tell him what the general shape is—it has prongs with five bumps sticking out on the ends, and so on, and he follows us along, and we finish describing how we look on the outside, presumably without encountering any particular difficulties. He is even making a model of us as we go along. He says, “My, you are certainly very handsome fellows; now what is on the inside?” So we start to describe the various organs on the inside, and we come to the heart, and we carefully describe the shape of it, and say, “Now put the heart on the left side.” He says, “Duhhh—the left side?” Now our problem is to describe to him which side the heart goes on without his ever seeing anything that we see, and without our ever sending any sample to him of what we mean by “right”—no standard right-handed object. Can we do it? |
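The figure of $17{,}000{,}000{,}000$ hydrogen atoms can be checked with a two-line estimate (the numbers here are mine, not the lecture's: six feet $\approx 1.83$ m, and a hydrogen atom's diameter taken as roughly twice the Bohr radius, about $1.06\times10^{-10}$ m):

```python
height_m = 6 * 0.3048             # six feet, converted to meters
bohr_radius_m = 5.29e-11          # Bohr radius in meters
h_diameter_m = 2 * bohr_radius_m  # rough diameter of a hydrogen atom

# How many hydrogen-atom diameters tall is a six-foot person?
atoms_high = height_m / h_diameter_m
print(f"{atoms_high:.2e}")  # about 1.7e10, i.e. roughly 17 billion atoms
```

This works as an absolute standard of length precisely because, as the text says, the physical laws are not invariant under a change of scale: the atom's size is fixed by the laws themselves.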
|
1 | 52 | Symmetry in Physical Laws | 7 | Parity is not conserved! | It turns out that the laws of gravitation, the laws of electricity and magnetism, nuclear forces, all satisfy the principle of reflection symmetry, so these laws, or anything derived from them, cannot be used. But associated with the many particles that are found in nature there is a phenomenon called beta decay, or weak decay. One of the examples of weak decay, in connection with a particle discovered in about 1954, posed a strange puzzle. There was a certain charged particle which disintegrated into three $\pi$-mesons, as shown schematically in Fig. 52–5. This particle was called, for a while, a $\tau$-meson. Now in Fig. 52–5 we also see another particle which disintegrates into two mesons; one must be neutral, from the conservation of charge. This particle was called a $\theta$-meson. So on the one hand we have a particle called a $\tau$, which disintegrates into three $\pi$-mesons, and a $\theta$, which disintegrates into two $\pi$-mesons. Now it was soon discovered that the $\tau$ and the $\theta$ are almost equal in mass; in fact, within the experimental error, they are equal. Next, the length of time it took for them to disintegrate into three $\pi$’s and two $\pi$’s was found to be almost exactly the same; they live the same length of time. Next, whenever they were made, they were made in the same proportions, say, $14$ percent $\tau$’s to $86$ percent $\theta$’s. Anyone in his right mind realizes immediately that they must be the same particle, that we merely produce an object which has two different ways of disintegrating—not two different particles. This object that can disintegrate in two different ways has, therefore, the same lifetime and the same production ratio (because this is simply the ratio of the odds with which it disintegrates into these two kinds). 
However, it was possible to prove (and we cannot here explain at all how), from the principle of reflection symmetry in quantum mechanics, that it was impossible to have these both come from the same particle—the same particle could not disintegrate in both of these ways. The conservation law corresponding to the principle of reflection symmetry is something which has no classical analog, and so this kind of quantum-mechanical conservation was called the conservation of parity. So, it was a result of the conservation of parity or, more precisely, from the symmetry of the quantum-mechanical equations of the weak decays under reflection, that the same particle could not go into both, so it must be some kind of coincidence of masses, lifetimes, and so on. But the more it was studied, the more remarkable the coincidence, and the suspicion gradually grew that possibly the deep law of the reflection symmetry of nature may be false. As a result of this apparent failure, the physicists Lee and Yang suggested that other experiments be done in related decays to try to test whether the law was correct in other cases. The first such experiment was carried out by Miss Wu from Columbia, and was done as follows. Using a very strong magnet at a very low temperature, it turns out that a certain isotope of cobalt, which disintegrates by emitting an electron, is magnetic, and if the temperature is low enough that the thermal oscillations do not jiggle the atomic magnets about too much, they line up in the magnetic field. So the cobalt atoms will all line up in this strong field. They then disintegrate, emitting an electron, and it was discovered that when the atoms were lined up in a field whose $\FLPB$ vector points upward, most of the electrons were emitted in a downward direction. 
If one is not really “hep” to the world, such a remark does not sound like anything of significance, but if one appreciates the problems and interesting things in the world, then he sees that it is a most dramatic discovery: When we put cobalt atoms in an extremely strong magnetic field, more disintegration electrons go down than up. Therefore if we were to put it in a corresponding experiment in a “mirror,” in which the cobalt atoms would be lined up in the opposite direction, they would spit their electrons up, not down; the action is unsymmetrical. The magnet has grown hairs! The south pole of a magnet is of such a kind that the electrons in a $\beta$-disintegration tend to go away from it; that distinguishes, in a physical way, the north pole from the south pole. After this, a lot of other experiments were done: the disintegration of the $\pi$ into $\mu$ and $\nu$; $\mu$ into an electron and two neutrinos; nowadays, the $\Lambda$ into proton and $\pi$; disintegration of $\Sigma$’s; and many other disintegrations. In fact, in almost all cases where it could be expected, all have been found not to obey reflection symmetry! Fundamentally, the law of reflection symmetry, at this level in physics, is incorrect. In short, we can tell a Martian where to put the heart: we say, “Listen, build yourself a magnet, and put the coils in, and put the current on, and then take some cobalt and lower the temperature. Arrange the experiment so the electrons go from the foot to the head, then the direction in which the current goes through the coils is the direction that goes in on what we call the right and comes out on the left.” So it is possible to define right and left, now, by doing an experiment of this kind. There are a lot of other features that were predicted. For example, it turns out that the spin, the angular momentum, of the cobalt nucleus before disintegration is $5$ units of $\hbar$, and after disintegration it is $4$ units. 
The electron carries spin angular momentum, and there is also a neutrino involved. It is easy to see from this that the electron must carry its spin angular momentum aligned along its direction of motion, the neutrino likewise. So it looks as though the electron is spinning to the left, and that was also checked. In fact, it was checked right here at Caltech by Boehm and Wapstra, that the electrons spin mostly to the left. (There were some other experiments that gave the opposite answer, but they were wrong!) The next problem, of course, was to find the law of the failure of parity conservation. What is the rule that tells us how strong the failure is going to be? The rule is this: it occurs only in these very slow reactions, called weak decays, and when it occurs, the rule is that the particles which carry spin, like the electron, neutrino, and so on, come out with a spin tending to the left. That is a lopsided rule; it connects a polar vector velocity and an axial vector angular momentum, and says that the angular momentum is more likely to be opposite to the velocity than along it. Now that is the rule, but today we do not really understand the whys and wherefores of it. Why is this the right rule, what is the fundamental reason for it, and how is it connected to anything else? At the moment we have been so shocked by the fact that this thing is unsymmetrical that we have not been able to recover enough to understand what it means with regard to all the other rules. However, the subject is interesting, modern, and still unsolved, so it seems appropriate that we discuss some of the questions associated with it. |
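The angular-momentum bookkeeping behind "the electron spins to the left" can be sketched numerically. This is a cartoon with simplifying assumptions I am adding (spin projections taken along the field axis $+z$ at their maximal values, orbital angular momentum ignored): the nucleus loses one unit of $\hbar$ along $z$, which the electron and neutrino can supply only if both their spins point up; since the electron is observed to fly downward, its spin is opposite to its motion, i.e., left-handed:

```python
# Spin projections along the magnetic field axis (+z), in units of hbar.
J_parent = 5.0      # cobalt nucleus before disintegration
J_daughter = 4.0    # nucleus after disintegration
s_electron = 0.5    # spin-1/2 electron
s_neutrino = 0.5    # spin-1/2 neutrino

# Conservation along z: the missing unit of hbar must be carried by the two
# light particles, which is only possible if both spins point up (+z).
assert J_daughter + s_electron + s_neutrino == J_parent

# Experimentally, most decay electrons come out moving downward (-z).
electron_momentum_z = -1.0
electron_spin_z = +s_electron

# Helicity: sign of the spin projection along the direction of motion.
helicity = electron_spin_z * electron_momentum_z
print("left-handed" if helicity < 0 else "right-handed")  # left-handed
```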
|
1 | 52 | Symmetry in Physical Laws | 8 | Antimatter | The first thing to do when one of the symmetries is lost is to immediately go back over the list of known or assumed symmetries and ask whether any of the others are lost. Now we did not mention one operation on our list, which must necessarily be questioned, and that is the relation between matter and antimatter. Dirac predicted that in addition to electrons there must be another particle, called the positron (discovered at Caltech by Anderson), that is necessarily related to the electron. All the properties of these two particles obey certain rules of correspondence: the energies are equal; the masses are equal; the charges are reversed; but, more important than anything, the two of them, when they come together, can annihilate each other and liberate their entire mass in the form of energy, say $\gamma$-rays. The positron is called an antiparticle to the electron, and these are the characteristics of a particle and its antiparticle. It was clear from Dirac’s argument that all the rest of the particles in the world should also have corresponding antiparticles. For instance, for the proton there should be an antiproton, which is now symbolized by a $\overline{p}$. The $\overline{p}$ would have a negative electrical charge and the same mass as a proton, and so on. The most important feature, however, is that a proton and an antiproton coming together can annihilate each other. The reason we emphasize this is that people do not understand it when we say there is a neutron and also an antineutron, because they say, “A neutron is neutral, so how can it have the opposite charge?” The rule of the “anti” is not just that it has the opposite charge, it has a certain set of properties, the whole lot of which are opposite. 
The antineutron is distinguished from the neutron in this way: if we bring two neutrons together, they just stay as two neutrons, but if we bring a neutron and an antineutron together, they annihilate each other with a great explosion of energy being liberated, with various $\pi$-mesons, $\gamma$-rays, and whatnot. Now if we have antineutrons, antiprotons, and antielectrons, we can make antiatoms, in principle. They have not been made yet, but it is possible in principle. For instance, a hydrogen atom has a proton in the center with an electron going around outside. Now imagine that somewhere we can make an antiproton with a positron going around, would it go around? Well, first of all, the antiproton is electrically negative and the antielectron is electrically positive, so they attract each other in a corresponding manner—the masses are all the same; everything is the same. It is one of the principles of the symmetry of physics, the equations seem to show, that if a clock, say, were made of matter on one hand, and then we made the same clock of antimatter, it would run in this way. (Of course, if we put the clocks together, they would annihilate each other, but that is different.) An immediate question then arises. We can build, out of matter, two clocks, one which is “left-hand” and one which is “right-hand.” For example, we could build a clock which is not built in a simple way, but has cobalt and magnets and electron detectors which detect the presence of $\beta$-decay electrons and count them. Each time one is counted, the second hand moves over. Then the mirror clock, receiving fewer electrons, will not run at the same rate. So evidently we can make two clocks such that the left-hand clock does not agree with the right-hand one. Let us make, out of matter, a clock which we call the standard or right-hand clock. Now let us make, also out of matter, a clock which we call the left-hand clock. 
We have just discovered that, in general, these two will not run the same way; prior to that famous physical discovery, it was thought that they would. Now it was also supposed that matter and antimatter were equivalent. That is, if we made an antimatter clock, right-hand, the same shape, then it would run the same as the right-hand matter clock, and if we made the same clock to the left it would run the same. In other words, in the beginning it was believed that all four of these clocks were the same; now of course we know that the right-hand and left-hand matter are not the same. Presumably, therefore, the right-handed antimatter and the left-handed antimatter are not the same. So the obvious question is, which goes with which, if either? In other words, does the right-handed matter behave the same way as the right-handed antimatter? Or does the right-handed matter behave the same as the left-handed antimatter? $\beta$-decay experiments, using positron decay instead of electron decay, indicate that this is the interconnection: matter to the “right” works the same way as antimatter to the “left.” Therefore, at long last, it is really true that right and left symmetry is still maintained! If we made a left-hand clock, but made it out of the other kind of matter, antimatter instead of matter, it would run in the same way. So what has happened is that instead of having two independent rules in our list of symmetries, two of these rules go together to make a new rule, which says that matter to the right is symmetrical with antimatter to the left. So if our Martian is made of antimatter and we give him instructions to make this “right” handed model like us, it will, of course, come out the other way around. What would happen when, after much conversation back and forth, we each have taught the other to make space ships and we meet halfway in empty space? We have instructed each other on our traditions, and so forth, and the two of us come rushing out to shake hands. 
Well, if he puts out his left hand, watch out! |
|
1 | 52 | Symmetry in Physical Laws | 9 | Broken symmetries | The next question is, what can we make out of laws which are nearly symmetrical? The marvelous thing about it all is that for such a wide range of important, strong phenomena—nuclear forces, electrical phenomena, and even weak ones like gravitation—over a tremendous range of physics, all the laws for these seem to be symmetrical. On the other hand, this little extra piece says, “No, the laws are not symmetrical!” How is it that nature can be almost symmetrical, but not perfectly symmetrical? What shall we make of this? First, do we have any other examples? The answer is, we do, in fact, have a few other examples. For instance, the nuclear part of the force between proton and proton, between neutron and neutron, and between neutron and proton, is all exactly the same—there is a symmetry for nuclear forces, a new one, that we can interchange neutron and proton—but it evidently is not a general symmetry, for the electrical repulsion between two protons at a distance does not exist for neutrons. So it is not generally true that we can always replace a proton with a neutron, but only to a good approximation. Why good? Because the nuclear forces are much stronger than the electrical forces. So this is an “almost” symmetry also. So we do have examples in other things. We have, in our minds, a tendency to accept symmetry as some kind of perfection. In fact it is like the old idea of the Greeks that circles were perfect, and it was rather horrible to believe that the planetary orbits were not circles, but only nearly circles. The difference between being a circle and being nearly a circle is not a small difference, it is a fundamental change so far as the mind is concerned. There is a sign of perfection and symmetry in a circle that is not there the moment the circle is slightly off—that is the end of it—it is no longer symmetrical. 
Then the question is why it is only nearly a circle—that is a much more difficult question. The actual motion of the planets, in general, should be ellipses, but during the ages, because of tidal forces, and so on, they have been made almost symmetrical. Now the question is whether we have a similar problem here. The problem from the point of view of the circles is if they were perfect circles there would be nothing to explain, that is clearly simple. But since they are only nearly circles, there is a lot to explain, and the result turned out to be a big dynamical problem, and now our problem is to explain why they are nearly symmetrical by looking at tidal forces and so on. So our problem is to explain where symmetry comes from. Why is nature so nearly symmetrical? No one has any idea why. The only thing we might suggest is something like this: There is a gate in Japan, a gate in Nikkō, which is sometimes called by the Japanese the most beautiful gate in all Japan; it was built in a time when there was great influence from Chinese art. This gate is very elaborate, with lots of gables and beautiful carving and lots of columns and dragon heads and princes carved into the pillars, and so on. But when one looks closely he sees that in the elaborate and complex design along one of the pillars, one of the small design elements is carved upside down; otherwise the thing is completely symmetrical. If one asks why this is, the story is that it was carved upside down so that the gods will not be jealous of the perfection of man. So they purposely put an error in there, so that the gods would not be jealous and get angry with human beings. We might like to turn the idea around and think that the true explanation of the near symmetry of nature is this: that God made the laws only nearly symmetrical so that we should not be jealous of His perfection! |
|
2 | 1 | Electromagnetism | 1 | Electrical forces | Consider a force like gravitation which varies predominantly inversely as the square of the distance, but which is about a billion-billion-billion-billion times stronger. And with another difference. There are two kinds of “matter,” which we can call positive and negative. Like kinds repel and unlike kinds attract—unlike gravity where there is only attraction. What would happen? A bunch of positives would repel with an enormous force and spread out in all directions. A bunch of negatives would do the same. But an evenly mixed bunch of positives and negatives would do something completely different. The opposite pieces would be pulled together by the enormous attractions. The net result would be that the terrific forces would balance themselves out almost perfectly, by forming tight, fine mixtures of the positive and the negative, and between two separate bunches of such mixtures there would be practically no attraction or repulsion at all. There is such a force: the electrical force. And all matter is a mixture of positive protons and negative electrons which are attracting and repelling with this great force. So perfect is the balance, however, that when you stand near someone else you don’t feel any force at all. If there were even a little bit of unbalance you would know it. If you were standing at arm’s length from someone and each of you had one percent more electrons than protons, the repelling force would be incredible. How great? Enough to lift the Empire State Building? No! To lift Mount Everest? No! The repulsion would be enough to lift a “weight” equal to that of the entire earth! With such enormous forces so perfectly balanced in this intimate mixture, it is not hard to understand that matter, trying to keep its positive and negative charges in the finest balance, can have a great stiffness and strength. 
The Empire State Building, for example, swings less than one inch in the wind because the electrical forces hold every electron and proton more or less in its proper place. On the other hand, if we look at matter on a scale small enough that we see only a few atoms, any small piece will not, usually, have an equal number of positive and negative charges, and so there will be strong residual electrical forces. Even when there are equal numbers of both charges in two neighboring small pieces, there may still be large net electrical forces because the forces between individual charges vary inversely as the square of the distance. A net force can arise if a negative charge of one piece is closer to the positive than to the negative charges of the other piece. The attractive forces can then be larger than the repulsive ones and there can be a net attraction between two small pieces with no excess charges. The force that holds the atoms together, and the chemical forces that hold molecules together, are really electrical forces acting in regions where the balance of charge is not perfect, or where the distances are very small. You know, of course, that atoms are made with positive protons in the nucleus and with electrons outside. You may ask: “If this electrical force is so terrific, why don’t the protons and electrons just get on top of each other? If they want to be in an intimate mixture, why isn’t it still more intimate?” The answer has to do with the quantum effects. If we try to confine our electrons in a region that is very close to the protons, then according to the uncertainty principle they must have some mean square momentum which is larger the more we try to confine them. It is this motion, required by the laws of quantum mechanics, that keeps the electrical attraction from bringing the charges any closer together. There is another question: “What holds the nucleus together?” In a nucleus there are several protons, all of which are positive.
Why don’t they push themselves apart? It turns out that in nuclei there are, in addition to electrical forces, nonelectrical forces, called nuclear forces, which are greater than the electrical forces and which are able to hold the protons together in spite of the electrical repulsion. The nuclear forces, however, have a short range—their force falls off much more rapidly than $1/r^2$. And this has an important consequence. If a nucleus has too many protons in it, it gets too big, and it will not stay together. An example is uranium, with 92 protons. The nuclear forces act mainly between each proton (or neutron) and its nearest neighbor, while the electrical forces act over larger distances, giving a repulsion between each proton and all of the others in the nucleus. The more protons in a nucleus, the stronger is the electrical repulsion, until, as in the case of uranium, the balance is so delicate that the nucleus is almost ready to fly apart from the repulsive electrical force. If such a nucleus is just “tapped” lightly (as can be done by sending in a slow neutron), it breaks into two pieces, each with positive charge, and these pieces fly apart by electrical repulsion. The energy which is liberated is the energy of the atomic bomb. This energy is usually called “nuclear” energy, but it is really “electrical” energy released when electrical forces have overcome the attractive nuclear forces. We may ask, finally, what holds a negatively charged electron together (since it has no nuclear forces). If an electron is all made of one kind of substance, each part should repel the other parts. Why, then, doesn’t it fly apart? But does the electron have “parts”? Perhaps we should say that the electron is just a point and that electrical forces only act between different point charges, so that the electron does not act upon itself. Perhaps. 
All we can say is that the question of what holds the electron together has produced many difficulties in the attempts to form a complete theory of electromagnetism. The question has never been answered. We will entertain ourselves by discussing this subject some more in later chapters. As we have seen, we should expect that it is a combination of electrical forces and quantum-mechanical effects that will determine the detailed structure of materials in bulk, and, therefore, their properties. Some materials are hard, some are soft. Some are electrical “conductors”—because their electrons are free to move about; others are “insulators”—because their electrons are held tightly to individual atoms. We shall consider later how some of these properties come about, but that is a very complicated subject, so we will begin by looking at the electrical forces only in simple situations. We begin by treating only the laws of electricity—including magnetism, which is really a part of the same subject. We have said that the electrical force, like a gravitational force, decreases inversely as the square of the distance between charges. This relationship is called Coulomb’s law. But it is not precisely true when charges are moving—the electrical forces depend also on the motions of the charges in a complicated way. One part of the force between moving charges we call the magnetic force. It is really one aspect of an electrical effect. That is why we call the subject “electromagnetism.” There is an important general principle that makes it possible to treat electromagnetic forces in a relatively simple way. We find, from experiment, that the force that acts on a particular charge—no matter how many other charges there are or how they are moving—depends only on the position of that particular charge, on the velocity of the charge, and on the amount of charge. 
We can write the force $\FLPF$ on a charge $q$ moving with a velocity $\FLPv$ as \begin{equation} \label{Eq:II:1:1} \FLPF=q(\FLPE+\FLPv\times\FLPB). \end{equation} We call $\FLPE$ the electric field and $\FLPB$ the magnetic field at the location of the charge. The important thing is that the electrical forces from all the other charges in the universe can be summarized by giving just these two vectors. Their values will depend on where the charge is, and may change with time. Furthermore, if we replace that charge with another charge, the force on the new charge will be just in proportion to the amount of charge so long as all the rest of the charges in the world do not change their positions or motions. (In real situations, of course, each charge produces forces on all other charges in the neighborhood and may cause these other charges to move, and so in some cases the fields can change if we replace our particular charge by another.) We know from Vol. I how to find the motion of a particle if we know the force on it. Equation (1.1) can be combined with the equation of motion to give \begin{equation} \label{Eq:II:1:2} \ddt{}{t}\biggl[\frac{m\FLPv}{(1-v^2/c^2)^{1/2}}\biggr]= \FLPF=q(\FLPE+\FLPv\times\FLPB). \end{equation} So if $\FLPE$ and $\FLPB$ are given, we can find the motions. Now we need to know how the $\FLPE$’s and $\FLPB$’s are produced. One of the most important simplifying principles about the way the fields are produced is this: Suppose a number of charges moving in some manner would produce a field $\FLPE_1$, and another set of charges would produce $\FLPE_2$. If both sets of charges are in place at the same time (keeping the same locations and motions they had when considered separately), then the field produced is just the sum \begin{equation} \label{Eq:II:1:3} \FLPE=\FLPE_1+\FLPE_2. \end{equation} This fact is called the principle of superposition of fields. It holds also for magnetic fields. 
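Equations (1.1) and (1.3) are simple enough to evaluate directly. Here is a minimal numerical sketch; the field and velocity values are arbitrary sample numbers chosen for illustration, not values from the text:

```python
# Minimal sketch of Eq. (1.1), F = q(E + v x B), and of superposition
# of fields, Eq. (1.3).  Field values are arbitrary samples.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(q, v, E, B):
    """F = q(E + v x B), componentwise."""
    vxB = cross(v, B)
    return tuple(q * (E[i] + vxB[i]) for i in range(3))

q = 1.602e-19            # charge of a proton, C
v = (1.0e5, 0.0, 0.0)    # velocity, m/s (sample)
E1 = (0.0, 2.0e3, 0.0)   # field from one set of charges (sample)
E2 = (0.0, -5.0e2, 0.0)  # field from another set of charges (sample)
B = (0.0, 0.0, 1.0e-2)   # magnetic field, T (sample)

# Superposition, Eq. (1.3): the total E is the vector sum of E1 and E2.
E = tuple(E1[i] + E2[i] for i in range(3))
F = lorentz_force(q, v, E, B)
print(F)
```

The point of the sketch is the structure: all the other charges in the universe enter only through the two summed vectors $\FLPE$ and $\FLPB$ at the location of the charge.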
This principle means that if we know the law for the electric and magnetic fields produced by a single charge moving in an arbitrary way, then all the laws of electrodynamics are complete. If we want to know the force on charge $A$ we need only calculate the $\FLPE$ and $\FLPB$ produced by each of the charges $B$, $C$, $D$, etc., and then add the $\FLPE$’s and $\FLPB$’s from all the charges to find the fields, and from them the forces acting on charge $A$. If it had only turned out that the field produced by a single charge was simple, this would be the neatest way to describe the laws of electrodynamics. We have already given a description of this law (Chapter 28, Vol. I) and it is, unfortunately, rather complicated. It turns out that the forms in which the laws of electrodynamics are simplest are not what you might expect. It is not simplest to give a formula for the force that one charge produces on another. It is true that when charges are standing still the Coulomb force law is simple, but when charges are moving about the relations are complicated by delays in time and by the effects of acceleration, among others. As a result, we do not wish to present electrodynamics only through the force laws between charges; we find it more convenient to consider another point of view—a point of view in which the laws of electrodynamics appear to be the most easily manageable.
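The relativistic equation of motion, Eq. (1.2), can also be stepped numerically. The sketch below (a deliberately crude Euler scheme, with sample values for the field and the initial speed) follows a proton in a uniform magnetic field by advancing the momentum $p = m\mathbf{v}/\sqrt{1 - v^2/c^2}$ directly:

```python
import math

# Sketch of integrating Eq. (1.2) for a charge in a uniform magnetic
# field with E = 0:  d/dt [m v / sqrt(1 - v^2/c^2)] = q (v x B).
# We step the relativistic momentum p = gamma m v with a crude Euler
# scheme.  Numerical values are sample choices.

c = 2.998e8            # speed of light, m/s
q = 1.602e-19          # proton charge, C
m = 1.673e-27          # proton mass, kg
B = (0.0, 0.0, 1.0)    # uniform field along z, tesla (sample)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def velocity(p):
    """Recover v from p: since p = gamma m v, one finds
    v = p / sqrt(m^2 + p^2/c^2)."""
    gamma_m = math.sqrt(m*m + sum(pi*pi for pi in p) / c**2)
    return tuple(pi / gamma_m for pi in p)

v0 = (0.5 * c, 0.0, 0.0)                  # initial velocity (sample)
gamma0 = 1.0 / math.sqrt(1.0 - 0.25)
p = tuple(gamma0 * m * vi for vi in v0)   # initial momentum

dt = 1e-12
for _ in range(10_000):
    v = velocity(p)
    F = tuple(q * f for f in cross(v, B))          # magnetic force
    p = tuple(pi + Fi * dt for pi, Fi in zip(p, F))

speed = math.sqrt(sum(vi*vi for vi in velocity(p)))
print(speed / c)
```

The speed stays near $0.5c$ throughout the run, as it should: the magnetic force is always perpendicular to $\mathbf{v}$ and does no work, while the direction of the motion turns in a circle.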
Chapter 2. Differential Calculus of Vector Fields

2–1 Understanding physics

The physicist needs a facility in looking at problems from several points of view. The exact analysis of real physical problems is usually quite complicated, and any particular physical situation may be too complicated to analyze directly by solving the differential equation. But one can still get a very good idea of the behavior of a system if one has some feel for the character of the solution in different circumstances. Ideas such as the field lines, capacitance, resistance, and inductance are, for such purposes, very useful. So we will spend much of our time analyzing them. In this way we will get a feel as to what should happen in different electromagnetic situations. On the other hand, none of the heuristic models, such as field lines, is really adequate and accurate for all situations. There is only one precise way of presenting the laws, and that is by means of differential equations. They have the advantage of being fundamental and, so far as we know, precise. If you have learned the differential equations you can always go back to them. There is nothing to unlearn. It will take you some time to understand what should happen in different circumstances. You will have to solve the equations. Each time you solve the equations, you will learn something about the character of the solutions. To keep these solutions in mind, it will be useful also to study their meaning in terms of field lines and of other concepts. This is the way you will really “understand” the equations. That is the difference between mathematics and physics. Mathematicians, or people who have very mathematical minds, are often led astray when “studying” physics because they lose sight of the physics. They say: “Look, these differential equations—the Maxwell equations—are all there is to electrodynamics; it is admitted by the physicists that there is nothing which is not contained in the equations.
The equations are complicated, but after all they are only mathematical equations and if I understand them mathematically inside out, I will understand the physics inside out.” Only it doesn’t work that way. Mathematicians who study physics with that point of view—and there have been many of them—usually make little contribution to physics and, in fact, little to mathematics. They fail because the actual physical situations in the real world are so complicated that it is necessary to have a much broader understanding of the equations. What it means really to understand an equation—that is, in more than a strictly mathematical sense—was described by Dirac. He said: “I understand what an equation means if I have a way of figuring out the characteristics of its solution without actually solving it.” So if we have a way of knowing what should happen in given circumstances without actually solving the equations, then we “understand” the equations, as applied to these circumstances. A physical understanding is a completely unmathematical, imprecise, and inexact thing, but absolutely necessary for a physicist. Ordinarily, a course like this is given by developing gradually the physical ideas—by starting with simple situations and going on to more and more complicated situations. This requires that you continuously forget things you previously learned—things that are true in certain situations, but which are not true in general. For example, the “law” that the electrical force depends on the square of the distance is not always true. We prefer the opposite approach. We prefer to take first the complete laws, and then to step back and apply them to simple situations, developing the physical ideas as we go along. And that is what we are going to do. Our approach is completely opposite to the historical approach in which one develops the subject in terms of the experiments by which the information was obtained. 
But the subject of physics has been developed over the past 200 years by some very ingenious people, and as we have only a limited time to acquire our knowledge, we cannot possibly cover everything they did. Unfortunately one of the things that we shall have a tendency to lose in these lectures is the historical, experimental development. It is hoped that in the laboratory some of this lack can be corrected. You can also fill in what we must leave out by reading the Encyclopedia Britannica, which has excellent historical articles on electricity and on other parts of physics. You will also find historical information in many textbooks on electricity and magnetism.
2–2 Scalar and vector fields—$\boldsymbol{T}$ and $\FLPh$

We begin now with the abstract, mathematical view of the theory of electricity and magnetism. The ultimate idea is to explain the meaning of the laws given in Chapter 1. But to do this we must first explain a new and peculiar notation that we want to use. So let us forget electromagnetism for the moment and discuss the mathematics of vector fields. It is of very great importance, not only for electromagnetism, but for all kinds of physical circumstances. Just as ordinary differential and integral calculus is so important to all branches of physics, so also is the differential calculus of vectors. We turn to that subject. Listed below are a few facts from the algebra of vectors. It is assumed that you already know them. \begin{align} \label{Eq:II:2:1} &\FLPA\,\cdot\,\FLPB=\text{scalar}=A_xB_x+A_yB_y+A_zB_z\\[1ex] \label{Eq:II:2:2} &\FLPA\times\FLPB=\text{vector}\\[1pt] &\begin{alignedat}{5} &\qquad(\FLPA\times\FLPB)_z&&=A_x&&B_y&&-A_y&&B_x\\[.25ex] &\qquad(\FLPA\times\FLPB)_x&&=A_y&&B_z&&-A_z&&B_y\\[.25ex] &\qquad(\FLPA\times\FLPB)_y&&=A_z&&B_x&&-A_x&&B_z \end{alignedat}\notag\\[1ex] \label{Eq:II:2:3} &\FLPA\times\FLPA=\FLPzero\\[1ex] \label{Eq:II:2:4} &\FLPA\cdot(\FLPA\times\FLPB)=0\\[1ex] \label{Eq:II:2:5} &\FLPA\cdot(\FLPB\times\FLPC)=(\FLPA\times\FLPB)\cdot\FLPC\\[1ex] \label{Eq:II:2:6} &\FLPA\times(\FLPB\times\FLPC)=\FLPB(\FLPA\cdot\FLPC)-\FLPC(\FLPA\cdot\FLPB) \end{align} Also we will want to use the two following equalities from the calculus: \begin{gather} \label{Eq:II:2:7} \Delta f(x,y,z)=\ddp{f}{x}\,\Delta x+\ddp{f}{y}\,\Delta y+\ddp{f}{z}\,\Delta z,\\[1ex] \label{Eq:II:2:8} \frac{\partial^2f}{\partial x\,\partial y}= \frac{\partial^2f}{\partial y\,\partial x}.
\end{gather} The first equation (2.7) is, of course, true only in the limit that $\Delta x$, $\Delta y$, and $\Delta z$ go toward zero. The simplest possible physical field is a scalar field. By a field, you remember, we mean a quantity which depends upon position in space. By a scalar field we merely mean a field which is characterized at each point by a single number—a scalar. Of course the number may change in time, but we need not worry about that for the moment. We will talk about what the field looks like at a given instant. As an example of a scalar field, consider a solid block of material which has been heated at some places and cooled at others, so that the temperature of the body varies from point to point in a complicated way. Then the temperature will be a function of $x$, $y$, and $z$, the position in space measured in a rectangular coordinate system. Temperature is a scalar field. One way of thinking about scalar fields is to imagine “contours” which are imaginary surfaces drawn through all points for which the field has the same value, just as contour lines on a map connect points with the same height. For a temperature field the contours are called “isothermal surfaces” or isotherms. Figure 2–1 illustrates a temperature field and shows the dependence of $T$ on $x$ and $y$ when $z=0$. Several isotherms are drawn. There are also vector fields. The idea is very simple. A vector is given for each point in space. The vector varies from point to point. As an example, consider a rotating body. The velocity of the material of the body at any point is a vector which is a function of position (Fig. 2–2). As a second example, consider the flow of heat in a block of material. If the temperature in the block is high at one place and low at another, there will be a flow of heat from the hotter places to the colder. The heat will be flowing in different directions in different parts of the block. The heat flow is a directional quantity which we call $\FLPh$. 
Its magnitude is a measure of how much heat is flowing. Examples of the heat flow vector are also shown in Fig. 2–1. Let’s make a more precise definition of $\FLPh$: The magnitude of the vector heat flow at a point is the amount of thermal energy that passes, per unit time and per unit area, through an infinitesimal surface element at right angles to the direction of flow. The vector points in the direction of flow (see Fig. 2–3). In symbols: If $\Delta J$ is the thermal energy that passes per unit time through the surface element $\Delta a$, then \begin{equation} \label{Eq:II:2:9} \FLPh=\frac{\Delta J}{\Delta a}\,\FLPe_f, \end{equation} where $\FLPe_f$ is a unit vector in the direction of flow. The vector $\FLPh$ can be defined in another way—in terms of its components. We ask how much heat flows through a small surface at any angle with respect to the flow. In Fig. 2–4 we show a small surface $\Delta a_2$ inclined with respect to $\Delta a_1$, which is perpendicular to the flow. The unit vector $\FLPn$ is normal to the surface $\Delta a_2$. The angle $\theta$ between $\FLPn$ and $\FLPh$ is the same as the angle between the surfaces (since $\FLPh$ is normal to $\Delta a_1$). Now what is the heat flow per unit area through $\Delta a_2$? The flow through $\Delta a_2$ is the same as through $\Delta a_1$; only the areas are different. In fact, $\Delta a_1=\Delta a_2\cos\theta$. The heat flow through $\Delta a_2$ is \begin{equation} \label{Eq:II:2:10} \frac{\Delta J}{\Delta a_2}=\frac{\Delta J}{\Delta a_1}\cos\theta= \FLPh\cdot\FLPn. \end{equation} We interpret this equation: the heat flow (per unit time and per unit area) through any surface element whose unit normal is $\FLPn$, is given by $\FLPh\cdot\FLPn$. Equally, we could say: the component of the heat flow perpendicular to the surface element $\Delta a_2$ is $\FLPh\cdot\FLPn$. We can, if we wish, consider that these statements define $\FLPh$. We will be applying the same ideas to other vector fields.
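The projection in Eq. (2.10) is easy to check numerically. In this sketch the heat-flow vector and the tilt angle are sample values of ours:

```python
import math

# Check of Eq. (2.10): the heat flow per unit area through a tilted
# surface element is h . n = |h| cos(theta).  Numbers are samples.

h = (0.0, 0.0, 5.0)          # heat-flow vector, W/m^2 (sample, along z)
theta = math.radians(30.0)   # tilt of the surface normal away from h

# Unit normal n tilted by theta from the z-axis, in the x-z plane.
n = (math.sin(theta), 0.0, math.cos(theta))

h_dot_n = sum(hi * ni for hi, ni in zip(h, n))
expected = math.sqrt(sum(hi * hi for hi in h)) * math.cos(theta)

print(h_dot_n, expected)   # both equal 5 cos 30°
```

The two numbers agree: tilting the surface by $\theta$ spreads the same flow $\Delta J$ over the larger area $\Delta a_2 = \Delta a_1/\cos\theta$, so the flow per unit area drops by exactly $\cos\theta$.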
Chapter 3. Vector Integral Calculus

3–1 Vector integrals; the line integral of $\FLPgrad{\boldsymbol{\psi}}$

We found in Chapter 2 that there were various ways of taking derivatives of fields. Some gave vector fields; some gave scalar fields. Although we developed many different formulas, everything in Chapter 2 could be summarized in one rule: the operators $\ddpl{}{x}$, $\ddpl{}{y}$, and $\ddpl{}{z}$ are the three components of a vector operator $\FLPnabla$. We would now like to get some understanding of the significance of the derivatives of fields. We will then have a better feeling for what a vector field equation means. We have already discussed the meaning of the gradient operation ($\FLPnabla$ on a scalar). Now we turn to the meanings of the divergence and curl operations. The interpretation of these quantities is best done in terms of certain vector integrals and equations relating such integrals. These equations cannot, unfortunately, be obtained from vector algebra by some easy substitution, so you will just have to learn them as something new. Of these integral formulas, one is practically trivial, but the other two are not. We will derive them and explain their implications. The equations we shall study are really mathematical theorems. They will be useful not only for interpreting the meaning and the content of the divergence and the curl, but also in working out general physical theories. These mathematical theorems are, for the theory of fields, what the theorem of the conservation of energy is to the mechanics of particles. General theorems like these are important for a deeper understanding of physics. You will find, though, that they are not very useful for solving problems—except in the simplest cases. It is delightful, however, that in the beginning of our subject there will be many simple problems which can be solved with the three integral formulas we are going to treat.
We will see, however, as the problems get harder, that we can no longer use these simple methods. We take up first an integral formula involving the gradient. The relation contains a very simple idea: Since the gradient represents the rate of change of a field quantity, if we integrate that rate of change, we should get the total change. Suppose we have the scalar field $\psi(x,y,z)$. At any two points $(1)$ and $(2)$, the function $\psi$ will have the values $\psi(1)$ and $\psi(2)$, respectively. [We use a convenient notation, in which $(2)$ represents the point $(x_2,y_2,z_2)$ and $\psi(2)$ means the same thing as $\psi(x_2,y_2,z_2)$.] If $\Gamma$ (gamma) is any curve joining $(1)$ and $(2)$, as in Fig. 3–1, the following relation is true:
Theorem 1. \begin{equation} \label{Eq:II:3:1} \psi(2)-\psi(1)= \underset{\text{along $\Gamma$}}{\int_{(1)}^{(2)}} (\FLPgrad{\psi})\cdot d\FLPs\,. \end{equation} The integral is a line integral, from $(1)$ to $(2)$ along the curve $\Gamma$, of the dot product of $\FLPgrad{\psi}$—a vector—with $d\FLPs$—another vector which is an infinitesimal line element of the curve $\Gamma$ (directed away from $(1)$ and toward $(2)$). First, we should review what we mean by a line integral. Consider a scalar function $f(x,y,z)$, and the curve $\Gamma$ joining two points $(1)$ and $(2)$. We mark off the curve at a number of points and join these points by straight-line segments, as shown in Fig. 3–2. Each segment has the length $\Delta s_i$, where $i$ is an index that runs $1$, $2$, $3$, … By the line integral \begin{equation*} \underset{\text{along $\Gamma$}}{\int_{(1)}^{(2)}} f\,ds \end{equation*} we mean the limit of the sum \begin{equation*} \sum\nolimits_if_i\Delta s_i, \end{equation*} where $f_i$ is the value of the function at the $i$th segment. The limiting value is what the sum approaches as we add more and more segments (in a sensible way, so that the largest $\Delta s_i\to0$). The integral in our theorem, Eq. (3.1), means the same thing, although it looks a little different. Instead of $f$, we have another scalar—the component of $\FLPgrad{\psi}$ in the direction of $\Delta\FLPs$. If we write $(\FLPgrad{\psi})_t$ for this tangential component, it is clear that \begin{equation} \label{Eq:II:3:2} (\FLPgrad{\psi})_t\,\Delta s=(\FLPgrad{\psi})\cdot\Delta\FLPs. \end{equation} The integral in Eq. (3.1) means the sum of such terms. Now let’s see why Eq. (3.1) is true. In Chapter 2, we showed that the component of $\FLPgrad{\psi}$ along a small displacement $\Delta\FLPR$ was the rate of change of $\psi$ in the direction of $\Delta\FLPR$. Consider the line segment $\Delta\FLPs$ from $(1)$ to point $a$ in Fig. 3–2. 
According to our definition, \begin{equation} \label{Eq:II:3:3} \Delta\psi_1=\psi(a)-\psi(1)=(\FLPgrad{\psi})_1\cdot\Delta\FLPs_1. \end{equation} Also, we have \begin{equation} \label{Eq:II:3:4} \psi(b)-\psi(a)=(\FLPgrad{\psi})_2\cdot\Delta\FLPs_2, \end{equation} where, of course, $(\FLPgrad{\psi})_1$ means the gradient evaluated at the segment $\Delta\FLPs_1$, and $(\FLPgrad{\psi})_2$, the gradient evaluated at $\Delta\FLPs_2$. If we add Eqs. (3.3) and (3.4), we get \begin{equation} \label{Eq:II:3:5} \psi(b)-\psi(1)=(\FLPgrad{\psi})_1\cdot\Delta\FLPs_1+ (\FLPgrad{\psi})_2\cdot\Delta\FLPs_2. \end{equation} You can see that if we keep adding such terms, we get the result \begin{equation} \label{Eq:II:3:6} \psi(2)-\psi(1)=\sum\nolimits_i(\FLPgrad{\psi})_i\cdot\Delta\FLPs_i. \end{equation} The left-hand side doesn’t depend on how we choose our intervals—if $(1)$ and $(2)$ are kept always the same—so we can take the limit of the right-hand side. We have therefore proved Eq. (3.1). You can see from our proof that just as the equality doesn’t depend on how the points $a$, $b$, $c$, …, are chosen, similarly it doesn’t depend on what we choose for the curve $\Gamma$ to join $(1)$ and $(2)$. Our theorem is correct for any curve from $(1)$ to $(2)$. One remark on notation: You will see that there is no confusion if we write, for convenience, \begin{equation} \label{Eq:II:3:7} (\FLPgrad{\psi})\cdot d\FLPs=\FLPgrad{\psi}\cdot d\FLPs. \end{equation} With this notation, our theorem is
Theorem 1. \begin{equation} \label{Eq:II:3:8} \psi(2)-\psi(1)= \underset{\substack{\text{any curve from}\\\text{$(1)$ to $(2)$}}}{\int_{(1)}^{(2)}} \FLPgrad{\psi}\cdot d\FLPs\,. \end{equation}
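The theorem can be checked numerically, exactly in the spirit of the telescoping sum of Eq. (3.6). The scalar field $\psi$ and the curve $\Gamma$ below are sample choices of ours:

```python
import math

# Numerical check of Eq. (3.8): psi(2) - psi(1) equals the line integral
# of grad(psi) . ds along any curve.  psi and the curve are samples.

def psi(x, y, z):
    return x * x * y + math.sin(z)

def grad_psi(x, y, z):
    return (2 * x * y, x * x, math.cos(z))

# A sample curve Gamma from (1) = r(0) to (2) = r(1).
def r(t):
    return (t, t * t, math.pi * t / 2)

N = 100_000
total = 0.0
for i in range(N):
    p0, p1 = r(i / N), r((i + 1) / N)
    mid = tuple((a + b) / 2 for a, b in zip(p0, p1))   # midpoint of segment
    ds = tuple(b - a for a, b in zip(p0, p1))          # line element
    g = grad_psi(*mid)
    total += sum(gi * di for gi, di in zip(g, ds))     # (grad psi) . ds

exact = psi(*r(1.0)) - psi(*r(0.0))
print(total, exact)
```

For this $\psi$ the endpoint difference is $\psi(1,1,\pi/2)-\psi(0,0,0)=2$, and the summed line integral reproduces it to high accuracy; changing the curve $r(t)$ (keeping its endpoints) leaves the answer unchanged, just as the theorem asserts.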
3–2 The flux of a vector field

Before we consider our next integral theorem—a theorem about the divergence—we would like to study a certain idea which has an easily understood physical significance in the case of heat flow. We have defined the vector $\FLPh$, which represents the heat that flows through a unit area in a unit time. Suppose that inside a block of material we have some closed surface $S$ which encloses the volume $V$ (Fig. 3–3). We would like to find out how much heat is flowing out of this volume. We can, of course, find it by calculating the total heat flow out of the surface $S$. We write $da$ for the area of an element of the surface. The symbol stands for a two-dimensional differential. If, for instance, the area happened to be in the $xy$-plane we would have \begin{equation*} da=dx\,dy. \end{equation*} Later we shall have integrals over volume and for these it is convenient to consider a differential volume that is a little cube. So when we write $dV$ we mean \begin{equation*} dV=dx\,dy\,dz. \end{equation*} Some people like to write $d^2a$ instead of $da$ to remind themselves that it is kind of a second-order quantity. They would also write $d^3V$ instead of $dV$. We will use the simpler notation, and assume that you can remember that an area has two dimensions and a volume has three. The heat flow out through the surface element $da$ is the area times the component of $\FLPh$ perpendicular to $da$. We have already defined $\FLPn$ as a unit vector pointing outward at right angles to the surface (Fig. 3–3). The component of $\FLPh$ that we want is \begin{equation} \label{Eq:II:3:9} h_n=\FLPh\cdot\FLPn. \end{equation} The heat flow out through $da$ is then \begin{equation} \label{Eq:II:3:10} \FLPh\cdot\FLPn\,da. \end{equation} To get the total heat flow through any surface we sum the contributions from all the elements of the surface.
In other words, we integrate (3.10) over the whole surface: \begin{equation} \label{Eq:II:3:11} \text{Total heat flow outward through $S$}=\int_S\FLPh\cdot\FLPn\,da. \end{equation}
We are also going to call this surface integral “the flux of $\FLPh$ through the surface.” Originally the word flux meant flow, so that the surface integral just means the flow of $\FLPh$ through the surface. We may think: $\FLPh$ is the “current density” of heat flow and the surface integral of it is the total heat current directed out of the surface; that is, the thermal energy per unit time (joules per second). We would like to generalize this idea to the case where the vector does not represent the flow of anything; for instance, it might be the electric field. We can certainly still integrate the normal component of the electric field over an area if we wish. Although it is not the flow of anything, we still call it the “flux.” We say \begin{equation} \label{Eq:II:3:12} \text{Flux of $\FLPE$ through the surface $S$}=\int_S\FLPE\cdot\FLPn\,da. \end{equation}
We generalize the word “flux” to mean the “surface integral of the normal component” of a vector. We will also use the same definition even when the surface considered is not a closed one, as it is here. Returning to the special case of heat flow, let us take a situation in which heat is conserved. For example, imagine some material in which after an initial heating no further heat energy is generated or absorbed. Then, if there is a net heat flow out of a closed surface, the heat content of the volume inside must decrease. So, in circumstances in which heat would be conserved, we say that \begin{equation} \label{Eq:II:3:13} \int_S\FLPh\cdot\FLPn\,da=-\ddt{Q}{t}, \end{equation} where $Q$ is the heat inside the surface. The heat flux out of $S$ is equal to minus the rate of change with respect to time of the total heat $Q$ inside of $S$. This interpretation is possible because we are speaking of heat flow and also because we supposed that the heat was conserved. We could not, of course, speak of the total heat inside the volume if heat were being generated there. Now we shall point out an interesting fact about the flux of any vector. You may think of the heat flow vector if you wish, but what we say will be true for any vector field $\FLPC$. Imagine that we have a closed surface $S$ that encloses the volume $V$. We now separate the volume into two parts by some kind of a “cut,” as in Fig. 3–4. Now we have two closed surfaces and volumes. The volume $V_1$ is enclosed in the surface $S_1$, which is made up of part of the original surface $S_a$ and of the surface of the cut, $S_{ab}$. The volume $V_2$ is enclosed by $S_2$, which is made up of the rest of the original surface $S_b$ and closed off by the cut $S_{ab}$.
Now consider the following question: Suppose we calculate the flux out through surface $S_1$ and add to it the flux through surface $S_2$. Does the sum equal the flux through the whole surface that we started with? The answer is yes. The flux through the part of the surfaces $S_{ab}$ common to both $S_1$ and $S_2$ just exactly cancels out. For the flux of the vector $\FLPC$ out of $V_1$ we can write \begin{equation} \label{Eq:II:3:14} \text{Flux through $S_1$}=\int_{S_a}\FLPC\cdot\FLPn\,da+ \int_{S_{ab}}\FLPC\cdot\FLPn_1\,da, \end{equation}
and for the flux out of $V_2$, \begin{equation} \label{Eq:II:3:15} \text{Flux through $S_2$}=\int_{S_b}\FLPC\cdot\FLPn\,da+ \int_{S_{ab}}\FLPC\cdot\FLPn_2\,da. \end{equation}
Note that in the second integral we have written $\FLPn_1$ for the outward normal for $S_{ab}$ when it belongs to $S_1$, and $\FLPn_2$ when it belongs to $S_2$, as shown in Fig. 3–4. Clearly, $\FLPn_1=-\FLPn_2$, so that \begin{equation} \label{Eq:II:3:16} \int_{S_{ab}}\FLPC\cdot\FLPn_1\,da=-\int_{S_{ab}}\FLPC\cdot\FLPn_2\,da. \end{equation} If we now add Eqs. (3.14) and (3.15), we see that the sum of the fluxes through $S_1$ and $S_2$ is just the sum of two integrals which, taken together, give the flux through the original surface $S=S_a+S_b$. We see that the flux through the complete outer surface $S$ can be considered as the sum of the fluxes from the two pieces into which the volume was broken. We can similarly subdivide again—say by cutting $V_1$ into two pieces. You see that the same arguments apply. So for any way of dividing the original volume, it must be generally true that the flux through the outer surface, which is the original integral, is equal to a sum of the fluxes out of all the little interior pieces.
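This cancellation argument can be demonstrated numerically. In the sketch below we cut a unit cube in half at $x=\tfrac12$ and check that the fluxes out of the two pieces add up to the flux out of the whole; the vector field $\FLPC$ is a sample choice of ours, and the faces are integrated with a simple midpoint grid:

```python
# Check that the flux through S equals the sum of the fluxes out of the
# two pieces made by a cut (Eqs. 3.14-3.16).  The field C and the cube
# are samples; faces are integrated with a midpoint grid.

def C(x, y, z):
    """A sample vector field."""
    return (x * y, y + z * z, x - z)

def face_flux(axis, value, bounds, sign, n=100):
    """Flux of C through the face of a box at coordinate `axis` = value.
    `bounds` gives the (lo, hi) ranges of the other two axes, in order;
    the outward normal is sign * e_axis."""
    (ulo, uhi), (vlo, vhi) = bounds
    hu, hv = (uhi - ulo) / n, (vhi - vlo) / n
    other = [a for a in range(3) if a != axis]
    total = 0.0
    for i in range(n):
        for j in range(n):
            p = [0.0, 0.0, 0.0]
            p[axis] = value
            p[other[0]] = ulo + (i + 0.5) * hu
            p[other[1]] = vlo + (j + 0.5) * hv
            total += sign * C(*p)[axis] * hu * hv
    return total

def box_flux(box, n=100):
    """Total outward flux of C through an axis-aligned box,
    box = ((x0, x1), (y0, y1), (z0, z1))."""
    total = 0.0
    for axis in range(3):
        bounds = [box[a] for a in range(3) if a != axis]
        total += face_flux(axis, box[axis][1], bounds, +1, n)  # far face
        total += face_flux(axis, box[axis][0], bounds, -1, n)  # near face
    return total

whole = box_flux(((0, 1), (0, 1), (0, 1)))
left = box_flux(((0, 0.5), (0, 1), (0, 1)))
right = box_flux(((0.5, 1), (0, 1), (0, 1)))
print(whole, left + right)   # equal: the two cut-face fluxes cancel
```

The flux through the cut face at $x=\tfrac12$ appears in `left` with outward normal $+\FLPe_x$ and in `right` with $-\FLPe_x$, so it cancels in the sum, and only the original outer surface survives.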
|
2 | 3 | Vector Integral Calculus | 3 | The flux from a cube; Gauss’ theorem | We now take the special case of a small cube and find an interesting formula for the flux out of it. Consider a cube whose edges are lined up with the axes as in Fig. 3–5. Let us suppose that the coordinates of the corner nearest the origin are $x$, $y$, $z$. Let $\Delta x$ be the length of the cube in the $x$-direction, $\Delta y$ be the length in the $y$-direction, and $\Delta z$ be the length in the $z$-direction. We wish to find the flux of a vector field $\FLPC$ through the surface of the cube. We shall do this by making a sum of the fluxes through each of the six faces. First, consider the face marked $1$ in the figure. The flux outward on this face is the negative of the $x$-component of $\FLPC$, integrated over the area of the face. This flux is \begin{equation*} -\int C_x\,dy\,dz. \end{equation*} Since we are considering a small cube, we can approximate this integral by the value of $C_x$ at the center of the face—which we call the point $(1)$—multiplied by the area of the face, $\Delta y\,\Delta z$: \begin{equation*} \text{Flux out of $1$}=-C_x(1)\,\Delta y\,\Delta z. \end{equation*} Similarly, for the flux out of face $2$, we write \begin{equation*} \text{Flux out of $2$}=C_x(2)\,\Delta y\,\Delta z. \end{equation*} Now $C_x(1)$ and $C_x(2)$ are, in general, slightly different. If $\Delta x$ is small enough, we can write \begin{equation*} C_x(2)=C_x(1)+\ddp{C_x}{x}\,\Delta x. \end{equation*} There are, of course, more terms, but they will involve $(\Delta x)^2$ and higher powers, and so will be negligible if we consider only the limit of small $\Delta x$. So the flux through face $2$ is \begin{equation*} \text{Flux out of $2$}=\biggl[C_x(1)+\ddp{C_x}{x}\,\Delta x\biggr]\,\Delta y\,\Delta z. \end{equation*} Summing the fluxes for faces $1$ and $2$, we get \begin{equation*} \text{Flux out of $1$ and $2$}=\ddp{C_x}{x}\,\Delta x\,\Delta y\,\Delta z. 
\end{equation*} The derivative should really be evaluated at the center of face $1$; that is, at $[x,y+(\Delta y/2),z+(\Delta z/2)]$. But in the limit of an infinitesimal cube, we make a negligible error if we evaluate it at the corner $(x,y,z)$. Applying the same reasoning to each of the other pairs of faces, we have \begin{equation*} \text{Flux out of $3$ and $4$}=\ddp{C_y}{y}\,\Delta x\,\Delta y\,\Delta z \end{equation*} and \begin{equation*} \text{Flux out of $5$ and $6$}=\ddp{C_z}{z}\,\Delta x\,\Delta y\,\Delta z. \end{equation*} The total flux through all the faces is the sum of these terms. We find that \begin{equation*} \underset{\text{cube}}{\int}\FLPC\cdot\FLPn\,da= \biggl(\ddp{C_x}{x}+\ddp{C_y}{y}+\ddp{C_z}{z}\biggr)\Delta x\,\Delta y\,\Delta z, \end{equation*} and the sum of the derivatives is just $\FLPdiv{\FLPC}$. Also, $\Delta x\,\Delta y\,\Delta z=\Delta V$, the volume of the cube. So we can say that for an infinitesimal cube \begin{equation} \label{Eq:II:3:17} \underset{\text{surface}}{\int}\FLPC\cdot\FLPn\,da= (\FLPdiv{\FLPC})\,\Delta V. \end{equation} We have shown that the outward flux from the surface of an infinitesimal cube is equal to the divergence of the vector multiplied by the volume of the cube. We now see the “meaning” of the divergence of a vector. The divergence of a vector at the point $P$ is the flux—the outgoing “flow” of $\FLPC$—per unit volume, in the neighborhood of $P$. We have connected the divergence of $\FLPC$ to the flux of $\FLPC$ out of each infinitesimal volume. For any finite volume we can use the fact we proved above—that the total flux from a volume is the sum of the fluxes out of each part. We can, that is, integrate the divergence over the entire volume. This gives us the theorem that the integral of the normal component of any vector over any closed surface can also be written as the integral of the divergence of the vector over the volume enclosed by the surface. This theorem is named after Gauss.
Gauss’ Theorem \begin{equation} \label{Eq:II:3:18} \int_S\FLPC\cdot\FLPn\,da=\int_V\FLPdiv{\FLPC}\,dV, \end{equation} where $S$ is any closed surface and $V$ is the volume inside it. |
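For a field that is not linear, the two sides of Eq. (3.17) agree only in the limit of a small cube. Here is a short Python sketch (the test field and the sample point are made up for illustration) that implements exactly the face-center approximation used in the derivation and watches the flux per unit volume approach the divergence:

```python
import numpy as np

def flux_small_cube(C, corner, d):
    # One sample of C at the center of each face, times the face area --
    # precisely the approximation used in deriving Eq. (3.17).
    x, y, z = corner
    c = d / 2
    return d * d * (
        (C(x + d, y + c, z + c)[0] - C(x, y + c, z + c)[0]) +
        (C(x + c, y + d, z + c)[1] - C(x + c, y, z + c)[1]) +
        (C(x + c, y + c, z + d)[2] - C(x + c, y + c, z)[2]))

# an arbitrary smooth field and its divergence, worked out by hand
C   = lambda x, y, z: np.array([np.sin(x) * y, y * z, z * x])
div = lambda x, y, z: np.cos(x) * y + z + x

corner = (0.3, 0.7, 0.2)
for d in (0.1, 0.01, 0.001):
    print(d, flux_small_cube(C, corner, d) / d**3, div(*corner))
    # flux / volume approaches div C at the corner as d -> 0
```

As the text says, evaluating the derivatives at the corner rather than at the face centers makes an error that disappears in the limit; the printout shows the discrepancy shrinking with $d$.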
|
2 | 4 | Electrostatics | 1 | Statics | We begin now our detailed study of the theory of electromagnetism. All of electromagnetism is contained in the Maxwell equations.
Maxwell’s equations: \begin{align} \label{Eq:II:4:1} \FLPdiv{\FLPE}&=\frac{\rho}{\epsO},\\[1ex] \label{Eq:II:4:2} \FLPcurl{\FLPE}&=-\ddp{\FLPB}{t},\\[1ex] \label{Eq:II:4:3} c^2\,\FLPcurl{\FLPB}&=\ddp{\FLPE}{t}+\frac{\FLPj}{\epsO},\\[1ex] \label{Eq:II:4:4} \FLPdiv{\FLPB}&=0. \end{align} The situations that are described by these equations can be very complicated. We will consider first relatively simple situations, and learn how to handle them before we take up more complicated ones. The easiest circumstance to treat is one in which nothing depends on the time—called the static case. All charges are permanently fixed in space, or if they do move, they move as a steady flow in a circuit (so $\rho$ and $\FLPj$ are constant in time). In these circumstances, all of the terms in the Maxwell equations which are time derivatives of the field are zero. In this case, the Maxwell equations become:
Electrostatics: \begin{align} \label{Eq:II:4:5} \FLPdiv{\FLPE}&=\frac{\rho}{\epsO},\\[1ex] \label{Eq:II:4:6} \FLPcurl{\FLPE}&=\FLPzero. \end{align} Magnetostatics: \begin{align} \label{Eq:II:4:7} \FLPcurl{\FLPB}&=\frac{\FLPj}{\epsO c^2},\\[1ex] \label{Eq:II:4:8} \FLPdiv{\FLPB}&=0. \end{align} You will notice an interesting thing about this set of four equations. It can be separated into two pairs. The electric field $\FLPE$ appears only in the first two, and the magnetic field $\FLPB$ appears only in the second two. The two fields are not interconnected. This means that electricity and magnetism are distinct phenomena so long as charges and currents are static. The interdependence of $\FLPE$ and $\FLPB$ does not appear until there are changes in charges or currents, as when a condenser is charged, or a magnet moved. Only when there are sufficiently rapid changes, so that the time derivatives in Maxwell’s equations become significant, will $\FLPE$ and $\FLPB$ depend on each other. Now if you look at the equations of statics you will see that the study of the two subjects we call electrostatics and magnetostatics is ideal from the point of view of learning about the mathematical properties of vector fields. Electrostatics is a neat example of a vector field with zero curl and a given divergence. Magnetostatics is a neat example of a field with zero divergence and a given curl. The more conventional—and you may be thinking, more satisfactory—way of presenting the theory of electromagnetism is to start first with electrostatics and thus to learn about the divergence. Magnetostatics and the curl are taken up later. Finally, electricity and magnetism are put together. We have chosen to start with the complete theory of vector calculus. Now we shall apply it to the special case of electrostatics, the field of $\FLPE$ given by the first pair of equations. We will begin with the simplest situations—ones in which the positions of all charges are specified. 
If we had only to study electrostatics at this level (as we shall do in the next two chapters), life would be very simple—in fact, almost trivial. Everything can be obtained from Coulomb’s law and some integration, as you will see. In many real electrostatic problems, however, we do not know, initially, where the charges are. We know only that they have distributed themselves in ways that depend on the properties of matter. The positions that the charges take up depend on the $\FLPE$ field, which in turn depends on the positions of the charges. Then things can get quite complicated. If, for instance, a charged body is brought near a conductor or insulator, the electrons and protons in the conductor or insulator will move around. The charge density $\rho$ in Eq. (4.5) may have one part that we know about, from the charge that we brought up; but there will be other parts from charges that have moved around in the conductor. And all of the charges must be taken into account. One can get into some rather subtle and interesting problems. So although this chapter is to be on electrostatics, it will not cover the more beautiful and subtle parts of the subject. It will treat only the situation where we can assume that the positions of all the charges are known. Naturally, you should be able to do that case before you try to handle the other ones. |
|
2 | 4 | Electrostatics | 2 | Coulomb’s law; superposition | It would be logical to use Eqs. (4.5) and (4.6) as our starting points. It will be easier, however, if we start somewhere else and come back to these equations. The results will be equivalent. We will start with a law that we have talked about before, called Coulomb’s law, which says that between two charges at rest there is a force directly proportional to the product of the charges and inversely proportional to the square of the distance between. The force is along the straight line from one charge to the other.
Coulomb’s law: \begin{equation} \label{Eq:II:4:9} \FLPF_1=\frac{1}{4\pi\epsO}\,\frac{q_1q_2}{r_{12}^2}\,\FLPe_{12}=-\FLPF_2. \end{equation} $\FLPF_1$ is the force on charge $q_1$, $\FLPe_{12}$ is the unit vector in the direction to $q_1$ from $q_2$, and $r_{12}$ is the distance between $q_1$ and $q_2$. The force $\FLPF_2$ on $q_2$ is equal and opposite to $\FLPF_1$. The constant of proportionality, for historical reasons, is written as $1/4\pi\epsO$. In the system of units which we use—the mks system—it is defined as exactly $10^{-7}$ times the speed of light squared. Now since the speed of light is approximately $3\times10^8$ meters per second, the constant is approximately $9\times10^9$, and the unit turns out to be newton$\cdot$meter$^2$ per coulomb$^2$ or volt$\cdot$meter per coulomb. \begin{align} \frac{1}{4\pi\epsO}&=10^{-7}c^2\;\quad\,\text{(by definition)}\notag\\[-1pt] \label{Eq:II:4:10} &=9.0\times10^9\;\text{(by experiment).}\\[2pt] \text{Unit:}&\quad\text{newton$\cdot$meter$^2$$/$coulomb$^2$,}\notag\\[2pt] \text{or}&\quad\text{volt$\cdot$meter$/$coulomb.}\notag \end{align} When there are more than two charges present—the only really interesting times—we must supplement Coulomb’s law with one other fact of nature: the force on any charge is the vector sum of the Coulomb forces from each of the other charges. This fact is called “the principle of superposition.” That’s all there is to electrostatics. If we combine the Coulomb law and the principle of superposition, there is nothing else. Equations (4.5) and (4.6)—the electrostatic equations—say no more and no less. When applying Coulomb’s law, it is convenient to introduce the idea of an electric field. We say that the field $\FLPE(1)$ is the force per unit charge on $q_1$ (due to all other charges). Dividing Eq. (4.9) by $q_1$, we have, for one other charge besides $q_1$, \begin{equation} \label{Eq:II:4:11} \FLPE(1)=\frac{1}{4\pi\epsO}\,\frac{q_2}{r_{12}^2}\,\FLPe_{12}. 
\end{equation} Also, we consider that $\FLPE(1)$ describes something about the point $(1)$ even if $q_1$ were not there—assuming that all other charges keep their same positions. We say: $\FLPE(1)$ is the electric field at the point $(1)$. The electric field $\FLPE$ is a vector, so by Eq. (4.11) we really mean three equations—one for each component. Writing out explicitly the $x$-component, Eq. (4.11) means \begin{equation} \label{Eq:II:4:12} E_x(x_1,y_1,z_1)=\frac{q_2}{4\pi\epsO}\, \frac{x_1-x_2}{[(x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2]^{3/2}}, \end{equation}
and similarly for the other components. If there are many charges present, the field $\FLPE$ at any point $(1)$ is a sum of the contributions from each of the other charges. Each term of the sum will look like (4.11) or (4.12). Letting $q_j$ be the magnitude of the $j$th charge, and $\FLPr_{1j}$ the displacement from $q_j$ to the point $(1)$, we write \begin{equation} \label{Eq:II:4:13} \FLPE(1)=\sum_j\frac{1}{4\pi\epsO}\,\frac{q_j}{r_{1j}^2}\,\FLPe_{1j}. \end{equation} Which means, of course, \begin{equation} \label{Eq:II:4:14} E_x(x_1,y_1,z_1)=\sum_j\frac{1}{4\pi\epsO}\, \frac{q_j(x_1-x_j)}{[(x_1-x_j)^2+(y_1-y_j)^2+(z_1-z_j)^2]^{3/2}}, \end{equation}
and so on. Often it is convenient to ignore the fact that charges come in packages like electrons and protons, and think of them as being spread out in a continuous smear—or in a “distribution,” as it is called. This is O.K. so long as we are not interested in what is happening on too small a scale. We describe a charge distribution by the “charge density,” $\rho(x,y,z)$. If the amount of charge in a small volume $\Delta V_2$ located at the point $(2)$ is $\Delta q_2$, then $\rho$ is defined by \begin{equation} \label{Eq:II:4:15} \Delta q_2=\rho(2)\Delta V_2. \end{equation} To use Coulomb’s law with such a description, we replace the sums of Eqs. (4.13) or (4.14) by integrals over all volumes containing charges. Then we have \begin{equation} \label{Eq:II:4:16} \FLPE(1)=\frac{1}{4\pi\epsO} \underset{\substack{\text{all}\\\text{space}}}{\int} \frac{\rho(2)\FLPe_{12}\,dV_2}{r_{12}^2}. \end{equation} Some people prefer to write \begin{equation*} \FLPe_{12}=\frac{\FLPr_{12}}{r_{12}},\notag \end{equation*} where $\FLPr_{12}$ is the vector displacement to $(1)$ from $(2)$, as shown in Fig. 4-1. The integral for $\FLPE$ is then written as \begin{equation} \label{Eq:II:4:17} \FLPE(1)=\frac{1}{4\pi\epsO} \underset{\substack{\text{all}\\\text{space}}}{\int} \frac{\rho(2)\FLPr_{12}\,dV_2}{r_{12}^3}. \end{equation} When we want to calculate something with these integrals, we usually have to write them out in explicit detail. For the $x$-component of either Eq. (4.16) or (4.17), we would have \begin{equation} \label{Eq:II:4:18} E_x(x_1,y_1,z_1)= \underset{\substack{\text{all}\\\text{space}}}{\int} \frac{(x_1-x_2)\rho(x_2,y_2,z_2)\,dx_2\,dy_2\,dz_2} {4\pi\epsO [(x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2]^{3/2}}. \end{equation}
We are not going to use this formula much. We write it here only to emphasize the fact that we have completely solved all the electrostatic problems in which we know the locations of all of the charges. Given the charges, what are the fields? Answer: Do this integral. So there is nothing to the subject; it is just a case of doing complicated integrals over three dimensions—strictly a job for a computing machine! With our integrals we can find the fields produced by a sheet of charge, from a line of charge, from a spherical shell of charge, or from any specified distribution. It is important to realize, as we go on to draw field lines, to talk about potentials, or to calculate divergences, that we already have the answer here. It is merely a matter of it being sometimes easier to do an integral by some clever guesswork than by actually carrying it out. The guesswork requires learning all kinds of strange things. In practice, it might be easier to forget trying to be clever and always to do the integral directly instead of being so smart. We are, however, going to try to be smart about it. We shall go on to discuss some other features of the electric field. |
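Equation (4.13) translates directly into a few lines of code. Here is a sketch in Python (the helper name `E_field`, the charge values, and their positions are our own, chosen for illustration):

```python
import numpy as np

EPS0 = 8.854e-12                      # epsilon_0 in farad/meter (approximate)
K = 1 / (4 * np.pi * EPS0)            # ~ 9.0e9 newton*meter^2/coulomb^2

def E_field(point, charges):
    # Eq. (4.13): E(1) = sum_j q_j e_1j / (4 pi eps0 r_1j^2)
    p = np.asarray(point, float)
    E = np.zeros(3)
    for q, pos in charges:
        r = p - np.asarray(pos, float)         # displacement from q_j to (1)
        E += K * q * r / np.linalg.norm(r)**3  # e_1j / r_1j^2 = r / r^3
    return E

# example: +1 nC and -1 nC, one meter apart
charges = [(1e-9, (0.5, 0.0, 0.0)), (-1e-9, (-0.5, 0.0, 0.0))]
print(E_field((0.0, 0.0, 0.0), charges))
# at the midpoint both charges push the field the same way, along -x
```

The superposition principle is the `for` loop: each charge contributes its own Coulomb term, and the contributions simply add as vectors.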
|
2 | 4 | Electrostatics | 3 | Electric potential | First we take up the idea of electric potential, which is related to the work done in carrying a charge from one point to another. There is some distribution of charge, which produces an electric field. We ask about how much work it would take to carry a small charge from one place to another. The work done against the electrical forces in carrying a charge along some path is the negative of the component of the electrical force in the direction of the motion, integrated along the path. If we carry a charge from point $a$ to point $b$, \begin{equation*} W=-\int_a^b\FLPF\cdot d\FLPs, \end{equation*} where $\FLPF$ is the electrical force on the charge at each point, and $d\FLPs$ is the differential vector displacement along the path. (See Fig. 4-2.) It is more interesting for our purposes to consider the work that would be done in carrying one unit of charge. Then the force on the charge is numerically the same as the electric field. Calling the work done against electrical forces in this case $W(\text{unit})$, we write \begin{equation} \label{Eq:II:4:19} W(\text{unit})=-\int_a^b\FLPE\cdot d\FLPs. \end{equation} Now, in general, what we get with this kind of an integral depends on the path we take. But if the integral of (4.19) depended on the path from $a$ to $b$, we could get work out of the field by carrying the charge to $b$ along one path and then back to $a$ on the other. We would go to $b$ along the path for which $W$ is smaller and back along the other, getting out more work than we put in. There is nothing impossible, in principle, about getting energy out of a field. We shall, in fact, encounter fields where it is possible. It could be that as you move a charge you produce forces on the other part of the “machinery.” If the “machinery” moved against the force it would lose energy, thereby keeping the total energy in the world constant. 
For electrostatics, however, there is no such “machinery.” We know what the forces back on the sources of the field are. They are the Coulomb forces on the charges responsible for the field. If the other charges are fixed in position—as we assume in electrostatics only—these back forces can do no work on them. There is no way to get energy from them—provided, of course, that the principle of energy conservation works for electrostatic situations. We believe that it will work, but let’s just show that it must follow from Coulomb’s law of force. We consider first what happens in the field due to a single charge $q$. Let point $a$ be at the distance $r_a$ from $q$, and point $b$ at $r_b$. Now we carry a different charge, which we will call the “test” charge, and whose magnitude we choose to be one unit, from $a$ to $b$. Let’s start with the easiest possible path to calculate. We carry our test charge first along the arc of a circle, then along a radius, as shown in part (a) of Fig. 4-3. Now on that particular path it is child’s play to find the work done (otherwise we wouldn’t have picked it). First, there is no work done at all on the path from $a$ to $a'$. The field is radial (from Coulomb’s law), so it is at right angles to the direction of motion. Next, on the path from $a'$ to $b$, the field is in the direction of motion and varies as $1/r^2$. Thus the work done on the test charge in carrying it from $a$ to $b$ would be \begin{equation} \label{Eq:II:4:20} -\int_a^b\FLPE\cdot d\FLPs=-\frac{q}{4\pi\epsO}\int_{a'}^b \frac{dr}{r^2}=-\frac{q}{4\pi\epsO} \biggl(\frac{1}{r_a}-\frac{1}{r_b}\biggr). \end{equation}
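Equation (4.20) is easy to verify numerically. The following Python sketch (units chosen so that $q/4\pi\epsO=1$; the two paths are invented) evaluates $-\int\FLPE\cdot d\FLPs$ by a midpoint Riemann sum:

```python
import numpy as np

KQ = 1.0   # q/(4 pi eps0), set to 1 for convenience

def work(path, n=20000):
    # -integral of E . ds along path(t), t in [0,1], for a point charge
    # at the origin, evaluated as a midpoint Riemann sum
    t = np.linspace(0.0, 1.0, n + 1)
    P = np.array([path(tt) for tt in t])
    mid = 0.5 * (P[1:] + P[:-1])
    ds = P[1:] - P[:-1]
    r = np.linalg.norm(mid, axis=1)
    E = KQ * mid / r[:, None]**3          # Coulomb field, KQ r-hat / r^2
    return -np.sum(np.einsum('ij,ij->i', E, ds))

a = np.array([2.0, 0.0, 0.0])             # r_a = 2
b = np.array([0.0, 4.0, 0.0])             # r_b = 4
straight = lambda t: (1 - t) * a + t * b
wiggly   = lambda t: (1 - t) * a + t * b + np.array([0.0, 0.0, np.sin(np.pi * t)])

exact = -KQ * (1 / 2 - 1 / 4)             # Eq. (4.20): -(q/4 pi eps0)(1/r_a - 1/r_b)
print(work(straight), work(wiggly), exact)
```

The second path wanders out of the plane and yet gives the same work as the straight one, in agreement with the path-independence argument.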
Now let’s take another easy path. For instance, the one shown in part (b) of Fig. 4-3. It goes for awhile along an arc of a circle, then radially for awhile, then along an arc again, then radially, and so on. Every time we go along the circular parts, we do no work. Every time we go along the radial parts, we must just integrate $1/r^2$. Along the first radial stretch, we integrate from $r_a$ to $r_{a'}$, then along the next radial stretch from $r_{a'}$ to $r_{a''}$, and so on. The sum of all these integrals is the same as a single integral directly from $r_a$ to $r_b$. We get the same answer for this path that we did for the first path we tried. It is clear that we would get the same answer for any path which is made up of an arbitrary number of the same kinds of pieces. What about smooth paths? Would we get the same answer? We discussed this point previously in Chapter 13 of Vol. I. Applying the same arguments used there, we can conclude that work done in carrying a unit charge from $a$ to $b$ is independent of the path. \begin{equation*} \left.\begin{gathered} W(\text{unit})\\[1ex] a\to b \end{gathered} \right\} =-\underset{\substack{\text{any}\\\text{path}}}{\int_a^b}\FLPE\cdot d\FLPs. \end{equation*} Since the work done depends only on the endpoints, it can be represented as the difference between two numbers. We can see this in the following way. Let’s choose a reference point $P_0$ and agree to evaluate our integral by using a path that always goes by way of point $P_0$. Let $\phi(a)$ stand for the work done against the field in going from $P_0$ to point $a$, and let $\phi(b)$ be the work done in going from $P_0$ to point $b$ (Fig. 4-4). The work in going to $P_0$ from $a$ (on the way to $b$) is the negative of $\phi(a)$, so we have that \begin{equation} \label{Eq:II:4:21} -\int_a^b\FLPE\cdot d\FLPs=\phi(b)-\phi(a). 
\end{equation} Since only the difference in the function $\phi$ at two points is ever involved, we do not really have to specify the location of $P_0$. Once we have chosen some reference point, however, a number $\phi$ is determined for any point in space; $\phi$ is then a scalar field. It is a function of $x$, $y$, $z$. We call this scalar function the electrostatic potential at any point.
Electrostatic potential: \begin{equation} \label{Eq:II:4:22} \phi(P)=-\int_{P_0}^P\FLPE\cdot d\FLPs. \end{equation} For convenience, we will often take the reference point at infinity. Then, for a single charge at the origin, the potential $\phi$ is given for any point $(x,y,z)$—using Eq. (4.20): \begin{equation} \label{Eq:II:4:23} \phi(x,y,z)=\frac{q}{4\pi\epsO}\,\frac{1}{r}. \end{equation} The electric field from several charges can be written as the sum of the electric field from the first, from the second, from the third, etc. When we integrate the sum to find the potential we get a sum of integrals. Each of the integrals is the negative of the potential from one of the charges. We conclude that the potential $\phi$ from a lot of charges is the sum of the potentials from all the individual charges. There is a superposition principle also for potentials. Using the same kind of arguments by which we found the electric field from a group of charges and for a distribution of charges, we can get the complete formulas for the potential $\phi$ at a point we call $(1)$: \begin{align} \label{Eq:II:4:24} \phi(1)&=\sum_{j}\frac{1}{4\pi\epsO}\,\frac{q_j}{r_{1j}},\\[1ex] \label{Eq:II:4:25} \phi(1)&=\frac{1}{4\pi\epsO} \underset{\substack{\text{all}\\\text{space}}}{\int} \frac{\rho(2)\,dV_2}{r_{12}}. \end{align} Remember that the potential $\phi$ has a physical significance: it is the potential energy which a unit charge would have if brought to the specified point in space from some reference point. |
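The statement that $\phi$ is the work done against the field in coming from the reference point can be checked against Eq. (4.24). In this Python sketch the charges, their positions, and the finite stand-in for "infinity" are all invented, and units are chosen so that $1/4\pi\epsO=1$:

```python
import numpy as np

K = 1.0   # 1/(4 pi eps0) in our units
charges = [(+1.0, np.array([1.0, 0.0, 0.0])),
           (-2.0, np.array([0.0, 1.0, 0.0]))]

def phi(p):
    # Eq. (4.24): sum over j of q_j / (4 pi eps0 r_1j)
    return sum(K * q / np.linalg.norm(p - pos) for q, pos in charges)

def E(P):
    # superposed field at many points at once; P has shape (n, 3)
    out = np.zeros_like(P)
    for q, pos in charges:
        r = P - pos
        out += K * q * r / np.linalg.norm(r, axis=1)[:, None]**3
    return out

# carry a unit charge from a far-away reference point straight to p
p   = np.array([2.0, 2.0, 0.0])
ref = np.array([4000.0, 0.0, 0.0])        # stands in for "infinity"
t   = np.linspace(0.0, 1.0, 200001)
pts = ref + t[:, None] * (p - ref)
mid, ds = 0.5 * (pts[1:] + pts[:-1]), np.diff(pts, axis=0)
work = -np.sum(np.einsum('ij,ij->i', E(mid), ds))
print(phi(p), work)   # agree, up to the finite reference distance
```

The small residual difference between the two numbers is just $\phi$ at the reference point, which would vanish if the reference were taken truly at infinity.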
|
2 | 4 | Electrostatics | 4 | $\boldsymbol{E=-\nabla\phi}$ | Who cares about $\phi$? Forces on charges are given by $\FLPE$, the electric field. The point is that $\FLPE$ can be obtained easily from $\phi$—it is as easy, in fact, as taking a derivative. Consider two points, one at $x$ and one at $(x+\Delta x)$, but both at the same $y$ and $z$, and ask how much work is done in carrying a unit charge from one point to the other. The path is along the horizontal line from $x$ to $x+\Delta x$. The work done is the difference in the potential at the two points: \begin{equation*} \Delta W=\phi(x+\Delta x,y,z)-\phi(x,y,z)=\ddp{\phi}{x}\,\Delta x. \end{equation*} But the work done against the field for the same path is \begin{equation*} \Delta W=-\int\FLPE\cdot d\FLPs=-E_x\,\Delta x. \end{equation*} We see that \begin{equation} \label{Eq:II:4:26} E_x=-\ddp{\phi}{x}. \end{equation} Similarly, $E_y=-\ddpl{\phi}{y}$, $E_z=-\ddpl{\phi}{z}$, or, summarizing with the notation of vector analysis, \begin{equation} \label{Eq:II:4:27} \FLPE=-\FLPgrad{\phi}. \end{equation} This equation is the differential form of Eq. (4.22). Any problem with specified charges can be solved by computing the potential from (4.24) or (4.25) and using (4.27) to get the field. Equation (4.27) also agrees with what we found from vector calculus: that for any scalar field $\phi$ \begin{equation} \label{Eq:II:4:28} \int_a^b\FLPgrad{\phi}\cdot d\FLPs=\phi(b)-\phi(a). \end{equation} According to Eq. (4.25) the scalar potential $\phi$ is given by a three-dimensional integral similar to the one we had for $\FLPE$. Is there any advantage to computing $\phi$ rather than $\FLPE$? Yes. There is only one integral for $\phi$, while there are three integrals for $\FLPE$—because it is a vector. Furthermore, $1/r$ is usually a little easier to integrate than $x/r^3$. 
It turns out in many practical cases that it is easier to calculate $\phi$ and then take the gradient to find the electric field, than it is to evaluate the three integrals for $\FLPE$. It is merely a practical matter. There is also a deeper physical significance to the potential $\phi$. We have shown that $\FLPE$ of Coulomb’s law is obtained from $\FLPE=-\FLPgrad{\phi}$, when $\phi$ is given by (4.22). But if $\FLPE$ is equal to the gradient of a scalar field, then we know from the vector calculus that the curl of $\FLPE$ must vanish: \begin{equation} \label{Eq:II:4:29} \FLPcurl{\FLPE}=\FLPzero. \end{equation} But that is just our second fundamental equation of electrostatics, Eq. (4.6). We have shown that Coulomb’s law gives an $\FLPE$ field that satisfies that condition. So far, everything is all right. We had really proved that $\FLPcurl{\FLPE}$ was zero before we defined the potential. We had shown that the work done around a closed path is zero. That is, that \begin{equation*} \oint\FLPE\cdot d\FLPs = 0 \end{equation*} for any path. We saw in Chapter 3 that for any such field $\FLPcurl{\FLPE}$ must be zero everywhere. The electric field in electrostatics is an example of a curl-free field. You can practice your vector calculus by proving that $\FLPcurl{\FLPE}$ is zero in a different way—by computing the components of $\FLPcurl{\FLPE}$ for the field of a point charge, as given by Eq. (4.11). If you get zero, the superposition principle says you would get zero for the field of any charge distribution. We should point out an important fact. For any radial force the work done is independent of the path, and there exists a potential. If you think about it, the entire argument we made above to show that the work integral was independent of the path depended only on the fact that the force from a single charge was radial and spherically symmetric. It did not depend on the fact that the dependence on distance was as $1/r^2$—there could have been any $r$ dependence. 
The existence of a potential, and the fact that the curl of $\FLPE$ is zero, comes really only from the symmetry and direction of the electrostatic forces. Because of this, Eq. (4.28)—or (4.29)—can contain only part of the laws of electricity. |
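The relation $\FLPE=-\FLPgrad{\phi}$ can be spot-checked numerically. A Python sketch for a unit point charge, in units with $1/4\pi\epsO=1$ (the sample point and step size are arbitrary):

```python
import numpy as np

def phi(p):
    # potential of a unit charge at the origin, Eq. (4.23) with K = 1
    return 1.0 / np.linalg.norm(p)

def E_coulomb(p):
    # Coulomb field of the same charge: r-hat / r^2 = r / r^3
    return p / np.linalg.norm(p)**3

def minus_grad(f, p, h=1e-5):
    # central-difference version of Eq. (4.27), E = -grad(phi)
    g = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = h
        g[i] = (f(p + dp) - f(p - dp)) / (2 * h)
    return -g

p = np.array([0.6, -0.3, 1.1])
print(minus_grad(phi, p))
print(E_coulomb(p))            # the two vectors agree
```

A similar finite-difference check of the components of $\FLPcurl{\FLPE}$ for this field would give numbers at rounding-error level, consistent with Eq. (4.29).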
|
2 | 5 | Application of Gauss’ Law | 1 | Electrostatics is Gauss’ law plus … | There are two laws of electrostatics: that the flux of the electric field from a volume is proportional to the charge inside—Gauss’ law, and that the circulation of the electric field is zero—$\FLPE$ is a gradient. From these two laws, all the predictions of electrostatics follow. But to say these things mathematically is one thing; to use them easily, and with a certain amount of ingenuity, is another. In this chapter we will work through a number of calculations which can be made with Gauss’ law directly. We will prove theorems and describe some effects, particularly in conductors, that can be understood very easily from Gauss’ law. Gauss’ law by itself cannot give the solution of any problem because the other law must be obeyed too. So when we use Gauss’ law for the solution of particular problems, we will have to add something to it. We will have to presuppose, for instance, some idea of how the field looks—based, for example, on arguments of symmetry. Or we may have to introduce specifically the idea that the field is the gradient of a potential. |
|
2 | 5 | Application of Gauss’ Law | 2 | Equilibrium in an electrostatic field | Consider first the following question: When can a point charge be in stable mechanical equilibrium in the electric field of other charges? As an example, imagine three negative charges at the corners of an equilateral triangle in a horizontal plane. Would a positive charge placed at the center of the triangle remain there? (It will be simpler if we ignore gravity for the moment, although including it would not change the results.) The force on the positive charge is zero, but is the equilibrium stable? Would the charge return to the equilibrium position if displaced slightly? The answer is no. There are no points of stable equilibrium in any electrostatic field—except right on top of another charge. Using Gauss’ law, it is easy to see why. First, for a charge to be in equilibrium at any particular point $P_0$, the field must be zero. Second, if the equilibrium is to be a stable one, we require that if we move the charge away from $P_0$ in any direction, there should be a restoring force directed opposite to the displacement. The electric field at all nearby points must be pointing inward—toward the point $P_0$. But that is in violation of Gauss’ law if there is no charge at $P_0$, as we can easily see. Consider a tiny imaginary surface that encloses $P_0$, as in Fig. 5–1. If the electric field everywhere in the vicinity is pointed toward $P_0$, the surface integral of the normal component is certainly not zero. For the case shown in the figure, the flux through the surface must be a negative number. But Gauss’ law says that the flux of electric field through any surface is proportional to the total charge inside. If there is no charge at $P_0$, the field we have imagined violates Gauss’ law. It is impossible to balance a positive charge in empty space—at a point where there is not some negative charge. 
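The argument can be made concrete for the triangle of charges described above. In this Python sketch (unit charges on a unit circumradius, $1/4\pi\epsO=1$, all chosen for illustration) we look at the curvatures of the potential energy of a positive unit charge at the center:

```python
import numpy as np

K = 1.0
# three equal negative charges at the corners of an equilateral triangle
angles = (np.pi / 2, np.pi / 2 + 2 * np.pi / 3, np.pi / 2 + 4 * np.pi / 3)
charges = [(-1.0, np.array([np.cos(a), np.sin(a), 0.0])) for a in angles]

def U(p):
    # potential energy of a positive unit charge at p
    return sum(K * q / np.linalg.norm(p - pos) for q, pos in charges)

def curvatures(p, h=1e-4):
    # second derivatives of U along x, y, z by central differences
    out = []
    for i in range(3):
        d = np.zeros(3)
        d[i] = h
        out.append((U(p + d) - 2 * U(p) + U(p - d)) / h**2)
    return out

uxx, uyy, uzz = curvatures(np.zeros(3))
print(uxx, uyy, uzz)
# roughly -1.5, -1.5, +3.0: the three curvatures sum to zero, so the
# center is a saddle point of U, not a minimum -- no stable equilibrium
```

The vanishing sum of the curvatures is the differential statement of the flux argument: since $\FLPdiv{\FLPE}=0$ in the charge-free region, the restoring force cannot point inward in every direction at once.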
A positive charge can be in equilibrium if it is in the middle of a distributed negative charge. Of course, the negative charge distribution would have to be held in place by other than electrical forces! Our result has been obtained for a point charge. Does the same conclusion hold for a complicated arrangement of charges held together in fixed relative positions—with rods, for example? We consider the question for two equal charges fixed on a rod. Is it possible that this combination can be in equilibrium in some electrostatic field? The answer is again no. The total force on the rod cannot be restoring for displacements in every direction. Call $\FLPF$ the total force on the rod in any position—$\FLPF$ is then a vector field. Following the argument used above, we conclude that at a position of stable equilibrium, the divergence of $\FLPF$ must be a negative number. But the total force on the rod is the first charge times the field at its position, plus the second charge times the field at its position: \begin{equation} \label{Eq:II:5:1} \FLPF=q_1\FLPE_1+q_2\FLPE_2. \end{equation} The divergence of $\FLPF$ is given by \begin{equation*} \FLPdiv{\FLPF}=q_1(\FLPdiv{\FLPE_1})+q_2(\FLPdiv{\FLPE_2}). \end{equation*} If each of the two charges $q_1$ and $q_2$ is in free space, both $\FLPdiv{\FLPE_1}$ and $\FLPdiv{\FLPE_2}$ are zero, and $\FLPdiv{\FLPF}$ is zero—not negative, as would be required for equilibrium. You can see that an extension of the argument shows that no rigid combination of any number of charges can have a position of stable equilibrium in an electrostatic field in free space. Now we have not shown that equilibrium is forbidden if there are pivots or other mechanical constraints. As an example, consider a hollow tube in which a charge can move back and forth freely, but not sideways. 
Now it is very easy to devise an electric field that points inward at both ends of the tube if it is allowed that the field may point laterally outward near the center of the tube. We simply place positive charges at each end of the tube, as in Fig. 5–2. There can now be an equilibrium point even though the divergence of $\FLPE$ is zero. The charge, of course, would not be in stable equilibrium for sideways motion were it not for “nonelectrical” forces from the tube walls. |
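This conclusion is easy to check numerically. The following sketch (a rough illustration with made-up charge values, using a simple Coulomb's-law helper of our own) puts three equal negative charges at the corners of an equilateral triangle and probes the force on a positive test charge near the center: the force vanishes at the center, is restoring for displacements along the symmetry axis, but pushes the charge farther away for displacements in the plane, so the equilibrium is not stable in every direction.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, SI units

def field(point, charges):
    """Electric field at `point` from point charges, by Coulomb's law.
    `charges` is a list of (q, position) pairs; positions are 3-vectors."""
    E = np.zeros(3)
    for q, pos in charges:
        r = point - pos
        E += q * r / (4 * np.pi * EPS0 * np.linalg.norm(r)**3)
    return E

# Three equal negative charges at the corners of an equilateral triangle
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
charges = [(-1e-9, np.array([np.cos(a), np.sin(a), 0.0])) for a in angles]
center = np.zeros(3)

print(np.linalg.norm(field(center, charges)))  # ~0: the center is an equilibrium point

# Probe small displacements of a positive test charge.  A positive value of
# F . d means the force pushes the charge *away* from the center.
d_plane = np.array([1e-3, 0.0, 0.0])   # displacement in the plane of the triangle
d_axis  = np.array([0.0, 0.0, 1e-3])   # displacement along the symmetry axis
print(np.dot(field(center + d_plane, charges), d_plane) > 0)  # True: unstable in the plane
print(np.dot(field(center + d_axis,  charges), d_axis) > 0)   # False: restoring along the axis
```

The divergence argument explains the pattern: since the divergence of the force on the test charge is zero at the center, a restoring force along the axis must be paid for by an outward force in the plane.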
|
2 | 5 | Application of Gauss’ Law | 3 | Equilibrium with conductors | There is no stable spot in the field of a system of fixed charges. What about a system of charged conductors? Can a system of charged conductors produce a field that will have a stable equilibrium point for a point charge? (We mean at a point other than on a conductor, of course.) You know that conductors have the property that charges can move freely around in them. Perhaps when the point charge is displaced slightly, the other charges on the conductors will move in a way that will give a restoring force to the point charge? The answer is still no—although the proof we have just given doesn’t show it. The proof for this case is more difficult, and we will only indicate how it goes. First, we note that when charges redistribute themselves on the conductors, they can only do so if their motion decreases their total potential energy. (Some energy is lost to heat as they move in the conductor.) Now we have already shown that if the charges producing a field are stationary, there is, near any zero point $P_0$ in the field, some direction for which moving a point charge away from $P_0$ will decrease the energy of the system (since the force is away from $P_0$). Any readjustment of the charges on the conductors can only lower the potential energy still more, so (by the principle of virtual work) their motion will only increase the force in that particular direction away from $P_0$, and not reverse it. Our conclusions do not mean that it is not possible to balance a charge by electrical forces. It is possible if one is willing to control the locations or the sizes of the supporting charges with suitable devices. You know that a rod standing on its point in a gravitational field is unstable, but this does not prove that it cannot be balanced on the end of a finger. Similarly, a charge can be held in one spot by electric fields if they are variable. But not with a passive—that is, a static—system. |
|
2 | 5 | Application of Gauss’ Law | 4 | Stability of atoms | If charges cannot be held stably in position, it is surely not proper to imagine matter to be made up of static point charges (electrons and protons) governed only by the laws of electrostatics. Such a static configuration is impossible; it would collapse! It was once suggested that the positive charge of an atom could be distributed uniformly in a sphere, and the negative charges, the electrons, could be at rest inside the positive charge, as shown in Fig. 5–3. This was the first atomic model, proposed by Thomson. But Rutherford concluded from the experiment of Geiger and Marsden that the positive charges were very much concentrated, in what he called the nucleus. Thomson’s static model had to be abandoned. Rutherford and Bohr then suggested that the equilibrium might be dynamic, with the electrons revolving in orbits, as shown in Fig. 5–4. The electrons would be kept from falling in toward the nucleus by their orbital motion. We already know at least one difficulty with this picture. With such motion, the electrons would be accelerating (because of the circular motion) and would, therefore, be radiating energy. They would lose the kinetic energy required to stay in orbit, and would spiral in toward the nucleus. Again unstable! The stability of the atoms is now explained in terms of quantum mechanics. The electrostatic forces pull the electron as close to the nucleus as possible, but the electron is compelled to stay spread out in space over a distance given by the uncertainty principle. If it were confined in too small a space, it would have a great uncertainty in momentum. But that means that it would have a high expected energy—which it would use to escape from the electrical attraction. 
The net result is an electrical equilibrium not too different from the idea of Thomson—only it is the negative charge that is spread out (because the mass of the electron is so much smaller than the mass of the proton). |
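The size of this equilibrium can be estimated with a rough sketch (an illustrative calculation, not from the text): if the electron is confined to a region of size $a$, the uncertainty principle gives it a kinetic energy of order $\hbar^2/2ma^2$, while the electrostatic energy is about $-e^2/4\pi\epsO a$; minimizing the sum over $a$ reproduces the Bohr radius and the hydrogen binding energy.

```python
import numpy as np

# Physical constants (SI)
hbar = 1.054571817e-34   # reduced Planck constant
m_e  = 9.1093837015e-31  # electron mass
e    = 1.602176634e-19   # elementary charge
eps0 = 8.8541878128e-12  # vacuum permittivity

k = e**2 / (4 * np.pi * eps0)

def energy(a):
    """Rough energy of an electron spread over a size a: the confinement
    (uncertainty-principle) kinetic energy minus the electrostatic attraction."""
    return hbar**2 / (2 * m_e * a**2) - k / a

# Minimize over a grid of sizes
a_grid = np.linspace(1e-11, 3e-10, 100_000)
a_best = a_grid[np.argmin(energy(a_grid))]

print(a_best)              # about 5.29e-11 m, the Bohr radius
print(energy(a_best) / e)  # about -13.6 eV, the hydrogen binding energy
```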
|
2 | 5 | Application of Gauss’ Law | 5 | The field of a line charge | Gauss’ law can be used to solve a number of electrostatic field problems involving a special symmetry—usually spherical, cylindrical, or planar symmetry. In the remainder of this chapter we will apply Gauss’ law to a few such problems. The ease with which these problems can be solved may give the misleading impression that the method is very powerful, and that one should be able to go on to many other problems. It is unfortunately not so. One soon exhausts the list of problems that can be solved easily with Gauss’ law. In later chapters we will develop more powerful methods for investigating electrostatic fields. As our first example, we consider a system with cylindrical symmetry. Suppose that we have a very long, uniformly charged rod. By this we mean that electric charges are distributed uniformly along an indefinitely long straight line, with the charge $\lambda$ per unit length. We wish to know the electric field. The problem can, of course, be solved by integrating the contribution to the field from every part of the line. We are going to do it without integrating, by using Gauss’ law and some guesswork. First, we surmise that the electric field will be directed radially outward from the line. Any axial component from charges on one side would be accompanied by an equal and opposite axial component from charges on the other side. The result could only be a radial field. It also seems reasonable that the field should have the same magnitude at all points equidistant from the line. This is obvious. (It may not be easy to prove, but it is true if space is symmetric—as we believe it is.) We can use Gauss’ law in the following way. We consider an imaginary surface in the shape of a cylinder coaxial with the line, as shown in Fig. 5–5. According to Gauss’ law, the total flux of $\FLPE$ from this surface is equal to the charge inside divided by $\epsO$. 
Since the field is assumed to be normal to the surface, the normal component is the magnitude of the field. Let’s call it $E$. Also, let the radius of the cylinder be $r$, and its length be taken as one unit, for convenience. The flux through the cylindrical surface is equal to $E$ times the area of the surface, which is $2\pi r$. The flux through the two end faces is zero because the electric field is tangential to them. The total charge inside our surface is just $\lambda$, because the length of the line inside is one unit. Gauss’ law then gives \begin{gather} E\cdot2\pi r=\lambda/\epsO,\notag\\[1ex] \label{Eq:II:5:2} E=\frac{\lambda}{2\pi\epsO r}. \end{gather} The electric field of a line charge depends inversely on the first power of the distance from the line. |
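The Gauss-law result (5.2) can be checked against the direct integration we avoided. The sketch below (with illustrative values of $\lambda$ and $r$) sums the radial contributions $\lambda\,dz\,r/4\pi\epsO(r^2+z^2)^{3/2}$ from the elements of a long but finite line and compares the total with $\lambda/2\pi\epsO r$.

```python
import numpy as np

eps0 = 8.8541878128e-12
lam = 1e-9   # charge per unit length, C/m (illustrative)
r = 0.05     # distance of the field point from the line, m

# Gauss'-law answer, Eq. (5.2)
E_gauss = lam / (2 * np.pi * eps0 * r)

# Direct integration: each element dz contributes a field along the line from
# the element to the field point; only the radial part survives, and the
# radial projection factor is r / sqrt(r^2 + z^2).
z = np.linspace(-100.0, 100.0, 400_001)   # a "very long" line standing in for infinity
dz = z[1] - z[0]
dE = lam * dz * r / (4 * np.pi * eps0 * (r**2 + z**2)**1.5)
E_direct = dE.sum()

print(E_gauss, E_direct)   # the two agree closely
```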
|
2 | 6 | The Electric Field in Various Circumstances | 1 | Equations of the electrostatic potential | This chapter will describe the behavior of the electric field in a number of different circumstances. It will provide some experience with the way the electric field behaves, and will describe some of the mathematical methods which are used to find this field. We begin by pointing out that the whole mathematical problem is the solution of two equations, the Maxwell equations for electrostatics: \begin{align} \label{Eq:II:6:1} \FLPdiv{\FLPE}&=\frac{\rho}{\epsO},\\[1ex] \label{Eq:II:6:2} \FLPcurl{\FLPE}&=\FLPzero. \end{align} In fact, the two can be combined into a single equation. From the second equation, we know at once that we can describe the field as the gradient of a scalar (see Section 3–7): \begin{equation} \label{Eq:II:6:3} \FLPE=-\FLPgrad{\phi}. \end{equation} We may, if we wish, completely describe any particular electric field in terms of its potential $\phi$. We obtain the differential equation that $\phi$ must obey by substituting Eq. (6.3) into (6.1), to get \begin{equation} \label{Eq:II:6:4} \FLPdiv{\FLPgrad{\phi}}=-\frac{\rho}{\epsO}. \end{equation} The divergence of the gradient of $\phi$ is the same as $\nabla^2$ operating on $\phi$: \begin{equation} \label{Eq:II:6:5} \FLPdiv{\FLPgrad{\phi}}=\nabla^2\phi= \frac{\partial^2\phi}{\partial x^2}+ \frac{\partial^2\phi}{\partial y^2}+ \frac{\partial^2\phi}{\partial z^2}, \end{equation} so we write Eq. (6.4) as \begin{equation} \label{Eq:II:6:6} \nabla^2\phi=-\frac{\rho}{\epsO}. \end{equation} The operator $\nabla^2$ is called the Laplacian, and Eq. (6.6) is called the Poisson equation. The entire subject of electrostatics, from a mathematical point of view, is merely a study of the solutions of the single equation (6.6). Once $\phi$ is obtained by solving Eq. (6.6) we can find $\FLPE$ immediately from Eq. (6.3). 
We take up first the special class of problems in which $\rho$ is given as a function of $x$, $y$, $z$. In that case the problem is almost trivial, for we already know the solution of Eq. (6.6) for the general case. We have shown that if $\rho$ is known at every point, the potential at point $(1)$ is \begin{equation} \label{Eq:II:6:7} \phi(1)=\int\frac{\rho(2)\,dV_2}{4\pi\epsO r_{12}}, \end{equation} where $\rho(2)$ is the charge density, $dV_2$ is the volume element at point $(2)$, and $r_{12}$ is the distance between points $(1)$ and $(2)$. The solution of the differential equation (6.6) is reduced to an integration over space. The solution (6.7) should be especially noted, because there are many situations in physics that lead to equations like \begin{equation*} \nabla^2(\text{something})=(\text{something else}), \end{equation*} and Eq. (6.7) is a prototype of the solution for any of these problems. The solution of electrostatic field problems is thus completely straightforward when the positions of all the charges are known. Let’s see how it works in a few examples. |
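To see Eq. (6.7) at work, one can evaluate the integral numerically for a case whose answer we already know: a uniformly charged sphere, whose potential outside is the same as that of a point charge $Q$ at its center. The grid and values below are illustrative.

```python
import numpy as np

eps0 = 8.8541878128e-12
a, Q = 0.1, 1e-9                       # sphere radius and total charge (illustrative)
rho = Q / (4 / 3 * np.pi * a**3)       # uniform charge density

# Chop the sphere into small cells and sum rho dV / (4 pi eps0 r12), Eq. (6.7)
n = 60
xs = np.linspace(-a, a, n)
dV = (xs[1] - xs[0])**3
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
inside = X**2 + Y**2 + Z**2 <= a**2    # keep only cells inside the sphere

P = np.array([0.3, 0.0, 0.0])          # observation point outside the sphere
r12 = np.sqrt((P[0] - X[inside])**2 + (P[1] - Y[inside])**2 + (P[2] - Z[inside])**2)
phi = np.sum(rho * dV / (4 * np.pi * eps0 * r12))

# Outside a spherically symmetric charge the potential is Q / (4 pi eps0 R)
print(phi, Q / (4 * np.pi * eps0 * np.linalg.norm(P)))
```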
|
2 | 6 | The Electric Field in Various Circumstances | 2 | The electric dipole | First, take two point charges, $+q$ and $-q$, separated by the distance $d$. Let the $z$-axis go through the charges, and pick the origin halfway between, as shown in Fig. 6–1. Then, using (4.24), the potential from the two charges is given by
\begin{alignat}{2} \label{Eq:II:6:8} \phi(&x,y,z)\\[.5ex] &=\frac{1}{4\pi\epsO}\!\biggl[ &\frac{q}{\sqrt{[z\!-\!(d/2)]^2\!+\!x^2\!+\!y^2}}\,+\notag\\ &\phantom{\frac{1}{4\pi\epsO}\biggl[} &\frac{-q}{\sqrt{[z\!+\!(d/2)]^2\!+\!x^2\!+\!y^2}}\biggr].\notag \end{alignat} We are not going to write out the formula for the electric field, but we can always calculate it once we have the potential. So we have solved the problem of two charges. There is an important special case in which the two charges are very close together—which is to say that we are interested in the fields only at distances from the charges large in comparison with their separation. We call such a close pair of charges a dipole. Dipoles are very common. A “dipole” antenna can often be approximated by two charges separated by a small distance—if we don’t ask about the field too close to the antenna. (We are usually interested in antennas with moving charges; then the equations of statics do not really apply, but for some purposes they are an adequate approximation.) More important perhaps, are atomic dipoles. If there is an electric field in any material, the electrons and protons feel opposite forces and are displaced relative to each other. In a conductor, you remember, some of the electrons move to the surfaces, so that the field inside becomes zero. In an insulator the electrons cannot move very far; they are pulled back by the attraction of the nucleus. They do, however, shift a little bit. So although an atom, or molecule, remains neutral in an external electric field, there is a very tiny separation of its positive and negative charges and it becomes a microscopic dipole. If we are interested in the fields of these atomic dipoles in the neighborhood of ordinary-sized objects, we are normally dealing with distances large compared with the separations of the pairs of charges. 
In some molecules the charges are somewhat separated even in the absence of external fields, because of the form of the molecule. In a water molecule, for example, there is a net negative charge on the oxygen atom and a net positive charge on each of the two hydrogen atoms, which are not placed symmetrically but as in Fig. 6–2. Although the charge of the whole molecule is zero, there is a charge distribution with a little more negative charge on one side and a little more positive charge on the other. This arrangement is certainly not as simple as two point charges, but when seen from far away the system acts like a dipole. As we shall see a little later, the field at large distances is not sensitive to the fine details. Let’s look, then, at the field of two opposite charges with a small separation $d$. If $d$ becomes zero, the two charges are on top of each other, the two potentials cancel, and there is no field. But if they are not exactly on top of each other, we can get a good approximation to the potential by expanding the terms of (6.8) in a power series in the small quantity $d$ (using the binomial expansion). Keeping terms only to first order in $d$, we can write \begin{equation*} \biggl(z-\frac{d}{2}\biggr)^2\approx z^2-zd. \end{equation*} It is convenient to write \begin{equation*} x^2+y^2+z^2=r^2. \end{equation*} Then \begin{equation*} \biggl(z-\frac{d}{2}\biggr)^2+x^2+y^2\approx r^2-zd= r^2\biggl(1-\frac{zd}{r^2}\biggr), \end{equation*}
and
\begin{gather*} \frac{1}{\sqrt{[z-(d/2)]^2+x^2+y^2}}\approx \frac{1}{\sqrt{r^2[1-(zd/r^2)]}}\\[2ex] =\frac{1}{r}\biggl(1-\frac{zd}{r^2}\biggr)^{-1/2}. \end{gather*} Using the binomial expansion again for $[1-(zd/r^2)]^{-1/2}$—and throwing away terms with the square or higher powers of $d$—we get \begin{equation*} \frac{1}{r}\biggl(1+\frac{1}{2}\,\frac{zd}{r^2}\biggr). \end{equation*} Similarly, \begin{equation*} \frac{1}{\sqrt{[z+(d/2)]^2+x^2+y^2}}\approx \frac{1}{r}\biggl(1-\frac{1}{2}\,\frac{zd}{r^2}\biggr). \end{equation*} The difference of these two terms gives for the potential \begin{equation} \label{Eq:II:6:9} \phi(x,y,z)=\frac{1}{4\pi\epsO}\,\frac{z}{r^3}\,qd. \end{equation} The potential, and hence the field, which is its derivative, is proportional to $qd$, the product of the charge and the separation. This product is defined as the dipole moment of the two charges, for which we will use the symbol $p$ (do not confuse with momentum!): \begin{equation} \label{Eq:II:6:10} p=qd. \end{equation} Equation (6.9) can also be written as \begin{equation} \label{Eq:II:6:11} \phi(x,y,z)=\frac{1}{4\pi\epsO}\,\frac{p\cos\theta}{r^2}, \end{equation} since $z/r=\cos\theta$, where $\theta$ is the angle between the axis of the dipole and the radius vector to the point $(x,y,z)$—see Fig. 6–1. The potential of a dipole decreases as $1/r^2$ for a given direction from the axis (whereas for a point charge it goes as $1/r$). The electric field $\FLPE$ of the dipole will then decrease as $1/r^3$. We can put our formula into a vector form if we define $\FLPp$ as a vector whose magnitude is $p$ and whose direction is along the axis of the dipole, pointing from $-q$ toward $+q$. Then \begin{equation} \label{Eq:II:6:12} p\cos\theta=\FLPp\cdot\FLPe_r, \end{equation} where $\FLPe_r$ is the unit radial vector (Fig. 6–3). We can also represent the point $(x,y,z)$ by $\FLPr$. Then
Dipole potential: \begin{equation} \label{Eq:II:6:13} \phi(\FLPr)=\frac{1}{4\pi\epsO}\,\frac{\FLPp\cdot\FLPe_r}{r^2}= \frac{1}{4\pi\epsO}\,\frac{\FLPp\cdot\FLPr}{r^3} \end{equation} This formula is valid for a dipole with any orientation and position if $\FLPr$ represents the vector from the dipole to the point of interest. If we want the electric field of the dipole we can get it by taking the gradient of $\phi$. For example, the $z$-component of the field is $-\ddpl{\phi}{z}$. For a dipole oriented along the $z$-axis we can use (6.9): \begin{equation} -\ddp{\phi}{z}=-\frac{p}{4\pi\epsO}\,\ddp{}{z}\biggl(\frac{z}{r^3}\biggr) =-\frac{p}{4\pi\epsO}\biggl(\frac{1}{r^3}-\frac{3z^2}{r^5}\biggr),\notag \end{equation} or \begin{equation} \label{Eq:II:6:14} E_z=\frac{p}{4\pi\epsO}\,\frac{3\cos^2\theta-1}{r^3}. \end{equation} The $x$- and $y$-components are \begin{equation*} E_x=\frac{p}{4\pi\epsO}\,\frac{3zx}{r^5},\quad E_y=\frac{p}{4\pi\epsO}\,\frac{3zy}{r^5}. \end{equation*} These two can be combined to give one component directed perpendicular to the $z$-axis, which we will call the transverse component $E_\perp$: \begin{equation} E_\perp=\sqrt{E_x^2+E_y^2}=\frac{p}{4\pi\epsO}\,\frac{3z}{r^5} \sqrt{x^2+y^2}\notag \end{equation} or \begin{equation} \label{Eq:II:6:15} E_\perp=\frac{p}{4\pi\epsO}\,\frac{3\cos\theta\sin\theta}{r^3}. \end{equation} The transverse component $E_\perp$ is in the $xy$-plane and points directly away from the axis of the dipole. The total field, of course, is \begin{equation*} E=\sqrt{E_z^2+E_\perp^2}. \end{equation*} The dipole field varies inversely as the cube of the distance from the dipole. On the axis, at $\theta=0$, it is twice as strong as at $\theta=90^\circ$. At both of these special angles the electric field has only a $z$-component, but of opposite sign at the two places (Fig. 6–4). |
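A short numerical sketch (with illustrative values, $q=10^{-9}$ C and $d=10^{-3}$ m) shows how good the approximation is: far from the pair, the exact two-charge potential of Eq. (6.8) and the dipole potential of Eq. (6.11) agree to high accuracy, and the axial field of Eq. (6.14) falls off as $1/r^3$.

```python
import numpy as np

eps0 = 8.8541878128e-12
q, d = 1e-9, 1e-3        # charge and separation (illustrative)
p = q * d                # dipole moment, Eq. (6.10)
k = 1 / (4 * np.pi * eps0)

def phi_exact(x, y, z):
    """Potential of +q at z = +d/2 and -q at z = -d/2, Eq. (6.8)."""
    return k * (q / np.sqrt((z - d/2)**2 + x**2 + y**2)
                - q / np.sqrt((z + d/2)**2 + x**2 + y**2))

def phi_dipole(x, y, z):
    """Dipole approximation, Eq. (6.11): p cos(theta) / (4 pi eps0 r^2)."""
    r = np.sqrt(x**2 + y**2 + z**2)
    return k * p * (z / r) / r**2

far = (0.3, 0.2, 0.5)                      # a point with r >> d
print(phi_exact(*far), phi_dipole(*far))   # nearly identical

# On the axis (theta = 0), Eq. (6.14) gives E_z = 2p / (4 pi eps0 r^3);
# doubling the distance should divide the field by 8.
Ez = lambda r: k * p * 2 / r**3
print(Ez(2.0) / Ez(1.0))                   # 0.125
```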
|
2 | 6 | The Electric Field in Various Circumstances | 3 | Remarks on vector equations | This is a good place to make a general remark about vector analysis. The fundamental proofs can be expressed by elegant equations in a general form, but in making various calculations and analyses it is always a good idea to choose the axes in some convenient way. Notice that when we were finding the potential of a dipole we chose the $z$-axis along the direction of the dipole, rather than at some arbitrary angle. This made the work much easier. But then we wrote the equations in vector form so that they would no longer depend on any particular coordinate system. After that, we are allowed to choose any coordinate system we wish, knowing that the relation is, in general, true. It clearly doesn’t make any sense to bother with an arbitrary coordinate system at some complicated angle when you can choose a neat system for the particular problem—provided that the result can finally be expressed as a vector equation. So by all means take advantage of the fact that vector equations are independent of any coordinate system. On the other hand, if you are trying to calculate the divergence of a vector, instead of just looking at $\FLPdiv{\FLPE}$ and wondering what it is, don’t forget that it can always be spread out as \begin{equation*} \ddp{E_x}{x}+\ddp{E_y}{y}+\ddp{E_z}{z}. \end{equation*} If you can then work out the $x$-, $y$-, and $z$-components of the electric field and differentiate them, you will have the divergence. There often seems to be a feeling that there is something inelegant—some kind of defeat involved—in writing out the components; that somehow there ought always to be a way to do everything with the vector operators. There is often no advantage to it. The first time we encounter a particular kind of problem, it usually helps to write out the components to be sure we understand what is going on. 
There is nothing inelegant about putting numbers into equations, and nothing inelegant about substituting the derivatives for the fancy symbols. In fact, there is often a certain cleverness in doing just that. Of course when you publish a paper in a professional journal it will look better—and be more easily understood—if you can write everything in vector form. Besides, it saves print. |
|
2 | 6 | The Electric Field in Various Circumstances | 4 | The dipole potential as a gradient | We would like to point out a rather amusing thing about the dipole formula, Eq. (6.13). The potential can also be written as \begin{equation} \label{Eq:II:6:16} \phi=-\frac{1}{4\pi\epsO}\FLPp\cdot\FLPgrad{\biggl(\frac{1}{r}\biggr)}. \end{equation} If you calculate the gradient of $1/r$, you get \begin{equation*} \FLPgrad{\biggl(\frac{1}{r}\biggr)}=-\frac{\FLPr}{r^3}= -\frac{\FLPe_r}{r^2}, \end{equation*} and Eq. (6.16) is the same as Eq. (6.13). How did we think of that? We just remembered that $\FLPe_r/r^2$ appeared in the formula for the field of a point charge, and that the field was the gradient of a potential which has a $1/r$ dependence. There is a physical reason for being able to write the dipole potential in the form of Eq. (6.16). Suppose we have a point charge $q$ at the origin. The potential at the point $P$ at $(x,y,z)$ is \begin{equation*} \phi_0=\frac{q}{r}. \end{equation*} (Let’s leave off the $1/4\pi\epsO$ while we make these arguments; we can stick it in at the end.) Now if we move the charge $+q$ up a distance $\Delta z$, the potential at $P$ will change a little, by, say, $\Delta\phi_+$. How much is $\Delta\phi_+$? Well, it is just the amount that the potential would change if we were to leave the charge at the origin and move $P$ downward by the same distance $\Delta z$ (Fig. 6–5). That is, \begin{equation*} \Delta\phi_+=-\ddp{\phi_0}{z}\Delta z, \end{equation*} where by $\Delta z$ we mean the same as $d/2$. So, using $\phi_0=q/r$, we have that the potential from the positive charge is \begin{equation} \label{Eq:II:6:17} \phi_+=\frac{q}{r}-\ddp{}{z}\biggl(\frac{q}{r}\biggr)\frac{d}{2}. \end{equation} Applying the same reasoning for the potential from the negative charge, we can write \begin{equation} \label{Eq:II:6:18} \phi_-=\frac{-q}{r}+\ddp{}{z}\biggl(\frac{-q}{r}\biggr)\frac{d}{2}. 
\end{equation} The total potential is the sum of (6.17) and (6.18): \begin{align} \label{Eq:II:6:19} \phi=\phi_++\phi_-&=-\ddp{}{z}\biggl(\frac{q}{r}\biggr)d\\[1ex] &=-\ddp{}{z}\biggl(\frac{1}{r}\biggr)qd\notag \end{align} For other orientations of the dipole, we could represent the displacement of the positive charge by the vector $\Delta\FLPr_+$. We should then write the equation above Eq. (6.17) as \begin{equation*} \Delta\phi_+=-\FLPgrad{\phi_0}\cdot\Delta\FLPr_+, \end{equation*} where $\Delta\FLPr_+$ is then to be replaced by $\FLPd/2$. Completing the derivation as before, Eq. (6.19) would then become \begin{equation*} \phi=-\FLPgrad{\biggl(\frac{1}{r}\biggr)}\cdot q\FLPd. \end{equation*} This is the same as Eq. (6.16), if we replace $q\FLPd=\FLPp$, and put back the $1/4\pi\epsO$. Looking at it another way, we see that the dipole potential, Eq. (6.13), can be interpreted as \begin{equation} \label{Eq:II:6:20} \phi=-\FLPp\cdot\FLPgrad{\Phi_0}, \end{equation} where $\Phi_0=1/4\pi\epsO r$ is the potential of a unit point charge. Although we can always find the potential of a known charge distribution by an integration, it is sometimes possible to save time by getting the answer with a clever trick. For example, one can often make use of the superposition principle. If we are given a charge distribution that can be made up of the sum of two distributions for which the potentials are already known, it is easy to find the desired potential by just adding the two known ones. One example of this is our derivation of (6.20), another is the following. Suppose we have a spherical surface with a distribution of surface charge that varies as the cosine of the polar angle. The integration for this distribution is fairly messy. But, surprisingly, such a distribution can be analyzed by superposition. 
For imagine a sphere with a uniform volume density of positive charge, and another sphere with an equal uniform volume density of negative charge, originally superposed to make a neutral—that is, uncharged—sphere. If the positive sphere is then displaced slightly with respect to the negative sphere, the body of the uncharged sphere would remain neutral, but a little positive charge will appear on one side, and some negative charge will appear on the opposite side, as illustrated in Fig. 6–6. If the relative displacement of the two spheres is small, the net charge is equivalent to a surface charge (on a spherical surface), and the surface charge density will be proportional to the cosine of the polar angle. Now if we want the potential from this distribution, we do not need to do an integral. We know that the potential from each of the spheres of charge is—for points outside the sphere—the same as from a point charge. The two displaced spheres are like two point charges; the potential is just that of a dipole. In this way you can show that a charge distribution on a sphere of radius $a$ with a surface charge density \begin{equation*} \sigma=\sigma_0\cos\theta \end{equation*} produces a field outside the sphere which is just that of a dipole whose moment is \begin{equation*} p=\frac{4\pi\sigma_0 a^3}{3}. \end{equation*} It can also be shown that inside the sphere the field is constant, with the value \begin{equation*} E=\frac{\sigma_0}{3\epsO}. \end{equation*} If $\theta$ is the angle from the positive $z$-axis, the electric field inside the sphere is in the negative $z$-direction. The example we have just considered is not as artificial as it may appear; we will encounter it again in the theory of dielectrics. |
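The two-sphere picture can be made concrete with a few lines of arithmetic (illustrative numbers for $\rho$, $a$, and the displacement $\delta$): the dipole moment $Q\delta$ of the displaced spheres equals $4\pi\sigma_0a^3/3$ with $\sigma_0=\rho\delta$, and superposing the interior fields $\rho r/3\epsO$ of the two spheres gives a constant field of magnitude $\sigma_0/3\epsO$ everywhere inside.

```python
import numpy as np

eps0 = 8.8541878128e-12
a = 0.1                 # sphere radius (illustrative)
rho = 1e-6              # volume charge density of the positive sphere
delta = 1e-4            # small displacement of + sphere relative to - sphere, along z
sigma0 = rho * delta    # amplitude of the equivalent surface charge sigma0 cos(theta)

Q = rho * 4 / 3 * np.pi * a**3
p = Q * delta                           # dipole moment of the displaced spheres
p_formula = 4 * np.pi * sigma0 * a**3 / 3
print(p, p_formula)                     # the same number

# Inside a uniform sphere of density rho the field is rho r / (3 eps0), where r
# is measured from that sphere's center.  Superposing the two displaced spheres:
def E_inside(point):
    r_plus = point - np.array([0.0, 0.0, +delta / 2])
    r_minus = point - np.array([0.0, 0.0, -delta / 2])
    return rho * r_plus / (3 * eps0) - rho * r_minus / (3 * eps0)

for pt in [np.zeros(3), np.array([0.03, -0.02, 0.05])]:
    print(E_inside(pt))   # the same constant vector, (0, 0, -sigma0 / (3 eps0))
```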
|
2 | 6 | The Electric Field in Various Circumstances | 5 | The dipole approximation for an arbitrary distribution | The dipole field appears in another circumstance both interesting and important. Suppose that we have an object that has a complicated distribution of charge—like the water molecule (Fig. 6–2)—and we are interested only in the fields far away. We will show that it is possible to find a relatively simple expression for the fields which is appropriate for distances large compared with the size of the object. We can think of our object as an assembly of point charges $q_i$ in a certain limited region, as shown in Fig. 6–7. (We can, later, replace $q_i$ by $\rho\,dV$ if we wish.) Let each charge $q_i$ be located at the displacement $\FLPd_i$ from an origin chosen somewhere in the middle of the group of charges. What is the potential at the point $P$, located at $\FLPR$, where $\FLPR$ is much larger than the maximum $\FLPd_i$? The potential from the whole collection is given by \begin{equation} \label{Eq:II:6:21} \phi=\frac{1}{4\pi\epsO}\sum_i\frac{q_i}{r_i}, \end{equation} where $r_i$ is the distance from $P$ to the charge $q_i$ (the length of the vector $\FLPR-\FLPd_i$). Now if the distance from the charges to $P$, the point of observation, is enormous, each of the $r_i$’s can be approximated by $R$. Each term becomes $q_i/R$, and we can take $1/R$ out as a factor in front of the summation. This gives us the simple result \begin{equation} \label{Eq:II:6:22} \phi=\frac{1}{4\pi\epsO}\,\frac{1}{R}\sum_iq_i= \frac{Q}{4\pi\epsO R}, \end{equation} where $Q$ is just the total charge of the whole object. Thus we find that for points far enough from any lump of charge, the lump looks like a point charge. The result is not too surprising. But what if there are equal numbers of positive and negative charges? Then the total charge $Q$ of the object is zero. This is not an unusual case; in fact, as we know, objects are usually neutral. 
The water molecule is neutral, but the charges are not all at one point, so if we are close enough we should be able to see some effects of the separate charges. We need a better approximation than (6.22) for the potential from an arbitrary distribution of charge in a neutral object. Equation (6.21) is still precise, but we can no longer just set $r_i=R$. We need a more accurate expression for $r_i$. If the point $P$ is at a large distance, $r_i$ will differ from $R$ to an excellent approximation by the projection of $\FLPd$ on $\FLPR$, as can be seen from Fig. 6–7. (You should imagine that $P$ is really farther away than is shown in the figure.) In other words, if $\FLPe_R$ is the unit vector in the direction of $\FLPR$, then our next approximation to $r_i$ is \begin{equation} \label{Eq:II:6:23} r_i\approx R-\FLPd_i\cdot\FLPe_R. \end{equation} What we really want is $1/r_i$, which, since $d_i\ll R$, can be written to our approximation as \begin{equation} \label{Eq:II:6:24} \frac{1}{r_i}\approx\frac{1}{R}\biggl(1+\frac{\FLPd_i\cdot\FLPe_R}{R}\biggr). \end{equation} Substituting this in (6.21), we get that the potential is \begin{equation} \label{Eq:II:6:25} \phi=\frac{1}{4\pi\epsO}\biggl(\frac{Q}{R}+ \sum_iq_i\frac{\FLPd_i\cdot\FLPe_R}{R^2}+\dotsb\biggr). \end{equation} The three dots indicate the terms of higher order in $d_i/R$ that we have neglected. These, as well as the ones we have already obtained, are successive terms in a Taylor expansion of $1/r_i$ about $1/R$ in powers of $d_i/R$. The first term in (6.25) is what we got before; it drops out if the object is neutral. The second term depends on $1/R^2$, just as for a dipole. In fact, if we define \begin{equation} \label{Eq:II:6:26} \FLPp=\sum_iq_i\FLPd_i \end{equation} as a property of the charge distribution, the second term of the potential (6.25) is \begin{equation} \label{Eq:II:6:27} \phi=\frac{1}{4\pi\epsO}\,\frac{\FLPp\cdot\FLPe_R}{R^2}, \end{equation} precisely a dipole potential. 
The quantity $\FLPp$ is called the dipole moment of the distribution. It is a generalization of our earlier definition, and reduces to it for the special case of two point charges. Our result is that, far enough away from any mess of charges that is as a whole neutral, the potential is a dipole potential. It decreases as $1/R^2$ and varies as $\cos\theta$—and its strength depends on the dipole moment of the distribution of charge. It is for these reasons that dipole fields are important, since the simple case of a pair of point charges is quite rare. The water molecule, for example, has a rather strong dipole moment. The electric fields that result from this moment are responsible for some of the important properties of water. For many molecules, for example CO$_2$, the dipole moment vanishes because of the symmetry of the molecule. For them we should expand still more accurately, obtaining another term in the potential which decreases as $1/R^3$, and which is called a quadrupole potential. We will discuss such cases later. |
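Here is a numerical sketch of the expansion (the charges and positions are made up, chosen only so that the object is neutral): the exact sum (6.21) is compared with the dipole term (6.27), using $\FLPp=\sum_iq_i\FLPd_i$ from (6.26).

```python
import numpy as np

eps0 = 8.8541878128e-12
k = 1 / (4 * np.pi * eps0)

# An arbitrary neutral cluster of charges (made-up values), q_i at displacement d_i
charges = [( 1e-9, np.array([0.0,   0.0,  0.001])),
           ( 1e-9, np.array([0.001, 0.0,  0.001])),
           (-2e-9, np.array([0.0,   0.0, -0.001]))]

Q = sum(q for q, _ in charges)        # total charge: zero for a neutral object
p = sum(q * d for q, d in charges)    # dipole moment of the distribution, Eq. (6.26)

R = np.array([0.4, -0.3, 0.5])        # distant observation point, R >> d_i
Rmag = np.linalg.norm(R)
eR = R / Rmag

phi_exact = sum(k * q / np.linalg.norm(R - d) for q, d in charges)   # Eq. (6.21)
phi_dipole = k * np.dot(p, eR) / Rmag**2                             # Eq. (6.27)

print(Q)                      # 0: the monopole term drops out
print(phi_exact, phi_dipole)  # nearly equal; the difference is the quadrupole term
```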
|
2 | 6 | The Electric Field in Various Circumstances | 6 | The fields of charged conductors | We have now finished with the examples we wish to cover of situations in which the charge distribution is known from the start. It has been a problem without serious complications, involving at most some integrations. We turn now to an entirely new kind of problem, the determination of the fields near charged conductors. Suppose that we have a situation in which a total charge $Q$ is placed on an arbitrary conductor. Now we will not be able to say exactly where the charges are. They will spread out in some way on the surface. How can we know how the charges have distributed themselves on the surface? They must distribute themselves so that the potential of the surface is constant. If the surface were not an equipotential, there would be an electric field inside the conductor, and the charges would keep moving until it became zero. The general problem of this kind can be solved in the following way. We guess at a distribution of charge and calculate the potential. If the potential turns out to be constant everywhere on the surface, the problem is finished. If the surface is not an equipotential, we have guessed the wrong distribution of charges, and should guess again—hopefully with an improved guess! This can go on forever, unless we are judicious about the successive guesses. The question of how to guess at the distribution is mathematically difficult. Nature, of course, has time to do it; the charges push and pull until they all balance themselves. When we try to solve the problem, however, it takes us so long to make each trial that that method is very tedious. With an arbitrary group of conductors and charges the problem can be very complicated, and in general it cannot be solved without rather elaborate numerical methods. Such numerical computations, these days, are set up on a computing machine that will do the work for us, once we have told it how to proceed. 
On the other hand, there are a lot of little practical cases where it would be nice to be able to find the answer by some more direct method—without having to write a program for a computer. Fortunately, there are a number of cases where the answer can be obtained by squeezing it out of Nature by some trick or other. The first trick we will describe involves making use of solutions we have already obtained for situations in which charges have specified locations.
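The best-known such trick is the method of images, and a short numerical check shows why it works (the charge and height below are arbitrary): a point charge $q$ above a grounded plane, together with a fictitious image charge $-q$ the same distance below it, gives a potential that vanishes everywhere on the plane, which is exactly the equipotential condition the conductor demands.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
q, d = 1e-9, 0.02         # illustrative charge (C) and height above plane (m)

def phi(x, y, z):
    """Potential of the real charge at (0, 0, d) plus its image at (0, 0, -d)."""
    r_real = math.sqrt(x**2 + y**2 + (z - d)**2)
    r_image = math.sqrt(x**2 + y**2 + (z + d)**2)
    return (q / r_real - q / r_image) / (4 * math.pi * EPS0)

# Everywhere on the plane z = 0 the two terms cancel: the plane is an
# equipotential at zero volts, just as a grounded conductor requires.
for x, y in [(0.0, 0.01), (0.05, 0.0), (0.1, 0.1)]:
    assert phi(x, y, 0.0) == 0.0

print(phi(0.0, 0.0, 0.04))  # nonzero above the plane, where the field lives
```

In the upper half-space the pair satisfies both the field equation and the boundary condition, so by uniqueness it is the field of the charge and the real conductor.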
Volume 2, Chapter 7: The Electric Field in Various Circumstances (Continued)
Section 1. Methods for finding the electrostatic field

This chapter is a continuation of our consideration of the characteristics of electric fields in various particular situations. We shall first describe some of the more elaborate methods for solving problems with conductors. It is not expected that these more advanced methods can be mastered at this time. Yet it may be of interest to have some idea about the kinds of problems that can be solved, using techniques that may be learned in more advanced courses. Then we take up two examples in which the charge distribution is neither fixed nor carried by a conductor, but instead is determined by some other law of physics.

As we found in Chapter 6, the problem of the electrostatic field is fundamentally simple when the distribution of charges is specified; it requires only the evaluation of an integral. When there are conductors present, however, complications arise because the charge distribution on the conductors is not initially known; the charge must distribute itself on the surface of the conductor in such a way that the conductor is an equipotential. The solution of such problems is neither direct nor simple.

We have looked at an indirect method of solving such problems, in which we find the equipotentials for some specified charge distribution and replace one of them by a conducting surface. In this way we can build up a catalog of special solutions for conductors in the shapes of spheres, planes, etc. The use of images, described in Chapter 6, is an example of an indirect method. We shall describe another in this chapter.

If the problem to be solved does not belong to the class of problems for which we can construct solutions by the indirect method, we are forced to solve the problem by a more direct method.
The mathematical problem of the direct method is the solution of Laplace’s equation,
\begin{equation}
\label{Eq:II:7:1}
\nabla^2\phi=0,
\end{equation}
subject to the condition that $\phi$ is a suitable constant on certain boundaries—the surfaces of the conductors. Problems which involve the solution of a differential field equation subject to certain boundary conditions are called boundary-value problems. They have been the object of considerable mathematical study.

In the case of conductors having complicated shapes, there are no general analytical methods. Even such a simple problem as that of a charged cylindrical metal can closed at both ends—a beer can—presents formidable mathematical difficulties. It can be solved only approximately, using numerical methods. The only general methods of solution are numerical.

There are a few problems for which Eq. (7.1) can be solved directly. For example, the problem of a charged conductor having the shape of an ellipsoid of revolution can be solved exactly in terms of known special functions. The solution for a thin disc can be obtained by letting the ellipsoid become infinitely oblate. In a similar manner, the solution for a needle can be obtained by letting the ellipsoid become infinitely prolate. However, it must be stressed that the only direct methods of general applicability are the numerical techniques.

Boundary-value problems can also be solved by measurements of a physical analog. Laplace’s equation arises in many different physical situations: in steady-state heat flow, in irrotational fluid flow, in current flow in an extended medium, and in the deflection of an elastic membrane. It is frequently possible to set up a physical model which is analogous to an electrical problem which we wish to solve. By the measurement of a suitable analogous quantity on the model, the solution to the problem of interest can be determined.
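The numerical attack can be remarkably simple. A minimal sketch, with a made-up geometry (a square box at $0$ V whose top face is held at $1$ V): replace Laplace's equation by its finite-difference form, in which each grid point must equal the average of its four neighbors, and iterate (Jacobi relaxation) until the interior settles.

```python
N = 21                         # grid points per side (illustrative)
phi = [[0.0] * N for _ in range(N)]
phi[0] = [1.0] * N             # top face held at 1 V; the other faces stay at 0 V

for _ in range(2000):          # relax until the interior stops changing
    new = [row[:] for row in phi]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            # Discrete Laplace equation: every interior point is the
            # average of its four nearest neighbors.
            new[i][j] = 0.25 * (phi[i + 1][j] + phi[i - 1][j] +
                                phi[i][j + 1] + phi[i][j - 1])
    phi = new

# By symmetry the exact value at the center is 1/4: superposing the four
# rotated problems (each with a different hot face) gives 1 V everywhere.
print(round(phi[N // 2][N // 2], 4))
```

Jacobi iteration converges slowly; practical programs use over-relaxation or direct sparse solvers, but the boundary-value structure of the problem is the same.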
An example of the analog technique is the use of the electrolytic tank for the solution of two-dimensional problems in electrostatics. This works because the differential equation for the potential in a uniform conducting medium is the same as it is for a vacuum.

There are many physical situations in which the variations of the physical fields in one direction are zero, or can be neglected in comparison with the variations in the other two directions. Such problems are called two-dimensional; the field depends on two coordinates only. For example, if we place a long charged wire along the $z$-axis, then for points not too far from the wire the electric field depends on $x$ and $y$, but not on $z$; the problem is two-dimensional. Since in a two-dimensional problem $\partial\phi/\partial z=0$, the equation for $\phi$ in free space is
\begin{equation}
\label{Eq:II:7:2}
\frac{\partial^2\phi}{\partial x^2}+
\frac{\partial^2\phi}{\partial y^2}=0.
\end{equation}
Because the two-dimensional equation is comparatively simple, there is a wide range of conditions under which it can be solved analytically. There is, in fact, a very powerful indirect mathematical technique which depends on a theorem from the mathematics of functions of a complex variable, and which we will now describe.
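As a preview of why complex variables enter: if $f(z)$ is any analytic function of $z=x+iy$, both its real and imaginary parts satisfy Eq. (7.2), so each one is a candidate two-dimensional potential. The snippet below (the choice $f(z)=z^3$ is arbitrary) spot-checks this with a centered finite difference.

```python
def u(x, y):
    """Real part of the analytic function f(z) = z**3, i.e. x^3 - 3*x*y^2."""
    return ((x + 1j * y) ** 3).real

def laplacian(f, x, y, h=1e-3):
    """Centered-difference estimate of f_xx + f_yy at (x, y)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

# The discrete Laplacian of Re f vanishes (to rounding) wherever we look.
for x, y in [(0.5, -0.2), (1.3, 0.7), (-2.0, 1.1)]:
    print(round(laplacian(u, x, y), 4))
```

The same check passes for the imaginary part, and for any other analytic $f$; that pair of harmonic functions is the starting point of the conformal-mapping technique described next.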