Change of basis | Chapter 13, Essence of linear algebra
If I have a vector sitting here in 2D space, we have a standard way to describe it with coordinates. In this case, the vector has coordinates (3, 2), which means going from its tail to its tip involves moving 3 units to the right and 2 units up. Now, the more linear algebra-oriented way to describe coordinates is to think of each of these numbers as a scalar, a thing that stretches or squishes vectors. You think of that first coordinate as scaling i-hat, the vector with length 1 pointing to the right, while the second coordinate scales j-hat, the vector with length 1 pointing straight up. The tip-to-tail sum of those two scaled vectors is what the coordinates are meant to describe. You can think of these two special vectors as encapsulating all of the implicit assumptions of our coordinate system. The fact that the first number indicates rightward motion, that the second one indicates upward motion, exactly how far a unit of distance is, all of that is tied up in the choice of i-hat and j-hat as the vectors which our scalar coordinates are meant to actually scale. Any way to translate between vectors and sets of numbers is called a coordinate system, and the two special vectors i-hat and j-hat are called the basis vectors of our standard coordinate system.

What I'd like to talk about here is the idea of using a different set of basis vectors. For example, let's say you have a friend, Jennifer, who uses a different set of basis vectors, which I'll call b1 and b2. Her first basis vector, b1, points up and to the right a little bit, and her second vector, b2, points left and up. Now take another look at that vector that I showed earlier, the one that you and I would describe using the coordinates (3, 2), using our basis vectors i-hat and j-hat. Jennifer would actually describe this vector with the coordinates 5/3 and 1/3. What this means is that the particular way to get to that vector using her two basis vectors is to scale b1 by 5/3, scale b2 by 1/3, then add them both together. In a little bit, I'll show you how you could have figured out those two numbers, 5/3 and 1/3. In general, whenever Jennifer uses coordinates to describe a vector, she thinks of her first coordinate as scaling b1, the second coordinate as scaling b2, and she adds the results. What she gets will typically be completely different from the vector that you and I would think of as having those coordinates.

To be a little more precise about the setup here, her first basis vector b1 is something that we would describe with the coordinates (2, 1), and her second basis vector b2 is something that we would describe as (-1, 1). But it's important to realize, from her perspective in her system, those vectors have coordinates (1, 0) and (0, 1). They are what define the meaning of the coordinates (1, 0) and (0, 1) in her world. So, in effect, we're speaking different languages. We're all looking at the same vectors in space, but Jennifer uses different words and numbers to describe them.

Let me say a quick word about how I'm representing things here. When I animate 2D space, I typically use this square grid. But that grid is just a construct, a way to visualize our coordinate system, and so it depends on our choice of basis. Space itself has no intrinsic grid. Jennifer might draw her own grid, which would be an equally made-up construct, nothing more than a visual tool to help follow the meaning of her coordinates. Her origin, though, would actually line up with ours, since everybody agrees on what the coordinates (0, 0) should mean.
It's the thing that you get when you scale any vector by 0. But the direction of her axes and the spacing of her grid lines will be different, depending on her choice of basis vectors. So, after all this is set up, a pretty natural question to ask is how we translate between coordinate systems. If, for example, Jennifer describes a vector with coordinates (-1, 2), what would that be in our coordinate system? How do you translate from her language to ours? Well, what her coordinates are saying is that this vector is -1 times b1 plus 2 times b2. And from our perspective, b1 has coordinates (2, 1), and b2 has coordinates (-1, 1). So, we can actually compute -1 times b1 plus 2 times b2 as they're represented in our coordinate system. And working this out, you get a vector with coordinates (-4, 1). So, that's how we would describe the vector that she thinks of as (-1, 2).

This process here of scaling each of her basis vectors by the corresponding coordinate of some vector, then adding them together, might feel somewhat familiar. It's matrix-vector multiplication, with a matrix whose columns represent Jennifer's basis vectors in our language. In fact, once you understand matrix-vector multiplication as applying a certain linear transformation, say by watching what I consider to be the most important video in this series, chapter 3, there's a pretty intuitive way to think about what's going on here. A matrix whose columns represent Jennifer's basis vectors can be thought of as a transformation that moves our basis vectors, i-hat and j-hat, the things we think of when we say (1, 0) and (0, 1), to Jennifer's basis vectors, the things she thinks of when she says (1, 0) and (0, 1).

To show how this works, let's walk through what it would mean to take the vector that we think of as having coordinates (-1, 2) and apply that transformation. Before the linear transformation, we're thinking of this vector as a certain linear combination of our basis vectors, -1 times i-hat plus 2 times j-hat. And the key feature of a linear transformation is that the resulting vector will be that same linear combination, but of the new basis vectors: -1 times the place where i-hat lands plus 2 times the place where j-hat lands. So what this matrix does is transform our misconception of what Jennifer means into the actual vector that she's referring to.

I remember that when I was first learning this, it always felt kind of backwards to me. Geometrically, this matrix transforms our grid into Jennifer's grid, but numerically, it's translating a vector described in her language to our language. What made it finally click for me was thinking about how it takes our misconception of what Jennifer means, the vector we get using the same coordinates but in our system, then it transforms it into the vector that she really meant.

What about going the other way around? In the example I used earlier in this video, when I had the vector with coordinates (3, 2) in our system, how did I compute that it would have coordinates 5/3 and 1/3 in Jennifer's system? You start with that change of basis matrix that translates Jennifer's language into ours, then you take its inverse. Remember, the inverse of a transformation is a new transformation that corresponds to playing that first one backwards. In practice, especially when you're working in more than two dimensions, you'd use a computer to compute the matrix that actually represents this inverse.
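To make the two directions of this translation concrete, here is a minimal NumPy sketch; the variable names are my own, not from the video, but the numbers match the example above.

```python
import numpy as np

# Columns are Jennifer's basis vectors b1 = (2, 1) and b2 = (-1, 1),
# written in our coordinates.
A = np.array([[2.0, -1.0],
              [1.0,  1.0]])

# A vector she describes as (-1, 2), translated into our language:
v_hers = np.array([-1.0, 2.0])
print(A @ v_hers)                 # [-4.  1.], matching the example above

# Going the other way: our (3, 2), translated into her language:
v_ours = np.array([3.0, 2.0])
print(np.linalg.inv(A) @ v_ours)  # [5/3, 1/3], i.e. about [1.667, 0.333]
```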
In this case, the inverse of the change of basis matrix that has Jennifer's basis as its columns ends up working out to have columns (1/3, -1/3) and (1/3, 2/3). So, for example, to see what the vector (3, 2) looks like in Jennifer's system, we multiply this inverse change of basis matrix by the vector (3, 2), which works out to be (5/3, 1/3). So that, in a nutshell, is how to translate the description of individual vectors back and forth between coordinate systems. The matrix whose columns represent Jennifer's basis vectors, but written in our coordinates, translates vectors from her language into our language. And the inverse matrix does the opposite.

But vectors aren't the only thing that we describe using coordinates. For this next part, it's important that you're all comfortable representing transformations with matrices, and that you know how matrix multiplication corresponds to composing successive transformations. Definitely pause and take a look at chapters 3 and 4 if any of that feels uneasy.

Consider some linear transformation, like a 90 degree counterclockwise rotation. When you and I represent this with a matrix, we follow where the basis vectors i-hat and j-hat each go. i-hat ends up at the spot with coordinates (0, 1), and j-hat ends up at the spot with coordinates (-1, 0). So those coordinates become the columns of our matrix. But this representation is heavily tied up in our choice of basis vectors, from the fact that we're following i-hat and j-hat in the first place, to the fact that we're recording their landing spots in our own coordinate system. How would Jennifer describe this same 90 degree rotation of space? You might be tempted to just translate the columns of our rotation matrix into Jennifer's language, but that's not quite right. Those columns represent where our basis vectors i-hat and j-hat go. But the matrix that Jennifer wants should represent where her basis vectors land, and it needs to describe those landing spots in her language.

Here's a common way to think of how this is done. Start with any vector written in Jennifer's language. Rather than trying to follow what happens to it in terms of her language, first we're going to translate it into our language using the change of basis matrix, the one whose columns represent her basis vectors in our language. This gives us the same vector, but now written in our language. Then, apply the transformation matrix to what you get by multiplying it on the left. This tells us where that vector lands, but still in our language. Last step, apply the inverse change of basis matrix, multiplied on the left as usual, to get the transformed vector, but now in Jennifer's language. Since we could do this with any vector written in her language, first applying the change of basis, then the transformation, then the inverse change of basis, that composition of three matrices gives us the transformation matrix in Jennifer's language. It takes in a vector of her language and spits out the transformed version of that vector in her language.

For this specific example, when Jennifer's basis vectors look like (2, 1) and (-1, 1) in our language, and when the transformation is a 90 degree rotation, the product of these three matrices, if you work through it, has columns (1/3, 5/3) and (-2/3, -1/3). So if Jennifer multiplies that matrix by the coordinates of a vector in her system, it will return the 90-degree-rotated version of that vector, expressed in her coordinate system.
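For what it's worth, that product of three matrices is easy to check numerically; a minimal sketch, in the same made-up NumPy setup as before:

```python
import numpy as np

A = np.array([[2.0, -1.0],    # change of basis: Jennifer's language -> ours
              [1.0,  1.0]])
M = np.array([[0.0, -1.0],    # 90 degree counterclockwise rotation, in our language
              [1.0,  0.0]])

# The same rotation, expressed in Jennifer's coordinate system:
M_hers = np.linalg.inv(A) @ M @ A
print(M_hers)   # columns approximately (1/3, 5/3) and (-2/3, -1/3)
```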
In general, whenever you see an expression like A inverse times M times A, it suggests a mathematical sort of empathy. That middle matrix represents a transformation of some kind as you see it, and the outer two matrices represent the empathy, the shift in perspective. And the full matrix product represents that same transformation but as someone else sees it. For those of you wondering why we care about alternate coordinate systems, the next video on eigenvectors and eigenvalues will give a really important example of this. See you then.
From Newton’s method to Newton’s fractal (which Newton knew nothing about)
You've seen the title, so you know this is leading to a certain fractal. And actually it's an infinite family of fractals. And yeah, it'll be one of those mind-bogglingly intricate shapes that has infinite detail no matter how far you zoom in. But this is not really a video about generating some pretty picture for us to gawk at. Well, okay, maybe that's part of it, but the real story here has a much more pragmatic starting point than the story behind a lot of other fractals. And more than that, the final images that we get to will become a lot more meaningful if we make an effort to understand why, given what they represent, they kind of have to look as complicated as they do, and what this complexity reflects about an algorithm that is used all over the place in engineering.

The starting point here will be to assume that you have some kind of polynomial and that you want to know when it equals zero. For the one graphed here, you can visually see there are three different places where it crosses the x-axis, and you can kind of eyeball what those values might be. We'd call those the roots of the polynomial. But how do you actually compute them exactly? Now, this is the kind of question where if you're already bought into math, maybe it's interesting enough in its own right to move forward. But if you just pull someone on the street and ask them this, I mean, they're already falling asleep, because who cares? But the thing is, this kind of question comes up all the time in engineering. Where I'm personally most familiar with equations like this popping up is in the setting of computer graphics, where polynomials are just littered all over the place. So it's not uncommon that when you're figuring out how a given pixel should be colored, that somehow involves solving an equation that uses these polynomials.

Here, let me give you one fun example. When a computer renders text on the screen, those fonts are typically not defined using pixel values. They're defined as a bunch of polynomial curves, known in the business as Bézier curves. And any of you who've messed around with vector graphics, maybe in some design software, would be well familiar with these kinds of curves. But to actually display one of them on the screen, you need a way to tell each one of the pixels of your screen whether it should be colored in or not. These curves can be displayed either with some kind of stroke width, or, if they enclose a region, some kind of fill for that region. But if you step back and you really think about it, it's an interesting puzzle to figure out how each one of the pixels knows whether it should be colored in or not, just based on the pure mathematical curve.

I mean, take the case of stroke width. This comes down to understanding how far away a given pixel is from this pure mathematical curve, which itself is some platonic ideal with zero width. You would think of it as a parametric curve that has some parameter t. Now, one thing that you could do to figure out this distance is to compute the distance between your pixel and a bunch of sample points on that curve, and then figure out the smallest. But that's both inefficient and imprecise. Better is to get a little mathematical and acknowledge that this distance to the curve at all the possible points is itself some smooth function of the parameter. And as it happens, the square of that distance will itself be a polynomial, which makes it pretty nice to deal with.
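To make that concrete, here is a minimal sketch of the idea, assuming a quadratic Bézier curve; the control points and pixel location are made-up values of my own. The squared distance from the pixel to a point on the curve is a degree-4 polynomial in the parameter t, and minimizing it means finding roots of its derivative.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical control points of a quadratic Bezier curve, and a pixel location q:
P0, P1, P2 = np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([2.0, 0.0])
q = np.array([1.0, 1.5])

# B(t) = (1-t)^2 P0 + 2t(1-t) P1 + t^2 P2, rewritten as a polynomial in t:
# B(t) = (P0 - 2 P1 + P2) t^2 + 2 (P1 - P0) t + P0
a, b, c = P0 - 2 * P1 + P2, 2 * (P1 - P0), P0

# Squared distance from q to B(t): sum the squares of each coordinate's polynomial.
dist2 = sum(Polynomial([c[i] - q[i], b[i], a[i]]) ** 2 for i in range(2))

# The minimum over t in [0, 1] sits at a real root of the (cubic) derivative,
# or at an endpoint of the curve.
candidates = [t.real for t in dist2.deriv().roots()
              if abs(t.imag) < 1e-9 and 0.0 <= t.real <= 1.0]
best_t = min(candidates + [0.0, 1.0], key=dist2)
print(best_t, float(dist2(best_t)) ** 0.5)   # parameter of closest point, distance
```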
And if this were meant to be a full lesson on rendering vector graphics, we could expand all that out and embrace the mess. But right now, the only salient point that I want to highlight is that, in principle, this function whose minimum you want to know is some polynomial. Finding this minimum, and hence determining how close the pixel is to the curve and whether it should get filled in, is now just a classic calculus problem. What you do is figure out the slope of this function's graph, which is to say its derivative, again some polynomial, and you ask, when does that equal zero? So, to actually carry out this seemingly simple task of just displaying a curve, wouldn't it be nice if you had a systematic and general way to figure out when a given polynomial equals zero? Of course, we could draw a hundred other examples from a hundred other disciplines. I just want you to keep in mind that as we seek the roots of polynomials, even though we always display it in a way that's cleanly abstracted away from the messiness of any real world problem, the task is hardly just an academic one.

But again, ask yourself, how do you actually compute one of those roots? If whatever problem you're working on leads you to a quadratic function, then happy days. You can use the quadratic formula that we all know and love. And as a fun side note, by the way, again relevant to root-finding in computer graphics, I once had a Pixar engineer give me the estimate that, considering how many lights were used in some of the scenes for the movie Coco, and given the nature of some of these per-pixel calculations when polynomially defined things like spheres are involved, the quadratic formula was easily used multiple trillions of times in the production of that film.

Now, when your problem leads you to a higher order polynomial, things start to get trickier. For cubic polynomials, there is also a formula, which Mathologer has done a wonderful video on, and there's even a quartic formula, something that solves degree four polynomials, although honestly that one is such a god-awful nightmare of a formula that essentially no one actually uses it in practice. But after that, and I find this one of the most fascinating results in all of math, you cannot have an analogous formula to solve polynomials that have a degree five or more. More specifically, for a pretty extensive set of standard functions, you can prove that there is no possible way to combine those functions together that allows you to plug in the coefficients of a quintic polynomial and always get out a root. This is known as the unsolvability of the quintic, which is a whole other can of worms we can hopefully get into at some other time. But in practice, it kind of doesn't matter, because we have algorithms to approximate solutions to these kinds of equations with whatever level of precision you want.

A common one, and the main topic for you and me today, is Newton's method. And yes, this is what will lead us to the fractals, but I want you to pay attention to just how innocent and benign the whole procedure seems at first. The algorithm begins with a random guess, let's call it x-naught. Well, certainly, the output of your polynomial at x-naught is not zero, so you haven't found a solution. It's some other value, visible as the height of this graph at that point. So to improve the guess, the idea is to ask, when does a linear approximation to the function around that value equal zero?
In other words, if you were to draw a tangent line to the graph at this point, when does that tangent line cross the x-axis? Now, assuming this tangent line is a decent approximation of the function in the loose vicinity of some true root, the place where this approximation equals zero should take you closer to that true root. And as long as you're able to take a derivative of this function, and with polynomials you'll always be able to do that, you can concretely compute the slope of this line.

So here's where the active viewers among you might want to pause and ask, how do you figure out the difference between the current guess and the improved guess? What is the size of this step? One way to think of it is to consider the fact that the slope of this tangent line, that's rise over run, looks like the height of this graph divided by the length of that step. But on the other hand, of course, the slope of the tangent line is the derivative of the polynomial at that point. If we kind of rearrange this equation here, this gives you a super concrete way that you can compute that step size: it's p(x0) divided by p'(x0). So the next guess, which we might call x1, is the previous guess adjusted by this step size, x1 = x0 - p(x0) / p'(x0); there's a short code sketch of this rule at the end of this passage. And after that, you can just repeat the process. You compute the value of this function and the slope at this new guess, which gives you a new linear approximation, and then you make the next guess, x2, wherever that tangent line crosses the x-axis. And then apply the same calculation to x2, and this gives you x3. And before too long, you find yourself extremely close to a true root, pretty much as close as you could ever want to be.

It's always worth gut-checking that a formula actually makes sense, and in this case, hopefully it does: if p of x is large, meaning the graph is really high, you need to take a bigger step to get down to a root. But if p prime of x is also large, meaning the graph is quite steep, you should maybe ease off on just how big you make that step.

Now, as the name suggests, this was a method that Newton used to solve polynomial expressions, but he sort of made it look a lot more complicated than it needed to be, and a fellow named Joseph Raphson published a much simpler version, more like what you and I are looking at now. So you also often hear this algorithm called the Newton-Raphson method. These days it's a common topic in calculus classes. One nice little exercise to try to get a feel for it, by the way, is to try using this method to approximate square roots by hand. But what most calculus students don't see, which is unfortunate, is just how deep things can get when you let yourself play around with this seemingly simple procedure and start kind of picking at some of its scabs.

You see, while Newton's method works great if you start near a root, where it converges really quickly, if your initial guess is far from a root, it can have a couple of foibles. For example, let's take the function we were just looking at, but shift it upward and play the same game with the same initial guess. Notice how the sequence of new guesses that we're getting kind of bounces around the local minimum of this function sitting above the x-axis. This should kind of make sense. I mean, a linear approximation of the function around these values, all the way to the right, is pretty much entirely unrelated to the nature of the function around the one true root that it has off to the left. So they're sort of giving you no useful information about that true root.
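Here is the promised sketch of the update rule in code; the polynomial, its derivative, and the starting guesses are my own choices, not from the video.

```python
def newton(p, dp, x0, steps=20):
    """Repeatedly apply the update rule x -> x - p(x) / p'(x)."""
    x = x0
    for _ in range(steps):
        x = x - p(x) / dp(x)
    return x

# Example (my own choice): p(x) = x^3 - x - 1, whose derivative is 3x^2 - 1.
print(newton(lambda x: x**3 - x - 1,
             lambda x: 3 * x**2 - 1,
             x0=1.0))   # about 1.3247, the one real root

# The square-root exercise mentioned above: the positive root of p(x) = x^2 - 2.
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))   # about 1.41421
```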
It's only when this process just happens to throw the new guess off far enough to the left by chance that the sequence of new guesses does anything productive and actually approaches that true root.

Where things get especially interesting is if we ask about finding roots in the complex plane. Even if a polynomial like the one shown here has only a single real number root, you'll always be able to factor this polynomial into five terms like this if you allow those roots to potentially be complex numbers. This is the famous fundamental theorem of algebra. Now, in the happy-go-lucky land of functions with real number inputs and real number outputs, where you can picture the association between inputs and outputs as a graph, Newton's method has this really nice visual meaning with tangent lines and intersecting the x-axis. But if you want to allow these inputs to be any complex number, which means our corresponding outputs might also be any complex number, you can't think about tangent lines and graphs anymore. But the formula doesn't really care how you visualize it. You can still play the same game, starting with a random guess, and evaluating the polynomial at this point, as well as its derivative, then using this update rule to generate a new guess. And hopefully that new guess is closer to the true root. But I do want to be clear: even if we can't visualize these steps with a tangent line, it really is the same logic. We're figuring out where a linear approximation of the function around your guess would equal zero, and then you use that zero of the linear approximation as your next guess. It's not like we're blindly applying the rule to a new context with no reason to expect it to work. And indeed, with at least the one I'm showing here, after a few iterations you can see that we land on a value whose corresponding output is essentially zero.

Now here's the fun part. Let's apply this idea to many different possible initial guesses. For reference, I'll put up the five true roots of this particular polynomial in the complex plane. With each iteration, each one of our little dots takes some step based on Newton's method. Most of the dots will quickly converge to one of the five true roots, but there are some noticeable stragglers, which seem to spend a while bouncing around. In particular, notice how the ones that are trapped on the positive real number line look a little bit lost. And this is exactly what we already saw before for the same polynomial, when we were looking at the real number case with its graph.

Now what I'm going to do is color each one of these dots based on which of those five roots it ended up closest to, and then we'll kind of roll back the clock so that every dot goes back to where it started. Now, as I've done it here, this isn't quite enough resolution to get the full story. So let me show you what it would look like if we started with an even finer grid of initial guesses and played the same game, applying Newton's method a whole bunch of times, letting each dot march forward, coloring each dot based on what root it lands on, then rolling back the clock to see where it originally came from. But even this isn't really a high enough resolution to appreciate the pattern. If we did this process for every single pixel on the plane, here's what you would get. And at this level of detail, the color scheme is a little jarring to my eye, at least, so let me calm it down a little.
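Here is a rough sketch of that per-pixel experiment; it uses a stand-in polynomial, z^5 - 1, rather than the one from the video, and the grid size and iteration count are arbitrary choices of mine.

```python
import numpy as np

# Stand-in degree-5 polynomial: p(z) = z^5 - 1, with five roots on the unit circle.
roots = np.exp(2j * np.pi * np.arange(5) / 5)
p  = lambda z: z**5 - 1
dp = lambda z: 5 * z**4

# A grid of initial guesses covering a patch of the complex plane.
re, im = np.meshgrid(np.linspace(-2, 2, 800), np.linspace(-2, 2, 800))
z = re + 1j * im

for _ in range(40):          # Newton's method, applied to every pixel at once
    z = z - p(z) / dp(z)

# Color each pixel by the index of the true root it ended up closest to.
colors = np.argmin(np.abs(z[..., None] - roots), axis=-1)
```

Setting the iteration count to zero reproduces the nearest-root picture described next, and cranking it up brings the image closer and closer to the full fractal.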
Really, whatever resolution I try to use to show this to you here could never possibly be enough, because the finer details of the shape we get go on with endless complexity. But take a moment to think about what this is actually saying. It means that there are regions in the complex plane where, if you slightly adjust that seed value, you know, you just kind of bump it to the side by one one-millionth, or one one-trillionth, it can completely change which of the five true roots it ends up landing on. We saw some foreshadowing of this kind of chaos with the real graph and the problematic guess shown earlier, but picturing all of this in the complex plane really shines a light on just how unpredictable this kind of root-finding algorithm can be, and how there are whole swaths of initial values where this sort of unpredictability will take place.

Now, if I grab one of these roots and change it around, meaning that we're using a different polynomial for the process, you can see how the resulting fractal pattern changes. And notice, for example, how the regions around a given root always have the same color, since those are the points that are close enough to the root where this linear approximation scheme works as a way of finding that root with no problem. All of the chaos seems to be happening at the boundaries between the regions. Remember that. And it seems like no matter where I place these roots, those fractal boundaries are always there. It clearly wasn't just some one-off for the polynomial we happened to start with; it seems to be a general fact for any given polynomial.

Another facet we can tweak here, just to better illustrate what's going on, is how many steps of Newton's method we're using. For example, if I had the computer just take zero steps, meaning it just colors each point of the plane based on whatever root it's already closest to, this is what we'd get. And this kind of diagram actually has a special name; it's called a Voronoi diagram. And if we let each point of the plane take a single step of Newton's method, and then color it based on what root that single step's result is closest to, here's what we would get. Similarly, if we allow for two steps, we get a slightly more intricate pattern, and so on and so on, where the more steps you allow, the more intricate an image you get, bringing us closer to the original fractal. And this is important: keep in mind that the true shape we're studying here is not any one of these, it's the limit as we allow for an arbitrarily large number of iterations.

At this point there are so many questions we might ask, like maybe you want to try this out with some other polynomials, see how general it is, or maybe you want to dig deeper into what dynamics are exactly possible with these iterated points, or see if there's connections with some other pieces of math that have a similar theme. But I think the most pertinent question should be something like, what the f*** is going on here? I mean, all we're doing here is repeatedly solving linear approximations. Why would that produce something that's so endlessly complicated? It almost feels like the underlying rule here just shouldn't carry enough information to actually produce an image like this. And before seeing this, don't you think a reasonable initial guess might have been that each seed value simply tends towards whichever root it's closest to?
And in that case, you know, if you colored each point based on the root it lands on and moved it back to the original position, the final image would look like one of these Voronoi diagrams, with straight line boundaries. And since I referenced earlier the unsolvability of the quintic, maybe you would wonder if the complexity here has anything to do with that. That would be cool, but they are essentially unrelated ideas. In fact, using only degree 5 polynomials so far might have been a little misleading. Watch what happens if we play the same game, but with a cubic polynomial, with three roots somewhere in the complex plane. Notice how, again, while most points nestle into a root, some of them are kind of flying all over the place more chaotically. In fact, those ones are the most noticeable ones in an animation like this, with the ones going towards the roots just quietly nestled in at their ending points. And again, if we stop this at some number of iterations, color all the points based on which root they're closest to, and roll back the clock, the relevant picture for all possible starting points forms this fractal pattern with infinite detail.

However, quadratic polynomials, with only two roots, are different. In that case, each seed value does simply tend towards whichever root it's closest to, the way that you might expect. There is a little bit of meandering behavior from all the points that are an equal distance from each root. It's kind of like they're not able to decide which one to go to, but that's just a single line of points, and when we play the game of coloring, the diagram we end up with is decidedly more boring. So something new seems to happen when you jump from two to three, and the question is, what exactly?

And if you had asked me a month ago, I probably would have shrugged and just said, you know, math is what it is. Sometimes the answers look simple, sometimes not. It's not always clear what it would mean to ask why in a setting like this. But I would have been wrong. There actually is a reason that we can give for why this image has to look as complicated as it does.

You see, there's a very peculiar property that we can prove this diagram must have. Focus your attention on just one of the colored regions, say this blue one. In other words, the set of all points that eventually tend towards just one particular root of the polynomial. Now consider the boundary of that region, which for the example shown on screen has this kind of nice threefold symmetry. What's surprising is that if you look at any other color and consider its boundary, you get precisely the same set. Now, when I say the word boundary, you probably have an intuitive sense of what it means, but mathematicians have a pretty clever way to formalize it, and this makes it easier to reason about in the context of more wild sets like our fractal. We say that a point is on the boundary of a set if, when you draw a small circle centered at that point, no matter how small, it will always contain points that are both inside that set and outside. So if you have a point that's on the interior, a small enough circle would eventually contain only points inside the set. And for a point on the exterior, a small enough circle contains no points of the set at all. But when it's on the boundary, what it means to be on the boundary is that your tiny, tiny circles will always contain both.
So looking back at our property, one way to read it is to say that if you draw a circle, no matter how small that circle, it either contains all of the colors, which happens when this shared boundary of the colors is inside that circle, or it contains just one color, and this happens when it's in the interior of one of the regions. In particular, what this implies is that you should never be able to find a circle that contains just two of the colors, since that would require that you have points on the boundary between two regions, but not all of them.

And before explaining where this fact actually comes from, it's fun to try just wrapping your mind around it a little bit. You could imagine presenting this to someone as a kind of art puzzle, completely out of context, never mentioning Newton's method or anything like that, where you say that the challenge is to construct a picture with at least three colors, maybe we say red, green, and blue, so that the boundary of one color is the boundary of all of them. So if you started with something simple like this, that clearly doesn't work, because we have this whole line of points that are on the boundary of green and red, but not touching any blue. And likewise, you have these other lines of disallowed points. So to correct that, you might go and add some blue blobs along the boundary, and then likewise add some green blobs between the red and blue, and some red blobs between the green and blue. But of course, now the boundaries of those blobs are a problem, for example touching just blue and red, but no green. So maybe you go and you try to add even smaller blobs, with the relevant third color, around those smaller boundaries to help try to correct this. And likewise, you have to do this for every one of the blobs that you initially added. But then all the boundaries of those tiny blobs are problems of their own, and you would have to somehow keep doing this process forever. And if you look at Newton's fractal itself, this sort of blobs-on-blobs-on-blobs pattern seems to be exactly what it's doing.

The main thing I want you to notice is how this property implies you could never have a boundary which is smooth, or even partially smooth on some small segment, since any smooth segment would only be touching two colors. Instead, the boundary has to consist entirely of sharp corners, so to speak. So if you believe the property, it explains why the boundary remains rough no matter how far you zoom in. And for those of you who are familiar with the concept of fractal dimension, you can measure the dimension of the particular boundary I'm showing you right now to be around 1.44.

Considering what our colors actually represent, remember this isn't just a picture for picture's sake, think about what the property is really telling us. It says that if you're near a sensitive point, where some of the seed values go to one root, but other seed values nearby would go to another root, then in fact every possible root has to be accessible from within that small neighborhood. For any tiny little circle that you draw, either all of the points in that circle tend to just one root, or they tend to all of the roots. But there's never going to be anything in between, just tending to a subset of the roots. For a little intuition, I found it enlightening to simply watch a cluster like the one on screen undergo this process. It starts off mostly sticking together, but at one iteration, they all kind of explode outward.
And after that, it feels a lot more reasonable that any root is up for grabs. And keep in mind, I'm just showing you finitely many points, but in principle, you would want to think about what happens to all, uncountably infinitely many, points inside some small disk.

This property also kind of explains why it's okay for things to look normal in the case of quadratic polynomials with just two roots, because there a smooth boundary is fine; there's only two colors to touch anyway. To be clear, it doesn't guarantee that the quadratic case would have a smooth boundary. It is perfectly possible to have a fractal boundary between two colors. It just looks like our Newton's method diagram is not doing anything more complicated than it needs to under the constraint of this strange boundary condition. But of course, all of this simply raises the question of why this bizarre boundary property would have to be true in the first place. Where does it even come from? For that, I'd like to tell you about a field of math which studies this kind of question. It's called holomorphic dynamics. And I think we've covered enough ground today, and there's certainly enough left to tell, so it makes sense to pull that out as a separate video.

To close things off here, there is something sort of funny to me about the fact that we call this Newton's fractal, like the fact that Newton had no clue about any of this, and could never have possibly played with these images the way that you and I can with modern technology. And it happens a lot throughout math that people's names get attached to things well beyond what they could have dreamed of. Hamiltonians are central to quantum mechanics, despite Hamilton knowing nothing about quantum mechanics. Fourier himself never once computed a fast Fourier transform. The list goes on. But this overextension of nomenclature carries with it what I think is an inspiring point. It reflects how even simple ideas, ones that could be discovered centuries ago, often hold within them some new angle or a new domain of relevance that can sit waiting to be discovered hundreds of years later. It's not just that Newton had no idea about Newton's fractal. There are probably many other facts about Newton's method, or about all sorts of math that may seem like old news, that come from questions that no one has thought to ask yet. Things that are just sitting there, waiting for someone like you to ask them. For example, if you were to ask about whether this process we've been talking about today ever gets trapped in a cycle, it leads you to a surprising connection with the Mandelbrot set, and we'll talk a bit about that in the next part.

At the time that I'm posting this, that second part, by the way, is available as an early release to patrons. I always like to give new content a little bit of time there to gather feedback and catch errors. The finalized version should be out shortly. And on the topic of patrons, I do just want to say a quick thanks to everyone whose name is on the screen. I know that in recent history new videos have been a little slow coming. Part of this has to do with other projects that have been in the works, things I'm proud of, by the way, things like the Summer of Math Exposition, which was a surprising amount of work, to be honest, but so worth it given the outcome. I will be talking all about that and announcing winners very shortly, so stay tuned.
I just want you to know that the plan for the foreseeable future is definitely to shift gears more wholeheartedly back to making new videos. And more than anything, I want to say thanks for your continued support, even during times of trying a few new things. It means a lot to me, it's what keeps the channel going, and I'll do my best to make the new lessons in the pipeline live up to your vote of confidence there.
Q&A #2 + Net Neutrality Nuance
Hey everyone, no math here, I just want to post two quick announcements for you. First, the number of you who have opted to subscribe to this channel has once again rolled over a power of two, which is just mind-boggling to me. I'm still touched that there are so many of you who enjoy math like this, just for fun, apparently, and who have helped to support the channel by watching it, by sharing some of the content here, and of course, for a very special subset of you, directly supporting through Patreon. Really, each and every one of the two to the 19th of you means a lot to me. And as a thanks, and just for fun, I'm going to be doing a second Q&A session. I've left a link to where you can ask and upvote questions; it should be on the screen here, and in the description. And just like with the first one, I'll answer questions in the podcast that I do with Ben Eater and Ben Stenhaug, since, I don't know, I think discussions are more fun than answers in a vacuum.

By the way, if you guys don't already know about Eater's channel, what are you doing? This man is the Bob Ross of computer engineering. If you want to understand how a computer works, and I mean really understand it from the ground up, his channel is 100% the place to go. Trust me, he is a very good explainer. But anyway, the reason I bring him up is that he and I just recorded a pretty interesting conversation about net neutrality. And I want to be very clear, Ben Eater is not against net neutrality, and nor am I. However, the issue is a lot more nuanced than I first realized. And because there are already many great videos about net neutrality and why it's a good thing, which I agree with, to offer content that you may not have seen before, this conversation was a chance to hone in on some of the trade-offs that are at play here. You see, the thing about Eater is that before he was creating phenomenal educational content, he worked for many years in the networking industry, so he has a pretty clear view of both sides of the equation, and a pretty intelligent way of articulating what they are. I'm just going to play you a minute of the conversation here, and then link to the full video, if you want to go see it on Ben's channel, which you should be going to check out anyway. But for those of you who would prefer to listen to a 45-minute conversation in podcast form, maybe over a commute, we did also publish it as an episode of Ben, Ben and Blue.

You know, consumers kind of want an unlimited service. Or we want to believe we have one. You want to believe you have an unlimited service. It's essentially guaranteed that we'll never exercise that. Right. And the one or two people way off on the end that are abusing it, well, providers would call it abusing. I think the customers would say they're using what they're paying for. But the ones that are really off the charts, the providers would just limit the traffic. We'll slow them down, or slow down the applications. Sorry, not to interrupt, but it's just such an interesting example, because this is a peer-to-peer type thing. And then there's a whole pile of hype around decentralized possibilities, with like blockchain and whatnot. And to the extent that that is an aspect of the future of the internet, that you have a little bit more possibility for some services to be decentralized in this way. And I think there's even a lot of things that just have a straight-up BitTorrent type flavor when it comes to file sharing and things of that sort. Like, do you see that as a little bit more on the horizon?
And would you say that that's an example of, like, potentially harmful things that come about when you are very strict about net neutrality, about abiding by it? I don't know if I would necessarily categorize this as peer-to-peer. I think the... I also understand you're not saying that it is, like, necessarily dangerous to abide by it strictly. I'm sort of teasing that out of you. But... No, I don't... I mean, I think the thing that happened with BitTorrent, and we're going a little bit too far down that rabbit hole, but the thing that happened with BitTorrent was this was an unusual thing. This was...
Fractals are typically not self-similar
Who doesn't like fractals? They're a beautiful blend of simplicity and complexity, often including these infinitely repeating patterns. Programmers in particular tend to be especially fond of them, because it takes a shockingly small amount of code to produce images that are way more intricate than any human hand could ever hope to draw. But a lot of people don't actually know the definition of a fractal, at least not the one that Benoit Mandelbrot, the father of fractal geometry, had in mind.

A common misconception is that fractals are shapes that are perfectly self-similar. For example, this snowflake-looking shape right here, called the von Koch snowflake, consists of three different segments, and each one of these is perfectly self-similar, in that when you zoom in on it, you get a perfectly identical copy of the original. Likewise, the famous Sierpinski triangle consists of three smaller, identical copies of itself. And don't get me wrong, self-similar shapes are definitely beautiful, and they're a good toy model for what fractals really are. But Mandelbrot had a much broader conception in mind, one motivated not by beauty, but more by a pragmatic desire to model nature in a way that actually captures roughness.

In some ways, fractal geometry is a rebellion against calculus, whose central assumption is that things tend to look smooth if you zoom in far enough. But Mandelbrot saw this as overly idealized, or at least needlessly idealized, resulting in models that neglect the finer details of the thing that they're actually modeling, which can matter. What he observed is that self-similar shapes give a basis for modeling the regularity in some forms of roughness. But the popular perception that fractals only include perfectly self-similar shapes is another over-idealization, one that ironically goes against the pragmatic spirit of fractal geometry's origins. The real definition of fractals has to do with this idea of fractal dimension, the main topic of this video.

You see, there is a sense, a certain way to define the word dimension, in which the Sierpinski triangle is approximately 1.585 dimensional, the von Koch curve is approximately 1.262 dimensional, the coastline of Britain turns out to be around 1.21 dimensional, and in general, it's possible to have shapes whose dimension is any positive real number, not just whole numbers. I think when I first heard someone reference fractional dimension like this, I just thought it was nonsense, right? I mean, mathematicians are clearly just making stuff up. Dimension is something that usually only makes sense for natural numbers, right? A line is one-dimensional, a plane is two-dimensional, the space that we live in is three-dimensional, and so on. And in fact, any linear algebra student who just learned the formal definition of dimension in that context would agree, it only makes sense for counting numbers. And of course, the idea of fractal dimension is just made up. I mean, this is math, everything's made up. But the question is whether or not it turns out to be a useful construct for modeling the world. And I think you'll agree, once you learn how fractal dimension is defined, it's something that you start seeing almost everywhere that you look.

It actually helps to start the discussion here by only looking at perfectly self-similar shapes. In fact, I'm going to start with four shapes, the first three of which aren't even fractals: a line, a square, a cube, and a Sierpinski triangle. All of these shapes are self-similar.
A line can be broken up into two smaller lines, each of which is a perfect copy of the original, just scaled down by a half. A square can be broken down into four smaller squares, each of which is a perfect copy of the original, just scaled down by a half. Likewise, a cube can be broken down into eight smaller cubes; again, each one is a version scaled down by one half. And the core characteristic of the Sierpinski triangle is that it's made of three smaller copies of itself, and the side length of one of those smaller copies is one half the side length of the original triangle.

Now, it's fun to compare how we measure these things. We'd say that the smaller line is one half the length of the original line, the smaller square is one quarter the area of the original square, the smaller cube is one eighth the volume of the original cube, and that smaller Sierpinski triangle? Well, we'll talk about how to measure that in just a moment. What I want is a word that generalizes the idea of length, area, and volume, but that I can apply to all of those shapes and more. And typically in math, the word that you'd use for this is measure, but I think it might be more intuitive to talk about mass. As in, imagine that each of these shapes is made out of metal: a thin wire, a flat sheet, a solid cube, and some kind of Sierpinski mesh.

Fractal dimension has everything to do with understanding how the mass of these shapes changes as you scale them. The benefit of starting the discussion with self-similar shapes is that it gives us a nice, clear-cut way to compare masses. When you scale down that line by one half, the mass also scales down by one half, which you can viscerally see, because it takes two copies of that smaller one to form the whole. When you scale down a square by one half, its mass scales down by one fourth, where again you can see this by piecing together four of the smaller copies to get the original. Likewise, when you scale down that cube by one half, the mass scales down by one eighth, or one half cubed, because it takes eight copies of that smaller cube to rebuild the original. And when you scale down the Sierpinski triangle by a factor of one half, wouldn't you agree that it makes sense to say that its mass goes down by a factor of one third? I mean, it takes exactly three of those smaller ones to form the original.

But notice that for the line, the square, and the cube, the factor by which the mass changed is this nice, clean, integer power of one half. In fact, that exponent is the dimension of each shape. And what's more, you could say that what it means for a shape to be, for example, two-dimensional, what puts the two in two-dimensional, is that when you scale it by some factor, its mass is scaled by that factor raised to the second power. And maybe what it means for a shape to be three-dimensional is that when you scale it by some factor, the mass is scaled by the third power of that factor.

So if this is our conception of dimension, what should the dimensionality of a Sierpinski triangle be? You'd want to say that when you scale it down by a factor of one half, its mass goes down by one half to the power of, well, whatever its dimension is. And because it's self-similar, we know that we want its mass to go down by a factor of one third. So what's the number D such that raising one half to the power of D gives you one third? Well, that's the same as asking, two to the what equals three? The quintessential type of question that logarithms are meant to answer.
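In code, those logarithms are one-liners; here's a minimal sketch covering this dimension along with the two other self-similarity dimensions quoted in this video.

```python
from math import log

print(log(3, 2))   # Sierpinski triangle: log base 2 of 3, about 1.585
print(log(4, 3))   # von Koch curve: log base 3 of 4, about 1.262
print(log(8, 4))   # the right-angled Koch variant discussed below: exactly 1.5
```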
And when you go and plug in log base two of three to a calculator, what you'll find is that it's about 1.585. So in this way, the Sierpinski triangle is not one-dimensional, even though you could define a curve that passes through all its points. And nor is it two-dimensional, even though it lives in the plane. Instead, it's 1.585 dimensional. And if you want to describe its mass, neither length nor area seem like the fitting notions. If you tried, its length would turn out to be infinite, and its area would turn out to be zero. Instead, what you want is whatever the 1.585 dimensional analog of length is.

Here, let's look at another self-similar fractal, the von Koch curve. This one is composed of four smaller, identical copies of itself, each of which is a copy of the original scaled down by 1/3. So the scaling factor is 1/3, and the mass has gone down by a factor of 1/4. So that means the dimension should be some number d so that when we raise 1/3 to the power of d, it gives us 1/4. Well, that's the same as saying three to the what equals four, so you can go and plug log base three of four into a calculator, and that comes out to be around 1.262. So, in a sense, the von Koch curve is a 1.262 dimensional shape.

Here's another fun one. This is kind of the right-angled version of the Koch curve. It's built up of eight scaled-down copies of itself, where the scaling factor here is 1/4. So if you want to know its dimension, it should be some number d such that 1/4 to the power of d equals 1/8, the factor by which the mass just decreased. And in this case, the value we want is log base four of eight, and that's exactly three halves. So evidently, this fractal is precisely 1.5 dimensional. Does that kind of make sense? It's weird, but it's all just about scaling and comparing masses while you scale.

And what I've described so far, everything up to this point, is what you might call self-similarity dimension. It does a good job making the idea of fractional dimension seem at least somewhat reasonable, but there's a problem: it's not really a general notion. I mean, when we were reasoning about how a shape's mass should change, it relied on the self-similarity of the shapes, that you could build them up from smaller copies of themselves. But that seems unnecessarily restrictive. After all, most two-dimensional shapes are not at all self-similar.

Consider the disc, the interior of a circle. We know that's two-dimensional, and you can say that this is because when you scale it up by a factor of two, its mass, proportional to the area, gets scaled by the square of that factor, in this case four. But it's not like there's some way to piece together four copies of that smaller circle to rebuild the original. So how do we know that that bigger disc is exactly four times the mass of the original? Answering that requires a way to make this idea of mass a little more mathematically rigorous, since we're not dealing with physical objects made of matter, are we? We're dealing with purely geometric ones living in an abstract space. And there's a couple of ways to think about this, but here's a common one. Cover the plane with a grid, highlight all of the grid squares that are touching the disc, and now count how many there are. In the back of our minds, we already know that a disc is two-dimensional, and the number of grid squares that it touches should be proportional to its area.
A clever way to verify this empirically is to scale up that disc by some factor, like two, and count how many grid squares touch this new scaled-up version. What you should find is that that number has increased approximately in proportion to the square of our scaling factor, which in this case means about four times as many boxes. Well, admittedly, what's on the screen here might not look that convincing, but it's just because the grid is really coarse. If instead you took a much finer grid, one that more tightly captures the intent we're going for here by measuring the size of the circle, that relationship of quadrupling the number of boxes touched when you scale the disc by a factor of two should shine through more clearly. I'll admit, though, that when I was animating this, I was surprised by just how slowly this value converges.

Here's one way to think about this. If you were to plot the scaling factor against the number of boxes that the scaled disc touches, your data should very closely fit a perfect parabola, since the number of boxes touched is roughly proportional to the square of the scaling factor. For larger and larger scaling values, which is actually equivalent to just looking at a finer grid, that data is going to more perfectly fit that parabola.

Now, getting back to fractals, let's play this game with a Sierpinski triangle, counting how many boxes are touching points in that shape. How would you imagine that number compares to scaling up the triangle by a factor of two and counting the new number of boxes touched? Well, the proportion of boxes touched by the big one to the number of boxes touched by the small one should be about three. After all, that bigger version is just built up of three copies of the smaller version. You could also think about this as two raised to the dimension of the fractal, which we just saw is about 1.585. And so if you were to go and plot the scaling factor in this case against the number of boxes touched by the Sierpinski triangle, the data would closely fit a curve with the shape of y equals x to the power 1.585, just multiplied by some proportionality constant.

But importantly, the whole reason that I'm talking about this is that we can play the same game with non-self-similar shapes that still have some kind of roughness. And the classic example here is the coastline of Britain. If you plot that coastline in the plane and count how many boxes are touching it, and then scale it by some amount and count how many boxes are touching that new scaled version, what you'd find is that the number of boxes touching the coastline increases approximately in proportion to the scaling factor raised to the power 1.21. Here, it's kind of fun to think about how you would actually compute that number empirically. As in, imagine I give you some shape, and you're a savvy programmer; how would you find this number?

So what I'm saying here is that if you scale this shape by some factor, which I'll call S, the number of boxes touching that shape should equal some constant multiplied by that scaling factor raised to whatever the dimension is, the value that we're looking for; as a formula, N(S) = c · S^D. Now, if you have some data plot that closely fits a curve that looks like the input raised to some power, it can be hard to see exactly what that power should be. So a common trick is to take the logarithm of both sides. That way, the dimension is going to drop down from the exponent, and we'll have a nice, clean linear relationship: log(N) = D · log(S) + log(c).
What this suggests is that if you were to plot the log of the scaling factor against the log of the number of boxes touching the coastline, the relationship should look like a line, and that line should have a slope equal to the dimension. So what that means is that if you tried out a whole bunch of scaling factors, counted the number of boxes touching the coast in each instance, and then plotted the points on a log-log plot, you could then do some kind of linear regression to find the best-fit line to your data set, and when you look at the slope of that line, that tells you the empirical measurement for the dimension of what you're examining. I just think that makes this idea of fractal dimension so much more real and visceral compared to abstract, artificially perfect shapes. And once you're comfortable thinking about dimension like this, you, my friend, have become ready to hear the definition of a fractal. Essentially, fractals are shapes whose dimension is not an integer, but instead some fractional amount. What's cool about that is that it's a quantitative way to say that there are shapes that are rough, and that they stay rough even as you zoom in. Technically, there's a slightly more accurate definition, and I've included it in the video description, but this idea here of a non-integer dimension almost entirely captures the idea of roughness that we're going for. There is one nuance, though, that I haven't brought up yet, but it's worth pointing out, which is that this dimension, at least as I've described it so far using the box-counting method, can sometimes change based on how far zoomed in you are. For example, here's a shape sitting in three dimensions which, at a distance, looks like a line. In 3D, by the way, when you do box counting, you have a 3D grid full of little cubes instead of little squares, but it works the same way. At this scale, where the shape's thickness is smaller than the size of the boxes, it looks one-dimensional, meaning the number of boxes it touches is proportional to its length. But when you scale it up, it starts behaving a lot more like a tube, touching the boxes on the surface of that tube. And so it'll look two-dimensional, with the number of boxes touched being proportional to the square of the scaling factor. But it's not really a tube, it's made of these rapidly winding little curves. So once you scale it up even more, to the point where the boxes can pick up on the details of those curves, it looks one-dimensional again, with the number of boxes touched scaling directly in proportion to the scaling constant. So actually assigning a number to a shape for its dimension can be tricky, and it leaves room for differing definitions and differing conventions. In a pure math setting, there are indeed numerous definitions for dimension, but all of them focus on what the limit of this dimension is at closer and closer zoom levels. You can think of that in terms of the plot as the limit of this slope as you move farther and farther to the right. So for a purely geometric shape to be a genuine fractal, it has to continue looking rough even as you zoom in infinitely far. But in a more applied setting, like looking at the coastline of Britain, it doesn't really make sense to talk about the limit as you zoom in more and more. I mean, at some point you'd just be hitting atoms. Instead, what you do is look at a sufficiently wide range of scales, from very zoomed out up to very zoomed in, and compute the dimension at each one.
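To make that procedure concrete, here's a minimal Python sketch of box counting with a log-log regression. The sample disc and the particular box sizes are made up for illustration, and sampling a real coastline well is its own problem.

```python
import numpy as np

# Count distinct grid cells of side `box_size` touched by a point cloud.
def count_boxes(points, box_size):
    cells = np.unique(np.floor(points / box_size), axis=0)
    return len(cells)

def box_counting_dimension(points, sizes):
    counts = [count_boxes(points, s) for s in sizes]
    # A finer grid plays the role of a larger scaling factor; on a
    # log-log plot, the slope of the best-fit line estimates the dimension.
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check on a filled disc, which should come out close to 2.
theta = np.random.uniform(0, 2 * np.pi, 200_000)
r = np.sqrt(np.random.uniform(0, 1, 200_000))
disc = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
print(box_counting_dimension(disc, sizes=[0.1, 0.05, 0.02, 0.01]))
```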
And in this more applied setting, a shape is typically considered to be a fractal only when the measured dimension stays approximately constant across multiple different scales. For example, the coastline of Britain doesn't just look 1.21-dimensional at a distance. Even if you zoom in by a factor of a thousand, the level of roughness is still around 1.21. That right there is the sense in which many shapes from nature actually are self-similar, albeit not perfectly self-similar. Perfectly self-similar shapes do play an important role in fractal geometry. What they give us are simple-to-describe, low-information examples of this phenomenon of roughness, roughness that persists at many different scales and at arbitrarily close scales. And that's important; it gives us the primitive tools for modeling these fractal phenomena. But I think it's also important not to view them as the prototypical example of fractals, since fractals in general actually have a lot more character to them. I really do think that this is one of those ideas where once you learn it, it makes you start looking at the world completely differently. What this number, this fractional dimension, gives us is a quantitative way to describe roughness. For example, the coastline of Norway is about 1.52-dimensional, which is a numerical way to communicate the fact that it's way more jaggedy than Britain's coastline. The surface of a calm ocean might have a fractal dimension only barely above two, while a stormy one might have a dimension closer to 2.3. In fact, fractal dimension doesn't just arise frequently in nature. It seems to be the core differentiator between objects that arise naturally and those that are man-made.
How to lie using visual proofs
Today I'd like to share with you three fake proofs, in increasing order of subtlety, and then discuss what each one of them has to tell us about math. The first proof is for a formula for the surface area of a sphere, and the way that it starts is to subdivide that sphere into vertical slices, the way you might chop up an orange or paint a beach ball. We then unravel all of those wedge slices from the northern hemisphere so that they poke up like this, and then symmetrically unravel all of those from the southern hemisphere below, and now interlace those pieces to get a shape whose area we want to figure out. The base of this shape came from the circumference of the sphere, it's an unravelled equator, so its length is 2 pi times the radius of the sphere, and then the other side of this shape came from the height of one of these wedges, which is a quarter of a walk around the sphere, and so it has a length of pi halves times r. The idea is that this is only an approximation, the edges might not be perfectly straight, but if we think of the limit as we do finer and finer slices of the sphere, this shape whose area we want to know gets closer to being a perfect rectangle, one whose area will be pi halves r times 2 pi r, or in other words pi squared times r squared. The proof is elegant. It translates a hard problem into a situation that's easier to understand, and it has that element of surprise while still being intuitive. Its only fault, really, is that it's completely wrong; the true surface area of a sphere is 4 pi r squared. I originally saw this example thanks to Henry Reich, and to be fair, it's not necessarily inconsistent with the 4 pi r squared formula, just so long as pi is equal to 4. For the next proof, I'd like to show you a simple argument for the fact that pi is equal to 4. We start off with a circle, say with radius 1, and we ask, how can we figure out its circumference? After all, pi is by definition the ratio of this circumference to the diameter of the circle. We start off by drawing the square whose sides are all tangent to that circle. It's not too hard to see, then, that the perimeter of the square is 8. Some of you may have seen this before; it's a kind of classic argument. The argument proceeds by producing a sequence of curves, all of which also have this perimeter of 8, but which more and more closely approximate the circle. But the full nuance of this example is not always emphasized. First of all, just to make things crystal clear, the way each of these iterations works is to fold in each of the corners of the previous shape so that they just barely kiss the circle. And you can take a moment to convince yourself that in each region where a fold happened, the perimeter doesn't change. For example, in the upper right here, instead of walking up and then left, the new curve goes left and then up. And something similar is true at all of the folds of all of the different iterations. Wherever the previous iteration went direction A then direction B, the new iteration goes direction B then direction A, but no length is lost or gained. Some of you might say, well, obviously this isn't going to give the true perimeter of the circle, because no matter how many iterations you do, when you zoom in, it remains jagged, it's not a smooth curve. You're taking these very inefficient steps along the circle.
While that is true, and it's ultimately the reason things go wrong, if you want to appreciate the lesson this example is teaching us, notice that the claim of the example is not that any one of these approximations equals the curve. It's that the limit of all of the approximations equals our circle. And to appreciate the lesson that this example teaches us, it's worth taking a moment to be a little more mathematically precise about what I mean by the limit of a sequence of curves. Let's say we describe the very first shape, this square, as a parametric function, something that has an input t and outputs a point in 2D space, so that as t ranges from 0 to 1, it traces that square. I'll call that function C0. And likewise, we can parameterize the next iteration with a function I'll call C1; as the parameter t ranges from 0 up to 1, the output of this function traces along that curve. This is just so that we can think of these shapes as instead being functions. Now, I want you to consider a particular value of t, maybe 0.2, and then consider the sequence of points that you get by evaluating the sequence of functions we have at this particular point. Now consider the limit as n approaches infinity of C sub n of 0.2. This limit is a well-defined point in 2D space. In fact, that point sits on the circle. And there's nothing specific about 0.2. We could do this limiting process for any input t, and so I can define a new function that I'll call C infinity, which by definition at any input t is whatever this limiting value for all the curves is. So here's the point. That limiting function C infinity is the circle. It's not an approximation of the circle, it's not some jagged version of the circle. It is the genuine, smooth circular curve whose perimeter we want to know. And what's also true is that the limit of the lengths of all of our curves really is 8, because each individual curve really does have a perimeter of 8. And there are all sorts of examples throughout calculus when we talk about approximating one thing we want to know as a limit of a bunch of other things that are easier to understand. So the question at the heart here is, why exactly is it not okay to do that in this example? And maybe at this point you step back and say, you know, it's just not enough for things to look the same. This is why we need rigor, it's why we need proofs. It's why, since the days of Euclid, mathematicians have followed in his footsteps and deduced truths step by step from axioms forward. But for this last example, I would like to do something that doesn't lean as hard on visual intuition and instead give a Euclid-style proof for the claim that all triangles are isosceles. The way this will work is we'll take any particular triangle and make no assumptions about it. I'll label its vertices A, B, and C. And what I would like to prove for you is that the side length AB is necessarily equal to the side length AC. Now, to be clear, the result is obviously false. Just in the diagram I've drawn, you can visually see that these lengths are not equal to each other. But I challenge you to see if you can identify what's wrong about the proof I'm about to show you. Honestly, it's very subtle, and three gold stars for anyone who can identify it. The first thing I'll do is draw the perpendicular bisector of the line BC. So that means this angle here is 90 degrees, and this length is by definition the same as this length. And we'll label that intersection point D.
And then next, I will draw the angle bisector at A, which means by definition this little angle here is the same as this little angle here; I'll label both of them alpha. And we'll say that the point where these two lines intersect is P. And now, like a lot of Euclid-style proofs, we're just going to draw some new lines, figure out what things must be equal, and get some conclusions. For instance, let's draw the line from P which is perpendicular to the side AC, and we'll label that intersection point E. And likewise, we'll draw the line from P down to the other side, AB. Again, it's perpendicular, and we'll label that intersection point F. My first claim is that this triangle here, which is AFP, is the same, or at least congruent, to this triangle over here, AEP. Essentially, this follows from symmetry across that angle bisector. More specifically, we can say they share a side length, and then they both have an angle alpha and both have an angle 90 degrees. So it follows by the side-angle-angle congruence relation. Maybe my drawing is a little bit sloppy, but the logic helps us see that they do have to be the same. Next, I'll draw a line from P down to B, and then from P down to C. And I claim that this triangle here is congruent to its reflection across that perpendicular bisector. Again, the symmetry maybe helps make this clear, but more rigorously, they both have the same base, they both have a 90-degree angle, and they both have the same height. So it follows by the side-angle-side relation. So based on that first pair of triangles, I'm going to mark this side length here as being the same as this side length here, marking them with double tick marks. And based on the second triangle relation, I'll mark this side length here as the same as this line over here, marking them with triple tick marks. And so from that, we have two more triangles that need to be the same. Namely, this one over here, and the one with the corresponding two side lengths over here. And the reasoning here is they both have that triple-ticked side, a double-ticked side, and they're both 90-degree triangles. So this follows by the side-side-angle congruence relation. And all of those are valid congruence relations; I'm not pulling the wool over your eyes with one of those. And all of this will basically be enough to show us why AB has to be the same as AC. That first pair of triangles implies that the length AF is the same as the length AE. Those are corresponding sides to each other. I'll just color them in red here. And then that last triangle relation guarantees for us that the side FB is going to be the same as the side EC. I'll kind of color both of those in blue. And finally, the result we want basically comes from adding up these two equations. The length AF plus FB is clearly the same as the total length AB. And likewise, the length AE plus EC is the same as the total length AC. So, all in all, the side length AB has to be the same as the side length AC. And because we made no assumptions about the triangle, this implies that any triangle is isosceles. Actually, for that matter, since we made no assumptions about the specific two sides we chose, it implies that any triangle is equilateral. So this leaves us, somewhat disturbingly, with three different possibilities. All triangles really are equilateral, and that's just the truth of the universe. Or you can use Euclid-style reasoning to derive false results. Or there's something wrong in the proof. But if there is, where exactly is it?
So, what exactly is going on with these three examples? Now, the thing that's a little bit troubling about that first example with the sphere is that it is very similar in spirit to a lot of other famous, and supposedly true, visual proofs from geometry. For example, there's a very famous proof about the area of a circle that starts off by dividing it into a bunch of little pizza wedges, and you take all those wedges and you straighten them out, essentially lining up the crust of that pizza. And then we take half of the wedges and interlace them with the other half. And the idea is that this might not be a perfect rectangle, it's got some bumps and curves, but as you take thinner and thinner slices, you get something that's closer and closer to a true rectangle, and the width of that rectangle comes from half the circumference of the circle, which is, by definition, pi times r, and then the height of that rectangle comes from the radius of the circle, r, meaning that the whole area is pi r squared. This time the result is valid. But why is it not okay to do what we did with the sphere, yet somehow it is okay to do this with the pizza slices? The main problem with the sphere argument is that when we flatten out all of those orange wedges, if we were to do it accurately, in a way that preserves their area, they don't look like triangles; they should bulge outward. And if you want to see this, let's think really critically about just one particular one of those wedges on the sphere, and ask yourself, how does the width across that wedge, this little portion of a line of latitude, vary as you go up and down the wedge? In particular, if you consider the angle phi from the z-axis down to a point on this wedge as we walk down it, what's the length of that width as a function of phi? For those of you curious about the details of these sorts of things, you'd start off by drawing this line up here from the z-axis to a point on the wedge. Its length will be the radius of the sphere, r, times the sine of this angle. That lets us deduce how long the total line of latitude is where we're sitting. It'll basically be 2 pi times that radial line, 2 pi r sine of phi, and then the width of the wedge that we care about is just some constant proportion of that full line of latitude. Now, the details don't matter too much. The one thing I want you to notice is that this is not a linear relationship. As you walk from the top of that wedge down to the bottom, letting phi range from zero up to pi halves, the width of the wedge doesn't grow linearly; instead, it grows according to a sine curve. And so, when we're unwrapping all of these wedges, if we want those widths to be preserved, they should end up a little bit chubbier around the base; their sides are not straight lines. What this means is that when we try to interlace all of the wedges from the northern hemisphere with those from the southern, there's a meaningful amount of overlap between those non-linear edges, and we can't just wave our hands with a limiting argument. This is an overlap that persists as you take finer and finer subdivisions. And ultimately, it's that overlap that accounts for the difference between our false answer, with a pi squared, and the true answer, which has a four pi. It reminds me of one of those rearrangement puzzles, where you have a number of pieces, and just by moving them around, you can seemingly create area out of nowhere.
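As a quick numeric aside before that puzzle, here's a small Python check of the width claim above: integrating those lines of latitude, each of length 2 pi r sine of phi, from pole to pole recovers the true 4 pi r squared, not the pi squared r squared that the flattened-triangle picture suggests. The radius and step count here are arbitrary choices.

```python
import numpy as np

# At angle phi from the z-axis, a full line of latitude has length
# 2*pi*r*sin(phi), and the arc-length element walking from pole to
# pole is r*dphi. Integrate with a simple midpoint rule.
r = 1.0
n = 1_000_000
dphi = np.pi / n
phi = (np.arange(n) + 0.5) * dphi

area = np.sum(2 * np.pi * r * np.sin(phi) * r * dphi)
print(area)              # ~12.566, matching the true area
print(4 * np.pi * r**2)  # 12.566..., the true surface area
print(np.pi**2 * r**2)   # 9.8696..., the fake proof's answer
```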
For example, right now I've arranged all these pieces to form a triangle, except it's missing two units of area in the middle. And now I want you to focus on the vertices of that triangle, these white dots. Those don't move; I'm not pulling any trickery with that. But I can rearrange all of the pieces back to how they originally were, so that those two units of area in the middle seem to disappear. All the constituent parts remain the same, the triangle that they form remains the same, and yet two units of area seem to appear out of nowhere. If you've never seen this one before, by the way, I highly encourage you to pause and try to think it through. It's a very fun little puzzle. The answer starts to reveal itself if we carefully draw the edges of this triangle and zoom in close enough to see that our pieces don't actually fit inside the triangle; they bulge out ever so slightly. Or at least, arranged like this, they bulge out ever so slightly. When we rearrange them and we zoom back in, we can see that they dent inward ever so slightly. And that very subtle difference between the bulge out and the dent inward accounts for all of the difference in area. The slope of the edge of this blue triangle works out to be 5 divided by 2, whereas the slope of the edge of this red triangle works out to be 7 divided by 3. Those numbers are close enough to look like similar slopes, but they allow for this denting inward and bulging outward. You have to be wary of lines that are made to look straight when you haven't had explicit confirmation that they actually are straight. One quick added comment on the sphere: the fundamental issue here is that the geometry of a curved surface is fundamentally different from the geometry of flat space. The relevant search term here would be Gaussian curvature. You can't flatten things out from a sphere without losing geometric information. Now, when you do see limiting arguments that relate to little pieces on a sphere that somehow get flattened out and reasoned through there, those can only work if the limiting pieces you're talking about get smaller in both directions. It's only when you zoom in close to a curved surface that it appears locally flat. The issue with our orange wedge argument is that our pieces never got exposed to that local flatness, because they only got thin in one direction; they maintained the curvature in the other direction. Now, on the topic of the subtlety of limiting arguments, let's turn back to our limit of jagged curves that approaches the smooth circular curve. As I said, the limiting curve really is a circle, and the limiting value for the lengths of your approximations really is 8. Here, the basic issue is that there is no reason to expect that the limit of the lengths of the curves is the same as the length of the limit of the curves. And in fact, this is a nice counterexample to show why that's not the case. The real point of this example is not the fear that anyone is ever going to believe that it shows that pi is equal to 4. Instead, it shows why care is required in other cases where people apply limiting arguments. For example, this happens all throughout calculus. It is the heart of calculus, where, say, you want to know the area under a given curve. The way we typically think about it is to approximate that with a set of rectangles, because those are the things we know how to compute the areas of. You just take the base times height in each case.
Now, this is a very jagged approximation, but the thought, or I guess the hope, is that as you take a finer and finer subdivision into thinner and thinner rectangles, the sums of those areas approach the thing we actually care about. If you want to make it rigorous, you have to be explicit about the error between these approximations and the true thing we care about, the area under this curve. For example, you might start your argument by saying that that error has to be strictly less than the area of these red rectangles. Essentially, the deviation between the curve and our approximating rectangles sits strictly inside that red region. And then what you would want to argue is that, in this limiting process, the cumulative area of all of those red rectangles has to approach zero. Now, as to the final example, our proof that all triangles are isosceles, let me show you what it looks like if I'm a little bit more careful about actually constructing the angle bisector, rather than just eyeballing it. When I do that, the relevant intersection point actually sits outside of the triangle. And then from there, if I go through everything that we did in the original argument, drawing the relevant perpendicular lines, all of that, every triangle that I claimed was congruent really is congruent. All of those were genuinely true. And the corresponding lengths of those triangles that I claimed were the same really are the same. The one place where the proof breaks down is at the very end, when I said that the full side length AC was equal to AE plus EC. That was only true under the hidden assumption that the point E sits in between A and C. But in reality, for many triangles, that point would sit outside the side. It's pretty subtle, isn't it? The point in all of this is that while visual intuition is great, and visual proofs often give you a nice way of elucidating what's going on with otherwise opaque rigor, visual arguments and snazzy diagrams will never obviate the need for critical thinking. In math, you cannot escape the need to look out for hidden assumptions and edge cases.
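To make that rectangle argument above a little more tangible, here's a minimal Python sketch; the particular function and interval are arbitrary choices for illustration, and the bound used is the "red rectangle" total, which works cleanly for a monotonic curve.

```python
import numpy as np

# Left-endpoint Riemann sums for f(x) = x^2 on [0, 1], together with
# the "red rectangle" bound on the error: for monotonic f, the gap
# between the curve and the rectangles fits inside little rectangles
# of total area sum(|f(x + dx) - f(x)|) * dx, which shrinks to zero.
def left_riemann(f, a, b, n):
    dx = (b - a) / n
    x = a + dx * np.arange(n)
    approx = np.sum(f(x) * dx)
    red_bound = np.sum(np.abs(f(x + dx) - f(x))) * dx
    return approx, red_bound

f = lambda x: x**2   # a made-up sample curve
exact = 1 / 3        # the true area under x^2 from 0 to 1
for n in [10, 100, 1000, 10000]:
    approx, bound = left_riemann(f, 0.0, 1.0, n)
    print(n, approx, abs(approx - exact), bound)  # error stays under bound
```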
Dot products and duality | Chapter 9, Essence of linear algebra
Traditionally, dot products are something that's introduced really early on in a linear algebra course, typically right at the start. So it might seem strange that I've pushed them back this far in the series. I did this because there's a standard way to introduce the topic, which requires nothing more than a basic understanding of vectors, but a fuller understanding of the role that dot products play in math can only really be found under the light of linear transformations. Before that though, let me just briefly cover the standard way that dot products are introduced, which I'm assuming is at least partially review for a number of viewers. Numerically, if you have two vectors of the same dimension, two lists of numbers with the same lengths, taking their dot product means pairing up all of the coordinates, multiplying those pairs together, and adding the result. So the vector 1, 2 dotted with 3, 4 would be 1 times 3 plus 2 times 4. The vector 6, 2, 8, 3 dotted with 1, 8, 5, 3 would be 6 times 1 plus 2 times 8 plus 8 times 5 plus 3 times 3. Luckily, this computation has a really nice geometric interpretation. To think about the dot product between two vectors, V and W, imagine projecting W onto the line that passes through the origin and the tip of V. Multiplying the length of this projection by the length of V, you have the dot product V dot W. Except when this projection of W is pointing in the opposite direction from V, that dot product will actually be negative. So when two vectors are generally pointing in the same direction, their dot product is positive. When they're perpendicular, meaning the projection of one onto the other is the zero vector, their dot product is zero, and if they point in generally the opposite direction, their dot product is negative. Now, this interpretation is weirdly asymmetric. It treats the two vectors very differently. So when I first learned this, I was surprised that order doesn't matter. You could instead project V onto W, multiply the length of the projected V by the length of W, and get the same result. I mean, doesn't that feel like a really different process? Here's the intuition for why order doesn't matter. If V and W happened to have the same length, we could leverage some symmetry. Since projecting W onto V, then multiplying the length of that projection by the length of V is a complete mirror image of projecting V onto W, then multiplying the length of that projection by the length of W. Now, if you scale one of them, say V by some constant like two, so that they don't have equal length, the symmetry is broken. But let's think through how to interpret the dot product between this new vector, two times V, and W. If you think of W as getting projected onto V, then the dot product, two V dot W, will be exactly twice the dot product V dot W. This is because when you scale V by two, it doesn't change the length of the projection of W, but it doubles the length of the vector that you're projecting onto. But on the other hand, let's say you were thinking about V getting projected onto W. Well, in that case, the length of the projection is the thing to get scaled when we multiply V by two, but the length of the vector that you're projecting onto stays constant. So the overall effect is still to just double the dot product. So even though symmetry is broken in this case, the effect that this scaling has on the value of the dot product is the same under both interpretations. 
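Here's a quick numeric check of those claims, using the vectors from the examples above:

```python
import numpy as np

# The computations described above, plus checks that order doesn't
# matter and that scaling one vector scales the dot product.
v = np.array([1, 2])
w = np.array([3, 4])
print(np.dot(v, w))      # 1*3 + 2*4 = 11
print(np.dot(w, v))      # 11, same in the other order

u = np.array([6, 2, 8, 3])
t = np.array([1, 8, 5, 3])
print(np.dot(u, t))      # 6*1 + 2*8 + 8*5 + 3*3 = 71

print(np.dot(2 * v, w))  # doubling v doubles the result: 22
```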
There's also one other big question that confused me when I first learned this stuff. Why on earth does this numerical process of matching coordinates, multiplying pairs, and adding them together have anything to do with projection? Well, to give a satisfactory answer, and also to do full justice to the significance of the dot product, we need to unearth something a little bit deeper going on here, which often goes by the name duality. But before getting into that, I need to spend some time talking about linear transformations from multiple dimensions to one dimension, which is just the number line. These are functions that take in a 2D vector and spit out some number. But linear transformations are, of course, much more restricted than your run-of-the-mill function with a 2D input and a 1D output. As with transformations in higher dimensions, like the ones I talked about in chapter 3, there are some formal properties that make these functions linear. But I'm going to purposefully ignore those here so as to not distract from our end goal, and instead focus on a certain visual property that's equivalent to all the formal stuff. If you take a line of evenly spaced dots and apply a transformation, a linear transformation will keep those dots evenly spaced once they land in the output space, which is the number line. Otherwise, if there's some line of dots that gets unevenly spaced, then your transformation is not linear. As with the cases we've seen before, one of these linear transformations is completely determined by where it takes i-hat and j-hat. But this time, each one of those basis vectors just lands on a number. So when we record where they land as the columns of a matrix, each of those columns just has a single number. This is a 1 by 2 matrix. Let's walk through an example of what it means to apply one of these transformations to a vector. Let's say you have a linear transformation that takes i-hat to 1 and j-hat to negative 2. To follow where a vector with coordinates, say, (4, 3) ends up, think of breaking up this vector as 4 times i-hat plus 3 times j-hat. A consequence of linearity is that after the transformation, the vector will be 4 times the place where i-hat lands, 1, plus 3 times the place where j-hat lands, negative 2, which in this case implies that it lands on negative 2. When you do this calculation purely numerically, it's matrix-vector multiplication. Now, this numerical operation of multiplying a 1 by 2 matrix by a vector feels just like taking the dot product of two vectors. Doesn't that 1 by 2 matrix just look like a vector that we tipped on its side? In fact, we could say right now that there's a nice association between 1 by 2 matrices and 2D vectors, defined by tilting the numerical representation of a vector on its side to get the associated matrix, or tipping the matrix back up to get the associated vector. Since we're just looking at numerical expressions right now, going back and forth between vectors and 1 by 2 matrices might feel like a silly thing to do. But this suggests something that's truly awesome from the geometric view. There's some kind of connection between linear transformations that take vectors to numbers, and vectors themselves. Let me show an example that clarifies the significance, and which just so happens to also answer the dot product puzzle from earlier. Unlearn what you have learned, and imagine that you don't already know that the dot product relates to projection.
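As a quick numeric aside, multiplying that 1 by 2 matrix by the vector (4, 3) really is the same computation as the tipped-over dot product:

```python
import numpy as np

# The transformation taking i-hat to 1 and j-hat to -2, recorded as a
# 1x2 matrix, applied to the vector (4, 3).
M = np.array([[1, -2]])              # 1x2 matrix
v = np.array([4, 3])

print(M @ v)                         # [-2], since 4*1 + 3*(-2) = -2
print(np.dot(np.array([1, -2]), v))  # -2, the tipped-on-its-side version
```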
What I'm going to do here is take a copy of the number line and place it diagonally in space somehow, with the number 0 sitting at the origin. Now, think of the two-dimensional unit vector whose tip sits where the number 1 on the number line is. I want to give that guy a name, u-hat. This little guy plays an important role in what's about to happen, so just keep him in the back of your mind. If we project 2D vectors straight onto this diagonal number line, in effect, we've just defined a function that takes 2D vectors to numbers. What's more, this function is actually linear, since it passes our visual test that any line of evenly spaced dots remains evenly spaced once it lands on the number line. Just to be clear, even though I've embedded the number line in 2D space like this, the outputs of the function are numbers, not 2D vectors. You should think of a function that takes in two coordinates and outputs a single coordinate. But that vector u-hat is a two-dimensional vector living in the input space. It's just situated in such a way that it overlaps with the embedding of the number line. With this projection, we just defined a linear transformation from 2D vectors to numbers, so we're going to be able to find some kind of 1 by 2 matrix that describes that transformation. To find that 1 by 2 matrix, let's zoom in on this diagonal number line setup and think about where i-hat and j-hat each land, since those landing spots are going to be the columns of the matrix. This part's super cool. We can reason through it with a really elegant piece of symmetry. Since i-hat and u-hat are both unit vectors, projecting i-hat onto the line passing through u-hat looks totally symmetric to projecting u-hat onto the x-axis. So when we ask what number i-hat lands on when it gets projected, the answer is going to be the same as whatever u-hat lands on when it's projected onto the x-axis. But projecting u-hat onto the x-axis just means taking the x coordinate of u-hat. So, by symmetry, the number where i-hat lands when it's projected onto that diagonal number line is going to be the x coordinate of u-hat. Isn't that cool? The reasoning is almost identical for the j-hat case. Think about it for a moment. For all the same reasons, the y coordinate of u-hat gives us the number where j-hat lands when it's projected onto the number line copy. Pause and ponder that for a moment. I just think that's really cool. So the entries of the 1 by 2 matrix describing the projection transformation are going to be the coordinates of u-hat. And computing this projection transformation for arbitrary vectors in space, which requires multiplying that matrix by those vectors, is computationally identical to taking a dot product with u-hat. This is why taking the dot product with a unit vector can be interpreted as projecting a vector onto the span of that unit vector and taking the length. So what about non-unit vectors? For example, let's say we take that unit vector u-hat, but we scale it up by a factor of 3. Numerically, each of its components gets multiplied by 3. So, looking at the matrix associated with that vector, it takes i-hat and j-hat to 3 times the values where they landed before. Since this is all linear, it implies more generally that the new matrix can be interpreted as projecting any vector onto the number line copy and multiplying where it lands by 3.
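Here's a small numeric check of that whole story; the direction chosen for u-hat is an arbitrary assumption for illustration.

```python
import numpy as np

# A unit vector u-hat in some arbitrary direction (the angle is made up).
theta = 0.7
u_hat = np.array([np.cos(theta), np.sin(theta)])

v = np.array([4.0, 3.0])

# The 1x2 matrix whose entries are u-hat's coordinates, applied to v...
print(np.array([u_hat]) @ v)
# ...matches the dot product with u-hat...
print(v @ u_hat)
# ...which matches the signed length of the projection of v onto u-hat:
projection = (v @ u_hat) * u_hat
print(np.sign(v @ u_hat) * np.linalg.norm(projection))
# And scaling u-hat by 3 just scales the answer by 3:
print(v @ (3 * u_hat))
```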
This is why the dot product with a non-unit vector can be interpreted as first projecting onto that vector, then scaling up the length of that projection by the length of the vector. Take a moment to think about what happened here. We had a linear transformation from 2D space to the number line, which was not defined in terms of numerical vectors or numerical dot products; it was just defined by projecting space onto a diagonal copy of the number line. But because the transformation is linear, it was necessarily described by some 1 by 2 matrix. And since multiplying a 1 by 2 matrix by a 2D vector is the same as turning that matrix on its side and taking a dot product, this transformation was, in some sense, related to some 2D vector. The lesson here is that any time you have one of these linear transformations whose output space is the number line, no matter how it was defined, there's going to be some unique vector v corresponding to that transformation, in the sense that applying the transformation is the same thing as taking a dot product with that vector. To me, this is utterly beautiful. It's an example of something in math called duality. Duality shows up in many different ways and forms throughout math, and it's super tricky to actually define. Loosely speaking, it refers to situations where you have a natural but surprising correspondence between two types of mathematical thing. For the linear algebra case that you just learned about, you'd say that the dual of a vector is the linear transformation that it encodes, and the dual of a linear transformation from some space to one dimension is a certain vector in that space. So, to sum up, on the surface, the dot product is a very useful geometric tool for understanding projections, and for testing whether or not vectors tend to point in the same direction. And that's probably the most important thing for you to remember about the dot product. But at a deeper level, dotting two vectors together is a way to translate one of them into the world of transformations. Again, numerically, this might feel like a silly point to emphasize; it's just two computations that happen to look similar. But the reason I find this so important is that throughout math, when you're dealing with a vector, once you really get to know its personality, sometimes you realize that it's easier to understand it not as an arrow in space, but as the physical embodiment of a linear transformation. It's as if the vector is really just a conceptual shorthand for a certain transformation, since it's easier for us to think about arrows in space rather than moving all of that space to the number line. In the next video, you'll see another really cool example of this duality in action, as I talk about the cross product.
But what is a convolution?
Suppose I give you two different lists of numbers, or maybe two different functions, and I ask you to think of all the ways you might combine those two lists to get a new list of numbers, or combine the two functions to get a new function. Maybe one simple way that comes to mind is to simply add them together, term by term. Likewise, with the functions, you can add all the corresponding outputs. In a similar vein, you could also multiply the two lists term by term, and do the same thing with the functions. But there's another kind of combination, just as fundamental as both of those, but a lot less commonly discussed, known as a convolution. Unlike the previous two cases, it's not something that's merely inherited from an operation you can do to numbers. It's something genuinely new for the context of lists of numbers or combining functions. Convolutions show up all over the place. They're ubiquitous in image processing, they're a core construct in the theory of probability, they're used a lot in solving differential equations, and one context where you've almost certainly seen one, if not by this name, is multiplying two polynomials together. As someone in the business of visual explanations, this is an especially great topic, because the formulaic definition, in isolation and without context, can look kind of intimidating, but if we take the time to really unpack what it's saying, and before that, actually motivate why you would want something like this, it's an incredibly beautiful operation. And I have to admit, I actually learned a little something while putting together the visuals for this project. In the case of convolving two different functions, I was trying to think of different ways you might picture what that could mean, and with one of them, I had a little bit of an aha moment for why it is that normal distributions play the role that they do in probability, why it's such a natural shape for a function. But I'm getting ahead of myself; there's a lot of setup for that one. In this video, our primary focus is just going to be on the discrete case, and in particular, building up to a very unexpected but very clever algorithm for computing these. And I'll pull out the discussion for the continuous case into a second part. It's very tempting to open up with the image processing examples, since they're visually the most intriguing, but there are a couple bits of finickiness that make the image processing case less representative of convolutions overall. So instead, let's kick things off with probability. And in particular, one of the simplest examples that I'm sure everyone here has thought about at some point in their life, which is rolling a pair of dice and figuring out the chances of seeing various different sums. And you might say, not a problem, not a problem. Each of your two dice has six different possible outcomes, which gives us a total of 36 distinct possible pairs of outcomes. And if we just look through them all, we can count up how many pairs have a given sum. And arranging all the pairs in a grid like this, one pretty nice thing is that all of the pairs that have a constant sum are visible along one of these different diagonals. So simply counting how many exist on each of those diagonals will tell you how likely you are to see a particular sum. And I'd say, very good, very good. But can you think of any other ways that you might visualize the same question? Other images that might come to mind to think of all the distinct pairs that have a given sum?
And maybe one of you raises your hand and says, yeah, I've got one. Let's say you picture these two different sets of possibilities each in a row, but you flip around that second row. That way, all of the different pairs which add up to seven line up vertically, like this. And if we slide that bottom row all the way to the right, then the unique pair that adds up to two, the snake eyes, are the only ones that align. And if I slide that over one unit to the right, the pairs which align are the two different pairs that add up to three. And in general, different offset values of this lower array, which remember I had to flip around first, reveal all the distinct pairs that have a given sum. As far as probability questions go, this still isn't especially interesting, because all we're doing is counting how many outcomes there are in each of these categories. But that is with the implicit assumption that there's an equal chance for each of these faces to come up. But what if I told you I have a special set of dice that's not uniform? Maybe the blue die has its own set of numbers describing the probabilities for each face coming up, and the red die has its own unique, distinct set of numbers. In that case, if you wanted to figure out, say, the probability of seeing a two, you would multiply the probability that the blue die is a one times the probability that the red die is a one. And for the chances of seeing a three, you look at the two distinct pairs where that's possible, and again multiply the corresponding probabilities, and then add those two products together. Similarly, the chances of seeing a four involves multiplying together three different pairs of possibilities and adding them all together. And in the spirit of setting up some formulas, let's name these top probabilities a one, a two, a three, and so on, and name the bottom ones b one, b two, b three, and so on. And in general, this process, where we're taking two different arrays of numbers, flipping the second one around, and then lining them up at various different offset values, taking a bunch of pairwise products and adding them up, is one of the fundamental ways to think about what a convolution is. So, just to spell it out a little more exactly, through this process we just generated probabilities for seeing two, three, four, on and on up to 12, and we got them by mixing together one list of values, a, and another list of values, b. In the lingo, we'd say the convolution of those two sequences gives us this new sequence, the new sequence of 11 values, each of which looks like some sum of pairwise products. If you prefer, another way you could think about the same operation is to first create a table of all the pairwise products, and then add up along all these diagonals. Again, that's a way of mixing together these two sequences of numbers to get us a new sequence of 11 numbers. It's the same operation as the sliding-window thought, just another perspective. Putting a little notation to it, here's how you might see it written down. The convolution of a and b, denoted with this little asterisk, is a new list, and the nth element of that list looks like a sum, and that sum goes over all different pairs of indices, i and j, such that the sum of those indices is equal to n. So, for example, if n was 6, the pairs we'd be going over are 1 and 5, 2 and 4, 3 and 3, 4 and 2, and 5 and 1: all the different pairs that add up to 6.
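Here's a quick Python sketch of that weighted-dice computation; the particular face probabilities below are made up for illustration.

```python
import numpy as np

# a[i] is the chance the blue die shows i+1, b[j] the chance the red
# die shows j+1. Each list sums to 1, so each is a valid distribution.
a = np.array([0.10, 0.15, 0.20, 0.25, 0.20, 0.10])
b = np.array([0.30, 0.20, 0.10, 0.10, 0.10, 0.20])

# The convolution gives the 11 probabilities P(sum = 2) ... P(sum = 12).
sum_distribution = np.convolve(a, b)
print(sum_distribution)
print(sum_distribution.sum())  # 1.0, as a probability distribution should
```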
But honestly, however you write it down, the notation is secondary in importance to the visual you might hold in your head for the process. Here, maybe it helps to do a super simple example, where I might ask you, what's the convolution of the list 1, 2, 3 with the list 4, 5, 6? You might picture taking both of these lists, flipping around that second one, and then starting with it slid all the way over to the left. Then the pair of values which align are 1 and 4; multiply them together, and that gives us the first term of our output. Slide that bottom array one unit to the right, and the pairs which align are 1 and 5, and 2 and 4; multiply those pairs, add them together, and that gives us 13, the next entry in our output. Slide things over once more, and we'll take 1 times 6, plus 2 times 5, plus 3 times 4, which happens to be 28. One more slide, and we get 2 times 6 plus 3 times 5, and that gives us 27. And finally, the last term will look like 3 times 6. If you'd like, you can pull up whatever your favorite programming language is, and your favorite library that includes various numerical operations, and you can confirm I'm not lying to you. If you take the convolution of 1, 2, 3 against 4, 5, 6, this is indeed the result that you'll get. We've seen one case where this is a natural and desirable operation, adding up two probability distributions, and another common example would be a moving average. Imagine you have some long list of numbers, and you take another, smaller list of numbers that all add up to 1. In this case, I just have a little list of 5 values, and they're all equal to one fifth. Then, if we do this sliding-window convolution process, and kind of close our eyes and sweep under the rug what happens at the very beginning of it, once our smaller list of values entirely overlaps with the bigger one, think about what each term in this convolution really means. At each iteration, what you're doing is multiplying each of the values from your data by one fifth and adding them all together, which is to say you're taking an average of your data inside this little window. Overall, the process gives you a smoothed-out version of the original data, and you could modify this, starting with a different little list of numbers, and as long as that little list all adds up to 1, you can still interpret it as a moving average. In the example shown here, that moving average would be giving more weight towards the central value. This also results in a smoothed-out version of the data. If you do kind of a two-dimensional analog of this, it gives you a fine algorithm for blurring a given image. And I should say, the animations I'm about to show are modified from something I originally made for part of a set of lectures I did with the Julia Lab at MIT, for a certain OpenCourseWare class that included an image processing unit. There we did a little bit more to dive into the code behind all of this, so if you're curious, I'll leave you some links. But focusing back on this blurring example, what's going on is I've got this little 3x3 grid of values that's marching along our original image, and if we zoom in, each one of those values is one ninth. And what I'm doing at each iteration is multiplying each of those values by the corresponding pixel that it sits on top of. And of course, in computer science, we think of colors as little vectors of three values, representing the red, green, and blue components.
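Taking up that invitation, here's the confirmation in Python, along with the moving-average example; the sample data below is made up.

```python
import numpy as np

# Confirming the worked example from above:
print(np.convolve([1, 2, 3], [4, 5, 6]))   # [ 4 13 28 27 18]

# A moving average is a convolution with a little list summing to 1.
data = np.array([1.0, 3.0, 2.0, 8.0, 4.0, 6.0, 5.0, 7.0])  # made-up data
kernel = np.full(5, 1 / 5)                 # five values, each one fifth
# mode="valid" keeps only the offsets where the small window fully
# overlaps the data, matching the "close our eyes at the ends" caveat.
print(np.convolve(data, kernel, mode="valid"))
```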
When I multiply all these little values by one ninth and I add them together, it gives us an average along each color channel, and the corresponding pixel for the image on the right is defined to be that sum. The overall effect, as we do this for every single pixel on the image, is that each one kind of bleeds into all of its neighbors, which gives us a blurrier version of the original. In the lingo, we'd say that the image on the right is a convolution of our original image with the little grid of values. Or, more technically, maybe I should say that it's the convolution with a 180-degree rotated version of that little grid of values. Not that it matters when the grid is symmetric, but it's just worth keeping in mind that the definition of a convolution, as inherited from the pure math context, should always invite you to think about flipping around that second array. If we modify this slightly, we can get a much more elegant blurring effect by choosing a different grid of values. In this case, I have a little 5x5 grid, but the distinction is not so much its size. If we zoom in, we notice that the value in the middle is a lot bigger than the values towards the edges. And where this is coming from is that they're all sampled from a bell curve, known as a Gaussian distribution. That way, when we multiply all of these values by the corresponding pixel that they're sitting on top of, we're giving a lot more weight to that central pixel, and much less towards the ones out at the edge. And just as before, the corresponding pixel on the right is defined to be this sum. As we do this process for every single pixel, it gives a blurring effect, which much more authentically simulates the notion of putting your lens out of focus, or something like that. But blurring is far from the only thing that you can do with this idea. For instance, take a look at this little grid of values, which involves some positive numbers on the left and some negative numbers on the right, which I'll color with blue and red, respectively. Take a moment to see if you can predict and understand what effect this will have on the final image. So in this case, I'll just be thinking of the image as grayscale instead of colored, so each of the pixels is represented by one number instead of three. And one thing worth noticing is that as we do this convolution, it's possible to get negative values. For example, at this point here, if we zoom in, the left half of our little grid sits entirely on top of black pixels, which would have a value of zero, but the right half, with its negative values, sits on top of white pixels, which would have a value of one. So when we multiply corresponding terms and add them together, the results will be very negative. And the way I'm displaying this with the image on the right is to color negative values red and positive values blue. Another thing to notice is that when you're on a patch that's all the same color, everything goes to zero, since the sum of the values in our little grid is zero. This is very different from the previous two examples, where the sum of our little grid was one, which let us interpret it as a moving average, and hence a blur. All in all, this little process basically detects wherever there's variation in the pixel value as you move from left to right, and so it gives you a kind of way to pick up on all the vertical edges from your image.
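For a minimal two-dimensional sketch of those ideas, here's a tiny made-up grayscale image run through a box blur and through an edge-style kernel with positive values on the left and negative values on the right. These particular kernel entries are illustrative assumptions, not the exact grids from the animations.

```python
import numpy as np
from scipy.signal import convolve2d

# A tiny made-up grayscale "image": black on the left, white on the right.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

box_blur = np.full((3, 3), 1 / 9)  # nine values, each one ninth; sums to 1
edge_kernel = np.array([
    [0.25, 0.0, -0.25],
    [0.50, 0.0, -0.50],
    [0.25, 0.0, -0.25],
])                                 # entries sum to zero

print(convolve2d(image, box_blur, mode="same"))     # softened boundary
print(convolve2d(image, edge_kernel, mode="same"))  # nonzero only near the edge
```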
And similarly, if we rotated that grid around so that it varies as you move from the top to the bottom, this will be picking up on all the horizontal edges, which, in the case of our little pi creature image, does result in some pretty demonic eyes. This smaller grid, by the way, is often called a kernel, and the beauty here is how, just by choosing a different kernel, you can get different image processing effects, not just blurring and edge detection, but also things like sharpening. For those of you who have heard of a convolutional neural network, the idea there is to use data to figure out what the kernels should be in the first place, as determined by whatever the neural network wants to detect. Another thing I should maybe bring up is the length of the output. For something like the moving average example, you might only want to think about the terms when both of the windows fully align with each other. Or in the image processing example, maybe you want the final output to have the same size as the original. Now, convolutions as a pure math operation always produce an array that's bigger than the two arrays that you started with, at least assuming one of them doesn't have a length of one. Just know that in certain computer science contexts, you often want to deliberately truncate that output. Another thing worth highlighting is that in the computer science context, this notion of flipping around that kernel before you let it march across the original often feels really weird and just uncalled for. But again, note that that's what's inherited from the pure math context, where, like we saw with the probabilities, it's an incredibly natural thing to do. And actually, I can show you one more pure math example where even the programmers should care about this one, because it opens the doors for a much faster algorithm to compute all of these. To set up what I mean by faster here, let me go back and pull up some Python again, and I'm going to create two different, relatively big arrays. Each one will have 100,000 random elements in it, and I'm going to assess the runtime of the convolve function from the NumPy library. In this case, it runs it for multiple different iterations, tries to find an average, and it looks like, on this computer at least, it averages 4.87 seconds. By contrast, if I use a different function from the SciPy library, called fftconvolve, which is the same thing just implemented differently, that only takes 4.3 milliseconds on average, so three orders of magnitude improvement. And again, even though it flies under a different name, it's giving the same output that the other convolve function does; it's just going about it in a cleverer way. Remember how, with the probability example, I said another way you could think about the convolution was to create this table of all the pairwise products, and then add up those pairwise products along the diagonals? There's of course nothing specific to probability here. Any time you're convolving two different lists of numbers, you can think about it this way: create this kind of multiplication table with all pairwise products, and then each sum along the diagonals corresponds to one of your final outputs. One context where this view is especially natural is when you multiply together two polynomials. For example, let me take the little grid we already have, and replace the top terms with 1, 2x, and 3x squared, and replace the other terms with 4, 5x, and 6x squared.
Now, think about what it means when we're creating all of these different pairwise products between the two lists. What you're doing is essentially expanding out the full product of the two polynomials I have written down, and then when you add up along the diagonal, that corresponds to collecting all the like terms. Which is pretty neat: expanding a polynomial and collecting like terms is exactly the same process as a convolution. But this allows us to do something that's pretty cool, because think about what we're saying here. We're saying if you take two different functions and you multiply them together, which is a simple pointwise operation, that's the same thing as if you had first extracted the coefficients from each one of those, assuming they're polynomials, and then taken a convolution of those two lists of coefficients. What makes that so interesting is that convolutions feel, in principle, a lot more complicated than simple multiplication. And I don't just mean conceptually they're harder to think about. I mean, computationally, it requires more steps to perform a convolution than it does to perform a pointwise product of two different lists. For example, let's say I gave you two really big polynomials, say each one with a hundred different coefficients. Then, if the way you multiplied them was to expand out this product, you know, filling in this entire 100 by 100 grid of pairwise products, that would require you to perform 10,000 different products. And then, when you're collecting all the like terms along the diagonals, that's another set of around 10,000 operations. More generally, in the lingo, we'd say the algorithm is O of n squared, meaning for two lists of size n, the way that the number of operations scales is in proportion to the square of n. On the other hand, if I think of two polynomials in terms of their outputs, for example, sampling their values at some handful of inputs, then multiplying them only requires as many operations as the number of samples, since, again, it's a pointwise operation. And with polynomials, you only need finitely many samples to be able to recover the coefficients. For example, two outputs are enough to uniquely specify a linear polynomial. Three outputs would be enough to uniquely specify a quadratic polynomial. And in general, if you know n distinct outputs, that's enough to uniquely specify a polynomial that has n different coefficients. Or, if you prefer, we could phrase this in the language of systems of equations. Imagine I tell you I have some polynomial, but I don't tell you what the coefficients are. Those are a mystery to you. In our example, you might think of this as the product that we're trying to figure out. And then suppose I say, I'll just tell you what the outputs of this polynomial would be if you inputted various different inputs, like 0, 1, 2, 3, on and on, and I give you enough so that you have as many equations as you have unknowns. It even happens to be a linear system of equations, so that's nice. And in principle, at least, this should be enough to recover the coefficients. So the rough algorithm outline, then, would be: whenever you want to convolve two lists of numbers, you treat them like they're the coefficients of two polynomials, you sample those polynomials at enough outputs, multiply those samples pointwise, and then solve this system to recover the coefficients, as a sneaky backdoor way to find the convolution.
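Here's that coefficient picture as a quick Python check. One small wrinkle: np.polymul expects the highest-degree coefficient first, hence the reversals.

```python
import numpy as np

# Expanding (1 + 2x + 3x^2)(4 + 5x + 6x^2) and collecting like terms
# is the same computation as convolving the coefficient lists.
p = [1, 2, 3]   # 1 + 2x + 3x^2, constant term first
q = [4, 5, 6]   # 4 + 5x + 6x^2

print(np.convolve(p, q))                   # [ 4 13 28 27 18]
print(np.polymul(p[::-1], q[::-1])[::-1])  # same coefficients, via polynomials
```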
And as I've stated it so far, at least, some of you could rightfully complain: Grant, that is an idiotic plan. Because, for one thing, just calculating all these samples for one of the polynomials we know already takes on the order of n squared operations. Not to mention, solving that system is certainly going to be computationally as difficult as just doing the convolution in the first place. So, like, sure, we have this connection between multiplication and convolutions, but all of the complexity happens in translating from one viewpoint to the other. But there is a trick. And those of you who know about Fourier transforms and the FFT algorithm might see where this is going. If you're unfamiliar with these topics, what I'm about to say might seem completely out of the blue. Just know that there are certain paths you could have walked in math that make this more of an expected step. Basically, the idea is that we have a freedom of choice here. If, instead of evaluating at some arbitrary set of inputs, like 0, 1, 2, 3, on and on, you choose to evaluate at a very specially selected set of complex numbers, specifically the ones that sit evenly spaced on the unit circle, what are known as the roots of unity, this gives us a friendlier system. The basic idea is that by finding a number where taking its powers falls into this cycling pattern, it means that the system we generate is going to have a lot of redundancy in the different terms that you're calculating, and by being clever about how you leverage that redundancy, you can save yourself a lot of work. This set of outputs that I've written has a special name: it's called the discrete Fourier transform of the coefficients. And if you want to learn more, I actually did another lecture for that same Julia MIT class all about discrete Fourier transforms. There's also a really excellent video on the channel Reducible talking about the fast Fourier transform, which is an algorithm for computing these more quickly. Also, Veritasium recently did a really good video on FFTs, so you've got lots of options. And that fast algorithm really is the point for us. Again, because of all this redundancy, there exists a method to go from the coefficients to all of these outputs where, instead of doing on the order of n squared operations, you do on the order of n times the log of n operations, which is much, much better as you scale to big lists. And, importantly, this FFT algorithm goes both ways: it also lets you go from the outputs to the coefficients. So, bringing it all together, let's look back at our algorithm outline. Now we can say, whenever you're given two long lists of numbers and you want to take their convolution, first compute the fast Fourier transform of each one of them, which, in the back of your mind, you can just think of as treating them like they're the coefficients of a polynomial and evaluating it at a very specially selected set of points. Then multiply together the two results that you just got, pointwise, which is nice and fast, and then do an inverse fast Fourier transform, and what that gives you is the sneaky backdoor way to compute the convolution that we were looking for. But this time, it only involves O of n log n operations. That's really cool to me. This very specific context where convolutions show up, multiplying two polynomials, opens the doors for an algorithm that's relevant everywhere else where convolutions might come up.
Whether you want to add probability distributions, do some large image processing, whatever it might be, I just think that's such a good example of why you should be excited when you see some operation or concept in math show up in a lot of seemingly unrelated areas. If you want a little homework, here's something that's fun to think about. Explain why, when you multiply two different numbers, just ordinary multiplication the way we all learn in elementary school, what you're doing is basically a convolution between the digits of those numbers. There are some added steps with carries and the like, but the core step is a convolution. In light of the existence of a fast algorithm, what that means is that if you have two very large integers, then there exists a way to find their product that's faster than the method we learn in elementary school, one that instead of requiring O of n squared operations only requires O of n log n, which doesn't even feel like it should be possible. The catch is that before this is actually useful in practice, your numbers would have to be absolutely monstrous. But still, it's cool that such an algorithm exists. Next up, we'll turn our attention to the continuous case, with a special focus on probability distributions.
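To make the recipe concrete, here is a minimal sketch of the whole idea, along with the homework problem above. The helper names are my own, and the digit trick is only meant as an illustration, not a practical big-integer multiplier.

```python
import numpy as np

def fft_convolve(a, b):
    """Convolve two lists by the recipe above: FFT, pointwise multiply, inverse FFT."""
    n = len(a) + len(b) - 1          # length of the full convolution
    fa = np.fft.rfft(a, n)           # evaluate at the roots of unity
    fb = np.fft.rfft(b, n)
    return np.fft.irfft(fa * fb, n)  # multiply pointwise, then invert

print(np.round(fft_convolve([1, 2, 3], [4, 5, 6])))  # [ 4. 13. 28. 27. 18.]

def multiply(x, y):
    """Multiply integers by convolving their digits, then handling carries."""
    dx = [int(d) for d in str(x)][::-1]  # least significant digit first
    dy = [int(d) for d in str(y)][::-1]
    conv = np.convolve(dx, dy)           # digit products summed along diagonals
    # Summing conv[i] * 10^i handles all the carries at once.
    return sum(int(c) * 10**i for i, c in enumerate(conv))

print(multiply(1234, 5678) == 1234 * 5678)  # True
```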
Solving the heat equation | DE
We last left off studying the heat equation in the one-dimensional case of a rod. The question is how the temperature distribution along such a rod will tend to change over time. And this gave us a nice first example for a partial differential equation. It told us that the rate at which the temperature at a given point changes over time depends on the second derivative of that temperature at that point with respect to space. Here we're going to look at how to solve that equation. And actually, it's a little misleading to refer to all of this as solving an equation. The PDE itself only describes one out of three constraints that our temperature function must satisfy if it's going to accurately describe heat flow. It must also satisfy certain boundary conditions, which is something we'll talk about momentarily, and a certain initial condition. That is, you don't get to choose how it looks at time t equals zero. That's part of the problem statement. These added constraints are really where all of the challenge actually lies. There is a vast ocean of functions solving the PDE, in the sense that when you take their partial derivatives, the two sides are indeed going to be equal. And a sizable subset of that ocean satisfies the right boundary conditions. When Joseph Fourier solved this problem in 1822, his key contribution was to gain control of this ocean, turning all of the right knobs and dials so as to be able to select from it the particular solution fitting a given initial condition. We can think of his solution as being broken down into three fundamental observations. Number one, certain sine waves offer a really simple solution to this equation. Number two, if you know multiple solutions, the sum of these functions is also a solution. And number three, most surprisingly, any function can be expressed as a sum of sine waves. Well, a pedantic mathematician might point out that there are some pathological exceptions, some weird functions where this isn't true, but basically any distribution that you would come across in practice, including discontinuous ones, can be written as a sum of sine waves, potentially infinitely many. And if you've ever heard of Fourier series, you've at least heard of this last idea. And if so, maybe you've wondered why on earth anyone would care about breaking down a function as a sum of sine waves. Well, in many applications, sine waves are nicer to deal with than anything else, and differential equations offer us a really nice context where you can see how that plays out. For our heat equation, when you write a function as a sum of these waves, the relatively clean second derivatives make it easy to solve the heat equation for each one of them. And as you'll see, a sum of solutions to this equation gives us another solution. And so in turn, that will give us a recipe for solving the heat equation for any complicated distribution as an initial state. Here, let's dig into that first step. Why exactly would sine waves play nicely with the heat equation? To avoid messy constants, let's start simple and say that the temperature function at time t equals zero is simply sine of x, where x describes the point on the rod. Yes, the idea of a rod's temperature just happening to look like sine of x, varying around whatever temperature our conventions arbitrarily label as zero, is clearly absurd.
But in math, you should always be happy to play with examples that are idealized, potentially well beyond the point of being realistic, because they can offer a good first step in the direction of something more general and hence more realistic. The right-hand side of this heat equation asks about the second derivative of our function, how much our temperature distribution curves as you move along space. The derivative of sine of x is cosine of x, whose derivative in turn is negative sine of x. The amount that the wave curves is, in a sense, equal and opposite to its height at each point. So, at least at the time t equals zero, this has the peculiar effect that each point changes its temperature at a rate proportional to the temperature of the point itself, with the same proportionality constant across all points. So after some tiny time step, everything scales down by the same factor. And after that, it's still the same sine curve shape, just scaled down a bit, so the same logic applies, and the next time step would scale it down uniformly again. And this applies just as well in the limit, as the size of these time steps approaches zero. So unlike other temperature distributions, sine waves are peculiar in that they'll get scaled down uniformly, looking like some constant times sine of x for all times t. Now, when you see that the rate at which some value changes is proportional to that value itself, your mind should burn with the thought of an exponential. And if it's not, or if you're a little rusty on the idea of taking derivatives of exponentials, or what makes the number e special, I'd recommend you take a look at this video. The upshot is that the derivative of e to some constant times t is equal to that constant times itself. If the rate at which your investment grows, for example, is always, say, 0.05 times the total value, then its value over time is going to look like e to the 0.05 t times whatever the initial investment was. If the rate at which the count of carbon-14 atoms in an old bone changes is always equal to some negative constant times that count itself, then over time that number will look approximately like e to that negative constant times t, times whatever the initial count was. So when you look at our heat equation, and you know that for a sine wave the right-hand side is going to be negative alpha times the temperature function itself, hopefully it won't be too surprising to propose that the solution is to scale down by a factor of e to the negative alpha t. Here, go ahead and check the partial derivatives. The proposed function of x and t is sine of x times e to the negative alpha t. Taking the partial derivatives with respect to x, that e to the negative alpha t term looks like a constant; it doesn't have any x in it, so it just comes along for the ride, as if it were any other constant, like 2. The first derivative with respect to x is cosine of x times e to the negative alpha t. Likewise, the second partial derivative with respect to x becomes negative sine of x times e to the negative alpha t. And on the flip side, if you look at the partial derivative with respect to t, that sine of x term now looks like a constant, since it doesn't have a t in it. So we get negative alpha times e to the negative alpha t times sine of x. So indeed, this function does make the partial differential equation true. And oh, if only it were that simple, this narrative flow could be so nice. We would just sail directly to the delicious Fourier series conclusion.
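For anyone who wants to double-check that computation symbolically before the detour ahead, here is a small sketch using sympy (my choice of tool; the video does this by hand):

```python
import sympy as sp

x, t = sp.symbols("x t")
alpha = sp.symbols("alpha", positive=True)

T = sp.sin(x) * sp.exp(-alpha * t)   # the proposed solution

lhs = sp.diff(T, t)                  # partial derivative with respect to time
rhs = alpha * sp.diff(T, x, 2)       # alpha times the second spatial derivative

print(sp.simplify(lhs - rhs))        # 0, so the PDE is satisfied
```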
Sadly, nature is not so nice, knocking us off onto an annoying but highly necessary detour. Here's the thing: even if nature were to somehow produce a temperature distribution on this rod which looks like this perfect sine wave, the exponential decay is not actually how it would evolve. Assuming that no heat flows in or out of the rod, here's what that evolution would actually look like. The points on the left are heated up a little at first, and those on the right are cooled down by their neighbors to the interior. In fact, let me give you an even simpler solution to the PDE which fails to describe actual heat flow: a straight line. That is, the temperature function will be some non-zero constant times x, and it never changes over time. The second partial derivative with respect to x is indeed zero, I mean, there is no curvature, and its partial derivative with respect to time is also zero, since it never changes over time. And yet, if I throw this into the simulator, it does actually change over time, slowly approaching a uniform temperature at the mean value. What's going on here is that the simulation I'm using treats the two boundary points of the rod differently from how it treats all the others, which is a more accurate reflection of what would actually happen in nature. If you'll recall from the last video, the intuition for where that second derivative with respect to x actually came from was rooted in having each point tend towards the average value of its two neighbors on either side. But at the boundary, there is no neighbor to one side. If we went back to thinking of the discrete version, modeling only finitely many points on this rod, you could have each boundary point simply tend towards its one neighbor at a rate proportional to their difference. As we do this for higher and higher resolutions, notice how, pretty much immediately after the clock starts, our distribution looks flat at either of those two boundary points. In fact, in the limiting case, as these finer and finer discretized setups approach a continuous curve, the slope of our curve at the boundary will be zero for all times after the start. One way this is often described is that the slope at any given point is proportional to the rate of heat flow at that point. So if you want to model the restriction that no heat flows into or out of the rod, the slope at either end will be zero. That's somewhat hand-wavy and incomplete, I know, so if you want the fuller details, I've left links and resources in the description. Taking the example of a straight line, whose slope at the boundary points is decidedly not zero, as soon as the clock starts, those boundary values will shift infinitesimally, such that the slope there suddenly becomes zero and remains that way through the remainder of the evolution. In other words, finding a function satisfying the heat equation itself is not enough. It must also satisfy the property that it's flat at each of those end points for all times greater than zero. Phrased more precisely, the partial derivative with respect to x of our temperature function at (0, t) and at (L, t) must be zero for all times t greater than zero, where L is the length of the rod. This is an example of a boundary condition, and pretty much any time you have to solve a partial differential equation in practice, there will also be some boundary condition hanging along for the ride, which demands just as much attention as the PDE itself.
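Here is a minimal sketch of that discrete simulation, with interior points tending toward the average of their neighbors and each boundary point tending toward its one neighbor; the rod resolution and step sizes are my own arbitrary choices:

```python
import numpy as np

def step(T, alpha=1.0, dt=0.1):
    new = T.copy()
    # Interior points tend toward the average of their two neighbors:
    new[1:-1] += alpha * dt * (T[:-2] + T[2:] - 2 * T[1:-1])
    # Boundary points tend toward their single neighbor:
    new[0] += alpha * dt * (T[1] - T[0])
    new[-1] += alpha * dt * (T[-2] - T[-1])
    return new

T = np.linspace(0.0, 1.0, 11)   # the straight-line distribution
for _ in range(2000):
    T = step(T)

print(np.round(T, 3))  # approaches a uniform temperature at the mean, 0.5
```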
All of this may make it feel like we've gotten nowhere, but the function which is a sine wave in space and an exponential decay in time actually gets us quite close. We just need to tweak it a little bit so that it's flat at both end points. First off, notice that we could just as well use a cosine function instead of a sine. I mean, it's the same wave, it's just shifted in phase by a quarter of the period, which would make it flat at x equals zero, as we want. The second derivative of cosine of x is also negative one times itself, so for all the same reasons as before, the product cosine of x times e to the negative alpha t still satisfies the PDE. To make sure that it also satisfies the boundary condition on that right side, we're going to adjust the frequency of the wave. However, that will affect the second derivative, since higher frequency waves curve more sharply, and lower frequency ones curve more gently. Changing the frequency means introducing some constant, say omega, multiplied by the input of this function. A higher value of omega means the wave oscillates more quickly, since as you increase x, the input to the cosine increases more rapidly. Taking the derivative with respect to x, we still get negative sine, but the chain rule tells us to multiply by that omega on the outside, and similarly, the second derivative will still be negative cosine, but now with an omega squared factor. This means that the right-hand side of our equation has now picked up this omega squared term. So, to balance things out on the left-hand side, the exponential decay part should have an additional omega squared term up top. Unpacking what that actually means should feel intuitive. For a temperature function filled with sharper curves, it decays more quickly towards an equilibrium, and evidently it does this quadratically. For instance, doubling the frequency results in an exponential decay four times as fast. If the length of the rod is L, then the lowest frequency where that rightmost point of the distribution will be flat is when omega is equal to pi divided by L. You see, that way, as x increases up to the value L, the input of our cosine expression goes up to pi, which is half the period of a cosine wave. Finding all the other frequencies which satisfy this boundary condition is sort of like finding harmonics. You essentially go through all the whole number multiples of this base frequency, pi over L. In fact, even multiplying it by zero works, since that gives us a constant function, which is indeed a valid solution, boundary condition and all. And with that, we're off the bumpy boundary condition detour and back onto the freeway. Onward, we're equipped with an infinite family of functions satisfying both the PDE and the pesky boundary condition. Things are definitely looking more intricate now, but it all stems from the one basic observation that a function which looks like a sine curve in space and an exponential decay in time fits this equation, relating second derivatives in space with first derivatives in time. And of course, if the formulas are starting to look more intricate, remember you're solving a genuinely hard problem. This actually makes for a pretty good stopping point, so let's call it an end here, and in the next video, we'll look at how to use this infinite family to construct a more general solution.
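Collecting the pieces of this video into symbols (my notation, since the video works mostly in pictures), the PDE, the boundary condition, and the infinite family of solutions are:

```latex
\[
  \frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2},
  \qquad
  \frac{\partial T}{\partial x}(0, t) = \frac{\partial T}{\partial x}(L, t) = 0
  \quad \text{for } t > 0,
\]
\[
  T_n(x, t) = \cos\!\left(\frac{n \pi x}{L}\right)
              e^{-\alpha \left(n \pi / L\right)^2 t},
  \qquad n = 0, 1, 2, \dots
\]
```

Note how the frequency enters the decay rate squared, matching the observation that doubling the frequency quadruples the decay, and how n equal to zero gives the constant solution.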
To any of you worried about dwelling too much on a single example in a series that's meant to give you a general overview of differential equations, it's worth emphasizing that many of the considerations which pop up here are frequent themes throughout the field. First off, the fact that we modeled the boundary with its own special rule, while the main differential equation only characterized the interior, is a very regular theme, and a pattern well worth getting used to, especially in the context of PDEs. Also, take note of how what we're doing is breaking down a general situation into simpler idealized cases. This strategy comes up all the time, and it's actually quite common for these simpler cases to look like some mixture of sine curves and exponentials, that's not at all unique to the heat equation, and as time goes on, we're going to get a deeper feel for why that's true.
Higher order derivatives | Chapter 10, Essence of calculus
In the next chapter, about Taylor series, I make frequent reference to higher order derivatives. And if you're already comfortable with second derivatives, third derivatives, and so on, great, feel free to just skip ahead to the main event now. You won't hurt my feelings. But somehow, I've managed not to bring up higher order derivatives at all so far in this series. So, for the sake of completeness, I thought I'd give you this little footnote just to go over them very quickly. I'll focus mainly on the second derivative, showing what it looks like in the context of graphs and motion, and leave you to think about the analogies for higher orders. Given some function, f of x, the derivative can be interpreted as the slope of this graph above some point, right? A steep slope means a high value for the derivative, a downward slope means a negative derivative. So the second derivative, whose notation I'll explain in just a moment, is the derivative of the derivative, meaning it tells you how that slope is changing. The way to see that at a glance is to think about how the graph of f of x curves. At points where it curves upwards, like this, the slope is increasing, and that means the second derivative is positive. At points where it's curving downwards, the slope is decreasing, so the second derivative is negative. For example, a graph like this one has a very positive second derivative at the point 4, since the slope is rapidly increasing around that point, whereas a graph like this one still has a positive second derivative at the same point, but it's smaller, I mean, the slope only increases slowly. At points where there's not really any curvature, the second derivative is just zero. As far as notation goes, you could try writing it like this, indicating some small change to the derivative function divided by some small change to x, where, as always, the use of this letter d suggests that what you really want to consider is what this ratio approaches as dx, both dx's in this case, approaches zero. That's pretty awkward and clunky, so the standard is to abbreviate this as d squared f divided by dx squared. And even though it's not terribly important for getting an intuition for the second derivative, I think it might be worth showing you how you can read this notation. To start off, think of some input to your function, and then take two small steps to the right, each one with a size of dx. I'm choosing rather big steps here so that we'll be able to see what's going on, but in principle, keep in the back of your mind that dx should be rather tiny. The first step causes some change to the function, which I'll call df1, and the second step causes some similar, but possibly slightly different, change, which I'll call df2. The difference between these changes, the change in how the function changes, is what we'll call ddf. You should think of this as really small, typically proportional to the size of dx squared. So if, for example, you substituted in 0.01 for dx, you would expect this ddf to be about proportional to 0.0001. And the second derivative is the size of this change to the change, divided by the size of dx squared. Or, more precisely, it's whatever that ratio approaches as dx approaches zero. Even though it's not like this letter d is a variable being multiplied by f, for the sake of more compact notation, you'd write it as d squared f divided by dx squared, and you don't typically bother with any parentheses on the bottom.
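Here is a small numeric sketch of that notation; the function and the step size are arbitrary choices of mine:

```python
def f(x):
    return x ** 3   # an arbitrary function; its second derivative is 6x

x0, dx = 2.0, 0.01

df1 = f(x0 + dx) - f(x0)            # change caused by the first step
df2 = f(x0 + 2 * dx) - f(x0 + dx)   # change caused by the second step
ddf = df2 - df1                     # the change in how the function changes

print(ddf)          # about 0.0012, roughly proportional to dx squared
print(ddf / dx**2)  # about 12.06, approaching f''(2) = 12 as dx shrinks
```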
Maybe the most visceral understanding of the second derivative is that it represents acceleration. Given some movement along a line, suppose you have some function that records the distance traveled versus time. Maybe its graph looks something like this, steadily increasing over time. Then its derivative tells you velocity at each point in time, right? For example, the graph might look like this bump, increasing up to some maximum, and then decreasing back to 0. So the second derivative tells you the rate of change for the velocity, which is the acceleration at each point in time. In this example, the second derivative is positive for the first half of the journey, which indicates speeding up. That's the sensation of being pushed back into your car seat, or rather having the car seat push you forward. A negative second derivative indicates slowing down, negative acceleration. The third derivative, and this is not a joke, is called jerk. So if the jerk is not zero, it means that the strength of the acceleration itself is changing. One of the most useful things about higher-order derivatives is how they help us in approximating functions, which is exactly the topic of the next chapter on Taylor series. So I'll see you there.
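As a sketch of that motion picture, here is a made-up distance function differentiated numerically three times (the function and sample points are my own invention):

```python
import numpy as np

t = np.linspace(0, 10, 1001)
s = t**2 * (10 - t) / 10   # distance traveled: speeds up, then slows down

v = np.gradient(s, t)      # first derivative: velocity
a = np.gradient(v, t)      # second derivative: acceleration
j = np.gradient(a, t)      # third derivative: jerk

print(a[100] > 0)    # True: positive acceleration early on (speeding up)
print(a[-100] < 0)   # True: negative acceleration near the end (slowing down)

# For this cubic, the jerk is constant, -0.6, away from the endpoints
# (where the numeric scheme is less accurate):
print(np.allclose(j[2:-2], -0.6))
```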
Tattoos on Math
Hey folks, just a short, kind of out-of-the-ordinary video for you today. A friend of mine, Cam, recently got a math tattoo. It's not something I'd recommend, but he told his team at work that if they reached a certain stretch goal, it's something he'd do. And, well, the incentive worked. Cam's initials are CSC, which happens to be the shorthand for the cosecant function in trigonometry, so what he decided to do is make his tattoo a certain geometric representation of what that function means. It's kind of like a wordless signature written in pure math. It got me thinking, though, about why on earth we teach students about the trigonometric functions cosecant, secant, and cotangent. And it occurred to me that there's something kind of poetic about this particular tattoo. Just as tattoos are artificially painted on, but become permanent as if they were a core part of the recipient's flesh, the fact that the cosecant is a named function is kind of an artificial construct on math. Trigonometry could just as well have existed intact without the cosecant ever being named, but because it was, it has this strange and artificial permanence in our conventions, and to some extent in our education system. In other words, the cosecant is not just a tattoo on Cam's chest, it's a tattoo on math itself, something which seemed reasonable and even worthy of immortality at its inception, but which doesn't necessarily hold up as time goes on. Here, let me actually show you all a picture of the tattoo that he chose, because not a lot of people know the geometric representation of the cosecant. Whenever you have an angle, typically represented with the Greek letter theta, it's common in trigonometry to relate it to a corresponding point on the unit circle, the circle with radius one centered at the origin in the xy-plane. Trigonometry students learn that the distance between this point here on the circle and the x-axis is the sine of the angle, and the distance between that point and the y-axis is the cosine of the angle. These lengths give a really wonderful understanding for what cosine and sine are all about. People might learn that the tangent of an angle is sine divided by cosine, and that the cotangent is the other way around, cosine divided by sine, but relatively few learn that there's also a nice geometric interpretation for each of those quantities. If you draw a line tangent to the circle at this point, the distance from that point to the x-axis along that tangent is, well, the tangent of the angle. And the distance along that line to the point where it hits the y-axis, well, that's the cotangent of the angle. Again, this gives a really intuitive feel for what those quantities mean. You kind of imagine tweaking that theta and seeing how cotangent gets smaller as tangent gets larger, and it's a good gut check for any students working with them. Likewise, secant, which is defined as one divided by the cosine, and cosecant, which is defined as one divided by the sine of theta, each have their own places on this diagram. If you look at that point where this tangent line crosses the x-axis, the distance from that point to the origin is the secant of the angle, that is, one divided by the cosine. Likewise, the distance between where this tangent line crosses the y-axis and the origin is the cosecant of the angle, that is, one divided by the sine. If you're wondering why on earth that's true, notice that we have two similar right triangles here.
One small one inside the circle, and this larger triangle whose hypotenuse is resting on the y-axis. I'll leave it to you to check that the interior angle up at the tip there is theta, the angle that we originally started with over inside the circle. Now, for each one of those triangles, I want you to think about the ratio of the length of the side opposite theta to the length of the hypotenuse. For the small triangle, the length of the opposite side is sine of theta, and the hypotenuse is that radius, the one that we defined to have length 1, so the ratio is just sine of theta divided by 1. Now, when we look at the larger triangle, the side opposite theta is that radial line of length 1, and the hypotenuse is now this length on the y-axis, the one that I'm claiming is the cosecant. If you take the reciprocal of each side here, you see that this matches up with the fact that the cosecant of theta is 1 divided by sine. Kind of cool, right? It's also kind of nice that sine, tangent, and secant all correspond to lengths of lines that somehow go to the x-axis, and then the corresponding cosine, cotangent, and cosecant are all then lengths of lines going to the corresponding spots on the y-axis. And on a diagram like this, it might be pleasing that all six of these are separately named functions. But in any practical use of trigonometry, you can get by just using sine, cosine, and tangent. In fact, if you really wanted, you could define all six of these in terms of sine alone. But the sort of things that cosine and tangent correspond to come up frequently enough that it's more convenient to give them their own names. But cosecant, secant, and cotangent never really come up in problem solving in a way that's not just as convenient to write in terms of sine, cosine, and tangent. At that point, it's really just adding more words for students to learn, with not that much added utility. And if anything, if you only introduce secant as one over cosine and cosecant as one over sine, the mismatch of this co- prefix is probably just an added point of confusion in a class that's prone enough to confusion for many of its students. The reason that all six of these functions have separate names, by the way, is that before computers and calculators, if you were doing trigonometry, maybe because you're a sailor or an astronomer or some kind of engineer, you'd find the values for these functions using large charts that just recorded known input-output pairs. And when you can't easily plug something like one divided by the sine of 30 degrees into a calculator, it might actually make sense to have a dedicated column for this value, with a dedicated name. And if you have a diagram like this one in mind when you're taking measurements, with sine, tangent, and secant having nicely mirrored meanings to cosine, cotangent, and cosecant, calling this cosecant instead of one divided by sine might actually make some sense, and it might actually make it easier to remember what it means geometrically. But times have changed, and most use cases for trig just don't involve charts of values and diagrams like this. Hence, the cosecant and its brothers are tattoos on math, ideas whose permanence in our conventions is our own doing, not the result of nature itself. And in general, I actually think this is a good lesson for any student learning a new piece of math, at whatever level.
You just gotta take a moment and ask yourself whether what you're learning is core to the flesh of math itself, and to nature itself, or if what you're looking at is actually just inked onto the subject, and could just as easily have been inked on in some completely other way.
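As a quick numeric check of the picture described in this video, here is a sketch of my own, with an arbitrary angle: the tangent line to the unit circle at the point (cos theta, sin theta) crosses the axes at exactly the secant and the cosecant.

```python
import numpy as np

theta = 0.7                                    # an arbitrary angle
p = np.array([np.cos(theta), np.sin(theta)])   # the point on the unit circle
d = np.array([-np.sin(theta), np.cos(theta)])  # direction of the tangent line

s_to_x_axis = -p[1] / d[1]            # parameter where the y-coordinate is 0
x_cross = p[0] + s_to_x_axis * d[0]   # where the tangent hits the x-axis

s_to_y_axis = -p[0] / d[0]            # parameter where the x-coordinate is 0
y_cross = p[1] + s_to_y_axis * d[1]   # where the tangent hits the y-axis

print(np.isclose(x_cross, 1 / np.cos(theta)))  # True: the secant
print(np.isclose(y_cross, 1 / np.sin(theta)))  # True: the cosecant
```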
How colliding blocks act like a beam of light...to compute pi
You know that feeling you get when you have two mirrors facing each other, and it gives the illusion of there being an infinite tunnel of rooms? Or, if they're at an angle with each other, it makes you feel like you're a part of a strange kaleidoscopic world with many copies of yourself, all separated by angled pieces of glass. What many people may not realize is that the idea underlying these illusions can be surprisingly helpful for solving serious problems in math. We've already seen two videos describing the block collision puzzle, with its wonderfully surprising answer. Big block comes in from the right, lots of clacks, the total number of clacks looks like pi, and we want to know why. Here we'll see one more perspective explaining what's going on, where, if the connection to pi wasn't surprising enough, we add one more unexpected connection to optics. But we're doing more than just answering the same question twice. This alternate solution gives a much richer understanding of the whole setup, and it makes it easier to answer other questions. And, fun side note, it happens to be core to how I coded the accurate simulations of these blocks without requiring absurdly small time steps and huge computation time. The solution from the last video involved a coordinate plane where each point encodes a pair of velocities. Here we'll do something similar, but the points of our plane are going to encode the pair of positions of both blocks. Again, the idea is that by representing the state of a changing system with individual points in some space, problems in dynamics turn into problems in geometry, which hopefully are more solvable. Specifically, let the x-coordinate of a 2D plane represent the distance from the wall to the left edge of the first block, what I'll call d1, and let the y-coordinate represent the distance from the wall to the right edge of the second block, what we'll call d2. That way, the line y equals x shows us where the two blocks clack into each other, since this happens whenever d1 is equal to d2. Here's what it looks like for our scenario to play out. As the two distances of our blocks change, the two-dimensional points of our configuration space move around, with positions that always fully encode the information of those two distances. You may notice that at the bottom there, it's bounded by a line where d2 is the same as the small block's width, which, if you think about it, is what it means for the small block to hit the wall. You may be able to guess where we're going with this. The way this point bounces between the two bounding lines is a bit like a beam of light bouncing between two mirrors. The analogy doesn't quite work, though. In the lingo of optics, the angle of incidence doesn't equal the angle of reflection. Just think of the first collision. A beam of light coming in from the right would bounce off of a 45-degree angled mirror, this x equals y line, in such a way that it ends up going straight down, which would mean that only the second block is moving. This does happen in the simplest case, where the second block has the same mass as the first and picks up all of its momentum, like a croquet ball. But in the general case, for other mass ratios, that first block keeps much of its momentum, so the trajectory of our point in this configuration space won't be pointed straight down. It'll be down and to the left a bit.
And even if it's not immediately clear why this analogy with light would actually be helpful, and trust me, it will be helpful in many ways, run with me here and see if we can fix this for the general case. Seeking analogies in math is very often a good idea. As with the last video, it's helpful to rescale the coordinates. In fact, motivated by precisely what we did then, you might think to rescale the coordinates so that x is not equal to d1, but is equal to the square root of the first mass, m1, times d1. This has the effect of stretching our space horizontally, so changes in our big block's position now result in larger changes to the x-coordinate itself. And likewise, let's write the y-coordinate as square root of m2 times d2, even though in this particular case the second mass is 1, so it doesn't make a difference, but let's keep things symmetric. Maybe this strikes you as making things uglier, and kind of a random thing to do, but as with last time, when we include square roots of masses like this, everything plays more nicely with the laws of conserving energy and momentum. Specifically, the conservation of energy will translate into the fact that our little point in the space is always moving at the same speed, which in our analogy you might think of as meaning there's a constant speed of light. And the conservation of momentum will translate to the fact that as our point bounces off of the mirrors of our setup, so to speak, the angle of incidence equals the angle of reflection. Doesn't that seem bizarre, in kind of a delightful way, that the laws of kinematics should translate to laws of optics like this? To see why it's true, let's roll up our sleeves and work out the actual math. Focus on the velocity vector of our point in the diagram; it shows which direction it's moving and how quickly. Now keep in mind, this is not a physical velocity, like the velocities of the moving blocks. Instead, it's a more abstract rate of change in the context of this configuration space, whose two dimensions' worth of possible directions encode both velocities of the blocks. The x-component of this little vector is the rate of change of x, and likewise, its y-component is the rate of change of y. But what is that rate of change for the x-coordinate? Well, x is the square root of m1 times d1, and the mass doesn't change, so it depends only on how d1 changes. And what's the rate at which d1 changes? Well, that's the velocity of the big block. Let's go ahead and call that v1. Likewise, the rate of change for y is going to be the square root of m2 times v2. Now, notice what the magnitude of our little configuration-space change vector is. Using the Pythagorean theorem, it's the square root of the sum of each of these component rates of change squared, which is the square root of m1 times v1 squared plus m2 times v2 squared. This inner expression should look awfully familiar: it's exactly twice the kinetic energy of our system. So the speed of our point in the configuration space is some function of the total energy, and that stays constant throughout the whole process. Remember, a core over-idealizing assumption to this is that there's no energy lost to friction or to any of the collisions. All right, so that's pretty cool. With these rescaled coordinates, our little point is always moving with a constant speed. And I know it's not obvious why you would care, but among other things, it's important for the next step, where the conservation of momentum implies that these two bounding lines act like mirrors.
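Before working through that mirror property, here is a quick numeric sanity check of the constant-speed claim, a sketch with made-up numbers; the elastic-collision update is the standard textbook formula, not something derived in the video:

```python
import numpy as np

m1, m2 = 100.0, 1.0   # arbitrary masses
v1, v2 = -1.0, 0.0    # big block slides in; small block starts at rest

def config_speed(v1, v2):
    # Speed of the rescaled configuration-space point:
    # sqrt(m1*v1^2 + m2*v2^2), the square root of twice the kinetic energy.
    return np.hypot(np.sqrt(m1) * v1, np.sqrt(m2) * v2)

before = config_speed(v1, v2)

# Standard 1D elastic collision update (conserves energy and momentum):
v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
          ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))

print(np.isclose(before, config_speed(v1, v2)))  # True: speed is unchanged
```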
First, let's understand this line d1 equals d2 a little bit better. In our new coordinates, it's no longer that nice 45 degree x equals y line. Instead, if we do a little algebraic manipulation here, we can see that that line is x over square root m1 equals y over square root m2. Rearranging a little bit more, we see that's a line with a slope of square root m2 over m1. That's a nice expression to tuck away in the back of your mind. After the blocks collide, meaning our point hits this line, the way to figure out how they move is to use the conservation of momentum, which says that the value m1 times v1 plus m2 times v2 is the same both before and after the collision. Now notice, this looks like a dot product between two column vectors, m1 m2 and v1 v2. Rewriting it slightly for our rescaled coordinates, the same thing could be written as a dot product between a column vector with the square roots of the masses and one with the rates of change for x and y. I know this probably seems like a complicated way to talk about a comparatively simple momentum equation, but there is a good reason for shifting the language to one of dot products in our new coordinates. Notice that second vector is simply the rate of change vector for the point in our diagram that we've been looking at. The key now is that this square root of the masses vector points in the same direction as our collision line, since the rise over run is square root m2 over square root of m1. Now if you're unfamiliar with the dot product, there is another video on this channel describing it, but real quick, let's go over what it means geometrically. The dot product of two vectors equals the length of the first one multiplied by the length of the projection of the second one onto that first, where it's considered negative if they point in opposite directions. You often see this written as the product of the lengths of the two vectors and the cosine of the angle between them. So look back at this conservation of momentum expression telling us that the dot product between this square root of the masses vector and our little change vector has to be the same, both before and after the collision. Since we just saw that this change vector has a constant magnitude, the only way for this dot product to stay the same is if the angle that it makes with the collision line stays the same. In other words, again using the lingo of optics, the angle of incidence and the angle of reflection off this collision line must be equal. Similarly, when the small block bounces off the wall, our little vector gets reflected about the x direction since only its y-coordinate changes. So our configuration point is bouncing off that horizontal line as if it was a mirror. So step back a moment and think about what this means for our original question of counting block collisions and trying to understand why on earth pi would show up. We can translate it to a completely different question. If you shine a beam of light at a pair of mirrors, meeting each other at some angle, let's say theta, how many times would that light bounce off of the mirrors as a function of that angle? Remember, the mass ratio of our blocks completely determines this angle theta in the analogy. Now I can hear some of you complaining. Haven't we just replaced one tricky setup with another? This might make for a cute analogy, but how is it progress? It's true that counting the number of light bounces is hard, but now we have a helpful trick. 
When the beam of light hits the mirror, instead of thinking of that beam as reflected about the mirror, think of the beam as going straight while the whole world gets flipped through the mirror. It's as if the beam is passing through a piece of glass into an illusory looking-glass universe. Think of actual mirrors here. This wire on the left will represent a laser beam coming into the mirror, and the one on the right will represent its reflection. The illusion is that the beam goes straight through the mirror, as if passing through a window separating us from another room. But notice, crucially, for this illusion to work, the angle of incidence has to equal the angle of reflection. Otherwise, the flipped copy of the reflected beam won't line up with the first part. So all of that work we did rescaling coordinates and futzing through the momentum equations was certainly necessary. But now we get to enjoy the fruits of our labor. Watch how this helps us elegantly solve the question of how many mirror bounces there will be, which is also the question of how many block collisions there will be. Every time the beam hits a mirror, don't think of the beam as getting reflected; let it continue straight while the world gets reflected. As this goes on, the illusion to the beam of light is that instead of getting bounced around between two angled mirrors many times, it's passing through a sequence of angled pieces of glass, all the same angle apart. Right now, I'm showing you all of the reflected copies of the bouncing trajectory, which I think has a very striking beauty to it. But for a clearer view, let's just focus on the original bouncing beam and the illusory straight one. The question of counting bounces turns into a question of how many pieces of glass this illusory beam crosses. How many reflected copies of the world does it pass into? Well, calling the angle between the mirrors theta, the answer here is however many times you can add theta to itself before you get more than halfway around a circle, which is to say before you add up to more than pi total radians. Written as a formula, the answer to this question is the floor of pi divided by theta. So let's review. We started by drawing a configuration space for our colliding blocks, where the x and the y coordinates represented the two distances from the wall. This kind of looked like light bouncing between two mirrors, but to make the analogy work properly, we needed to rescale the coordinates by the square roots of the masses. This made it so that the slope of one of our lines was square root of m2 divided by square root of m1. So the angle between those bounding lines will be the inverse tangent of that slope. To figure out how many bounces there are between two mirrors like this, think of the illusion of the beam going straight through a sequence of looking-glass universes, separated by a semicircular fan of windows. The answer then comes down to how many times the value of this angle fits into 180 degrees, which is pi radians. From here, to understand why exactly the digits of pi show up when the mass ratio is a power of 100, it's exactly what we did in the last video, so I won't repeat myself here. And finally, as we reflect now on how absurd the initial appearance of pi seemed, and on the two solutions we've now seen, and on how unexpectedly helpful it can be to represent the state of your system with points in some space, I leave you with this quote from the computer scientist Alan Kay: a change in perspective is worth 80 IQ points.
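Putting that final formula into a few lines (a sketch; the ceiling-minus-one form is just the floor of pi over theta, written to also handle the edge case where pi divided by theta is exactly an integer, as with equal masses):

```python
import math

def collision_count(mass_ratio):
    # theta is the angle between the two mirrors: arctan(sqrt(m2 / m1)).
    theta = math.atan(math.sqrt(1 / mass_ratio))
    # Largest whole number of copies of theta that stays strictly under pi:
    return math.ceil(math.pi / theta) - 1

for k in range(4):
    print(100 ** k, collision_count(100 ** k))  # 3, 31, 314, 3141 clacks
```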
Essence of linear algebra preview
Hey everyone, so I'm pretty excited about the next sequence of videos that I'm doing. They'll be about linear algebra, which, as a lot of you know, is one of those subjects that's required knowledge for just about any technical discipline. But it's also, I've noticed, generally poorly understood by students taking it for the first time. A student might go through a class and learn how to compute lots of things, like matrix multiplication, or the determinant, or cross products, which use the determinant, or eigenvalues, but they might come out without really understanding why matrix multiplication is defined the way that it is, why the cross product has anything to do with the determinant, or what an eigenvalue really represents. Oftentimes, students end up well practiced in the numerical operations of matrices, but are only vaguely aware of the geometric intuitions underlying it all. But there's a fundamental difference between understanding linear algebra on a numeric level and understanding it on a geometric level. Each has its place, but, roughly speaking, the geometric understanding is what lets you judge what tools to use to solve specific problems, feel why they work, and know how to interpret the results, and the numeric understanding is what lets you actually carry through the application of those tools. Now, if you learn linear algebra without getting a solid foundation in that geometric understanding, the problems can go unnoticed for a while, until you've gone deeper into whatever field you happen to pursue, whether that's computer science, engineering, statistics, economics, or even math itself. Once you're in a class, or a job for that matter, that assumes fluency with linear algebra, the way that your professors or your coworkers apply that field could seem like utter magic. They'll very quickly know what the right tool to use is, and what the answer roughly looks like, in a way that would seem like computational wizardry if you assume that they're actually crunching all the numbers in their head. Here, as an analogy, imagine that when you first learned about the sine function in trigonometry, you were shown this infinite polynomial. This, by the way, is how your calculator evaluates the sine function. For homework, you might be asked to practice computing approximations of the sine function by plugging in various numbers to the formula and cutting it off at a reasonable point. And in fairness, let's say you had a vague idea that this was supposed to be related to triangles, but exactly how had never really been clear and was just not the focus of the course. Later on, if you took a physics course, where sines and cosines are thrown around left and right, and people are able to tell pretty immediately how to apply them, and roughly what the sine of a certain value will be, it would be pretty intimidating, wouldn't it? It would make it seem like the only people who are cut out for physics are those with computers for brains, and you would feel unduly slow or dumb for taking so long on each problem. It's not that different with linear algebra, and luckily, just as with trigonometry, there are a handful of intuitions, visual intuitions, underlying much of the subject. And unlike the trig example, the connection between the computation and these visual intuitions is typically pretty straightforward.
And when you digest these and really understand the relationship between the geometry and the numbers, the details of the subject, as well as how it's used in practice, start to feel a lot more reasonable. In fairness, most professors do make an effort to convey that geometric understanding, the sine example is a little extreme. But I do think that a lot of courses have students spending a disproportionate amount of time on the numerical side of things, especially given that in this day and age, we almost always get computers to handle that half, while in practice, humans worry about the conceptual half. So this brings me to the upcoming videos. The goal is to create a short, binge watchable series animating those intuitions from the basics of vectors up through the core topics that make up the essence of linear algebra. I'll put out one video per day for the next five days, then after that, put out a new chapter every one to two weeks. I think it should go without saying that you cannot learn a full subject with a short series of videos, and that's just not the goal here. But what you can do, especially with this subject, is lay down all the right intuitions, so the learning that you do moving forward is as productive and fruitful as it can be. I also hope this can be a resource for educators who are teaching courses that assume fluency with linear algebra, giving them a place to direct students that need a quick brush up. I'll do what I can to keep things well paced throughout, but it's hard to simultaneously account for different people's different backgrounds and levels of comfort, so I do encourage you to readily pause and ponder if you feel that it's necessary. Actually, I'd give that same advice for watching any math video, even if it doesn't feel too quick, since the thinking that you do on your own time is where all the learning really happens, don't you think? So with that as an introduction, I'll see you next video.
Q&A with Grant Sanderson (3blue1brown)
What is a Gröbner basis? If that is your intent for what this Q&A episode is going to be, as far as technicality and deep explanation is concerned, you're going to be grossly disappointed. Same goes for whoever asked about what the Fourier transform has to do with quantum computing. I can say at a high level it's because the Fourier transform gives you a unitary operation, and quantum computing is very fast when it comes to anything that can be expressed as a unitary matrix. But those words won't make sense if you don't already know what they mean, and this is not at all meant to be a video that's going to go into some deep math explanation. But I will cover quantum computing at some point. What would you do professionally if it weren't for YouTube, slash, what are you doing professionally? So, a lot of you might know I used to work for Khan Academy, and I think if I wasn't doing this, I would definitely seek out some other way of doing math outreach online. The intent of the question is maybe more, what would I do that has nothing to do with math outreach? I spent a lot of time doing random software engineering things through college; like, my summer internships were often spent at a tech company rather than doing something explicitly math related. But if I really turn on that parallel universe machine, I think going into data science was a very real possibility. At one of the internships that I was doing, at the end of it, they asked if, instead of going back to college, I wanted to just stick around, maybe have a full-time job, and just see what unfolded there. And I seriously considered it. You know, it was pretty compelling. Ultimately, the love for pure math won out, so I did go back to school, as you're supposed to. But I kind of do wonder what would have been if I'd instead gone down that professional data science route. How arbitrary do you think our mathematical perspective is, as humans on earth? If an alien civilization developed math from scratch, do you think we would see clear similarities in their development of the fields, like number theory, trigonometry, and calculus? Now this is an interesting question, because it cuts right to the heart of whether math is invented or discovered, and I like the phrasing, where you kind of imagine an alien civilization coming and comparing your math to theirs. It's hard to speculate on this, right? Like, I have no idea. If some aliens came, we'd have no way of knowing whether their math would look completely different from ours. One thing you can be pretty sure of, and this might seem superficial, is that the notation would be entirely different, right? There's a lot of arbitrary choices in how we write things down: Newton's notation for calculus versus Leibniz's notation for calculus, you know, a lot of the really silly things we have, like which side of the variable the function goes on, writing out the letters s-i-n for sine and cosine. I had the whole triangle of power video about, you know, notations for radicals and exponentials. And on the one hand, that might not feel substantive, but I think it's really interesting to contemplate ways where the notation shapes the way that we think about it, and shapes the axioms and theorems we even choose, more so than we give it credit for. So, a project I'm actively working on right now is about quaternions.
And I was a little bit surprised to learn how up in the air the potential notations and conventions for teaching students about vectors were. Like, a lot of the actual notation and terminology we have for vectors, and cross products, and dot products, the way we think of them in 3D, ultimately stems from quaternions. You know, even the fact that we use i, j, and k as letters to represent the x, y, and z directions. And if Hamilton had had his way, we would still teach engineering students primarily about quaternions, and then things like the dot product and cross product would be viewed as subsets of what the quaternions do and what quaternion multiplication is. And I think there's a compelling case to be made for the fact that we would use that, if we could visualize four dimensions better. But the reason that quaternions never really won out as the notation du jour is because they're confusing, because no one really understood them. There's all sorts of hilarious quotes from Lord Kelvin and the like about how quaternions are just needlessly confuddling when you're trying to phrase some fact about the universe. Like, Maxwell's equations were originally written much more quaternionically than we teach them to students now, and arguably they're much more elegant that way. But it's confusing because we can't visualize it. So I think if you had some alien civilization that came, but they had a very good spatial conception for four dimensions, they would look at our vector notation and think that it was not capturing the deeper realities of math. Arguably. Who knows? What do you think is the main thing that drives people away from math? Always hard to answer on these kinds of things, but I really suspect that as soon as you wrap something in a certain kind of judgment, where there's a notion of being correct or incorrect, or an implicit statement that there's a notion of being good at math, some people are math people, some people aren't math people, as soon as you get someone identifying that they're not a math person, first, you know, insinuating that that even makes sense, and then insinuating that they fall into that, like, of course you're not going to like it. Of course your natural mind churnings aren't going to go in the direction of some puzzle, because you'd much rather think about things that you're good at and that make you feel happy. All of the latest stuff about growth mindsets, Carol Dweck and Jo Boaler are really behind that, you know, the idea that if you're trying to tell a student something about how they're doing with math, rather than framing it around, oh, you must be so smart, frame it around, oh, you must have worked very hard, you must have put a lot of time into that. There's a lot of much less judgmental things that we have out there, like reading. Even though there are some notions of reading comprehension tests for students in school, and you're reading at an eighth grade level, people usually aren't like, oh, I'm not a reading person, right? Like, those words just make sense to some people, but for me, those letters, I don't know how they come together. When it comes to contest math, like the AMC, I think those can be really good for high schoolers as a bank of problems. I think they can be really bad for high schoolers as an insinuation that there's some, like, top tier of math folk, and they can do these problems really quickly.
But if you give the same questions to a student, and rather than forcing them to go through all of these in 75 minutes, you say, let's spend 30 minutes on just one of them, right, really delving into it, they're really solid problems that engage the spirit of problem solving. And, you know, removing that judgmental aspect, removing that time aspect, I think can help out a lot. A lot of people ask about certain things that I've made promises for but haven't necessarily delivered on. In a recent video, you know, I did one on divergence and curl, and I mentioned at the end an example of using complex numbers to model fluid flow, and a certain model for flow around a wing, and you might notice I have yet to actually put out a video on that. And I've certainly seen a number of commenters, you know, hammering on me for that fact. If there's ever a thing that I promise and then I don't make a video on it, it's probably because I spent a good amount of time trying to write a script for it that I just didn't feel was compelling for whatever reason. And I think maybe the granddaddy here is the probability series, which at the moment has five videos that I've made that are, you know, released to patrons. I just don't feel great about them, and I kind of want the stuff that I put out to you guys to be something I feel is, if not original, something that wouldn't be out there otherwise from other creators, and there's a lot of good probability material online. I will probably do something to release the material that I have, either just as it is, but on some second channel, with the acknowledgement, hey, this isn't the greatest work I think I've done, or trying to rework them and make them standalone. But as far as, you know, essence-of-blank content, I feel much clearer about how I would want to extend the linear algebra series, rather than spinning my wheels on certain scripts and animations that I ultimately don't think are going to deliver something to you guys that I would feel proud of. Do you have any questions to Brille? I'm just reading from some Reddit ones here, but we could do something live. How much compromise, if any, do you have to give between what you can animate and what your script is trying to convey? Usually, if I can't animate a thing, and it's a mathematical thing, not, like, a frivolous cartoonish type thing, I change the tool so that it can animate that thing, right? And that might take more time, and it's possible that, subconsciously, that means I resist topics that I know would be more difficult to animate. I don't think that happens. I like to use that to encourage creation of new things, right? Like, on the divergence and curl video, I didn't have good fluid flow stuff, but it was fun to play around with that. For quaternions right now, I think there's a lot of 3D-related things that I wanted to sort of upgrade, because the previous way I was doing a lot of 3D animations was clunky and not as extensible as I wanted it to be. So usually that's a good excuse to just improve the graphics tool.
We have somewhere a question on here, what sort of music do you listen to, which I mostly wanted to answer to, like, mention my renewed love for the Punch Brothers. I don't know if any of you know about them. They're actually super weird, they're, like, an avant-garde bluegrass band. It's just five geniuses who get together and put out phenomenal art, so can't complain about that. How do you compare making your videos to making videos for Khan Academy? So, very different processes, right? Like, at Khan Academy, you imagine sitting next to someone and tutoring them and just explaining it. You're writing everything by hand, and for the most part you do it live. On this channel, I obviously script things, and I put a lot of time into creating the visuals for it, sometimes in a way that makes me feel, you know, at Khan Academy I could sit down and make, like, three videos in an afternoon, and here it's taking me, like, three weeks to do one video. You know, which of these actually carries more of an impact? I think there's a proper balance for both of them, and I think there's a lot of people out there who do the Khan-style stuff, to include Khan Academy, but also many others. The way I like to think about things is, what wouldn't happen if I wasn't doing it? But there is that little part of me that thinks maybe I should start some sort of second channel on the super cheap, just, like, me and a notebook and a pencil, scrapping through some sort of explanation super quickly. Who makes the awesome music playing in your videos? Vince Rubinetti. Link in the description. Links in all of the descriptions, actually. He does really good work, and just go, you know, download some of the music and leave him a little tip if you feel like it's something that you enjoy. What is your favorite... Palomano? Palomano? Palimano? This one I'll figure out later and insert it on the screen. All right, folks, thanks for watching. Stick around for whenever the next upload is. It's going to be on quaternions, and I hope you like it. This is your close one, this is your wide... does that mean I should be looking at the wide one? What if I do a dramatic, like, camera number two?
Lockdown math announcement
As many of you know, with the coronavirus outbreak still very much underway, there's a huge number of students who are left to learn remotely from home, whether that means doing distance classes over video conference, or trying to find resources like Khan Academy and Brilliant to learn online. So one thing that I wanted to do in the coming weeks, which is very different for me on this channel, is to do some live-streamed lectures specifically targeted at high school students. With each lecture, I want to cover something that's a standard high school topic that most high schoolers will be expected to learn, but at the same time to have some kind of intriguing angle on it that's a little bit different from what most people might have seen, just so that if you aren't a high school student and you're someone like me, there's still something interesting about it. For example, the very first lesson is going to be on a simpler version of the quadratic formula, and while I was putting together this lesson, I honestly felt a little bit mad that this isn't the way I learned things when I was in high school. So if you can get that same feeling, I'm going to call that mission success. One thing I'm particularly excited about is a little piece of technology that two good friends of mine, who I used to work with at Khan Academy, have been working on, which I think should make the dynamic between the audience and the progression of the lecture feel a little bit tighter than it usually does in some kind of live stream situation. I don't want to say anything more; I would just say show up, be prepared to answer some questions, and to ask questions too. My goal is for it to feel as much like a real class as possible. Most of the dynamic is just going to be you and me talking through problems on a piece of paper, and even though I love to visualize stuff and put out animations, and that's kind of what the whole channel is about, to be honest, I think just working through things on paper feels more like what actual math is to me, and what the process of finding new ideas and coming to terms with them yourself looks like. The tentative plan right now is to do every Friday and Tuesday at noon Pacific time, but if anything changes on that, you'll see the schedule on the banner of the channel. So tune in, I hope to see you there, and be prepared to do some math.
But what is a Fourier series? From heat flow to drawing with circles | DE4
Here, we look at the math behind an animation like this one, what's known as a complex Fourier series. Each little vector is rotating at some constant integer frequency, and when you add them together, tip to tail, the final tip draws out some shape over time. By tweaking the initial size and angle of each vector, we can make it draw pretty much anything we want, and here you'll see how. Before diving into it all, I want you to take a moment to just linger on how striking this is. This particular animation has 300 rotating arrows in total. Go full screen for this if you can, the intricacy is worth it. Think about this. The action of each individual arrow is perhaps the simplest thing you could imagine: rotation at a steady rate. And yet, the collection of all of them added together is anything but simple, and the mind-boggling complexity is put into an even sharper focus the farther we zoom in, revealing the contributions of the littlest, quickest, and downright frenetic arrows. When you consider the chaotic frenzy that you're looking at, and the clockwork rigidity underlying all the motions, it's bizarre how the swarm acts with a kind of coordination to trace out some very specific shape. And unlike much of the emergent complexity you find elsewhere in nature, this is something that we have the math to describe and to control completely. Just by tuning the starting conditions, nothing more, we can make this swarm conspire in all of the right ways to draw anything that you want, provided that you have enough little arrows. What's even crazier is that the ultimate formula for all of this is incredibly short. Now, often, Fourier series are described in terms of something that looks a little different. It turns out to be a special case of this more general rotating vector phenomenon that we'll build up to, but it's where Fourier himself started, and there's good reason for us to start the story there as well. Technically, this is the third video in a sequence about the heat equation, what Fourier was working on when he developed his big idea. I would like to teach you about Fourier series in a way that doesn't depend on you coming from those chapters, but if you have at least a high-level idea for the problem from physics which originally motivated this piece of math, it gives some indication for just how unexpectedly far-reaching Fourier series are. All you need to know is that we had a certain equation which tells us how the temperature distribution on a rod would evolve over time, and incidentally, it also describes many other phenomena unrelated to heat. And while it's hard to directly use this equation to figure out what will happen to an arbitrary heat distribution, there's a simple solution if the initial function just happens to look like a cosine wave with a frequency tuned so that it's flat at each endpoint. Specifically, as you graph what happens over time, these waves simply get scaled down exponentially, with higher frequency waves having a faster exponential decay. The heat equation happens to be what's known in the business as a linear equation, meaning if you know two solutions and you add them up, that sum is a new solution. You can even scale them each by some constant, which gives you some dials to turn to construct a custom function solving the equation. This is a fairly straightforward property that you can verify for yourself, but it's incredibly important.
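To make that tangible, here's a minimal numerical sketch, assuming the standard form of the heat equation u_t = alpha * u_xx on a rod of length L, where each endpoint-flat cosine mode cos(n*pi*x/L) decays by the factor exp(-alpha*(n*pi/L)^2 * t). The particular constants and mode coefficients below are made up purely for illustration.

```python
import numpy as np

# Each cosine mode that is flat at both endpoints decays exponentially,
# with higher-frequency modes decaying faster:
#   u_n(x, t) = cos(n*pi*x/L) * exp(-alpha * (n*pi/L)**2 * t)
alpha, L = 1.0, 1.0           # hypothetical material constant and rod length
x = np.linspace(0, L, 200)

def mode(n, t):
    k = n * np.pi / L
    return np.cos(k * x) * np.exp(-alpha * k**2 * t)

# Linearity: a scaled sum of solutions is again a solution, so these
# coefficients are the "dials to turn" for a custom initial condition.
def u(t, coeffs):
    return sum(c * mode(n, t) for n, c in enumerate(coeffs))

coeffs = [0.0, 1.0, 0.0, 0.5]      # made-up dials for modes 0 through 3
print(u(0.0, coeffs)[:3])          # wavy initial temperature profile
print(u(0.5, coeffs)[:3])          # later: the high-frequency term has mostly died off
```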
That linearity means we can take our infinite family of solutions, these exponentially decaying cosine waves, scale a few of them by some custom constants of our choosing, and combine them to get a solution for a new tailor-made initial condition, which is some combination of cosine waves. One important thing I'd like you to notice is that when you combine these waves, because the higher frequency ones decay faster, the sum that you construct will tend to smooth out over time, as all the high frequency terms quickly go to zero, leaving only the low frequency terms dominating. So in a funny way, all of the complexity in the evolution of this heat distribution which the heat equation implies is captured by this difference in the decay rates for the different pure frequency components. It's at this point that Fourier gains immortality. I think most normal people at this stage would say, well, I can solve the heat equation when the initial distribution just happens to look like a wave, or a sum of waves, but what a shame it is that most real-world distributions don't at all look like that. I mean, for example, let's say you brought together two rods, which were each at some uniform temperature, and you wanted to know what happens immediately after they come into contact. To make the numbers simple, let's say the temperature of the left rod is 1 degree, the right rod is negative 1 degree, and the total length L of the combined two rods is 1. What this means is our initial temperature distribution is a step function, which is so obviously different from a sine wave, or the sum of sine waves, don't you think? I mean, it's almost entirely flat, not wavy, and for God's sake, it's even discontinuous. And yet, Fourier thought to ask a question which seems absurd: how do you express this as a sum of sine waves? Even more boldly, how do you express any initial distribution as a sum of sine waves? And it's more constrained than just that. You have to restrict yourself to adding waves which satisfy a certain boundary condition, and as we saw last video, that means working with these cosine functions, whose frequencies are all some whole number multiple of a given base frequency. And by the way, if you were working with some different boundary condition, say that the endpoints have to stay fixed, you'd have a different set of waves at your disposal to piece together, in this case simply replacing that cosine expression with a sine. It's strange how often progress in math looks more like asking a new question rather than simply answering old ones. Fourier really does have a kind of immortality now, with his name essentially synonymous with the idea of breaking down functions and patterns as combinations of simple oscillations. It's really hard to overstate just how important and far-reaching that idea turned out to be, well beyond anything that Fourier himself could have imagined. And yet, the origin of all this is a piece of physics which, at first glance, has nothing to do with frequencies and oscillations. If nothing else, this should give you a hint about the general applicability of Fourier series. Now, hang on, I hear some of you saying, none of these sums of sine waves that you're showing are actually the step function. They're all just approximations. And it's true, any finite sum of sine waves will never be perfectly flat, except for a constant function, nor will it be discontinuous. But Fourier thought more broadly, considering infinite sums.
In the case of our step function, it turns out to be equal to this infinite sum, where the coefficients are 1, minus 1/3, plus 1/5, minus 1/7, and so on for all the odd frequencies, and all of it is rescaled by 4 divided by pi. I'll explain where those numbers come from in a moment. Before that, it's worth being clear about what we mean by a phrase like infinite sum, which runs the risk of being a little bit vague. Consider the simpler context of numbers, where you could say, for example, that this infinite sum of fractions equals pi divided by 4. As you keep adding the terms one by one, at all times what you have is rational; it never actually equals the irrational pi divided by 4. But the sequence of partial sums approaches pi over 4, which is to say the numbers you see, while never equaling pi over 4, get arbitrarily close to that value, and they stay arbitrarily close to that value. That's a mouthful to say, so instead we abbreviate and just say the infinite sum equals pi over 4. With functions, you're doing the same thing, but with many different values in parallel. Consider a specific input, and the value of all of these scaled cosine functions for that input. If that input is less than 0.5, as you add more and more terms, the sum will approach 1. If that input is greater than 0.5, as you add more and more terms, it will approach negative 1. And at the input 0.5 itself, all of the cosines are 0, so the limit of the partial sums is also 0. And that means that, somewhat awkwardly, for this infinite sum to be strictly true, we do have to prescribe the value of this step function at the point of discontinuity to be 0, sort of halfway along the jump. Analogous to an infinite sum of rational numbers being irrational, the infinite sum of wavy continuous functions can equal, honest to goodness equal, a discontinuous flat function. Getting limits into the game allows for qualitative changes, which finite sums alone never could. There are multiple technical nuances that I'm sweeping under the rug here. Does the fact that we're forced into a certain value for the step function at the point of discontinuity make any difference for the heat flow problem? For that matter, what does it really mean to solve a PDE with a discontinuous initial condition? Can we be sure that the limit of solutions to the heat equation is also a solution? And can we be sure that all functions actually have a Fourier series like this? If not, when not? These are exactly the kind of questions which real analysis is built to answer, but it falls a bit deeper in the weeds than I'd like to go here, so I'll relegate that all to links in the video's description. The upshot is that when you take the heat equation solutions associated with these cosine waves, and you add them all up, all infinitely many of them, you do get an exact solution describing how the step function will evolve over time. And if you had done this in 1822, you would have become immortal for doing so. The key challenge in all of this, of course, is to find these coefficients. So far, we've been thinking about functions with real number outputs, but for the computations, I'd like to show you something more general than what Fourier originally did, applying to functions whose output can be any complex number in the 2D plane, which is where all these rotating vectors from the opening come back into play.
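Before moving to that complex setting, the partial-sum claims from a moment ago are easy to watch happen numerically. This is a throwaway sketch using exactly the coefficient pattern just described, 1, minus 1/3, plus 1/5, minus 1/7, rescaled by 4/pi:

```python
import numpy as np

# Partial sums of 1 - 1/3 + 1/5 - 1/7 + ... stay rational forever,
# but get arbitrarily close to pi/4.
k = np.arange(100000)
print(np.sum((-1.0)**k / (2*k + 1)), np.pi / 4)

# The same alternating coefficients over the odd frequencies, rescaled
# by 4/pi, build the step function out of cosine waves:
def partial_sum(x, N):
    ns = np.arange(1, N, 2)                 # odd frequencies 1, 3, 5, ...
    signs = (-1.0)**((ns - 1) // 2)         # the +, -, +, - pattern
    return (4/np.pi) * np.sum(signs / ns * np.cos(ns * np.pi * x))

print(partial_sum(0.25, 1001))   # approaches +1 (input below 0.5)
print(partial_sum(0.75, 1001))   # approaches -1 (input above 0.5)
print(partial_sum(0.50, 1001))   # ~0 at the jump, halfway along it
```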
Why the added complexity? Well, aside from being more general, in my view the computations become cleaner, and it's easier to understand why they actually work. More importantly, it sets a good foundation for the ideas that will come up later on in the series, like the Laplace transform, and the importance of exponential functions. We'll still think of functions whose input is some real number on a finite interval, say from 0 up to 1 for simplicity. But whereas something like a temperature function will have outputs on the real number line, this broader view will let the outputs wander anywhere in the two-dimensional complex plane. You might think of such a function as a drawing, with a pencil tip tracing out different points in the complex plane as the input ranges from 0 to 1. Instead of sine waves being the fundamental building block, as you saw at the start, we'll focus on breaking these functions down as a sum of little vectors, all rotating at some constant integer frequency. Functions with real number outputs are essentially really boring drawings, a one-dimensional pencil sketch. You might not be used to thinking of them like this, since usually we visualize such a function with a graph, but right now the path being drawn is only in the output space. If you do one of these decompositions into rotating vectors for a boring one-dimensional drawing, what will happen is that the vectors with frequency 1 and negative 1 will have the same length, and they'll be horizontal reflections of each other. When you just look at the sum of these two as they rotate, that sum stays fixed on the real number line, and it oscillates like a sine wave. If you haven't seen it before, this might be a really weird way to think about what a sine wave is, since we're used to looking at its graph rather than the output alone wandering on the real number line. But in the broader context of functions with complex number outputs, this oscillation on the horizontal line is what a sine wave looks like. Similarly, the pair of rotating vectors with frequencies 2 and negative 2 will add another sine wave component, and so on, with the sine waves we were looking for earlier now corresponding to pairs of vectors rotating in opposite directions. So the context that Fourier originally studied, breaking down real-valued functions into sine waves, is a special case of the more general idea of 2D drawings and rotating vectors. And at this point, maybe you don't trust me that widening our view to complex functions makes things easier to understand. But bear with me, it really is worth the added effort to see the fuller picture, and I think you'll be pleased with how clean the actual computation is in this broader context. You may also wonder why, if we're going to bump things up into two dimensions, we don't just talk about 2D vectors. What does the square root of negative 1 have to do with anything? Well, the heart and soul of Fourier series is the complex exponential, e to the i times t. As the input t ticks forward with time, this value walks around the unit circle at a rate of 1 unit per second. In the next video, you'll see a quick intuition for why exponentiating imaginary numbers walks around circles like this, from the perspective of differential equations. And beyond that, as the series progresses, I hope to give you some sense for why complex exponentials like this are actually very important.
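By the way, that claim about opposite-frequency pairs is quick to check numerically. Here's a tiny sketch with an arbitrary made-up starting coefficient: a vector times e^(2*pi*i*t) plus its conjugate partner times e^(-2*pi*i*t) never leaves the real number line.

```python
import numpy as np

# A vector at frequency +1 and its conjugate partner at frequency -1:
# their tip-to-tail sum stays pinned to the real line, oscillating
# back and forth like a sine wave.
t = np.linspace(0, 1, 9)
c = 0.5 * np.exp(1j * np.pi / 3)            # arbitrary initial length and angle
pair = c * np.exp(2j*np.pi*t) + np.conj(c) * np.exp(-2j*np.pi*t)
print(np.max(np.abs(pair.imag)))            # ~1e-16: never leaves the real line
print(np.round(pair.real, 3))               # a sampled sinusoid
```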
In theory, you could describe all of the Fourier series stuff purely in terms of vectors, and never breathe a word of i, the square root of negative 1. The formulas would become more convoluted, but beyond that, leaving out the function e to the x would somehow no longer authentically reflect why this idea turns out to be so useful for solving differential equations. For right now, if you want, you can think of e to the i t just as a notational shorthand for describing rotating vectors. But just keep in the back of your mind that it is more significant than mere shorthand. You'll notice I'm being a little loose with language, using the words vector and complex number somewhat interchangeably, in large part because thinking of complex numbers as little arrows makes the idea of adding a lot of them together easier to visualize. Alright, armed with the function e to the i times t, let's write down a formula for each of these rotating vectors that we're working with. For right now, think of each of them as starting off pointing one unit to the right, at the number 1. The easiest vector to describe is the constant one, which stays at the number 1, never moving, or if you prefer, it's quote-unquote rotating at a frequency of 0. Then there will be the vector rotating one cycle every second, which we write as e to the 2 pi i times t. That 2 pi is there because as t goes from 0 to 1, it needs to cover a distance of 2 pi along the circle. Technically, in what's being shown, it's actually 1 cycle every 10 seconds, so that things aren't too dizzying; I'm slowing everything down by a factor of 10. We also have a vector rotating at 1 cycle per second in the other direction, e to the negative 2 pi i times t. Similarly, the one going 2 rotations per second is e to the 2 times 2 pi i times t, where that 2 times 2 pi in the exponent describes how much distance is covered in 1 second. And we go on like this over all integers, both positive and negative, with the general formula of e to the n times 2 pi times i t. Notice this makes it more consistent to write that constant vector as e to the 0 times 2 pi times i t, which feels like an awfully complicated way to write the number 1, but at least it fits the pattern. The control that we have, the set of knobs and dials we get to turn, is the initial size and direction of each of these vectors. The way we control that is by multiplying each one by some complex constant, which I'll call c sub n. For example, if we wanted the constant vector not to be at the number 1, but to have a length of 0.5, c0 would be 0.5. If we wanted the vector rotating at 1 cycle per second to start off at an angle of 45 degrees, we'd multiply it by a complex number which has the effect of rotating it by that much, which you can write as e to the pi over 4 times i. And if its initial length needed to be 0.3, then the coefficient c1 would be 0.3 times that amount. Likewise, everyone in our infinite family of rotating vectors has some complex constant being multiplied into it, which determines its initial angle and its total magnitude.
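In code, those knobs and dials are just complex multiplication. The 0.3 and 45 degrees here mirror the example in the narration; everything else is a trivial sketch.

```python
import numpy as np

# The coefficient c1 = 0.3 * e^(i*pi/4) starts the one-cycle-per-second
# vector at 45 degrees with length 0.3; multiplying by e^(2*pi*i*t)
# then rotates it over time without changing that length.
c1 = 0.3 * np.exp(1j * np.pi / 4)
for t in (0.0, 0.25):
    v = c1 * np.exp(2j * np.pi * t)
    print(round(abs(v), 3), round(np.degrees(np.angle(v)), 1))
# length stays 0.3; the angle advances from 45 to 135 degrees
```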
Our goal is to express any arbitrary function f of t, say this one that draws an eighth note as t goes from 0 to 1, as a sum of terms like this. So we need some way of picking out these constants one by one, given the data of the function itself. The easiest of these to find is the constant term. This term represents a sort of center of mass for the full drawing. If you were to sample a bunch of evenly spaced values for the input t as it ranges from 0 to 1, the average of all the outputs of the function for those samples would be the constant term c0. Or more accurately, as you consider finer and finer samples, the average of the outputs for these samples approaches c0 in the limit. What I'm describing, finer and finer sums of a function for samples of t from the input range, is an integral, an integral of f of t from 0 to 1. Normally, since I'm framing this all in terms of averages, you would divide the integral by the length of the input range. But that length is 1, so in this case, taking an integral and taking an average are the same thing. There's a very nice way to think about why this integral would pull out c0. Remember, we want to think of this function as a sum of rotating vectors, so consider this integral, this continuous average, as being applied to that whole sum. The average of a sum like this is the same as the sum over the averages of each part. You can read this move as a sort of subtle shift in perspective: rather than looking at the sum of all the vectors at each point in time and taking the average value that they sweep out, look at the average of an individual vector as t goes from 0 to 1, and then add up all of these averages. But each of these vectors just makes a whole number of rotations around 0, so its average value as t ranges from 0 to 1 will be 0. The only exception is the constant term; since it stays static and doesn't rotate, its average value is just whatever number it happened to start on, which is c0. So doing this average over the whole function is a sort of clever way to kill all of the terms that aren't c0. And here's the actual clever part. Let's say that you want to compute a different term, like c2, sitting in front of the vector rotating 2 cycles per second. The trick is to first multiply f of t by something that makes that vector hold still, sort of the mathematical equivalent of giving a smartphone to an overactive child. Specifically, if you multiply the whole function by e to the negative 2 times 2 pi i times t, think about what happens to each term. Since multiplying exponentials results in adding what's in the exponent, the frequency term in each of our exponents gets shifted down by 2. So now, as we do our averages of each term, that c sub negative 1 vector spins around negative 3 times, with an average of 0. The c0 vector, previously constant, now rotates twice as t ranges from 0 to 1, so its average is also 0. And likewise, all vectors other than the c2 term make some whole number of rotations, meaning they average out to be 0. So taking the average of this modified function is a clever way to kill all of the terms other than c2. And of course, there's nothing special about the number 2 here; you could replace it with any other n, and you have a general formula for c sub n, which is what we're looking for. Out of context, this expression might look complicated, but remember, you can read it as first modifying our function, our 2D drawing, so as to make the nth little vector hold still, and then performing an average, which kills all of the moving vectors and leaves you only with the still part. Isn't that crazy? All of the complexity in these decompositions you're seeing, of drawings into sums of many rotating vectors, is entirely captured in this little expression. So when I'm rendering these animations, that's exactly what I'm having the computer do. It treats the path like a complex function, and for a certain range of values of n, it computes this integral to find the coefficient c sub n.
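The whole trick rests on whole-number-frequency rotations averaging to zero over one cycle, which takes a few lines to verify. This is a crude Riemann-sum sketch, not anything from the actual rendering code:

```python
import numpy as np

# The average of e^(n * 2*pi*i*t) over t in [0, 1) is 0 for any
# nonzero integer n (a full number of rotations), and 1 for n = 0.
dt = 1e-4
t = np.arange(0, 1, dt)
for n in range(-2, 3):
    avg = np.sum(np.exp(n * 2j * np.pi * t) * dt)
    print(n, np.round(avg, 6))
```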
For those of you curious about where the data for a path itself comes from, I'm going the easy route and just having the program read in an SVG, which is a file format that defines the image in terms of mathematical curves rather than with pixel values. So the mapping f of t, from a time parameter to points in space, basically comes predefined. In what's shown right now, I'm using 101 rotating vectors, computing the values of n from negative 50 up to 50. In practice, each of these integrals is computed numerically, basically meaning it chops up the unit interval into many small pieces of size delta t, and then adds up this value, f of t times e to the negative n 2 pi i t times delta t, for each one of them. There are fancier methods for more efficient numerical integration, but this gives the basic idea. And after you compute these 101 constants, each one determines an initial angle and magnitude for the little vectors, and then you just set them all rotating, adding them tip to tail as they go, and the path drawn out by the final tip is some approximation of the original path you fed in. As the number of vectors used approaches infinity, the approximation path gets more and more accurate.
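Since the SVG machinery can't fit here, this is a stripped-down sketch of that loop with a made-up test path whose true coefficients are known in advance, so the crude Riemann-sum integral can be checked against them:

```python
import numpy as np

# A hypothetical test path with known ingredients: one vector at
# frequency 1 plus a smaller one at frequency 3, so the numerically
# computed coefficients should recover exactly those.
dt = 1e-3
t = np.arange(0, 1, dt)
f = np.exp(2j*np.pi*t) + 0.3 * np.exp(3 * 2j*np.pi*t)

ns = np.arange(-50, 51)                                   # 101 rotating vectors
coeffs = [np.sum(f * np.exp(-n * 2j*np.pi*t) * dt) for n in ns]

# Recovered dials: ~1 at n = 1, ~0.3 at n = 3, ~0 everywhere else.
for n, c in zip(ns, coeffs):
    if abs(c) > 1e-6:
        print(n, np.round(c, 4))

# Set them all spinning and sum tip to tail: the final tip retraces f.
approx = sum(c * np.exp(n * 2j*np.pi*t) for n, c in zip(ns, coeffs))
print(np.max(np.abs(approx - f)))                          # tiny reconstruction error
```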
To bring this all back down to earth, consider the example we were looking at earlier, of a step function, which remember was useful for modeling the heat dissipation between two rods at different temperatures after they come into contact. Like any real number valued function, the step function is like a boring drawing that's confined to one dimension. But this one is an especially dull drawing, since for inputs between 0 and 0.5, the output just stays static at the number 1, and then it discontinuously jumps to negative 1 for inputs between 0.5 and 1. So in the Fourier series approximation, the vector sum stays really close to 1 for the first half of the cycle, then quickly jumps to negative 1, and stays close to that for the second half of the cycle. And remember, each pair of vectors rotating in opposite directions corresponds to one of the cosine waves that we were looking at earlier. To find the coefficients, you would need to compute this integral. And for the ambitious viewers among you, itching to work out some integrals by hand, this is one where you can actually do the calculus to get an exact answer, rather than just having a computer do it numerically for you. I'll leave it as an exercise to work this out, and to relate it back to the idea of cosine waves by pairing off the vectors that rotate in opposite directions. And for the even more ambitious, I'll leave another exercise up on the screen for how to relate this more general computation with what you might see in a textbook describing Fourier series only in terms of real-valued functions with sines and cosines. By the way, if you're looking for more Fourier series content, I highly recommend the videos by Mathologer and The Coding Train, and I'd also recommend this blog post, links of course in the description. So on the one hand, this concludes our discussion of the heat equation, which was a little window into the study of partial differential equations. But on the other hand, this foray into Fourier series is a first glimpse at a deeper idea. Exponential functions, including their generalization into complex numbers and even matrices, play a very important role for differential equations, especially when it comes to linear equations. What you just saw, breaking down a function as a combination of these exponentials and using that to solve a differential equation, comes up again and again in different shapes and forms.
Why is pi here? And why is it squared? A geometric answer to the Basel problem
I'm gonna guess that you have never had the experience of your heart rate increasing in excitement while you were imagining an infinitely large lake with lighthouses around it. Well, if you feel anything like I do about math, that is gonna change by the end of this video. Take 1, plus 1/4, plus 1/9, plus 1/16, and so on, where you're adding the reciprocals of each successive square number. What does this sum approach as you keep adding on more and more terms? Now, this is a challenge that remained unsolved for 90 years after it was initially posed, until finally it was Euler who found the answer, super surprisingly, to be pi squared divided by 6. I mean, isn't that crazy? What is pi doing here? And why is it squared? We don't usually see it squared. In honor of Euler, whose hometown was Basel, this infinite sum is often referred to as the Basel problem, but the proof that I'd like to show you is very different from the one that Euler had. I've said in a previous video that whenever you see pi show up, there will be some connection to circles. And there are those who like to say that pi is not fundamentally about circles, and that insisting on connecting equations like these ones with a geometric intuition stems from a stubborn insistence on only understanding pi in the context where we first discovered it. And that's all well and good, but whatever your own perspective holds as fundamental, the fact is, pi is very much tied to circles. So if you see pi show up, there will be a path somewhere in the massive interconnected web of mathematics leading you back to circles and geometry. The question is just how long and convoluted that path might be, and in the case of the Basel problem, it's a lot shorter than you might first think. And it all starts with light. Here's the basic idea. Imagine standing at the origin of a positive number line, and putting a little lighthouse on all of the positive integers: one, two, three, four, and so on. That first lighthouse has some apparent brightness from your point of view, some amount of energy that your eyes are receiving from the light per unit time. And let's just call that a brightness of 1. For reasons I'll explain shortly, the apparent brightness of the second lighthouse is one fourth as much as the first, the apparent brightness of the third is one ninth as much as the first, then one sixteenth, and so on. And you can probably see why this is useful for the Basel problem. It gives us a physical representation of what's being asked, since the brightness received from the whole infinite line of lighthouses is going to be 1, plus a fourth, plus a ninth, plus a sixteenth, and so on. So the result that we are aiming to show is that this total brightness is equal to pi squared divided by 6 times the brightness of that first lighthouse. At first that might seem useless. I mean, we're just re-asking the same original question. But the progress comes from a new question that this framing raises. Are there ways that we can rearrange these lighthouses that don't change the total brightness for the observer? And if so, can you show this to be equivalent to a setup that's somehow easier to compute?
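Before any lighthouses, it's worth a ten-second numerical check that the claimed value is plausible. A throwaway sketch summing the first million terms:

```python
import numpy as np

# Partial sums of 1 + 1/4 + 1/9 + 1/16 + ... creep toward pi^2 / 6.
n = np.arange(1, 1_000_001)
print(np.sum(1.0 / n**2))   # ~1.6449331 (the series converges slowly)
print(np.pi**2 / 6)         # ~1.6449341
```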
To start, let's be clear about what we mean when we reference apparent brightness to an observer. Imagine a little screen, which maybe represents the retina of your eye, or a digital camera sensor, something like that. You could ask, what proportion of the rays coming out of the source hit that screen? Or phrased differently, what is the angle between the ray hitting the bottom of that screen and the ray hitting the top? Or rather, since we should be thinking of these lights as being in three dimensions, it might be more accurate to ask, what is the angle the light covers in both directions perpendicular to the source? In spherical geometry, you sometimes talk about the solid angle of a shape, which is the proportion of a sphere it covers as viewed from a given point. You see, the first of two places in this story where thinking of screens is going to be useful is in understanding the inverse square law, which is a distinctly three-dimensional phenomenon. Think of all of the rays of light hitting a screen one unit away from the source. As you double the distance, those rays will now cover an area with twice the width and twice the height. So it would take four copies of that original screen to receive the same rays at that distance, and so each individual one receives one fourth as much light. This is the sense in which I mean a light would appear one fourth as bright two times the distance away. Likewise, when you're three times farther away, you would need nine copies of that original screen to receive the same rays, so each individual screen only receives one ninth as much light. And this pattern continues. Because the area hit by a light increases by the square of the distance, the brightness of that light decreases by the inverse square of that distance. And as I'm sure many of you know, this inverse square law is not at all special to light. It pops up whenever you have some kind of quantity that spreads out evenly from a point source, whether that's sound, or heat, or radio signal, things like that. And remember, it's because of this inverse square law that an infinite array of evenly spaced lighthouses physically implements the Basel problem. But again, what we need if we're going to make any progress here is to understand how we can manipulate setups with light sources like this without changing the total brightness for the observer. And the key building block is an especially nice way to transform a single lighthouse into two. Think of an observer at the origin of the xy-plane, and a single lighthouse sitting out somewhere on that plane. Now, draw a line from that lighthouse to the observer, and then another line perpendicular to that one at the lighthouse. Now place two lighthouses where this new line intersects the coordinate axes, which I'll go ahead and call lighthouse A over here on the left, and lighthouse B on the upper side. It turns out, and you'll see why this is true in just a minute, the brightness that the observer experiences from that first lighthouse is equal to the combined brightness experienced from lighthouses A and B together. And I should say, by the way, that the standing assumption throughout this video is that all lighthouses are equivalent; they're using the same light bulb, emanating the same power, all of that. So in other words, assigning variables to things here, if we call the distance from the observer to lighthouse A little a, the distance from the observer to lighthouse B little b, and the distance to the first lighthouse h, we have the relation 1 over a squared plus 1 over b squared equals 1 over h squared. This is the much less well-known inverse Pythagorean theorem, which some of you may recognize from Mathologer's most recent, and also most excellent, video on the many cousins of the Pythagorean theorem. Pretty cool relation, don't you think?
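Here's a quick numeric sanity check of that relation, with made-up distances. The only geometric fact assumed is that the original lighthouse sits at the foot of the altitude from the right angle, at distance h = a*b / sqrt(a^2 + b^2) from the observer:

```python
import numpy as np

# Inverse Pythagorean theorem: lighthouse A at distance a along one axis,
# lighthouse B at distance b along the other, original lighthouse at the
# foot of the altitude. Brightness falls off as 1 over distance squared.
a, b = 3.0, 4.0
h = a * b / np.hypot(a, b)
print(1/a**2 + 1/b**2)   # combined apparent brightness of A and B
print(1/h**2)            # apparent brightness of the original: the same
```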
And if you're a mathematician at heart, you might be asking right now how you prove it. And there are some straightforward ways, where you express the triangle's area in two separate ways and apply the usual Pythagorean theorem. But there is another quite pretty method that I'd like to briefly outline here, which falls much more nicely into our storyline, because again, it uses intuitions of light and screens. Imagine scaling down the whole right triangle into a tinier version, and think of this miniature hypotenuse as a screen receiving light from the first lighthouse. If you reshape that screen to be the combination of the two legs of the miniature triangle, like this, well, it still receives the same amount of light, right? I mean, the rays of light hitting one of those two legs are precisely the same as the rays that hit the hypotenuse. Then the key is that the amount of light from the first lighthouse that hits this left side, the limited angle of rays that end up hitting that screen, is exactly the same as the amount of light over here coming from lighthouse A which hits that side. It'll be the same angle of rays. And, symmetrically, the amount of light from the first lighthouse hitting the bottom portion of our screen is the same as the amount of light hitting that portion from lighthouse B. Why, you might ask? Well, it's a matter of similar triangles. This animation already gives you a strong hint for how it works, and we've also left a link in the description to a simple Geogebra applet for those of you who want to think this through in a slightly more interactive environment. In playing with that, one important fact you'll be able to see is that the similar triangles only apply in the limiting case of a very tiny screen. Alright, buckle up now, because here's where things get good. We've got this inverse Pythagorean theorem, right? And that's going to let us transform a single lighthouse into two others, without changing the brightness experienced by the observer. With that in hand, and no small amount of cleverness, we can use this to build up the infinite array that we need. Picture yourself at the edge of a circular lake, directly opposite a lighthouse. We're going to want it to be the case that the distance between you and the lighthouse along the border of the lake is 1, so we'll say the lake has a circumference of 2. Now, the apparent brightness is 1 divided by the diameter squared, and in this case, the diameter is that circumference, 2, divided by pi. So the apparent brightness works out to be pi squared divided by 4. Now for our first transformation, draw a new circle twice as big, so circumference 4, and draw a tangent line to the top of the small circle. Then replace the original lighthouse with two new ones, where this tangent line intersects the larger circle. An important fact from geometry that we'll be using over and over here is that if you take the diameter of a circle and form a triangle with any point on the circle, the angle at that new point will always be 90 degrees. The significance of that in our diagram here is that it means the inverse Pythagorean theorem applies, and the brightness from those two new lighthouses equals the brightness from the first one, namely pi squared divided by 4. As the next step, draw a new circle twice as big as the last, with a circumference of 8.
Now for each lighthouse, take a line from that lighthouse through the top of the smaller circle, which is the center of the larger circle, and consider the two points where that intersects with the larger circle. Again, since this line is a diameter of that large circle, the lines from those two new points to the observer are going to form a right angle. Likewise, by looking at this right triangle here, whose hypotenuse is the diameter of the smaller circle, you can see that the line from the observer to that original lighthouse is at a right angle with the new long line that we drew. Good news, right? Because that means we can apply the inverse Pythagorean theorem, and that means that the apparent brightness from the original lighthouse is the same as the combined brightness from the two newer ones. And of course, you can do that same thing over on the other side, drawing a line through the top of the smaller circle and getting two new lighthouses on the larger circle. And even nicer, these four lighthouses are all going to be evenly spaced around the lake. Why? Well, the lines from those lighthouses to the center are at 90 degree angles with each other. So since things are symmetric left to right, that means that the distances along the circumference are 1, 2, 2, 2, and 1. Next, you draw a circle twice as big, so a circumference of 16 now, and for each lighthouse, you draw a line from that lighthouse through the top of the smaller circle, which is the center of the bigger circle, and then create two new lighthouses where that line intersects with the larger circle. Just as before, because the long line is a diameter of the big circle, those two new lighthouses make a right angle with the observer, right? And just as before, the line from the observer to the original lighthouse is perpendicular to the long line, and those are the two facts that justify us in using the inverse Pythagorean theorem. But what might not be as clear is that when you do this for all of the lighthouses, to get eight new ones on the big lake, those eight new lighthouses are going to be evenly spaced. This is the final bit of geometry proofiness before the final thrust. To see it, remember that if you draw lines from two adjacent lighthouses on the small lake to the center, they make a 90 degree angle. If instead you draw lines to a point anywhere on the circumference of the circle that's not between them, the very useful inscribed angle theorem from geometry tells us that this will be exactly half of the angle that they make with the center, in this case 45 degrees. But when we position that new point at the top of the lake, these are the two lines which define the positions of the new lighthouses on the larger lake. What that means, then, is that when you draw lines from those eight new lighthouses to the center, they divide the circle evenly into 45 degree angle pieces. And that means the eight lighthouses are evenly spaced around the circumference, with a distance of 2 between each one of them. And now, just imagine this thing playing on, at every step doubling the size of each circle and transforming each lighthouse into two new ones along a line drawn through the center of the larger circle. At every step, the apparent brightness to the observer remains the same, pi squared over 4. And at every step, the lighthouses remain evenly spaced, with a distance of 2 between each one of them on the circumference.
And in the limit, what we're getting here is a flat horizontal line with an infinite number of lighthouses evenly spaced in both directions. And because the apparent brightness was pi squared over 4 the entire way, that will also be true in this limiting case. And this gives us a pretty awesome infinite series: the sum of the inverse squares, 1 over n squared, where n covers all of the odd integers, 1, 3, 5, and so on, but also negative 1, negative 3, negative 5, off in the leftward direction. Adding all of those up is going to give us pi squared over 4. That's amazing, and it's the core of what I want to show you. Just take a step back and think about how unreal this seems. The sum of simple fractions, which at first sight have nothing to do with geometry, nothing to do with circles at all apparently, gives us this result that's related to pi. Except now you can actually see what it has to do with geometry. The number line is kind of like a limit of ever-growing circles, and as you sum across that number line, making sure to sum all the way to infinity on either side, it's sort of like you're adding up along the boundary of an infinitely large circle, in a very loose but very fun way of speaking. But wait, you might say, this is not the sum that you promised us at the start of the video. And, well, you're right, we do have a little bit of thinking left. First things first, let's just restrict the sum to only being the positive odd numbers, which gets us pi squared divided by 8. Now, the only difference between this and the sum that we're looking for, which goes over all the positive integers, odd and even, is that it's missing the sum of the reciprocals of even numbers, what I'm coloring in red up here. Now, you can think of that missing series as a scaled copy of the total series that we want, where each lighthouse moves to being twice as far away from the origin: 1 gets shifted to 2, 2 gets shifted to 4, 3 gets shifted to 6, and so on. And because that involves doubling the distance for every lighthouse, it means that the apparent brightness would be decreased by a factor of 4. And that's also relatively straightforward algebra: going from the sum over all the integers to the sum over the even integers involves multiplying by one fourth. And what that means is that going from all the integers to the odd ones would be multiplying by three fourths, since the evens plus the odds have to give us the whole thing. So if we just flip that around, that means going from the sum over the odd numbers to the sum over all positive integers requires multiplying by four thirds. So taking that pi squared over 8 and multiplying by four thirds, bada-boom bada-bang, we've got ourselves a solution to the Basel problem.
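That last bit of bookkeeping is easy to double-check numerically, again as just a sketch:

```python
import numpy as np

# The evens are a quarter-brightness copy of the whole sum, so the odds
# make up three quarters of it; flipping that around gives the factor 4/3.
n = np.arange(1, 2_000_001)
total = np.sum(1.0 / n**2)
odds  = np.sum(1.0 / n[n % 2 == 1]**2)
evens = np.sum(1.0 / n[n % 2 == 0]**2)

print(odds, np.pi**2 / 8)        # positive odd terms alone: pi^2 / 8
print(evens, total / 4)          # even terms: a quarter of the total
print(odds * 4/3, np.pi**2 / 6)  # so the full Basel sum is (4/3)(pi^2/8)
```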
Stick around, I've got a little preview animation for the next project at the end here. Now, this video that you just watched was primarily written and animated by one of the new 3blue1brown team members, Ben Hambrecht, an addition made possible thanks to you guys through Patreon. You'll meet him, along with the other new addition, when the next video is published, but in the meantime, you can read a little bit about my thoughts on introducing more contributors over at the Patreon page. Let me close by sharing a set of principles about effective learning. There are eight that I have pulled up here, but I just want to highlight two of them. Effective learning is active, not passive; watching a video is not enough. And it must allow for failure. As hard as we work over here to distill a beautiful solution like this one to the Basel problem, taken down to its core, there is a risk of hiding the fact that finding a solution like that requires hitting many, many dead ends. These principles come from Brilliant.org, today's sponsor. And let me share with you a comment from a recent video: "Brilliant.org is amazing. Even if you don't pay for it, you still get to solve problems every week. I recommend it for the people who might be put off by the constant advertising." And I couldn't agree more. They're just a legitimately great place to learn through active problem solving, and I really do believe they live up to these principles. I mean, just look at the messaging here when you get a question wrong: it's okay, getting stumped is part of learning. How's that for growth mindset? Now, if you want to supplement videos like these with more active learning like theirs, go to Brilliant.org slash 3B1B, which lets them know that you came from here, and it gives the first 321 people to follow that link 20% off their premium annual subscription. And that's it. Thank you.
Gradient descent, how neural networks learn | Chapter 2, Deep learning
Last video, I laid out the structure of a neural network. I'll give a quick recap here just so that it's fresh in our minds, and then I have two main goals for this video. The first is to introduce the idea of gradient descent, which underlies not only how neural networks learn, but also how a lot of other machine learning works as well. Then after that, we're going to dig in a little more to how this particular network performs, and what those hidden layers of neurons end up actually looking for. As a reminder, our goal here is the classic example of handwritten digit recognition, the hello world of neural networks. These digits are rendered on a 28x28 pixel grid, each pixel with some grayscale value between 0 and 1. Those are what determine the activations of 784 neurons in the input layer of the network. And then the activation for each neuron in the following layers is based on a weighted sum of all the activations in the previous layer, plus some special number called a bias. Then you compose that sum with some other function, like the sigmoid squishification or a ReLU, the way that I walked through last video. In total, given the somewhat arbitrary choice of two hidden layers here, with 16 neurons each, the network has about 13,000 weights and biases that we can adjust, and it's these values that determine what exactly the network actually does. And what we mean when we say that this network classifies a given digit is that the brightest of those 10 neurons in the final layer corresponds to that digit. And remember, the motivation that we had in mind here for the layered structure was that maybe the second layer could pick up on the edges, the third layer might pick up on patterns like loops and lines, and the last one could just piece together those patterns to recognize digits. So here, we learn how the network learns. What we want is an algorithm where you can show this network a whole bunch of training data, which comes in the form of a bunch of different images of handwritten digits, along with labels for what they're supposed to be, and it'll adjust those 13,000 weights and biases so as to improve its performance on the training data. Hopefully, this layered structure will mean that what it learns generalizes to images beyond that training data. And the way we test that is that after you train the network, you show it more labeled data that it's never seen before, and you see how accurately it classifies those new images. Fortunately for us, and what makes this such a common example to start with, is that the good people behind the MNIST database have put together a collection of tens of thousands of handwritten digit images, each one labeled with the number it's supposed to be. And as provocative as it is to describe a machine as learning, once you actually see how it works, it feels a lot less like some crazy sci-fi premise, and a lot more like a calculus exercise. I mean, basically it comes down to finding the minimum of a certain function. Remember, conceptually we're thinking of each neuron as being connected to all of the neurons in the previous layer, and the weights in the weighted sum defining its activation are kind of like the strengths of those connections, while the bias is some indication of whether that neuron tends to be active or inactive. And to start things off, we're just going to initialize all of those weights and biases totally randomly.
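For anyone who wants to see those weighted sums spelled out, here's a minimal sketch of one hidden layer's activations, with weights and biases initialized randomly just as described; the shapes match the 784-to-16 first layer, but none of this is the video's actual implementation.

```python
import numpy as np

# One layer's activations: sigmoid(W @ previous_activations + b).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(seed=0)
pixels = rng.random(784)               # 28x28 grayscale values in [0, 1]
W = rng.standard_normal((16, 784))     # one weight per connection, random to start
b = rng.standard_normal(16)            # one bias per neuron, random to start

hidden = sigmoid(W @ pixels + b)
print(hidden.shape, np.round(hidden[:4], 3))
```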
Needless to say, this network is going to perform pretty horribly on a given training example, since it's just doing something random. For example, you feed in this image of a 3, and the output layer just looks like a mess. So what you do is define a cost function, a way of telling the computer: no, bad computer, that output should have activations which are zero for most neurons, but one for this neuron; what you gave me is utter trash. To say that a little more mathematically, what you do is add up the squares of the differences between each of those trash output activations and the value that you want them to have, and this is what we'll call the cost of a single training example. Notice, this sum is small when the network confidently classifies the image correctly, but it's large when the network seems like it doesn't really know what it's doing. So then what you do is consider the average cost over all of the tens of thousands of training examples at your disposal. This average cost is our measure for how lousy the network is, and how bad the computer should feel. And that's a complicated thing. Remember how the network itself was basically a function, one that takes in 784 numbers as inputs, the pixel values, and spits out 10 numbers as its output, and in a sense is parameterized by all these weights and biases? Well, the cost function is a layer of complexity on top of that. It takes as its input those 13,000 or so weights and biases, and it spits out a single number describing how bad those weights and biases are. And the way it's defined depends on the network's behavior over all the tens of thousands of pieces of training data. That's a lot to think about. But just telling the computer what a crappy job it's doing isn't very helpful. You want to tell it how to change those weights and biases so that it gets better. To make it easier, rather than struggling to imagine a function with 13,000 inputs, just imagine a simple function that has one number as an input and one number as an output. How do you find an input that minimizes the value of this function? Calculus students will know that you can sometimes figure out that minimum explicitly. But that's not always feasible for really complicated functions, and certainly not in the 13,000-input version of this situation for our crazy complicated neural network cost function. A more flexible tactic is to start at any old input, and figure out which direction you should step to make that output lower. Specifically, if you can figure out the slope of the function where you are, then shift the input to the left if that slope is positive, and shift the input to the right if that slope is negative. If you do this repeatedly, at each point checking the new slope and taking the appropriate step, you're going to approach some local minimum of the function. And the image you might have in mind here is a ball rolling down a hill. Notice, even for this really simplified single-input function, there are many possible valleys that you might land in, depending on which random input you start at, and there's no guarantee that the local minimum you land in is going to be the smallest possible value of the cost function. That's going to carry over to our neural network case as well. I also want you to notice how, if you make your step sizes proportional to the slope, then when the slope is flattening out towards the minimum, your steps get smaller and smaller, and that kind of helps keep you from overshooting.
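Both ideas in that passage, the per-example cost and the follow-the-slope loop, fit in a few lines. The output values and the function f below are arbitrary stand-ins for illustration, not anything from the actual network:

```python
import numpy as np

# Cost of one training example: squared differences between the 10
# trash outputs and the one-hot pattern we wanted (here, a "3").
output = np.array([0.2, 0.1, 0.7, 0.9, 0.3, 0.1, 0.0, 0.2, 0.4, 0.1])
wanted = np.zeros(10)
wanted[3] = 1.0
print(np.sum((output - wanted)**2))

# One-input gradient descent: step against the slope, with step size
# proportional to the slope, so steps shrink as a minimum flattens out.
f  = lambda x: x**4 - 3*x**2 + x       # a bumpy stand-in with two valleys
df = lambda x: 4*x**3 - 6*x + 1        # its slope
x = 2.0                                 # some arbitrary starting input
for _ in range(100):
    x -= 0.01 * df(x)                   # left when slope > 0, right when < 0
print(x, f(x))                          # settles into one local minimum
```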
Bumping up the complexity a bit, imagine instead a function with two inputs and one output. You might think of the input space as the xy plane, and the cost function as being graphed as a surface above it. Now instead of asking about the slope of the function, you have to ask which direction you should step in this input space so as to decrease the output of the function most quickly. In other words, what's the downhill direction? And again, it's helpful to think of a ball rolling down that hill. Those of you familiar with multivariable calculus will know that the gradient of a function gives you the direction of steepest ascent, basically which direction you should step to increase the function most quickly. Naturally enough, taking the negative of that gradient gives you the direction to step that decreases the function most quickly. And even more than that, the length of this gradient vector is actually an indication for just how steep that steepest slope is. Now if you're unfamiliar with multivariable calculus and you want to learn more, check out some of the work that I did for Khan Academy on the topic. Honestly though, all that matters for you and me right now is that in principle there exists a way to compute this vector, this vector that tells you what the downhill direction is and how steep it is. You'll be okay if that's all you know and you're not rock solid on the details. Because if you can get that, the algorithm for minimizing the function is to compute this gradient direction, then take a small step downhill, and just repeat that over and over. It's the same basic idea for a function that has 13,000 inputs instead of two inputs. Imagine organizing all 13,000 weights and biases of our network into a giant column vector. The negative gradient of the cost function is just a vector. It's some direction inside this insanely huge input space that tells you which nudges to all of those numbers are going to cause the most rapid decrease to the cost function. And of course, with our specially designed cost function, changing the weights and biases to decrease it means making the output of the network on each piece of training data look less like a random array of 10 values, and more like an actual decision that we want it to make. It's important to remember, this cost function involves an average over all of the training data, so if you minimize it, it means a better performance on all of those samples. The algorithm for computing this gradient efficiently, which is effectively the heart of how a neural network learns, is called backpropagation, and it's what I'm going to be talking about next video. There, I really want to take the time to walk through what exactly happens to each weight and each bias for a given piece of training data, trying to give an intuitive feel for what's happening beyond the pile of relevant calculus and formulas. Right here, right now, the main thing I want you to know, independent of implementation details, is that what we mean when we talk about a network learning is that it's just minimizing a cost function. And notice, one consequence of that is that it's important for this cost function to have a nice smooth output, so that we can find a local minimum by taking little steps downhill. This is why, by the way, artificial neurons have continuously ranging activations, rather than simply being active or inactive in a binary way, the way that biological neurons are.
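And here is the same update rule over a whole parameter vector. The finite-difference gradient below is a stand-in for backpropagation, which the next video covers; it is hopelessly slow at 13,000 parameters, but the descent loop itself is exactly the "step downhill, repeat" idea described above. All names and values here are illustrative assumptions.

```python
import numpy as np

def numerical_gradient(cost, params, h=1e-6):
    # Finite-difference stand-in for backpropagation: bump each parameter, measure the cost
    grad = np.zeros_like(params)
    for i in range(len(params)):
        bumped = params.copy()
        bumped[i] += h
        grad[i] = (cost(bumped) - cost(params)) / h
    return grad

def gradient_descent(cost, params, step=0.1, iterations=200):
    for _ in range(iterations):
        # Nudge every parameter a small step in the downhill (negative gradient) direction
        params = params - step * numerical_gradient(cost, params)
    return params

# Toy stand-in for the cost surface, with its minimum at (1, -2)
cost = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2
print(gradient_descent(cost, np.array([5.0, 5.0])))  # approaches [1.0, -2.0]
```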
This process of repeatedly nudging an input of a function by some multiple of the negative gradient is called gradient descent. It's a way to converge toward some local minimum of a cost function, basically a valley in this graph. I'm still showing the picture of a function with two inputs, of course, because nudges in a 13,000-dimensional input space are a little hard to wrap your mind around, but there is actually a nice non-spatial way to think about this. Each component of the negative gradient tells us two things. The sign, of course, tells us whether the corresponding component of the input vector should be nudged up or down. But importantly, the relative magnitudes of all these components kind of tell you which changes matter more. You see, in our network, an adjustment to one of the weights might have a much greater impact on the cost function than the adjustment to some other weight. Some of these connections just matter more for our training data. So a way that you can think about this gradient vector of our mind-warpingly massive cost function is that it encodes the relative importance of each weight and bias, that is, which of these changes is going to carry the most bang for your buck. This really is just another way of thinking about direction. To take a simpler example, suppose you have some function with two variables as an input, and you compute that its gradient at some particular point comes out as (3, 1). On the one hand, you can interpret that as saying that when you're standing at that input, moving along this direction increases the function most quickly, that when you graph the function above the plane of input points, that vector is what's giving you the straight uphill direction. But another way to read that is to say that changes to this first variable have three times the importance as changes to the second variable, that at least in the neighborhood of the relevant input, nudging the x-value carries a lot more bang for your buck. Alright, let's zoom out and sum up where we are so far. The network itself is this function with 784 inputs and 10 outputs, defined in terms of all of these weighted sums. The cost function is a layer of complexity on top of that. It takes the 13,000 weights and biases as inputs, and spits out a single measure of lousiness based on the training examples. And the gradient of the cost function is one more layer of complexity still. It tells us what nudges to all of these weights and biases cause the fastest change to the value of the cost function, which you might interpret as saying which changes to which weights matter the most. So when you initialize the network with random weights and biases and adjust them many times based on this gradient descent process, how well does it actually perform on images that it's never seen before? Well, the one that I've described here, with the two hidden layers of 16 neurons each, chosen mostly for aesthetic reasons, well, it's not bad. It classifies about 96% of the new images that it sees correctly. And honestly, if you look at some of the examples that it messes up on, you kind of feel compelled to cut it a little slack. Now if you play around with the hidden layer structure and make a couple tweaks, you can get this up to 98%. And that's pretty good. It's not the best.
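A tiny numerical illustration of that (3, 1) reading, using a made-up function whose gradient is exactly (3, 1) everywhere:

```python
f = lambda x, y: 3 * x + y  # made-up function with gradient (3, 1) at every point

h = 1e-6
df_dx = (f(h, 0) - f(0, 0)) / h  # ~3.0
df_dy = (f(0, h) - f(0, 0)) / h  # ~1.0
# The relative magnitudes say: nudging x matters three times as much as nudging y
print(df_dx, df_dy)
```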
You can certainly get better performance by getting more sophisticated than this plain vanilla network, but given how daunting the initial task is, I just think there's something incredible about any network doing this well on images that it's never seen before, given that we never specifically told it what patterns to look for. Originally, the way that I motivated this structure was by describing a hope that we might have: that the second layer might pick up on little edges, that the third layer would piece together those edges to recognize loops and longer lines, and that those might be pieced together to recognize digits. So is this what our network is actually doing? Well, for this one at least, not at all. Remember how last video we looked at how the weights of the connections from all of the neurons in the first layer to a given neuron in the second layer can be visualized as a given pixel pattern that that second-layer neuron is picking up on? Well, when we actually do that for the weights associated with these transitions from the first layer to the next, instead of picking up on isolated little edges here and there, they look almost random, just with some very loose patterns in the middle there. It would seem that in the unfathomably large 13,000-dimensional space of possible weights and biases, our network found itself a happy little local minimum that, despite successfully classifying most images, doesn't exactly pick up on the patterns that we might have hoped for. And to really drive this point home, watch what happens when you input a random image. If the system was smart, you might expect it to either feel uncertain, maybe not really activating any of those 10 output neurons, or activating them all evenly. But instead, it confidently gives you some nonsense answer, as if it feels as sure that this random noise is a 5 as it does that an actual image of a 5 is a 5. Phrased differently, even if this network can recognize digits pretty well, it has no idea how to draw them. A lot of this is because it's such a tightly constrained training setup. I mean, put yourself in the network's shoes here. From its point of view, the entire universe consists of nothing but clearly defined, unmoving digits centered in a tiny grid, and its cost function just never gave it any incentive to be anything but utterly confident in its decisions. So with this as the image of what those second-layer neurons are really doing, you might wonder why I would introduce this network with the motivation of picking up on edges and patterns when that's just not at all what it ends up doing. Well, this is not meant to be our end goal, but instead a starting point. Frankly, this is old technology, the kind researched in the 80s and 90s. And you do need to understand it before you can understand more detailed modern variants, and it clearly is capable of solving some interesting problems. But the more you dig into what those hidden layers are really doing, the less intelligent it seems. Shifting the focus for a moment from how networks learn to how you learn, that'll only happen if you engage actively with the material here somehow. One pretty simple thing that I want you to do is just pause right now and think deeply for a moment about what changes you might make to this system, and how it perceives images, if you wanted it to better pick up on things like edges and patterns. But better than that, to actually engage with the material, I highly recommend the book by Michael Nielsen on deep learning and neural networks.
In it, you can find the code and the data to download and play with for this exact example, and the book will walk you through step by step what that code is doing. What's awesome is that this book is free and publicly available. So if you do get something out of it, consider joining me in making a donation towards Nielsen's efforts. I've also linked a couple other resources that I like a lot in the description, including the phenomenal and beautiful blog posts by Chris Olah, and the articles in Distill. To close things off here, for the last few minutes I want to jump back into a snippet of the interview that I had with Lisha Li. You might remember her from the last video; she did her PhD work in deep learning. And in this little snippet, she talks about two recent papers that really dig into how some of the more modern image recognition networks are actually learning. Just to set up where we were in the conversation, the first paper took one of these particularly deep neural networks that's really good at image recognition, and instead of training it on a properly labeled data set, it shuffled all of the labels around before training. Obviously, the testing accuracy here was going to be no better than random, since everything's just randomly labeled. But it was still able to achieve the same training accuracy as you would on a properly labeled data set. Basically, the millions of weights for this particular network were enough for it to just memorize the random data, which kind of raises the question of whether minimizing this cost function actually corresponds to any sort of structure in the image, or is it just memorizing the entire data set of what the correct classification is. And so, half a year later at ICML this year, there was a, not exactly a rebuttal, paper that addressed some aspects of, like, hey, actually these networks are doing something a little bit smarter than that. If you look at that accuracy curve, if you were just training on a random data set, that curve went down very slowly, in almost a linear fashion, so you're really struggling to find the possible right weights that would get you that accuracy. Whereas if you're actually training on a structured data set, one that has the right labels, you fiddle around a little bit in the beginning, but then you kind of drop very fast to get to that accuracy level, and so in some sense it was easier to find that good local minimum. And what was also interesting about that is it brings into light another paper from actually a couple of years ago, which has a lot more simplifications about the network layers, but one of the results was saying how, if you look at the optimization landscape, the local minima that these networks tend to learn are actually of equal quality, so in some sense, if your data set is structured, you should be able to find that much more easily. My thanks, as always, to those of you supporting on Patreon. I've said before just what a game changer Patreon is, but these videos really would not be possible without you. I also want to give a special thanks to the VC firm Amplify Partners and their support of these initial videos in the series. They focus on very early stage machine learning and AI companies, and I feel pretty confident in the probability that some of you watching this, and even more likely some of the people that you know, are right now in the early stages of getting such a company off the ground.
And the Amplify folks would love to hear from any such founders, and they even set up an email address just for this video that you can reach out to them through: 3blue1brown@amplifypartners.com
The Brachistochrone, with Steven Strogatz
For this video, I'm doing something a little different. I got the chance to sit down with Steven Strogatz and record a conversation. For those of you who don't know, Steve is a mathematician at Cornell. He's the author of several popular math books and a frequent contributor to, among other things, Radiolab and The New York Times. To put it shortly, he's one of the great math communicators of our time. In our conversation, we talked about a lot of things, but it was all centering around this one very famous problem in the history of math, the brachistochrone. And for the first two thirds or so of the video, I'm just going to play some of that conversation. We lay out the problem, talk about some of its history, and go through this solution by Johann Bernoulli from the 17th century. After that, I'm going to show this proof that Steve showed me. It's by a modern mathematician, Mark Levi, and it gives a certain geometric insight to Johann Bernoulli's original solution. And at the very end, I have a little challenge for you. We should probably start off by just defining the problem itself. Okay. You want me to take a crack at that? Yeah, go for it. Okay. Yeah, so it's this complicated word, first of all, brachistochrone, that comes from two... gee, I have to check, are those Latin or Greek words, I think? I'm pretty sure they're Greek. Okay, so Greek words for the shortest time. And it refers to a question that was posed by one of the Bernoulli brothers, by Johann Bernoulli. If you imagine, like, a chute, and there's a particle moving down a chute, being pulled by gravity, what's the path of the chute that connects two points so that it goes from point A to point B in the shortest amount of time? I think what I like most about this problem is that it's relatively easy to describe qualitatively what you're going for. You know, you want the path to be short, something like a straight line, but you want the object to get going fast, which requires starting steeply, and that adds length to your line. But making this quantitative and actually finding the balance with a specific curve, it's not at all obvious, and makes for a really interesting problem. It is. It's a really interesting thing. I mean, most people, when they first hear it, assume that the shortest path will give the shortest time, that the straight line is the best. But as you say, it can help to build up some steam by rolling straight down at first, or not necessarily rolling. I mean, you could picture it sliding. That doesn't really matter how we phrase it. So Galileo had thought about this himself much earlier than Johann Bernoulli, in 1638, and Galileo thought that an arc of a circle would be the best thing. So he had the idea that a bit of curvature might help. And it turns out that the arc of the circle is not the right answer. It's good, but there are better solutions. And the history of real solutions starts with Johann Bernoulli posing this as a challenge. So that's then in June of 1696. And he posed it as a challenge, really, to the mathematical world at that time. For him, that meant the mathematicians of Europe. And in particular, he was very concerned to show off that he was smarter than his brother. He had a brother, Jacob, and the two of them were quite bitter rivals, actually, both tremendous mathematicians. But Johann Bernoulli fancied himself the greatest mathematician of his era, not just better than his brother. But I think he thought that he might be better than Leibniz, who was alive at the time.
And Isaac Newton, who was by then sort of an old man, I mean, more or less retired from doing math. Newton was the warden of the mint, which would be something like the secretary of the treasury nowadays. And Newton shows him up, right? He stays up all night and solves it, even though it took Johann Bernoulli two weeks to solve. That's the great story. That Newton was shown the problem, wasn't really pleased to be challenged, especially by somebody that he considered beneath him. I mean, he considered pretty much everybody beneath him. But yeah, Newton stayed up all night, solved it, and then sent it in anonymously to the Philosophical Transactions, the journal at the time. And it was published anonymously. And so Newton complained in a letter to a friend of his. He said, I do not love to be dunned and teased by foreigners about mathematical things. So he didn't enjoy this challenge, but he did solve it. The famous legend is that Johann Bernoulli, on seeing this anonymous solution, said, I recognize the lion by his claw. I don't know if that's true, but it's a great story. Everyone loves to tell that story. And I suspect part of the reason that Johann was so eager to challenge other mathematicians like Newton is he secretly knew that his own solution was unusually clever. Maybe we should start going into what he does. Yes, he imagines that to solve the problem, you let light take care of it for you. Because Fermat, in the early 1600s, had shown that you could state the way that light travels, whether bouncing off of a mirror, or refracting from air into water where it bends, or going through a lens, all of the motion of light could be understood by saying that light takes whatever path gets it from point A to point B in the shortest time. Which is a really awesome perspective when you think about it, because usually you think very locally in terms of what happens to a particle at each specific point. This steps back and looks at all possible paths and says nature chooses the best one. Yes, it is. It's a beautiful and, as you say, really an awe-inspiring mental shift. For some people, literally awe-inspiring in the sense that it had religious overtones, that somehow nature is imbued with this property of doing the most efficient thing. But leaving that aside, you know, you could just say it's an empirical fact that that is how light behaves. And so Johann Bernoulli's idea was to then use Fermat's principle of least time and say, let's pretend that instead of a particle sliding down a chute, it's light traveling through media of different indices of refraction, meaning that the light would go at different speeds as it successively went sort of down the chute. And I think before we dive into that case, we should look at something simpler. So at this point in the conversation, we talked for a while about Snell's Law. This is a result in physics that describes how light bends when it goes from one material into another where its speed changes. I made a separate video out of this talking about how you can prove it using Fermat's principle, together with a very neat argument using imaginary constant-tension springs. But for now, all you need to know is the statement of Snell's Law itself. When a beam of light passes from one medium into another, and you consider the angle that it makes with a line perpendicular to the boundary between those two materials, the sine of that angle divided by the speed of light in the medium stays constant as you move from one medium to the next.
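In symbols (my notation, matching the statement just given), with theta measured from the perpendicular and v the speed of light in each medium:

```latex
\frac{\sin(\theta_1)}{v_1} \;=\; \frac{\sin(\theta_2)}{v_2}
```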
So what Johann Bernoulli does is find a neat way to take advantage of that fact, this sine of theta over v stays constant, for the brachistochrone problem. When he thinks about what's happening with the particle sliding down the chute, he notices that, by conservation of energy, the velocity that the particle has will be proportional to the square root of the distance from the top. And just to spell that out a little bit more, the loss in potential energy is its mass times the gravitational constant times y, that distance from the top. And when you set that equal to the kinetic energy, one half times m v squared, and you rearrange, the velocity v will indeed end up being proportional to the square root of y. Yes. So that then gives him the idea about, let's imagine glass of many different layers, each with a different velocity characteristic for the light in it. The velocity in the first one is v1, and the next one is v2, and the next one is v3, and these are all going to be proportional to the square root of y1, or y2, or y3. In principle, you should be thinking about a limiting process, where you have infinitely many, infinitely thin layers, and this is kind of a continuous change for the speed of light. And so then his question is, if light is always instantaneously obeying Snell's law as it goes from one medium to the next, so that v over sine of theta is always a constant as I move from one layer to the next, what is the path such that the tangent lines are always instantaneously obeying Snell's law? And for the record, we should probably just state exactly what that property is. Okay. So the conclusion that Johann made was that if you look at whatever the time-minimizing curve is, and you take any point on that curve, the sine of the angle between the tangent line at that point and the vertical, divided by the square root of the vertical distance between that point and the start of the curve, that's going to be some constant independent of the point that you chose. And when Johann Bernoulli first saw this, correct me if I'm wrong, he just recognized it as the differential equation for a cycloid, the shape traced by the point on the rim of a rolling wheel. But it's not obvious, certainly not obvious to me, why this sine of theta over square root of y property has anything to do with rolling wheels. It's not at all obvious, but this is again the genius of Mark Levi to the rescue. Do you want to say a few words about Mark Levi? Yeah, well, Mark Levi is a very clever, as well as a very nice guy, who's a friend of mine and a terrific mathematician at Penn State, who has written a book called The Mathematical Mechanic, in which he uses principles of mechanics, and more generally physics, to solve all kinds of math problems. That is, rather than math in the service of science, it's science in the service of math. And as an example of the kinds of clever things that he does, he recently published a little note, very short, showing that if you look at the geometry of a cycloid, just drawing the correct lines in the right places, this principle of velocity over sine of theta being constant is built into the motion of the cycloid itself. So in that conversation, we never actually talked about the details of the proof itself. It's kind of a hard thing to do without visuals. But I think a lot of you out there enjoy seeing the math and not just talking about the math. It's also a really elegant little piece of geometry, so I'm going to go through it here.
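Spelled out, the conservation-of-energy step and the resulting property look like this:

```latex
mgy = \tfrac{1}{2}mv^2
\;\Longrightarrow\;
v = \sqrt{2gy}\,,
\qquad
\frac{\sin(\theta)}{v} = \text{const}
\;\Longrightarrow\;
\frac{\sin(\theta)}{\sqrt{y}} = \text{const}.
```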
Imagine a wheel rolling on the ceiling, and picture a point P on the rim of that wheel. Mark Levi's first insight was that the point where the wheel touches the ceiling, that I'll call C, acts as this instantaneous center of rotation for the trajectory of P. It's as if, for that moment, P is on the end of a pendulum whose base is at C. Since the tangent line of any circle is always perpendicular to the radius, the tangent line of the cycloid path of P is perpendicular to the line PC. This gives us a right angle inside of the circle, and any right triangle inscribed in a circle must have the diameter as its hypotenuse. So from that, you can conclude that the tangent line always intersects the bottom of the circle. Now let theta be the angle between this tangent line and the vertical. We get a pair of similar triangles, which I'll just show on the screen. You can see that the length of PC is the diameter times sine of theta. Using the second similar triangle, this length times sine of theta again gives the distance between P and the ceiling, the distance that we were calling y earlier. Rearranging this, we see that sine of theta divided by the square root of y is equal to 1 divided by the square root of the diameter. Since the diameter of a circle, of course, stays constant throughout the rotation, this implies that sine of theta divided by the square root of y is constant on a cycloid, and that's exactly the Snell's law property that we're looking for. So when you combine Johann Bernoulli's insight with this little geometry proof, that's the cleverest solution of the brachistochrone that I've ever seen. And I could call it done here, but given that the whole history of this problem started with a challenge that Johann Bernoulli posed, I want to finish things off with a little challenge of my own. When I was playing around with the equations of a cycloid, something interesting popped out. Consider an object sliding down the cycloid due to gravity, and think about where it is along the curve as a function of time. Now think about how the curve is defined, as this trajectory of the point on the rim of a rotating wheel. How might you tweak the rate at which the wheel rotates so that when the object starts sliding, the marked point on the rim of the wheel always stays fixed to that sliding object? Do you start rotating it slowly and increase its speed? If so, according to what function? It turns out, the wheel will rotate at a constant rate, which is surprising. This means that gravity pulls you along a cycloid in precisely the same way that a constantly rotating wheel would. The warm-up part of this challenge is to just confirm this for yourself. It's kind of fun to see how it falls out of the equations. But this got me thinking. If we look back at our original brachistochrone problem, asking about the path of fastest descent between two given points, maybe there's a slick way to reframe our thinking. How would it look if, instead of describing the trajectory of a sliding object in terms of its x and y coordinates, we described it in terms of the angle that the velocity vector makes as a function of time? I mean, you can imagine defining a curve by having an object start sliding, then turning a knob to determine the angle at which it's sliding at each point in time, always being pulled by gravity. If you describe the angle of the knob as a function of time, you are, in fact, uniquely describing a curve.
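The chain of lengths from the two similar triangles, with D the wheel's diameter, condenses to:

```latex
|PC| = D\sin(\theta),
\qquad
y = |PC|\sin(\theta) = D\sin^2(\theta)
\;\Longrightarrow\;
\frac{\sin(\theta)}{\sqrt{y}} = \frac{1}{\sqrt{D}}.
```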
You're basically using a differential equation, since what's given is the slope as a function of some other parameter, in this case, time. So what's interesting here is that when you look at the solution of the brachistochrone problem, not in the xy plane, but in the t-theta plane, where t is time and theta is the angle of the path, all of the brachistochrone solutions are straight lines. That is to say, theta increases at a constant rate with respect to t. When the solution of a curve minimization problem is a straight line, it's highly suggestive that there's some way to view it as a shortest-path problem. Here, it's not so straightforward, since the boundary condition, that your object starts at point A and ends at point B in xy-space, doesn't just look like going from one point to another in theta-t space. Nevertheless, my challenge to you is this. Can you find another solution to the brachistochrone problem by explaining why it must be the case that a time-minimizing trajectory, when represented in t-theta space, looks like a straight line?
e^(iπ) in 3.14 minutes, using dynamics | DE5
One way to think about the function e to the t is to ask what properties it has. Probably the most important one, and from some points of view the defining property, is that it is its own derivative. Together with the added condition that inputting 0 returns 1, it's actually the only function with this property. And you can illustrate what this means with a physical model. If e to the t describes your position on a number line as a function of time, then you start at the number 1. And what this equation is saying is that your velocity, the derivative of position, is always equal to that position. The farther away from 0 you are, the faster you move. So even before knowing how to compute e to the t exactly, going from a specific time to a specific position, this ability to associate each position with a velocity paints a very strong intuitive picture of how the function must grow. You know that you'll be accelerating, and at an accelerating rate, with an all-around feeling of things getting out of hand quickly. And if you put a constant up in that exponent, like e to the 2 times t, the chain rule tells us that the derivative is now 2 times itself. So at every point on the number line, rather than attaching a vector corresponding to the number itself, first double the magnitude of the position, then attach it. Moving so that your position is always e to the 2t is the same thing as moving in such a way that your velocity is always twice your position. The implication of that 2 is that our runaway growth feels all the more out of control. If that constant was negative, say negative 0.5, then your velocity vector is always negative 0.5 times your position vector, meaning you flip it around 180 degrees and scale its length by a half. Moving in such a way that your velocity always matches this flipped and squished copy of your position vector, you'd go the other direction, slowing down in an exponential decay towards 0. But what about if that constant was i, the square root of negative 1? If your position was always e to the i t, how would you move as the time t ticks forward? Well, now the derivative of your position will always be i times itself, and multiplying by i has the effect of rotating numbers 90 degrees. So, as you might expect, things only make sense here if we start thinking beyond the number line and in the complex plane. So even before you know how to compute e to the i times t, you know that for any position this might give for some value of time, the velocity at that time will be a 90-degree rotation of that position. Drawing this for all possible positions you might come across, you get a vector field, where, as usual with vector fields, you shrink things down to avoid clutter. At time t equals 0, e to the i t will be 1. That's our initial condition, and there's only one trajectory starting from that position where your velocity is always matching the vector that it's passing through, a 90-degree rotation of the position. It's when you go around a circle of radius 1 at a speed of 1 unit per second. So after pi seconds, you've traced a distance of pi around, so e to the i times pi should be negative 1. After tau seconds, you've gone full circle: e to the i times tau equals 1. And more generally, e to the i times t equals a number that's t radians around this unit circle in the complex plane. Nevertheless, something might still feel immoral about putting an imaginary number up in that exponent, and you would be right to question that.
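As a quick sanity check of this dynamics view, here is a crude Euler integration in Python of "velocity is always i times position": following it for pi seconds lands the point near -1, up to a small numerical drift. The step size is an arbitrary choice for the demo, not anything from the video.

```python
import numpy as np

z = 1 + 0j            # initial condition: e^(i*0) = 1
dt = 1e-5             # small time step (crude Euler integration, so expect slight drift)
for _ in range(int(np.pi / dt)):
    z += 1j * z * dt  # velocity is always i times position: a 90-degree rotation of it

print(z)              # very close to -1+0j, i.e. e^(i*pi) = -1
```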
What we write as e to the t is a bit of a notational disaster, giving the number e and the idea of repeated multiplication way more emphasis than they deserve. But my time is up, so I'll spare you the full rant until the next video.
Derivative formulas through geometry | Chapter 3, Essence of calculus
Now that we've seen what a derivative means, and what it has to do with rates of change, our next step is to learn how to actually compute these guys. As in, if I give you some kind of function with an explicit formula, you'd want to be able to find what the formula for its derivative is. Maybe it's obvious, but I think it's worth stating explicitly why this is an important thing to be able to do, why much of a calculus student's time ends up going towards grappling with derivatives of abstract functions rather than thinking about concrete rate of change problems. It's because a lot of real-world phenomena, the sort of things that we want to use calculus to analyze, are modeled using polynomials, trigonometric functions, exponentials, and other pure functions like that. So if you build up some fluency with the ideas of rates of change for those kinds of pure abstract functions, it gives you a language to more readily talk about the rates at which things change in concrete situations that you might be using calculus to model. But it is way too easy for this process to feel like just memorizing a list of rules. And if that happens, if you get that feeling, it's also easy to lose sight of the fact that derivatives are fundamentally about just looking at tiny changes to some quantity, and how that relates to a resulting tiny change in another quantity. So in this video and in the next one, my aim is to show you how you can think about a few of these rules intuitively and geometrically, and I really want to encourage you to never forget that tiny nudges are at the heart of derivatives. Let's start with a simple function, like f of x equals x squared. What if I asked you what its derivative is? That is, if you were to look at some value x, like x equals 2, and compare it to a value slightly bigger, just dx bigger, what's the corresponding change in the value of the function, df? And in particular, what's df divided by dx, the rate at which this function is changing per unit change in x? As a first step for intuition, we know that you can think of this ratio df/dx as the slope of a tangent line to the graph of x squared, and from that, you can see that the slope generally increases as x increases. At x equals 0, the tangent line is flat, and the slope is 0. At x equals 1, it's something a bit steeper. At x equals 2, it's steeper still. But looking at graphs isn't generally the best way to understand the precise formula for a derivative. For that, it's best to take a more literal look at what x squared actually means. And in this case, let's go ahead and picture a square whose side length is x. If you increase x by some tiny nudge, some little dx, what's the resulting change in the area of that square? That slight change in area is what df means in this context. It's the tiny increase to the value of f of x equals x squared, caused by increasing x by that tiny nudge dx. Now you can see that there are three new bits of area in this diagram: two thin rectangles and a minuscule square. The two thin rectangles each have side lengths of x and dx, so they account for 2 times x times dx units of new area. For example, let's say x was 3 and dx was 0.01. Then that new area from these two thin rectangles would be 2 times 3 times 0.01, which is 0.06, about 6 times the size of dx. That little square there has an area of dx squared, but you should think of that as being really tiny, negligibly tiny. For example, if dx was 0.01, that would be only 0.0001.
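Checking that bookkeeping numerically at the x = 3, dx = 0.01 values used above (the exact printed decimals may pick up tiny floating-point noise):

```python
x, dx = 3.0, 0.01

df = (x + dx) ** 2 - x ** 2  # actual change in the square's area
print(df)                    # ~0.0601
print(2 * x * dx)            # 0.06   <- the two thin rectangles
print(dx ** 2)               # 0.0001 <- the negligibly tiny corner square
print(df / dx)               # ~6.01, approaching the derivative 2x = 6 as dx shrinks
```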
And keep in mind, I'm drawing dx with a fair bit of width here just so we can actually see it, but always remember, in principle, dx should be thought of as a truly tiny amount. And for those truly tiny amounts, a good rule of thumb is that you can ignore anything that includes a dx raised to a power greater than 1. That is, a tiny change squared is a negligible change. What this leaves us with is that df is just some multiple of dx. And that multiple, 2x, which you could also write as df divided by dx, is the derivative of x squared. For example, if you were starting at x equals 3, then as you slightly increase x, the rate of change in the area per unit change in length added, df over dx, would be 2 times 3, or 6. And if instead you were starting at x equals 5, then the rate of change would be 10 units of area per unit change in x. Let's go ahead and try a different simple function, f of x equals x cubed. This is going to be the geometric view of the stuff that I went through algebraically in the last video. What's nice here is that we can think of x cubed as the volume of an actual cube whose side lengths are x. And when you increase x by a tiny nudge, a tiny dx, the resulting increase in volume is what I have here in yellow. That represents all the volume in a cube with side lengths x plus dx that's not already in the original cube, the one with side lengths x. It's nice to think of this new volume as broken up into multiple components, but almost all of it comes from these three square faces. Or, said a little more precisely, as dx approaches zero, those three squares comprise a portion closer and closer to 100% of that new yellow volume. Each of those thin squares has a volume of x squared times dx, the area of the face times that little thickness dx. So in total, this gives us 3 x squared dx of volume change. And to be sure, there are other slivers of volume here along the edges, and that tiny one in the corner. But all of that volume is going to be proportional to dx squared or dx cubed, so we can safely ignore them. Again, this is ultimately because they're going to be divided by dx, and if there's still any dx remaining, then those terms aren't going to survive the process of letting dx approach zero. What this means is that the derivative of x cubed, the rate at which x cubed changes per unit change of x, is 3 times x squared. What that means in terms of graphical intuition is that the slope of the graph of x cubed at every single point x is exactly 3 x squared. And reasoning about that slope, it should make sense that this derivative is high on the left, then zero at the origin, and then high again as you move to the right. But just thinking in terms of the graph would never have landed us on the precise quantity 3 x squared. For that, we had to take a much more direct look at what x cubed actually means. Now in practice, you wouldn't necessarily think of the square every time you're taking the derivative of x squared, nor would you necessarily think of this cube whenever you're taking the derivative of x cubed. Both of them fall under a pretty recognizable pattern for polynomial terms. The derivative of x to the fourth turns out to be 4 x cubed. The derivative of x to the fifth is 5 x to the fourth, and so on. Abstractly, you'd write this as: the derivative of x to the n, for any power n, is n times x to the n minus 1. This right here is what's known in the business as the power rule.
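Stated compactly, the pattern the next passage justifies is:

```latex
(x + dx)^n \;=\; x^n + n\,x^{n-1}\,dx + (\text{terms with } dx^2, dx^3, \ldots)
\;\Longrightarrow\;
\frac{d(x^n)}{dx} \;=\; n\,x^{n-1}.
```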
In practice, we all quickly just get jaded and think about this symbolically, as the exponent hopping down in front, leaving behind one less than itself, rarely pausing to think about the geometric delights that underlie these derivatives. That's the kind of thing that happens when these tend to fall in the middle of much longer computations. But rather than chalking it all up to symbolic patterns, let's just take a moment and think about why this works for powers beyond just two and three. When you nudge that input x, increasing it slightly to x plus dx, working out the exact value of that nudged output would involve multiplying together these n separate x plus dx terms. The full expansion would be really complicated, but part of the point of derivatives is that most of that complication can be ignored. The first term in your expansion is x to the n. This is analogous to the area of the original square, or the volume of the original cube, from our previous examples. For the next terms in the expansion, you can choose mostly x's with a single dx. Since there are n different parentheticals from which you could have chosen that single dx, this gives us n separate terms, all of which include n minus 1 x's times a dx, giving a value of x to the power n minus 1 times dx. This is analogous to how the majority of the new area in the square came from those two bars, each with area x times dx, or how the bulk of the new volume in the cube came from those three thin squares, each of which had a volume of x squared times dx. There will be many other terms in this expansion, but all of them are just going to be some multiple of dx squared, so we can safely ignore them. And what that means is that all but a negligible portion of the increase in the output comes from n copies of this x to the n minus 1 times dx. That's what it means for the derivative of x to the n to be n times x to the n minus 1. And even though, like I said, in practice you'll find yourself performing this derivative quickly and symbolically, imagining the exponent hopping down to the front, every now and then it's nice to just step back and remember why these rules work. Not just because it's pretty, and not just because it helps remind us that math actually makes sense and isn't just a pile of formulas to memorize, but because it flexes that very important muscle of thinking about derivatives in terms of tiny nudges. As another example, think of the function f of x equals 1 divided by x. Now on the one hand, you could just blindly try applying the power rule, since 1 divided by x is the same as writing x to the negative 1. That would involve letting the negative 1 hop down in front, leaving behind one less than itself, which is negative 2. But let's have some fun and see if we can reason about this geometrically, rather than just plugging it through some formula. The value 1 over x is asking, what number multiplied by x equals 1? So here's how I'd like to visualize it: a little rectangular puddle of water, sitting in two dimensions, whose area is 1. And let's say that its width is x, which means that the height has to be 1 over x, since the total area of it is 1. So if x was stretched out to 2, then that height is forced down to one half. And if you increased x up to 3, then the other side has to be squished down to one third. This is a nice way to think about the graph of 1 over x, by the way.
If you think of this width x of the puddle as being in the xy plane, then that corresponding output, 1 divided by x, the height of the graph above that point, is whatever the height of your puddle has to be to maintain an area of 1. So with this visual in mind, for the derivative, imagine nudging up that value of x by some tiny amount, some tiny dx. How must the height of this rectangle change so that the area of the puddle remains constant at 1? Increasing the width by dx adds some new area to the right here. So the puddle has to decrease in height by some d(1/x), so that the area lost off of that top cancels out the area gained. You should think of that d(1/x) as being a negative amount, by the way, since it's decreasing the height of the rectangle. And you know what? I'm going to leave the last few steps here for you, for you to pause and ponder and work out an ultimate expression. And once you reason out what d(1/x) divided by dx should be, I want you to compare it to what you would have gotten if you had just blindly applied the power rule, purely symbolically, to x to the negative 1. And while I'm encouraging you to pause and ponder, here's another fun challenge if you're feeling up to it. See if you can reason through what the derivative of the square root of x should be. To finish things off, I want to tackle one more type of function: trigonometric functions. And in particular, let's focus on the sine function. So for this section, I'm going to assume that you're already familiar with how to think about trig functions using the unit circle, the circle with radius 1 centered at the origin. For a given value of theta, like, say, 0.8, you imagine yourself walking around the circle, starting from the rightmost point, until you've traversed a distance of 0.8 in arc length. This is the same thing as saying that the angle right here is exactly theta radians, since the circle has a radius of 1. Then what sine of theta means is the height of that point above the x-axis. And as your theta value increases, and you walk around the circle, your height bobs up and down between negative 1 and 1. So when you graph sine of theta versus theta, you get this wave pattern, the quintessential wave pattern. And just from looking at this graph, we can start to get a feel for the shape of the derivative of the sine. The slope at 0 is something positive, since sine of theta is increasing there. And as we move to the right and sine of theta approaches its peak, that slope goes down to 0. Then the slope is negative for a little while, while the sine is decreasing, before coming back up to 0 as the sine graph levels out. And as you continue thinking this through and drawing it out, if you're familiar with the graph of trig functions, you might guess that this derivative graph should be exactly cosine of theta, since all the peaks and valleys line up perfectly with where the peaks and valleys for the cosine function should be. And spoiler alert, the derivative is in fact the cosine of theta. But aren't you a little curious about why it's precisely cosine of theta? I mean, you could have all sorts of functions with peaks and valleys at the same points that have roughly the same shape, but who knows, maybe the derivative of sine could have turned out to be some entirely new type of function that just happens to have a similar shape.
Well, just like the previous examples, a more exact understanding of the derivative requires looking at what the function actually represents, rather than looking at the graph of the function. So think back to that walk around the unit circle, having traversed an arc with length theta, and thinking about sine of theta as the height of that point. Now, zoom into that point on the circle, and consider a slight nudge of d theta along the circumference, a tiny step in your walk around the unit circle. How much does that tiny step change the sine of theta? How much does this increase of d theta in arc length increase the height above the x-axis? Well, zoomed in close enough, the circle basically looks like a straight line in this neighborhood. So let's go ahead and think of this right triangle, where the hypotenuse of that right triangle represents the nudge d theta along the circumference, and that left side here represents the change in height, the resulting d sine of theta. Now this tiny triangle is actually similar to this larger triangle here, with the defining angle theta, and whose hypotenuse is the radius of the circle, with length 1. Specifically, this little angle right here is precisely equal to theta radians. Now think about what the derivative of sine is supposed to mean. It's the ratio between that d sine of theta, the tiny change to the height, divided by d theta, the tiny change to the input of the function. And from the picture, we can see that that's the ratio between the length of the side adjacent to the angle theta, divided by the hypotenuse. Well, let's see: adjacent divided by hypotenuse, that's exactly what the cosine of theta means. That's the definition of the cosine. So this gives us two different, really nice ways of thinking about how the derivative of sine is cosine. One of them is looking at the graph and getting a loose feel for the shape of things, based on thinking about the slope of the sine graph at every single point. And the other is a more precise line of reasoning, looking at the unit circle itself. For those of you that like to pause and ponder, see if you can try a similar line of reasoning to find what the derivative of the cosine of theta should be. In the next video, I'll talk about how you can take derivatives of functions that combine simple functions like these ones, either as sums, or products, or function compositions, things like that. And similar to this video, the goal is going to be to understand each one geometrically, in a way that makes it intuitively reasonable and somewhat more memorable.
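A quick numerical check of that conclusion, at the theta = 0.8 value used earlier:

```python
import numpy as np

theta, d_theta = 0.8, 1e-6                       # the walk-around-the-circle value from above
d_sin = np.sin(theta + d_theta) - np.sin(theta)  # tiny change in height
print(d_sin / d_theta)                           # ~0.6967
print(np.cos(theta))                             # 0.6967..., matching the geometric argument
```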
But why is a sphere's surface area four times its shadow?
Some of you may have seen in school that the surface area of a sphere is 4 pi r squared, a suspiciously suggestive formula, given that it's a clean multiple of the more popular pi r squared, the area of a circle with the same radius. But have you ever wondered why this is true? And I don't just mean proving the 4 pi r squared formula, I mean viscerally feeling, to your bones, a connection between this surface area and these four circles. How lovely would it be if there were some subtle shift in perspective that shows how you could nicely and perfectly fit these four circles onto the sphere's surface? Nothing can be quite that simple, since the curvature of a sphere's surface is different from the curvature of a flat plane, which is why trying to fit, say, a piece of paper around the sphere just doesn't work. Nevertheless, I would like to show you two separate ways of thinking about the surface area that connect it in a satisfying way to these circles. The first comes from a classic, one of the true gems of geometry that I think all math students should experience, the same way all English students should read at least some Shakespeare. The second line of reasoning is something of my own, which draws a more direct line between the sphere and its shadow. And lastly, I'll share why this fourfold relation is not unique to spheres, but is instead one specific instance of a much more general fact for all convex shapes in three dimensions. Starting with the bird's eye view here, the idea for the first approach is to show that the surface area of the sphere is the same as the area of a cylinder with the same radius and the same height as that sphere, or rather, a cylinder without the top and the bottom, what you might call the label of that cylinder. And once you have that, you can unwrap that label to understand it simply as a rectangle. The width of this rectangle comes from the cylinder's circumference, so it's 2 pi times r, and the height comes from the height of the sphere, which is 2 times the radius. And this already gives us the formula 4 pi r squared when we multiply the two. But in the spirit of mathematical playfulness, it's nice to see how four circles with radius r can actually fit into this picture. The idea will be to unwrap each circle into a triangle without changing its area, and then to fit those nicely into the unfolded cylinder label. More on that in a couple minutes. The more pressing question is, why on earth should the sphere be related to the cylinder in this way? The way I'm animating it is already suggestive of how this might work. The idea is to approximate the area of the sphere with many, many tiny rectangles covering it, and to show how, if you project these rectangles directly outward, as if casting a shadow by little lights positioned on the z-axis pointing parallel to the xy plane, the projection of each rectangle onto the cylinder, quite surprisingly, ends up having the same area as the original rectangle. But why should that be? Well, there are two competing effects at play here. For one of these rectangles, let's call the side along the latitude lines its width, and the side along the longitude lines its height. On the one hand, as the rectangle is projected outward, its width will get scaled up. For rectangles towards the poles, that length is scaled up quite a bit, since they're projected over a longer distance. But for those closer to the equator, the effect might be close to nothing.
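The unwrapped-label arithmetic, in one line:

```latex
\underbrace{2\pi r}_{\text{label width}} \times \underbrace{2r}_{\text{label height}} \;=\; 4\pi r^2.
```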
But on the other hand, because these rectangles are already slanted with respect to the z-direction, during this projection, the height of each such rectangle will get scaled down. Think about holding some flat object and looking at its shadow. As you reorient that object, the shadow looks more or less squished for some angles. Now take a look: those rectangles towards the poles are quite slanted this way, so their height is going to get squished down by a lot. For those closer to the equator, oriented somewhere closer to parallel to the z-axis, the effect is not as much. It will turn out that these two effects, of stretching the width and squishing the height, cancel each other out perfectly. Already, as a rough sketch, wouldn't you agree that this is a very pretty way of reasoning? Of course, the meat here comes from showing why these two competing effects cancel each other out. And in some ways, the details fleshing this out are just as pretty as the zoomed-out structure of the full argument. So let's dig in. Let me go ahead and cut away half of the sphere so that we can get a better look. For any mathematical problem solving, it never hurts to start by giving things names. So let's say that the radius of the sphere is r, and for one specific rectangle, let's call the distance between that rectangle and the z-axis d. You could rightfully complain that the distance d is a little ambiguous, depending on which point of that rectangle you're going from. But for tinier and tinier rectangles, that ambiguity will become negligible. And tinier and tinier is when this approximation with rectangles gets closer to the true surface area anyway. To choose an arbitrary standard, let's say that d is the distance from the bottom of the rectangle. Now to think about projecting this out to the cylinder, what we're going to do is picture two similar triangles. The first one shares its base with the base of the rectangle on the sphere, and has a tip at the same height but on the z-axis, a distance d away. The second triangle is a scaled-up version of this, scaled so that it just barely reaches the cylinder, meaning its long side now has a length r. So the ratio of their bases, which is how much our rectangle's width gets stretched out, is r divided by d. What about the height? How precisely does that get scaled down as we project? Again, let's slice a cross section for a cleaner view. And in fact, why don't we go ahead and completely focus our view to this two-dimensional cross section? To think about the projection, let's make a little right triangle like this, where what was the height of our spherical rectangle is the hypotenuse, and the projection is one of the legs. Pro tip: anytime you're doing geometry with circles or spheres, keep in the forefront of your mind that anything tangent to the circle is perpendicular to the radius drawn to that point of tangency. It's crazy just how helpful that one little fact can be for making progress. In our case, once we draw that radial line, together with the distance d, we have another right triangle. And often in geometry, I like to imagine tweaking the parameters of a setup and imagining how the relevant shapes change. This helps to make guesses about what the relations might be. In this case, you might predict that the two triangles I've drawn are similar to each other, since their shapes seem to change in concert with each other. This is indeed true, but as always, don't take my word for it. See if you can justify this for yourself.
Again, it never hurts to give more names to things. Maybe let's call this angle alpha, and this other one beta. Since this is a right triangle, we know that alpha plus beta plus 90 degrees must be 180 degrees. Now let's zoom in on our little triangle and see if we can figure out what its angles might be. Notice this 90-degree angle, which comes from the radius being perpendicular to the tangent, and how, when it's combined with this beta here and some other little angle, it forms a straight line. So that other little angle must be alpha. And this lets us fill out a few more values, which reveals that this little triangle also has angles alpha, beta, and 90 degrees. So it is indeed similar to the big one. Deep in the weeds like this, it's sometimes easy to forget why we're doing this. Remember, what we want to know is how much the height of the sphere rectangle gets squished down as we project it out. And that's the ratio of this little hypotenuse to the leg on the right side. By the similarity with the big triangle, the ratio of those two sides is again r divided by d. So indeed, as this rectangle gets projected outward, the effect of stretching out the width is perfectly canceled out by how much that height is getting squished, due to the slant. As a fun side note, you might notice that it looks like the projected rectangle is a 90-degree rotation of the original. This would not at all be true in general, but by a lovely coincidence, the way I'm parameterizing the sphere results in rectangles where the ratio of the width to the height starts out as d to r. So for this very specific case, rescaling the width by r over d, and rescaling the height by d over r, actually does have the effect of a 90-degree rotation. And this lends itself to a rather bizarre way to animate the relation, where instead of projecting each rectangular piece, as if casting a shadow, you can rotate each one of them 90 degrees, and then rearrange them all to make the cylinder. Now, if you're really thinking critically, you might still not be satisfied that this shows us what the surface area of the sphere is, because all of these little rectangles only approximate the relevant areas. Well, the idea is that this approximation gets closer and closer to the true value for finer and finer coverings, and since for any specific covering, the sphere rectangles have the same area as the cylinder rectangles, whatever value each of these two series of approximations is approaching must actually be the same. I mean, as you get really aggressively philosophical about what we even mean by surface area, these sorts of rectangular approximations are not just aids in our problem-solving toolbox. They end up serving as a way to rigorously define the idea of area in the context of smooth curved surfaces. This kind of reasoning is essentially calculus, just without any of the jargon. In fact, I think neat geometric arguments like this, which require no background in calculus to understand, can serve as a great way to tee things up for new calculus students, so that they have the core ideas already in their head before seeing the definitions which make them precise, rather than going the other way around. Alright, so as I said before, if you're the kind of person who's just itching to see a direct connection to these four circles, one nice way to do that is to unwrap those circles into triangles. If this is something you haven't seen before, I go into much more detail about why this works in the first video of the calculus series.
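So for a little rectangle of width w and height h (my labels, not from the video), the two effects cancel like this:

```latex
\text{projected area}
\;=\;
\Big(\frac{r}{d}\,w\Big)\Big(\frac{d}{r}\,h\Big)
\;=\;
w\,h.
```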
The basic idea is to relate thin concentric rings of the circle with horizontal slices of this triangle, because the circumference of each such ring increases linearly in proportion to the radius, always 2 pi times that radius. So when you unwrap them all and line them up like this, their ends will form a straight line, as opposed to some other curved shape, which gives us a triangle with base 2 pi r and height r. And four of these unwrapped circles fit perfectly into our rectangle, which is in some sense an unwrapped version of the sphere's surface. Now that's pretty satisfying, but you might nevertheless be wondering if there's some way to relate this sphere directly to a circle with the same radius, rather than going through this intermediary of a cylinder. I do have a proof for you to this effect, leveraging a little trigonometry, though I have to admit I still think the comparison to the cylinder wins out on raw elegance. Now I'm a big believer that the best way to really learn math is to do problems for yourself, which is a bit hypocritical coming from a channel essentially consisting of lectures. So I'm going to try something a little different here and present the proof as a heavily guided sequence of exercises. Yes, I know that's less fun and it means you have to pull out some paper to do a little work, but I guarantee you will get more out of it this way. At a high level, the approach will be to cut the sphere into many thin rings parallel to the xy plane, and to compare the area of these rings to the area of their shadows on the xy plane. All of the shadows of the rings from, say, the northern hemisphere, make up a circle with the same radius as the sphere, right? Well, the main idea will be to show a correspondence between these ring shadows and every second ring on the sphere. Challenge mode here is to pause now and see if you can predict how that comparison might go. Let's label each one of these rings based on the angle theta between a line from the sphere center to that ring and the z-axis. So theta ranges from zero at the north pole all the way up to 180 degrees at the south pole, which is to say from zero to pi radians. And let's call the change in the angle from one ring to the next d theta, which means the thickness of those rings will be the radius r times d theta. All right, structured exercise time. We'll ease in with a warm up. Question number one, what is the circumference of this ring, say at the inner edge, in terms of r and theta? Once you have that, go ahead and multiply the answer by the thickness r times d theta to get an approximation for the ring's area, an approximation that will get better and better as you chop up the sphere more and more finely. And at this point, if you know your calculus, you could integrate, but our goal is not just to find the answer, it's to feel the connection between the sphere and its shadow. So question number two, what is the area of the shadow of one of these rings on the xy plane? Again, expressed in terms of r, theta, and d theta. And for this one, it might be helpful to think back to that tiny little right triangle we were talking about earlier. Question number three, and this is really the heart of it, each one of these rings' shadows has precisely half the area of one of the rings on the sphere. It's not the one that's an angle theta straight above it, but another one. The question is, which one? As a hint, you might want to reference some trig identities for this one.
Question number four, I said at the outset that there's a correspondence between all of the shadows from the northern hemisphere, which make up a circle with radius r, and every second ring on the sphere. Use your answer to the last question to spell out what exactly that correspondence is. And for question five, bring it on home. Why does this imply that the area of the circle is exactly one-fourth the surface area of the sphere, particularly as we consider thinner and thinner rings? If you want answers or hints, I'm quite sure that people in comments and on Reddit will have them waiting for you. And finally, I would be remiss not to make at least a brief mention of the fact that the surface area of a sphere is a very specific instance of a much more general fact. If you take any convex shape and look at the average area of all of its shadows, averaged over all possible orientations in 3D space, that average will be exactly one-fourth the surface area of your shape. As to why this is true, I'll have to leave those details for another day.
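That more general fact about convex shapes is easy to spot-check numerically. Here's a sketch of my own for a unit cube, using the closed form for its shadow: on a plane with unit normal n, the shadow of an axis-aligned unit cube has area |n_x| + |n_y| + |n_z| (half the sum of |n · face normal| over all six faces).

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.normal(size=(200_000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)   # uniform random directions

# Shadow area of an axis-aligned unit cube on a plane with unit normal n:
shadows = np.abs(n).sum(axis=1)

print(shadows.mean())   # ≈ 1.5
print(6 / 4)            # one fourth of the cube's surface area, exactly 1.5
```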
Backpropagation calculus | Chapter 4, Deep learning
The hard assumption here is that you've watched Part 3, giving an intuitive walkthrough of the backpropagation algorithm. Here we get a little bit more formal and dive into the relevant calculus. It's normal for this to be at least a little confusing, so the mantra to regularly pause and ponder certainly applies as much here as anywhere else. Our main goal is to show how people in machine learning commonly think about the chain rule from calculus in the context of networks, which has kind of a different feel from how most introductory calculus courses approach the subject. For those of you uncomfortable with the relevant calculus, I do have a whole series on the topic. Let's just start off with an extremely simple network, one where each layer has a single neuron in it. So this particular network is determined by three weights and three biases, and our goal is to understand how sensitive the cost function is to these variables. That way we know which adjustments to those terms are going to cause the most efficient decrease to the cost function. And we're just going to focus on the connection between the last two neurons. Let's label the activation of that last neuron with a superscript L, indicating which layer it's in. So the activation of the previous neuron is a L minus 1. These are not exponents, they're just a way of indexing what we're talking about, since I want to save subscripts for different indices later on. Now let's say that the value we want this last activation to be for a given training example is y. For example, y might be 0 or 1. So the cost of this simple network for a single training example is AL minus y squared. We'll denote the cost of that one training example as C0. As a reminder, this last activation is determined by a weight, which I'm going to call WL, times the previous neuron's activation, plus some bias, which I'll call BL. And then you pump that through some special nonlinear function like the sigmoid or a ReLU. It's actually going to make things easier for us if we give a special name to this weighted sum, like Z, with the same superscript as the relevant activations. So this is a lot of terms, and a way that you might conceptualize it is that the weight, the previous activation and the bias altogether are used to compute Z, which in turn lets us compute A, which finally, along with the constant y, lets us compute the cost. And of course, AL minus 1 is influenced by its own weight and bias and such, but we're not going to focus on that right now. Now all of these are just numbers, right? And it can be nice to think of each one as having its own little number line. Our first goal is to understand how sensitive the cost function is to small changes in our weight, WL. Or, phrased differently, what is the derivative of C with respect to WL? When you see this del W term, think of it as meaning some tiny nudge to W, like a change by 0.01. And think of this del C term as meaning whatever the resulting nudge to the cost is. What we want is their ratio. Conceptually, this tiny nudge to WL causes some nudge to ZL, which in turn causes some nudge to AL, which directly influences the cost. So we break things up by first looking at the ratio of a tiny change to ZL to this tiny change to W, that is, the derivative of ZL with respect to WL. Likewise, you then consider the ratio of the change to AL to the tiny change in ZL that caused it, as well as the ratio between the final nudge to C and this intermediate nudge to AL.
This right here is the chain rule, where multiplying together these three ratios gives us the sensitivity of C to small changes in WL. So on screen right now, there's kind of a lot of symbols, so take a moment to just make sure it's clear what they all are, because now we're going to compute the relevant derivatives. The derivative of C with respect to AL works out to be 2 times AL minus y. Notice this means that its size is proportional to the difference between the network's output and the thing that we want it to be. So if that output was very different, even slight changes stand to have a big impact on the final cost function. The derivative of AL with respect to ZL is just the derivative of our sigmoid function, or whatever nonlinearity you choose to use. And the derivative of ZL with respect to WL, in this case, comes out just to be AL minus 1. Now I don't know about you, but I think it's easy to get stuck head down in the formulas without taking a moment to sit back and remind yourself of what they all actually mean. In the case of this last derivative, the amount that that small nudge to the weight influenced the last layer depends on how strong the previous neuron is. Remember, this is where that neurons-that-fire-together-wire-together idea comes in. And all of this is the derivative with respect to WL only of the cost for a specific single training example. Since the full cost function involves averaging together all those costs across many different training examples, its derivative requires averaging this expression that we found over all training examples. And of course that is just one component of the gradient vector, which itself is built up from the partial derivatives of the cost function with respect to all those weights and biases. But even though that's just one of the many partial derivatives we need, it's more than 50% of the work. The sensitivity to the bias, for example, is almost identical. We just need to change out this del Z del W term for a del Z del B. And if you look at the relevant formula, that derivative comes out to be 1. Also, and this is where the idea of propagating backwards comes in, you can see how sensitive this cost function is to the activation of the previous layer. Namely, this initial derivative in the chain rule expression, the sensitivity of Z to the previous activation, comes out to be the weight WL. And again, even though we're not going to be able to directly influence that previous layer activation, it's helpful to keep track of, because now we can just keep iterating this same chain rule idea backwards to see how sensitive the cost function is to previous weights and previous biases. And you might think that this is an overly simple example, since all layers just have one neuron, and that things are going to get exponentially more complicated for a real network. But honestly, not that much changes when we give the layers multiple neurons. Really it's just a few more indices to keep track of. Rather than the activation of a given layer simply being AL, it's also going to have a subscript indicating which neuron of that layer it is. Let's go ahead and use the letter k to index the layer L minus 1, and j to index the layer L. For the cost, again, we look at what the desired output is, but this time we add up the squares of the differences between these last layer activations and the desired output. That is, you take a sum over ALJ minus YJ squared.
Since there's a lot more weights, each one has to have a couple more indices to keep track of where it is. So let's call the weight of the edge connecting this kth neuron to the jth neuron WLJK. Those indices might feel a little backwards at first, but it lines up with how you'd index the weight matrix that I talked about in the Part 1 video. Just as before, it's still nice to give a name to the relevant weighted sum, like Z, so that the activation of the last layer is just your special function, like the sigmoid, applied to Z. You can kind of see what I mean, right, where all of these are essentially the same equations that we had before in the one-neuron-per-layer case. It's just that it looks a little more complicated. And indeed, the chain rule derivative expression describing how sensitive the cost is to a specific weight looks essentially the same. I'll leave it to you to pause and think about each of those terms if you want. What does change here, though, is the derivative of the cost with respect to one of the activations in the layer L minus 1. In this case, the difference is that the neuron influences the cost function through multiple different paths. That is, on the one hand, it influences AL0, which plays a role in the cost function, but it also has an influence on AL1, which also plays a role in the cost function, and you have to add those up. And that, well, that's pretty much it. Once you know how sensitive the cost function is to the activations in this second to last layer, you can just repeat the process for all the weights and biases feeding into that layer. So pat yourself on the back. If all of this makes sense, you have now looked deep into the heart of backpropagation, the workhorse behind how neural networks learn. These chain rule expressions give you the derivatives that determine each component in the gradient that helps minimize the cost of the network by repeatedly stepping downhill. If you sit back and think about all that, this is a lot of layers of complexity to wrap your mind around. So don't worry if it takes time for your mind to digest it all.
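To make all of this concrete, here's a minimal sketch in Python. The specific numbers, the two-by-three layer sizes, and the choice of sigmoid are all just for illustration; it computes the chain rule derivatives for a layer with multiple neurons, including the sum over multiple paths for a previous-layer activation, and checks them against a tiny finite-difference nudge:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 3))   # W[j, k]: weight from neuron k in layer L-1 to neuron j in layer L
b = rng.normal(size=2)        # biases of layer L
a_prev = rng.random(3)        # activations of layer L-1
y = np.array([1.0, 0.0])      # desired outputs

z = W @ a_prev + b
a = sigmoid(z)
C = np.sum((a - y) ** 2)      # cost for this one training example

# Chain rule pieces: dC/da_j = 2(a_j - y_j), da_j/dz_j = sigma'(z_j) = a_j(1 - a_j)
dC_dz = 2 * (a - y) * a * (1 - a)
dC_dW = np.outer(dC_dz, a_prev)   # sensitivity to each weight (dz_j/dW_jk = a_prev_k)
dC_db = dC_dz                     # dz_j/db_j = 1
dC_da_prev = W.T @ dC_dz          # sum over all paths j through layer L

# Finite-difference checks on one weight and one previous activation:
eps = 1e-6
W2 = W.copy(); W2[0, 1] += eps
print(dC_dW[0, 1], (np.sum((sigmoid(W2 @ a_prev + b) - y) ** 2) - C) / eps)

a2 = a_prev.copy(); a2[1] += eps
print(dC_da_prev[1], (np.sum((sigmoid(W @ a2 + b) - y) ** 2) - C) / eps)
```

Each printed pair should nearly agree, which is the "nudge" picture of the derivative made literal.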
Why “probability of 0” does not mean “impossible” | Probabilities of probabilities, part 2
Imagine you have a weighted coin, so the probability of flipping heads might not be 50-50 exactly. It could be 20%, or maybe 90%, or 0%, or 31.41592%. The point is that you just don't know. But imagine that you flip this coin 10 different times, and 7 of those times it comes up heads. Do you think that the underlying weight of this coin is such that each flip has a 70% chance of coming up heads? If I were to ask you, hey, what's the probability that the true probability of flipping heads is 0.7, what would you say? This is a pretty weird question, and for two reasons. First of all, it's asking about a probability of a probability, as in the value we don't know is itself some kind of long run frequency for a random event, which frankly is hard to think about. But the more pressing weirdness comes from asking about probabilities in the setting of continuous values. Let's give this unknown probability of flipping heads some kind of name, like h. Keep in mind that h could be any real number from 0 up to 1, ranging from a coin that always flips tails up to one that always flips heads, and everything in between. So if I ask, hey, what's the probability that h is precisely 0.7, as opposed to, say, 0.700000001, or any other nearby value, well, there's going to be a strong possibility for paradox if we're not careful. It feels like no matter how small the answer to this question, it just wouldn't be small enough. If every specific value within some range, all uncountably infinitely many of them, has a non-zero probability, well, even if that probability was minuscule, adding them all up to get the total probability of any one of these values will blow up to infinity. On the other hand though, if all of these probabilities are 0, aside from the fact that that now gives you no useful information about the coin, the total sum of those probabilities would be 0, when it should be 1. After all, this weight of the coin h is something, so the probability of it being any one of these values should add up to 1. So if these values can't all be non-zero, and they can't all be 0, what do you do? Where we're going with this, by the way, is that I'd like to talk about the very practical question of using data to create meaningful answers to these sorts of probabilities of probabilities questions. But for this video, let's take a moment to appreciate how to work with probabilities over continuous values, and resolve this apparent paradox. The key is not to focus on individual values, but ranges of values. For example, we might make these buckets to represent the probability that h is between, say, 0.8 and 0.85. Also, and this is more important than it might seem, rather than thinking of the height of each of these bars as representing the probability, think of the area of each one as representing that probability. Where exactly those areas come from is something that we'll answer later. For right now, just know that in principle, there's some answer to the probability of h sitting inside one of these ranges. Our task right now is to take the answers to these very coarse-grained questions and get a more exact understanding of the distribution at the level of each individual input. The natural thing to do would be to consider finer and finer buckets, and when you do, the smaller probability of falling into any one of them is accounted for by the thinner width of each of these bars, while the heights are going to stay roughly the same.
That's important because it means that as you take this process to the limit, you approach some kind of smooth curve. So even though all of the individual probabilities of falling into any one particular bucket will approach zero, the overall shape of the distribution is preserved, and even refined, in this limit. If, on the other hand, we had let the heights of the bars represent probabilities, everything would have gone to zero. So in the limit, we would have just had a flat line giving no information about the overall shape of the distribution. So wonderful, letting area represent probability helped solve this problem. But let me ask you, if the y-axis no longer represents probability, what exactly are the units here? Since probability sits in the area of these bars, or width times height, the height represents a kind of probability per unit in the x-direction, what's known in the business as a probability density. The other thing to keep in mind is that the total area of all these bars has to equal 1 at every level of the process. That's something that has to be true for any valid probability distribution. The idea of probability density is actually really clever when you step back to think about it. As you take things to the limit, even if there's all sorts of paradoxes associated with assigning a probability to each of these uncountably infinitely many values of h between 0 and 1, there's no problem if we associate a probability density to each one of them, giving what's known as a probability density function, or PDF for short. Anytime you see a PDF in the wild, the way to interpret it is that the probability of your random variable lying between two values equals the area under this curve between those values. So, for example, what's the probability of getting any one very specific number, like 0.7? Well, the area of an infinitely thin slice is zero, so it's zero. What's the probability of all of them put together? Well, the area under the full curve is one. You see, paradox sidestepped. And the way that it's been sidestepped is a bit subtle. In normal finite settings, like rolling a die or drawing a card, the probability that a random value falls into a given collection of possibilities is simply the sum of the probabilities of being any one of them. This feels very intuitive. It's even true in countably infinite contexts. But to deal with the continuum, the rules themselves have shifted. The probability of falling into a range of values is no longer the sum of the probabilities of each individual value. Instead, probabilities associated with ranges are the fundamental primitive objects. And the only sense in which it's meaningful to talk about an individual value here is to think of it as a range of width zero. If the idea of the rules changing between a finite setting and a continuous one feels unsettling, well, you'll be happy to know that mathematicians are way ahead of you. There's a field of math called measure theory, which helps to unite these two settings and make rigorous the idea of associating numbers like probabilities to various subsets of all possibilities in a way that combines and distributes nicely. For example, let's say you're in a setting where you have a random number that equals zero with 50% probability, and the rest of the time it's some positive number according to a distribution that looks like half of a bell curve.
This is an awkward middle ground between a finite context, where a single value has a non-zero probability, and a continuous one, where probabilities are found according to areas under the appropriate density function. This is the sort of thing that measure theory handles very smoothly. I mention this mainly for the especially curious viewer, and you can find more reading material in the description. It's a pretty common rule of thumb that if you find yourself using a sum in a discrete context, then you use an integral in the continuous context, which is the tool from calculus that we use to find areas under curves. In fact, you could argue this video would be way shorter if I just said that at the front and called it good. For my part though, I always found it a little unsatisfying to do this blindly without thinking through what it really means. And in fact, if you really dig into the theoretical underpinnings of integrals, what you'd find is that, in addition to the way that it's defined in a typical intro calculus class, there is a separate, more powerful definition that's based on measure theory, this formal foundation of probability. If I look back to when I first learned probability, I definitely remember grappling with this weird idea that in continuous settings, like random variables that are real numbers or throwing a dart at a dartboard, you have a bunch of outcomes that are possible, and yet each one has a probability of zero, and somehow altogether they have a probability of one. Now, one step of coming to terms with this is to realize that possibility is better tied to probability density than probability, but just swapping out sums of one for integrals of the other never quite scratched the itch for me. I remember that it only really clicked when I realized that the rules for combining probabilities of different sets were not quite what I thought they were, and there was simply a different axiom system underlying it all. But anyway, steering away from the theory, somewhere back in the loose direction of application, look back to our original question about the coin with an unknown weight. What we've learned here is that the right question to ask is, what's the probability density function that describes this value h after seeing the outcomes of a few tosses? If you can find that PDF, you can use it to answer questions like, what's the probability that the true probability of flipping heads falls between 0.6 and 0.8? To find that PDF, join me in the next part.
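As a small illustration of reading areas off a PDF, here's a sketch where the particular density is made up purely for the example (it happens to be a Beta(3, 2) shape, which is not a claim about the coin):

```python
import numpy as np

def pdf(h):
    # A made-up density on [0, 1]; 12 h^2 (1 - h) integrates to exactly 1.
    return 12 * h**2 * (1 - h)

h = np.linspace(0, 1, 100_001)
print(np.trapz(pdf(h), h))              # total area under the curve ≈ 1

mask = (h >= 0.6) & (h <= 0.8)
print(np.trapz(pdf(h[mask]), h[mask]))  # P(0.6 <= H <= 0.8), the area between 0.6 and 0.8

print(pdf(0.7))  # a probability *density*, not a probability; P(H = 0.7) itself is 0
```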
Circle Division Solution
In my last video, I posed the following question. If you take n points on a circle, then connect every pair of them with a line, how many sections do these lines cut the circle into? What was strange is that when n is less than 6, and when n is 10 for some reason, the answer is always a power of 2. But for all other values of n, the answer seems completely unrelated to powers of 2. What I love about this problem is that it brings together many disparate concepts: counting functions, graphs, one of Euler's most famous equations, and Pascal's triangle. You might be wondering if changing the placement of points changes the number of sections. It actually can. For instance, watch how this small region in the middle disappears if we adjust things so that 3 lines go through the same point. But if we add the restriction that no 3 lines can go through the same point, the number of sections depends only on the number of points, not their placement, as you're about to see. I think it's fair to call this a hard problem, and in solving hard problems, it's a good idea to ask simpler questions about the same setup. In this case, I have two questions for you. One, how many lines are there? And two, at how many points do these lines intersect within the circle? For the first question, every line corresponds uniquely with a pair of points, and likewise every pair of points gives us a unique line. Luckily, counting the number of pairs in a set is common enough in math that we have specific notation for it, n choose 2, which we evaluate as n times n minus 1 divided by 2. To see where this formula comes from, notice that you have n options for the first element of the pair, which we multiply by the n minus 1 remaining options for the second element. But this would double count each pair, so we divide by 2. For instance, when n equals 7, 7 choose 2 is 7 times 6 over 2, or 21, so there are 21 pairs of points and hence 21 lines. With, say, 100 points, counting lines directly would be a nightmare, but we can compute it as 100 choose 2, which is 100 times 99 divided by 2, or 4950. The number of intersection points is a bit more subtle. While every intersection point corresponds with a unique pair of lines, there are many pairs of lines that don't intersect within the circle, so we can't just count the pairs of lines. What we can do, though, is associate each intersection point with a set of 4 points on the circle, since this association also goes the other way around, in that every quadruplet of points gives a unique intersection point. Just look at that. Isn't that elegant? This means the number of intersection points is the same as the number of quadruplets of our n starting points. The function n choose 4 counts quadruplets in a set of size n, and you evaluate it by taking n times n minus 1 times n minus 2 times n minus 3, all divided by 1 times 2 times 3 times 4. The derivation of this formula is similar to that of n choose 2. You multiply in the number of options you have for each successive entry, then divide by the extent to which you've overcounted. For instance, with n equals 4, 4 choose 4 is 1, and indeed there's just one intersection point. 6 choose 4 is 15, so when n equals 6, there are 15 intersection points. And if n were 100, even though the prospect of counting intersection points is horrifying, we can nevertheless deduce that there will be 3,921,225 of them. Now, how does this help us count sections in the circle, you might ask? Well, it will once we consider graphs and Euler's formula.
No, no, not function graphs, and not that e to the pi i stuff. The word graph can also refer to a set of points, referred to as vertices, along with a set of lines connecting some of these points, which we call edges. Notice, if we count the number of vertices in this graph, then subtract the number of edges, then add the number of regions that this graph cuts space into, along with that outer region, we get 2. If we do the same thing with this other graph, well, we get 2 again. This isn't a coincidence. You could do this with any graph, and as long as your edges don't intersect each other, the answer is always 2. If edges could intersect, then you could just change the number of regions without changing the number of vertices and edges, so of course the formula would be nonsense. This relation is known as Euler's characteristic formula, and it's easy to see where the name comes from, since Euler's is German for beautiful. If you're curious, the reason we write F for the number of regions is because the formula traditionally refers to the number of vertices, edges, and faces of 3D polyhedra. In another video, I'll explain why this is true, but here let's just use it to solve our circle problem. Our setup is already a graph, with n vertices and n choose 2 edges, one between each pair of points. But we cannot apply Euler's characteristic formula directly, since our edges intersect many times, n choose 4 times, to be exact. However, if we consider each intersection point to be a vertex, meaning our original lines must be chopped up at these points, and if we also include the segments of the circle connecting our n outer points as new edges, we have a graph perfectly suited for Euler's formula. In particular, the number of regions in this picture is the number of edges in our new graph minus the number of vertices, plus 2. Since our new graph retains the n original vertices and adds on another n choose 4 for the intersection points, we replace the minus V term with minus n minus n choose 4. To find the number of edges, note that the intersection points can be seen as adding 2 edges each, since each one takes 2 existing lines and cuts them into 4 smaller pieces. For example, 3 lines intersecting at 2 points would be cut into 3 plus 2 times 2 equals 7 smaller pieces. 4 lines intersecting at 3 points would be cut into 4 plus 2 times 3 equals 10 smaller pieces. And in our circle diagram, our n choose 2 lines intersecting at n choose 4 points are cut into n choose 2 plus 2 times n choose 4 smaller pieces, plus another n for the circle segments we're now considering to be edges. Going back to our formula, we can replace E with n choose 2 plus 2 times n choose 4 plus n. Combining like terms, we see that our graph cuts the 2D plane into 2 plus n choose 2 plus n choose 4 pieces. Since we're concerned with counting the regions inside the circle, we can disregard that outer region and write our final answer as 1 plus n choose 2 plus n choose 4. Great, we found the answer. But why on earth does this formula relate to powers of 2 for n less than 6, and then again when n equals 10? It's not just a coincidence. It has to do with Pascal's triangle. Pascal's triangle is constructed like this. Each term is the sum of the two terms above it. If you add up each row, you get a successive power of 2. To convince yourself of this, notice that each term is added into the following row twice, so the sum of each row should be twice the sum of the row before it.
The function n choose k is closely related to this triangle, in that the kth entry of the nth row, where counting starts at 0, is always n choose k. For instance, to find 5 choose 3 in the triangle, count down to the 5th row (0, 1, 2, 3, 4, 5), then go in 3 entries (0, 1, 2, 3), and indeed 5 choose 3 equals 10. This means that the answer to our circle problem for n points is the sum of the 0th, 2nd, and 4th entries of the nth row of Pascal's triangle. For instance, if n equals 5, we can see that we just have to add 1, 10, and 5. Since each term is the sum of the two above it, this is the same as adding the entire 4th row, which we know is a power of 2. Likewise, for smaller values of n, the answer is going to be the sum of the (n minus 1)st row, and hence a power of 2. However, when n equals 6, and we relate the terms to the 5th row, notice that we're not adding the entire row, since we missed that last term, so we only get 31. When n equals 10, we're summing precisely half of the 9th row, so the answer is half of 2 to the 9th, which is 2 to the 8th. So to recap: first, turn our diagram into a graph suitable for Euler's characteristic formula by adding all of the intersection points as vertices and cutting up all the edges. Next, count the number of lines and intersection points by relating them to pairs and quadruplets of our starting points. Then, finally, use Euler's formula to deduce the number of sections, and relate this to powers of 2 using Pascal's triangle.
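The final formula is easy to play with in code. This quick check (my own addition, not from the video) tabulates 1 + (n choose 2) + (n choose 4) and shows exactly where the powers of 2 appear and break:

```python
from math import comb

def regions(n):
    # Number of sections: 1 + (n choose 2) + (n choose 4)
    return 1 + comb(n, 2) + comb(n, 4)

for n in range(1, 11):
    print(n, regions(n))
# 1, 2, 4, 8, 16 for n = 1..5, then 31, 57, 99, 163, and 256 again at n = 10
```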
Researchers thought this was a bug (Borwein integrals)
Sometimes it feels like the universe is just messing with you. I have up on screen here a sequence of computations, and don't worry, in a moment we're going to unpack and visualize what each one is really saying. What I want you to notice is how the sequence follows a very predictable, if random-seeming, pattern, and how each computation happens to equal pi. And if you were just messing around evaluating these on a computer for some reason, you might think that this was a pattern that would go on forever. But it doesn't. At some point, it stops. And instead of equaling pi, you get a value which is just barely, barely less than pi. Alright, let's dig into what's going on here. The main character in the story today is the function sine of x divided by x. This actually comes up commonly enough in math and engineering that it gets its own name, sinc. And the way you might think about it is by starting with a normal oscillating sine curve and then sort of squishing it down as you get far away from zero by multiplying it by 1 over x. And the astute among you might ask about what happens at x equals zero, since when you plug that in, it looks like dividing zero by zero. And the even more astute among you, maybe fresh out of a calculus class, could point out that for values closer and closer to zero, the function gets closer and closer to one. So if we simply redefine the sinc function at zero to equal one, you get a nice continuous curve. All of that is a little by the by, because the thing we actually care about is the integral of this curve from negative infinity to infinity, which you'd think of as meaning the area between the curve and the x-axis. Or more precisely, the signed area, meaning you add all the area bound by the positive parts of the graph and the x-axis, and you subtract all of the parts bound by the negative parts of the graph and the x-axis. Like we saw at the start, it happens to be the case that this evaluates to be exactly pi, which is nice, and also a little weird, and it's not entirely clear how you would approach this with the usual tools of calculus. Towards the end of the video, I'll share the trick for how you would do this. Moving on with the sequence I opened with, the next step is to take a copy of this sinc function where you plug in x divided by 3, which will basically look like the same graph, but stretched out horizontally by a factor of 3. When we multiply these two functions together, we get a much more complicated wave, whose mass seems to be more concentrated towards the middle, and with any usual function you would expect this to completely change the area. You can't just randomly modify an integral like this and expect nothing to change. So already, it's a little bit weird that this result also equals pi, that nothing has changed. That's another mystery you should add to your list. And the next step in the sequence was to take an even more stretched out version of this sinc function, by a factor of 5, multiply that by what we already have, and again, look at the signed area underneath the whole curve, which again equals pi. And it continues on like this. With each iteration, we stretch out by a new odd number and multiply that into what we have. One thing you might notice is how, except at the input x equals 0, every single part of this function is progressively getting multiplied by something that's smaller than 1.
So you would expect, as the sequence progresses, for things to get squished down more and more, and if anything you would expect the area to be getting smaller. Eventually, that is exactly what happens, but what's bizarre is that it stays so stable for so long, and of course, more pertinently, that when it does break at the value 15, it does so by the tiniest, tiniest amount. And before you go thinking this is the result of some numerical error, maybe because we're doing something with floating point arithmetic, if you work this out more precisely, here is the exact value of that last integral, which is a certain fraction of pi, where the numerator and the denominator are absurd. They're both around 400 billion billion billion. So this pattern was described in a paper by a father-son pair, Jonathan and David Borwein, which is very fun, and they mentioned how when a fellow researcher was computing these integrals using a computer algebra system, he assumed that this had to be some kind of bug. But it's not a bug, it is a real phenomenon. And it gets weirder than that, actually. If we take all these integrals and include yet another factor, 2 cosine of x, which again you would think changes their values entirely, you can't just randomly multiply new things into an integral like this, it continues to equal pi for much, much longer, and it's not until you get to the number 113 that it breaks. And when it breaks, it's by the most puny, absolutely subtle amount that you could imagine. So the natural question is, what on earth is going on here? And luckily, there actually is a really satisfying explanation for all this. The way I think I'll go about this is to show you a phenomenon that first looks completely unrelated, but it shows a similar pattern, where you have a value that stays really stable until you get to the number 15, and then it falters by just a tiny amount. And then after that, I'll show why this seemingly unrelated phenomenon is secretly the same as all our integral expressions, but in disguise. So, turning our attention to what seems completely different, consider a function that I'm going to be calling rect of x, which is defined to equal 1 if the input is between negative one-half and one-half, and otherwise it's equal to 0. So the graph of the function is this boring step, basically. This will be the first in a sequence of functions that we define, so I'll call it f1 of x, and each new function in our sequence is going to be a kind of moving average of the previous function. So, for example, the way the second iteration will be defined is to take a sliding window whose width is 1 third, and for a particular input x, when the window is centered at that input x, the value in my new function drawn below is defined to be equal to the average value of the first function above inside that window. So, for example, when the window is far enough to the left, every value inside it is 0, so the graph on the bottom is showing 0. As soon as that window starts to go over the plateau a little bit, the average value is a little more than 0, and you see that in the graph below. And notice that when exactly half the window is over that plateau at 1 and half of it is at 0, the corresponding value in the bottom graph is 1 half, and you get the point. The important thing I want you to focus on is how, when that window is entirely in the plateau above, where all the values are 1, then the average value is also 1, so we get this plateau on our function at the bottom.
Let's call this bottom function f2 of x, and what I want you to think about is the length of the plateau for that second function. How wide should it be? If you think about it for a moment, the distance between the left edge of the top plateau and the left edge of the bottom plateau will be exactly half of the width of the window, so half of 1 third, and similarly on the right side, the distance between the edges of the plateaus is half of the window width. So overall, it's 1 minus that window width, which is 1 minus a third. The value we're going to be computing, the thing that will look stable for a while before it breaks, is the value of this function at the input 0, which in both of these iterations is equal to 1, because it's inside that plateau. For the next iteration, we're going to take a moving average of that last function, but this time with a window whose width is 1 fifth. It's kind of fun to think about why, as you slide around this window, you get a smoothed out version of the previous function, and again, the significant thing I want you to focus on is how, when that window is entirely inside the plateau of the previous function, then by definition the bottom function is going to equal 1. This time, the length of that plateau on the bottom will be the length of the previous one, 1 minus a third, minus the window width, 1 fifth. The reasoning is the same as before. In order to go from the point where the middle of the window is on that top plateau to where the entirety of the window is inside that plateau is half the window width, and likewise on the right side. And once more, the value to record is the output of this function when the input is 0, which again is exactly 1. The next iteration is a moving average with a window width of 1 seventh, so the plateau gets smaller by that 1 over 7. Doing one more iteration with 1 over 9, the plateau gets smaller by that amount. And as we keep going, the plateau gets thinner and thinner. Also notice how just outside of the plateau, the function is really, really close to 1, because it's always been the result of an average between the plateau at 1 and neighbors which themselves are really, really close to 1. The point at which all of this breaks is once we get to the iteration where we're sliding a window of width 1 15th across the whole thing. At that point, the previous plateau is actually thinner than the window itself, so even at the input x equals 0, this moving average will have to be ever so slightly smaller than 1. And the only thing that's special about the number 15 here is that, as we keep adding the reciprocals of these odd numbers, 1 third plus 1 fifth plus 1 seventh, on and on, it's once we get to 1 15th that that sum grows to be bigger than 1. And in the context of our shrinking plateaus, having started with a plateau of width 1, it's now shrunk down so much that it'll disappear entirely. The point is, with this sequence of functions that we've defined by a seemingly random procedure, if I ask you to compute the values of all of these functions at the input 0, you get a pattern which initially looks stable: it's 1, 1, 1, 1, but by the time we get to the 8th iteration, it falls short ever so slightly, just barely. This is analogous, and I claim more than just analogous, to the integrals we saw earlier, where we have a stable value of pi, pi, pi, until it falls short, just barely.
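Here is a rough numerical sketch of that whole process (my own, not from the video). One caveat: the grid resolution limits how precisely it can capture the final dip, which in the exact computation is astonishingly small, but the qualitative pattern comes through:

```python
import numpy as np

dx = 2e-4
x = np.arange(-1.5, 1.5, dx)
f = np.where(np.abs(x) < 0.5, 1.0, 0.0)      # the rect function: plateau of width 1

mid = np.argmin(np.abs(x))                   # index of the input x = 0
values_at_zero = [f[mid]]
for k in (3, 5, 7, 9, 11, 13, 15):
    window = np.ones(int(round(1 / k / dx)))
    window /= window.sum()                   # averaging window of width 1/k
    f = np.convolve(f, window, mode="same")  # the moving average
    values_at_zero.append(f[mid])

print(values_at_zero)
# 1.0 (up to float round-off) for every iteration until the 1/15 window,
# where the value at 0 finally dips below 1.
```

The 1/13 window still fits inside the remaining plateau, so the average at 0 is still exactly 1; the 1/15 window is the first one wider than what's left.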
And as it happens, this constant from our moving average process that's ever so slightly smaller than 1 is exactly the factor that sits in front of pi in our series of integrals. So the two situations aren't just qualitatively similar, they're quantitatively the same as well. And when it comes to the case where we add the 2 cosine of x term inside the integral, which caused the pattern to last a lot longer before it broke down, in the analogy, what that will correspond to is the same setup, but where the function we start with has an even longer plateau, stretching from x equals negative 1 up to 1, meaning its length is 2. So, as you do this repeated moving average process, eating into it with these smaller and smaller windows, it takes a lot longer for them to eat into the whole plateau. More specifically, the relevant computation is to ask how long you have to add these reciprocals of odd numbers until that sum becomes bigger than 2. And it turns out that you have to go until you hit the number 113, which will correspond to the fact that the integral pattern there continues until you hit 113. And by the way, I should emphasize that there is nothing special about these reciprocals of odd numbers, 1 third, 1 fifth, 1 seventh. That just happens to be the sequence of values highlighted by the Borweins in their paper that made the sequence mildly famous in nerd circles. More generally, we could be inserting any sequence of positive numbers into those sinc functions, and as long as the sum of those numbers is less than 1, our expression will equal pi, but as soon as they become bigger than 1, our expression drops a little below pi. And if you believe me that there's an analogy with these moving averages, you can hopefully see why. But of course, the burning question is why on earth should these two situations have anything to do with each other? From here, the argument does bring in two mildly heavy bits of machinery, namely Fourier transforms and convolutions. And the way I'd like to go about this is to spend the remainder of this video giving you a high level sense of how the argument will go, without necessarily assuming you're familiar with either of those two topics, and then to explain why the details are true in a video that's dedicated to convolutions, in particular to something called the convolution theorem, since it's incredibly beautiful and it's useful well beyond this specific, very esoteric question. To start, instead of focusing on this function sine of x divided by x, where we want to show why the signed area underneath its curve is equal to pi, we'll make a simple substitution, where we replace the input x with pi times x, which has the effect of squishing the graph horizontally by a factor of pi, and so the area gets scaled down by a factor of pi, meaning our new goal is to show why this integral on the right is equal to exactly 1. By the way, in some engineering contexts, people use the name sinc to refer to this function with the pi on the inside, since it's often very nice to have a normalized function, meaning the area under it is equal to 1. The point is, showing this integral on the right is exactly the same thing as showing the integral on the left. It's just a change of variables.
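As a quick aside, those two threshold counts mentioned above are easy to verify, using exact fractions so there's no doubt about rounding:

```python
from fractions import Fraction

def first_break(threshold):
    # Smallest odd k with 1/3 + 1/5 + ... + 1/k > threshold
    total, k = Fraction(0), 3
    while total <= threshold:
        total += Fraction(1, k)
        k += 2
    return k - 2

print(first_break(1))   # 15: where the plain sinc pattern breaks
print(first_break(2))   # 113: where the pattern with the extra 2 cos(x) factor breaks
```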
And likewise, for all of the other ones in our sequence, go through each of them, replace the x with a pi times x. And from here, the claim is that all these integrals are not just analogous to the moving average examples, but that both of these are two distinct ways of computing exactly the same thing. And the connection comes down to the fact that this sinc function, or rather the engineer's sinc function with the pi on the inside, is related to the rect function using what's known as a Fourier transform. Now, if you've never heard of a Fourier transform, there are a few other videos on this channel all about it. The way it's often described is that if you want to break down a function as the sum of a bunch of pure frequencies, or in the case of an infinite function, a continuous integral of a bunch of pure frequencies, the Fourier transform will tell you all the strengths and phases for all those constituent parts. But all you really need to know here is that it is something which takes in one function and spits out a new function, and you often think of it as kind of rephrasing the information of your original function in a different language, like you're looking at it from a new perspective. For example, like I said, this sinc function, written in this new language where you take a Fourier transform, looks like our top hat rect function. And vice versa, by the way. This is a nice thing about the Fourier transform for functions that are symmetric about the y-axis: it is its own inverse. And actually, the slightly more general fact that we'll need to show is how, when you transform the stretched out version of our sinc function, where you stretch it horizontally by a factor of k, what you get is a stretched and squished version of this rect function. But of course, all of these are just meaningless words and terminology unless you can actually do something upon making this translation. And the real idea behind why Fourier transforms are such a useful thing for math is that when you take statements and questions about a particular function, and then you look at what they correspond to with respect to the transformed version of that function, those statements and questions often look very, very different in this new language, and sometimes it makes the questions a lot easier to answer. For example, one very nice little fact, another thing on our list of things to show, is that if you want to compute the integral of some function from negative infinity to infinity, this signed area under the entirety of its curve, it's the same thing as simply evaluating the Fourier transformed version of that function at the input zero. This is a fact that will actually just pop right out of the definition, and it's representative of a more general vibe, that every individual output of the Fourier transformed function on the right corresponds to some kind of global information about the original function on the left. In our specific case, it means if you believe me that this sinc function and the rect function are related with a Fourier transform like this, it explains the integral, which is otherwise a very tricky thing to compute, because it's saying all that signed area is the same thing as evaluating rect at zero, which is just 1. Now, you could complain, surely this just sweeps the problem under the rug. Surely computing this Fourier transform, whatever that looks like, would be as hard as computing the original integral.
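For reference, here's why evaluating at zero gives the integral, under one common convention for the transform (the derivation is literally just plugging in):

```latex
\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx
\quad\Longrightarrow\quad
\hat{f}(0) = \int_{-\infty}^{\infty} f(x)\, dx,
```

since the exponential becomes e to the 0, which is 1. And with this same convention, the transform of sinc(x) = sin(pi x)/(pi x) is the rect function, so the full signed area is rect(0) = 1.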
But the idea is that there's lots of tips and tricks for computing these Fourier transforms, and moreover, that when you do, it tells you a lot more information than just that integral. You get a lot of bang for your buck out of doing the computation. Now, the other key fact, the one that will explain the connection we're hunting for, is that if you have two different functions and you take their product, and then you take the Fourier transform of that product, it will be the same thing as if you individually took the Fourier transforms of your original functions, and then combined them using a new kind of operation that we'll talk all about in the next video, known as a convolution. Now, even though there's a lot to be explained with convolutions, the upshot will be that in our specific case with these rectangular functions, taking a convolution looks just like one of the moving averages that we've been talking about this whole time. Combined with our previous fact that integrating in one context looks like evaluating at zero in another context, if you believe me that multiplying in one context corresponds to this new operation, convolutions, which for our example you should just think of as moving averages, that will explain why multiplying more and more of these sinc functions together can be thought about in terms of these progressive moving averages, always evaluating at zero, which in turn gives a really lovely intuition for why you would expect such a stable value before eventually something breaks down, as the edges of the plateau inch closer and closer to the center. This last key fact, by the way, has a special name. It's called the convolution theorem, and again, it's something that we'll go into much more deeply. I recognize that it's maybe a little unsatisfying to end things here by laying down three magical facts and saying everything follows from those, but hopefully this gives you a little glimpse of why powerful tools like Fourier transforms can be so useful for tricky problems. It's a systematic way to provide a shift in perspective, where hard problems can sometimes look easier. If nothing else, it hopefully provides some motivation to learn about these beautiful things like the convolution theorem. As one more tiny teaser, another fun consequence of this convolution theorem is that it opens the doors for an algorithm that lets you compute the product of two large numbers very quickly, like way faster than you think should be even possible. So with that, I'll see you in the next video.
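If you want a taste of the convolution theorem before that next video, here's a discrete sanity check with NumPy (a sketch, using the discrete Fourier transform rather than the continuous one): the DFT of a pointwise product equals the circular convolution of the two DFTs, up to a factor of 1/N.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
a, b = rng.random(N), rng.random(N)

lhs = np.fft.fft(a * b)                  # transform of the pointwise product
Fa, Fb = np.fft.fft(a), np.fft.fft(b)
rhs = np.array([                         # circular convolution of the transforms
    sum(Fa[j] * Fb[(k - j) % N] for j in range(N))
    for k in range(N)
]) / N

print(np.allclose(lhs, rhs))   # True: multiplying on one side is convolving on the other
```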
But how does bitcoin actually work?
What does it mean to have a bitcoin? Many people have heard of bitcoin, that it's a fully digital currency with no government to issue it, that no banks need to manage accounts and verify transactions, and also that no one really knows who invented it. And yet many people don't know the answer to this question, at least not in full. To get there, and to make sure that the technical details underlying the answer actually feel motivated, what we're going to do is walk through, step by step, how you might have invented your own version of bitcoin. We'll start with you keeping track of payments with your friends using a communal ledger. And then, as you start to trust your friends and the world around you less and less, if you're clever enough to bring in a few ideas from cryptography to help circumvent the need for trust, what you end up with is what's called a cryptocurrency. You see, bitcoin is just the first implemented example of a cryptocurrency, and now there are thousands more on exchanges with traditional currencies. Walking the path of inventing your own can help to set the foundations for understanding some of the more recent players in the game, and recognizing when and why there's room for different design choices. In fact, one of the reasons I chose this topic is that in the last year, there's been a huge amount of attention and investment and, well, honestly, hype directed at these currencies. And I'm not going to comment or speculate on the current or future exchange rates, but I think we'd all agree that anyone looking to buy a cryptocurrency should really know what it is. And I don't just mean in terms of analogies with vague connections to gold mining. I mean an actual direct description of what the computers are doing when we send, receive, and create cryptocurrencies. One thing worth stressing, by the way, is that even though you and I are going to dig into the details here, and that takes meaningful time, you don't actually need to know those details if you just want to use the cryptocurrency, just like you don't need to know the details of what happens under the hood when you swipe a credit card. Like any digital payment, there's lots of user friendly applications that let you just send and receive the currencies without thinking about what's going on. The difference is that the backbone underlying this is not a bank that verifies transactions. Instead, it's a clever system of decentralized, trustless verification based on some of the math born in cryptography. But to start, I want you to actually set aside the thought of cryptocurrencies and all that, just for a few minutes. We're going to begin the story with something more down to earth: ledgers and digital signatures. If you and your friends exchange money pretty frequently, you know, paying your share of the dinner bill and such, it can be inconvenient to exchange cash all the time. So you might keep a communal ledger that records all of the payments that you intend to make at some point in the future. You know, Alice pays Bob $20, Bob pays Charlie $40, things like that. This ledger is going to be something public and accessible to everyone, like a website, where anyone can go and just add new lines. And let's say that at the end of every month, you all get together, look at the list of transactions, and settle up. If you spent more than you received, you put that money in the pot, and if you received more than you spent, you take that money out.
So the protocol for being part of this very simple system might look like this. Anyone can add lines to the ledger, and at the end of every month, you all get together and settle up. Now one problem with a public ledger like this is that anyone can add a line. So what's to prevent Bob from going and writing Alice pays Bob $100 without Alice approving? How are we supposed to trust that all of these transactions are what the sender meant them to be? Well, this is where the first bit of cryptography comes in: digital signatures. Like handwritten signatures, the idea here is that Alice should be able to add something next to that transaction that proves that she has seen it and that she's approved of it. And it should be infeasible for anyone else to forge that signature. At first, it might seem like a digital signature shouldn't even be possible. I mean, whatever data makes up that signature can just be read and copied by a computer. So how do you prevent forgeries? The way this works is that everyone generates what's called a public key-private key pair, each of which looks like some string of bits. The private key is sometimes also called a secret key, so that we can abbreviate it as SK while abbreviating the public key as PK. Now as the name suggests, this secret key is something you want to keep to yourself. In the real world, your handwritten signature looks the same no matter what document you're signing. But a digital signature is actually much stronger, because it changes for different messages. It looks like some string of ones and zeros, commonly something like 256 bits, and altering the message even slightly completely changes what the signature on that message should look like. Speaking a little more formally, producing a signature involves a function that depends both on the message itself and on your private key. The private key ensures that only you can produce that signature, and the fact that it depends on the message means that no one can just copy one of your signatures and then forge it on another message. Hand in hand with this is a second function used to verify that a signature is valid, and this is where the public key comes into play. All it does is output true or false to indicate if this was a signature produced by the private key associated with the public key that you're using for verification. I won't go into the details of how exactly both these functions work, but the idea is that it should be completely infeasible to find a valid signature if you don't know the secret key. Specifically, there's no strategy better than just guessing and checking random signatures, which you can check using the public key that everyone knows. Now think about how many signatures there are with a length of 256 bits. That's 2 to the power of 256. This is a stupidly large number. To call it astronomically large would be giving way too much credit to astronomy. In fact, I made a supplemental video devoted just to illustrating what a huge number this is. For here, let's just say that when you verify that a signature against a given message is valid, you can feel extremely confident that the only way someone could have produced it is if they knew the secret key associated with the public key you used for verification. Now, making sure that people sign transactions on the ledger is pretty good, but there's one slight loophole.
If Alice signs a transaction like "Alice pays Bob $100," even though Bob can't forge Alice's signature on a new message, he could just copy that same line as many times as he wants. I mean, that message-signature combination remains valid. To get around this, what we do is make it so that when you sign a transaction, the message has to also include some sort of unique ID associated with that transaction. That way, if Alice pays Bob $100 multiple times, each one of those lines on the ledger requires a completely new signature. All right, great. Digital signatures remove a huge aspect of trust in this initial protocol. But even still, if you were to really do this, you would be relying on an honor system of sorts. Namely, you're trusting that everyone will actually follow through and settle up in cash at the end of each month. What if, for example, Charlie racks up thousands of dollars in debt and just refuses to show up? The only real reason to revert back to cash to settle up is if some people, I'm looking at you, Charlie, owe a lot of money. So maybe you have the clever idea that you never actually have to settle up in cash, as long as you have some way to prevent people from spending too much more than they take in. Maybe what you do is start by having everyone pay $100 into the pot, and then have the first few lines of the ledger read "Alice gets $100, Bob gets $100, Charlie gets $100," etc. Now, just don't accept any transactions where someone is spending more than they already have on that ledger. For example, if the first two transactions are "Charlie pays Alice $50" and "Charlie pays Bob $50," then if he were to try to add "Charlie pays You $20," that would be invalid, as invalid as if he had never signed it. Notice, this means that verifying a transaction requires knowing the full history of transactions up to that point. And this is more or less also going to be true in cryptocurrencies, though there is a little room for optimization. What's interesting here is that this step removes the connection between the ledger and actual physical US dollars. In theory, if everyone in the world were using this ledger, you could live your whole life just sending and receiving money on this ledger without ever having to convert to real US dollars. In fact, to emphasize this point, let's start referring to the quantities on the ledger as "ledger dollars," or LD for short. You are, of course, free to exchange ledger dollars for real US dollars. For example, maybe Alice gives Bob a $10 bill in the real world in exchange for him adding and signing the transaction "Bob pays Alice 10 ledger dollars" to the communal ledger. But exchanges like that are not going to be guaranteed by the protocol. It's now more analogous to how you might exchange dollars for euros or any other currency on the open market; it's just its own independent thing. This is the first important thing to understand about Bitcoin, or any other cryptocurrency: what it is is a ledger. The history of transactions is the currency. Of course, with Bitcoin, money doesn't enter the ledger with people buying in using cash. I'll get to how new money enters the ledger in just a few minutes. But before that, there's actually an even more significant difference between our current system of ledger dollars and how cryptocurrencies work. So far, I've said that this ledger is in some public place, like a website where anyone can add new lines.
But that would require trusting a central location, namely, whoever hosts the website and controls the rules of adding new lines. To remove that bit of trust, we'll have everybody keep their own copy of the ledger. Then, when you want to make a transaction, like "Alice pays Bob 10 ledger dollars," what you do is broadcast it out into the world for people to hear and to record on their own private ledgers. But unless you do something more, this system is absurdly bad. How could you get everyone to agree on what the right ledger is? When Bob receives a transaction like "Alice pays Bob 10 ledger dollars," how can he be sure that everyone else received and believes that same transaction? That he'll be able to later on go to Charlie and use those same 10 ledger dollars to make a transaction? Really, imagine yourself just listening to transactions being broadcast. How can you be sure that everyone else is recording the same transactions, and in the same order? This is really the heart of the issue. This is an interesting puzzle. Can you come up with a protocol for how to accept or reject transactions, and in what order, so that you can feel confident that anyone else in the world who's following that same protocol has a personal ledger that looks the same as yours? This is the problem addressed in the original Bitcoin paper. At a high level, the solution that Bitcoin offers is to trust whichever ledger has the most computational work put into it. I'll take a moment to explain exactly what that means. It involves this thing called a cryptographic hash function. The general idea that we'll build to is that if you use computational work as a basis for what to trust, you can make it so that fraudulent transactions and conflicting ledgers would require an infeasible amount of computation to bring about. Again, I'll remind you that this is getting well into the weeds, beyond what anyone would need to know just to use a currency like this. But it's a really cool idea, and if you understand it, you understand the heart of Bitcoin and of other cryptocurrencies. So first things first, what's a hash function? The inputs for one of these functions can be any kind of message or file, it really doesn't matter. And the output is a string of bits with some kind of fixed length, like 256 bits. This output is called the hash, or the digest, of the message. And the intent is that it looks random. It's not random; it always gives the same output for a given input. But the idea is that if you slightly change the input, maybe editing just one of the characters, the resulting hash changes completely. In fact, for the hash function that I'm showing here, called SHA-256, the way the output changes as you slightly change the input is entirely unpredictable. You see, this is not just any hash function. It's a cryptographic hash function. That means it's infeasible to compute in the reverse direction. If I show you some string of ones and zeros and ask you to find an input so that the SHA-256 hash of that input gives this exact string of bits, you will have no better method than to just guess and check. And again, if you want a feel for how much computation would be needed to go through 2 to the 256 guesses, just take a look at that supplemental video. I actually had way too much fun writing that thing. You might think that if you just really dig into the details of how exactly this function works, you could reverse-engineer the appropriate input without having to guess and check. But no one has ever figured out a way to do that.
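You can see that avalanche behavior for yourself with Python's standard hashlib module; change one character of the input and the whole digest scrambles:

```python
import hashlib

# One character changes, and the 256-bit digest changes completely.
print(hashlib.sha256(b"Alice pays Bob 10 LD").hexdigest())
print(hashlib.sha256(b"Alice pays Bob 11 LD").hexdigest())
```

Nothing about the two digests will look related, which is exactly the property that makes guess-and-check the only known strategy for inverting the function.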
Interestingly, there's no cold, hard, rigorous proof that it's hard to compute in the reverse direction. And yet, a huge amount of modern security depends on cryptographic hash functions and the idea that they have this property. If you were to look at what algorithms underlie the secure connection that your browser is making with YouTube right now, or that it makes with your bank, you will likely see the name SHA-256 show up in there. For right now, our focus will just be on how such a function can prove that a particular list of transactions is associated with a large amount of computational effort. Imagine someone shows you a list of transactions, and they say, "Hey, I found a special number, so that when you put that number at the end of this list of transactions and apply SHA-256 to the entire thing, the first 30 bits of the output are all zeros." How hard do you think it was for them to find that number? Well, for a random message, the probability that a hash happens to start with 30 successive zeros is 1 in 2 to the 30, which is about 1 in a billion. And because SHA-256 is a cryptographic hash function, the only way to find a special number like that is just guessing and checking. So this person almost certainly had to go through about a billion different numbers before finding this special one. And once you know that number, it's really quick to verify: you just run the hash and see that there are 30 zeros. So, in other words, you can verify that they went through a large amount of work without having to go through that same effort yourself. This is called a proof of work. And importantly, all of this work is intrinsically tied to the list of transactions. If you change one of those transactions, even slightly, it would completely change the hash. So you'd have to go through another billion guesses to find a new proof of work, a new number that makes it so that the hash of the altered list, together with this new number, starts with 30 zeros. So now think back to our distributed ledger situation. Everyone is out there broadcasting transactions, and we want a way for them to agree on what the correct ledger is. As I said, the core idea behind the original Bitcoin paper is to have everyone trust whichever ledger has the most work put into it. The way this works is to first organize a given ledger into blocks, where each block consists of a list of transactions together with a proof of work, that is, a special number so that the hash of the whole block starts with a bunch of zeros. For the moment, let's say that it has to start with 60 zeros, but later we'll return to a more systematic way you might want to choose that number. In the same way that a transaction is only considered valid when it's signed by the sender, a block is only considered valid if it has a proof of work. And also, to make sure that there's a standard order to these blocks, we'll make it so that a block has to contain the hash of the previous block in its header. That way, if you were to go back and change any one of the blocks, or to swap the order of two blocks, it would change the block that comes after it, which changes that block's hash, which changes the one that comes after it, and so on. That would require redoing all of the work, finding a new special number for each of these blocks that makes their hashes start with 60 zeros. Because blocks are chained together like this, instead of calling it a ledger, it's common to call it a blockchain.
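Here's a minimal guess-and-check sketch of that proof-of-work idea. The block data and the difficulty of 16 zero bits (instead of the 30 or 60 discussed above) are assumptions chosen just so the search finishes in well under a second:

```python
import hashlib
from itertools import count

def proof_of_work(block_data, zero_bits=16):
    """Guess and check for a nonce making SHA-256(block_data + nonce)
    start with `zero_bits` zeros; expect about 2**zero_bits guesses."""
    target = 2 ** (256 - zero_bits)
    for nonce in count():
        digest = hashlib.sha256(block_data + str(nonce).encode()).digest()
        # Leading zero bits mean the digest, read as an integer, is small.
        if int.from_bytes(digest, "big") < target:
            return nonce

# In a real chain, block_data would also include the previous block's hash,
# which is what links the blocks together.
print(proof_of_work(b"prev_hash | Alice pays Bob 10 LD | "))
```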
As part of our updated protocol, we'll now allow anyone in the world to be a block creator. What that means is that they're going to listen for transactions being broadcast, collect them into some block, and then do a whole bunch of work to find a special number that makes the hash of that block start with 60 zeros. And once they find it, they broadcast out the block they found. To reward a block creator for all this work, when she puts together a block, we'll allow her to include a very special transaction at the top of it, in which she gets, say, 10 ledger dollars out of thin air. This is called the block reward, and it's an exception to our usual rules about whether or not to accept transactions. It doesn't come from anyone, so it doesn't have to be signed. And it also means that the total number of ledger dollars in our economy increases with each new block. Creating blocks is often called mining, since it requires doing a lot of work and it introduces new bits of currency into the economy. But when you hear or read about miners, keep in mind that what they're really doing is listening for transactions, creating blocks, broadcasting those blocks, and getting rewarded with new money for doing so. From the miners' perspective, each block is kind of like a miniature lottery, where everyone is guessing numbers as fast as they can until one lucky individual finds a special number that makes the hash of the block start with many zeros, and they get the reward. For anyone else who just wants to use this system to make payments, instead of listening for transactions, they can just listen for blocks being broadcast by miners and update their own personal copies of the blockchain. Now, the key addition to our protocol is that if you hear of two distinct blockchains with conflicting transaction histories, you defer to the longest one, the one with the most work put into it. If there's a tie, just wait until you hear an additional block that makes one of them longer. So even though there's no central authority and everyone is maintaining their own copy of the blockchain, if everyone agrees to give preference to whichever blockchain has the most work put into it, we have a way to arrive at decentralized consensus. To see why this makes for a trustworthy system, and to understand at what point you should trust that a payment is legit, it's actually really helpful to walk through exactly what it would take to fool someone using this system. Maybe Alice is trying to fool Bob with a fraudulent block. Namely, she tries to send him one that includes her paying him 100 ledger dollars, but without broadcasting that block to the rest of the network. That way, everyone else still thinks that she has those 100 ledger dollars. To do this, she would have to find a valid proof of work before all of the other miners, each working on their own block. And that could definitely happen. Maybe Alice just happens to win this miniature lottery before everyone else. But Bob is still going to be hearing the broadcasts made by other miners. So to keep him believing this fraudulent block, Alice would have to do all of the work herself to keep adding blocks to this special fork in Bob's blockchain that's different from what he's hearing from the rest of the miners. Remember, as per the protocol, Bob always trusts the longest chain that he knows about. She might be able to keep this up for a few blocks if, just by chance, she happens to find blocks more quickly than the rest of the miners on the network all combined.
But unless she has close to 50% of the computing resources among all of the miners, the probability becomes overwhelming that the blockchain that all of the other miners are working on grows faster than the single fraudulent blockchain that Alice is feeding to Bob. So after enough time, Bob is just going to reject what he's hearing from Alice in favor of the longer chain that everyone else is working on. Notice, that means you shouldn't necessarily trust a new block that you hear immediately. Instead, you should wait for several new blocks to be added on top of it. If you still haven't heard of any longer blockchains, you can trust that this block is part of the same chain that everyone else is using. And with that, we've hit all the main ideas. This distributed ledger system based on a proof of work is more or less how the Bitcoin protocol works, and how many other cryptocurrencies work. There are just a few details to clear up. Earlier, I said that the proof of work might be to find a special number so that the hash of the block starts with 60 zeros. Well, the way the actual Bitcoin protocol works is to periodically change that number of zeros so that it should take, on average, 10 minutes to find a new block. So as there are more and more miners added to the network, the challenge actually gets harder and harder, in such a way that this miniature lottery only has about one winner every 10 minutes. Many newer cryptocurrencies actually have much shorter block times than that. And all of the money in Bitcoin ultimately comes from some block reward. In the beginning, those rewards were 50 bitcoin per block. There's actually a great website you can go to, called Block Explorer, that makes it easy to look through the Bitcoin blockchain, and if you look at the very first few blocks on the chain, they contain no transactions other than that 50-bitcoin reward to the miner. But every 210,000 blocks, which is about every four years, that reward gets cut in half. So right now, the reward is 12.5 bitcoin per block. And because this reward decreases geometrically over time, it means there will never be more than 21 million bitcoin in existence. However, this doesn't mean that miners will stop earning money. In addition to the block reward, miners can also pick up transaction fees. The way this works is that whenever you make a payment, you can, purely optionally, include a little transaction fee with it that will go to the miner of whichever block includes that payment. The reason you might do that is to incentivize miners to actually include the transaction you broadcast into the next block. You see, in Bitcoin, each block is limited to about 2,400 transactions, which many critics argue is unnecessarily restrictive. For comparison, Visa processes an average of about 1,700 transactions per second, and it's capable of handling more than 24,000 per second. This comparatively slow processing on Bitcoin makes for higher transaction fees, since that's what determines which transactions miners choose to include in a new block. All of this is far from comprehensive coverage of cryptocurrencies. There are still many nuances and alternate design choices that I haven't even touched, but my hope is that this can provide a stable, Wait But Why-style tree trunk of understanding for anyone looking to add a few more branches with further reading. Like I said at the start, one of the motives behind this is that a lot of money has started flowing towards cryptocurrencies.
And even though I don't want to make any claims about whether that's a good or bad investment, I really do think that it's healthy for people getting into the game to at least know the fundamentals of the technology. As always, my sincerest thanks to those of you making this channel possible on Patreon. I understand that not everyone is in a position to contribute, but if you're still interested in helping out, one of the best ways to do that is simply to share videos that you think might be interesting, or helpful to others. I know you know that, but it really does help.
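(One quick arithmetic footnote to the 21 million figure above: the supply really is just a geometric series. In the sketch below, the cutoff of 33 halvings reflects the point at which a reward of 50 bitcoin, halved repeatedly, drops below one satoshi, the smallest unit the real protocol tracks; because the real protocol also rounds each reward down to whole satoshis, the true cap sits very slightly below this sum.)

```python
# 210,000 blocks at 50 bitcoin, then 210,000 at 25, and so on.
# After 33 halvings, the reward (tracked in whole satoshis) reaches zero.
total = sum(210_000 * 50 / 2**k for k in range(33))
print(total)  # just under 21 million
```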
We ran a contest for math explainers, here are the results (SoME2)
In the last month or two, there's been a measurable increase in the attention to a wide variety of smaller math channels on YouTube. My friend James and I ran a second iteration of a contest that we did last year, the Summer of Math Exposition, which invites people to put up lessons about math online. It could be a video, could be an article, any medium you dream up, whatever topic you dream up, and we have some prizes available for the ones that we deem in some sense best, whatever that could mean. The deadline for submissions was a little over a month ago, and if we just focus on the video entries, they've collectively accumulated over 7 million views since that time. Considering that the vast majority of these were uploaded to very young channels, where the video is often just the first or second upload, this was really exciting for me to see. I suspect a big part of the reason for this rising tide is that after the submission deadline, we ran a peer review process, where an algorithm would feed participants two different videos to compare, and they'd be asked to vote on which one of these is, quote, better, according to a few criteria. Now, that process generated over 10,000 comparisons, which on the one hand helps to provide an initial rough rank ordering of all the videos. But more important than any judgments or rankings, it gave an excuse for many hundreds of people to upload around a similar time, and then collectively view each other's work, helping to jump-start a cluster of videos with a shared viewer base. And that 7 million number doesn't account for other videos on these channels, for instance the many submissions people made to last year's contest, which since this year's deadline have collectively jumped up by about 2 million views. One group, who told me that last year's contest was what inspired them to put up their first video, mentioned that a video they had made in between the two contests suddenly jumped from 1,000 to 600,000 views during the peer review process for this year's contest, despite not being among the videos reviewed. This is all to say, there can be a surprising value in the seemingly simple presence of a shared goal and a shared deadline. And again, these are just the video entries, where it's easy for us to run analytics to quickly get a sense of the reach; many of my favorite entries were the written ones. The spirit of the contest, as you can no doubt tell, is getting more people to put out math lessons, and on that front, mission accomplished. But I did promise to select five winners, lessons that stood out as especially valuable for one reason or another, and that brings us to this video here. To choose winners, James and I both spent a couple weeks giving a pretty thorough look at over 100 of the top entries, as determined by the peer review process, and we also recruited a few guest judges from the community to help look at a subset of these and make sure that our own biases and blind spots weren't playing too heavy a role. Many, many thanks to them for the time they offered. I won't tell you the final decision until the end of the video. What I thought might be more fun is to lead up to it by talking through the criteria that I had in mind when making this selection, highlighting as many exemplary submissions as I can along the way, hopefully giving any of you who are looking to put out your own math lessons online at some point a few concrete things to focus on.
At a high level, the four criteria I told people I'd look out for were motivation, clarity, novelty, and memorability. The first two are probably the most important. Let's start with motivation. This actually has two meanings I can think of, one on the macro scale and one on the micro scale. By macro-scale motivation, I mean, how well do you hook someone into the lesson as a whole? This video by Alexander Berdnickov opens by asking why it is that when you hear a plane approaching, the pitch of its sound seems to slowly fall. He points out how a lot of people assume that this is the Doppler effect, but that this doesn't actually hold up to scrutiny, since, for example, that pitch actually rises as the plane is going away from you. It's a good point, and an interesting question; you have my attention. One of my favorite articles in the batch, by Adi Mital, prompts you to wonder about the algorithm behind the panorama feature on your phone, and proceeds to explain, one, why that's not trivial, and two, the linear algebra and projective geometry involved in a DIY-style project to stitch together two overlapping images taken at different angles. Applications and tangible problems can make for great motivation, but that's not the only source of motivation. Depending on the target audience, a good nerd-sniping question can also do the trick. This video on the channel Going Null opens with a seemingly impossible puzzle: 10 prisoners are each given a hat, chosen arbitrarily from 10 total hat types available. It is possible for some of the prisoners to have the same hat type as others, and everyone can see all of the other hats, but not their own. After being given a little time to look at everyone else's hats and think about it all, the prisoners are to simultaneously shout out a guess for their own hat type. The question is, can you find a method that guarantees at least one of the prisoners will make a correct guess? This video by Eric Roland motivates the idea of p-adic numbers and the p-adic metric by showing how, if you take 2 to the 10th, 2 to the 100th, 2 to the 1,000th, and so on, and you assign distinct colors to each digit, lining them all up on the right, you can see that their final digits line up more and more with larger powers. And he asks the question of whether it's reasonable to interpret this as a kind of convergence, despite the fact that these numbers are clearly diverging to infinity in the usual sense. A completely different form of motivation can come from showing the historical significance of a problem or a field, hence giving the viewer a feeling that they're part of something bigger. One excellent video on the channel A Well-Rested Dog provides an overview of the history of calculus, and the progression of how some of the world's smartest minds grappled with the nuances of infinity and infinitesimals. It's the right mixture of entertaining and detailed, and he goes on to talk about how learning all of this made his own questions and confusions in a calculus class feel validated, which I think a lot of students can resonate with. This example is less about the intro of a video motivating the lesson it teaches, and more about the entire video motivating an entire field. Another one like that would be this lecture by the channel Thissary, laying out how Cantor's diagonalization argument, the halting problem, and a number of other paradoxes people might have heard of in math, computer science, and logic
all actually follow the same basic pattern. And moreover, if you try to formalize the exact sense in which they follow the same pattern, that ends up serving as a pretty nice motivation for the subject of category theory. The last flavor of motivation I'll mention is if you can somehow make the learner feel like they're playing an active role in the lesson. This is very hard to do with a video, maybe even impossible, and it's best suited for in-person lessons. But one written entry that I thought did this especially well was an inverse Turing test, where you as the reader are challenged to come up with a sequence of ones and zeros that appears random, and the article goes on to explain various statistical tests that you could apply to prove that the sequence was actually human-generated, and not really random. The content of the article is centered around the particular sequence that you, the reader, created, and you're invited to change it along the way to try to get it to pass more tests. It's a nice touch, and I could easily see this working really well as a classroom activity. Whatever approach you take, whatever flavor of motivation is your favorite, it's hard to overstate just how important it is that you do actually give viewers a reason to care. This is true for any piece of content, but I think it's especially true for educational content, and even more so for math, given the amount of focus and thought that these topics sometimes require. I think this was articulated best by the author of one of my favorite podcasts, An Opinionated History of Mathematics, who has a manifesto on his website that lays out what he calls the axioms of learning, beginning with the first axiom, quote: in a perfect world, students pursue learning not because it is prescribed to them, but rather out of a genuine desire to figure things out. It follows that we must not introduce any topic for which we cannot first convince the students that they should want to pursue it. That said, one mistake that I think I've made in past videos is to over-philosophize in the video's introduction. Motivation is critical, but it doesn't have to take long, and often what actually keeps the viewer engaged is to get right to the point and leave any commentary about broader themes and connections to the end. If you can, motivate using clear examples, not sweeping statements or promises of what is to come. By micro-scale motivation, what I mean is whether each new idea that's introduced in the lesson itself feels to the learner like it has a good reason to be there. For instance, this video by Joshua Maros gives a fairly detailed overview of ray tracing, and what I love about it is that before he introduces any new technical topic, like the rendering equation, importance sampling, or the ReSTIR algorithm, he's already outlined the main idea and intuition for that topic with really well-visualized examples. It makes it so that once the equation comes on screen, or the algorithm is described, it doesn't feel like an expression handed down with nothing to hold on to. Instead, it arrives only once it's articulating something that already exists at least loosely in the viewer's mind, making it much easier to parse. This video by Michael DeFranco, about extending the factorial, offers another great example of a lesson with good motivation along the way. You may have heard that there's a function generalizing the factorial to real and even complex inputs, the gamma function.
The usual definition is written down as a certain integral expression, sort of handed down from on high, and the justification for why this generalizes factorials is certain properties that you can prove about it. But a lot of students find this unsatisfying: where does it come from? By contrast, in Michael's explanation, he starts by observing the properties that are true of the normal factorial function that you would want to be true of a generalized version, and uses those desired properties to motivate various alternate expressions, massaged here and there to be more amenable to non-whole-number inputs, ultimately leading to a pretty satisfying answer. Another good template for this micro-scale motivation, when introducing a pretty complicated solution to a problem, is to start with a naive but flawed solution, and then progressively refine it. This article by Max Slater, on differentiable programming, does this particularly well. The basic question is how you get computers to evaluate derivatives, a ubiquitous task for machine learning. He starts by describing the most obvious approach, and then what flaws it has, and uses that as motivation for another approach. But that one has its own flaws, and fixing those motivates yet another approach, and so on. The ideas he builds up to, dual numbers and backward-mode automatic differentiation, both could feel a bit confusing if presented out of the blue. But in context, having motivated each new idea by pointing out flaws with the previous ones, it all ends up feeling utterly reasonable. Turning back to that same prisoner hat puzzle I referenced earlier, one of the other things I liked about it is how the author doesn't just present the solution; there are plenty of puzzle videos out there which do that. Instead, he gives a pretty authentic look at the wrong turns and tangents that are involved in the problem-solving process, without even eating up too much time to do so, and he justifies each new step with a general problem-solving principle. All of this micro-scale motivation could just as well be categorized as a subset of clarity. If motivating a lesson determines how much attention and focus the viewer is willing to give you, clarity determines how quickly you burn through that focus. The best hook in the world is wasted if the lesson which follows is confusing. This presentation by Explanaria talks about how to describe various crystal structures using group theory, which, considering the complex 3D forms involved and the fact that most people don't know group theory, has the potential to be very confusing. But they do a really effective job of keeping concrete examples front and center, guiding the reader to focus on one relevant pattern at a time, and distilling down to a simple version of an idea before seeing how that fits into a broader, more general setting. In general, entries that struck me as especially clear would often keep one or two examples front and center, and often give a feeling of playing with those examples, maybe running simulations or tweaking them to run up against edge cases, all around giving the viewer a chance to build their own intuitions before general rules are presented. The example doesn't even have to be explicit. In a visually driven lesson, the choice of what to show on screen when making general points is often a great opportunity to offer the viewer a concrete example to hold on to, but without wasting too much time explicitly talking about that example or over-emphasizing its importance.
This, I think, is part of what gives visually driven lessons the opportunity to be clearer. As a brief side comment, by the way, loosely related to clarity, for any of you who want to use music in your videos: while music can enrich the storytelling aspect of a lesson, setting the desired tone and momentum, once you're getting into the meat of a technical explanation, it's very easy for the music to do more harm than good. If it's there at all, you'll want it to be decidedly in the background and not calling attention to itself. I recognize some hypocrisy here; it's definitely something I know I've messed up in past videos. It's just worth thinking about: whatever benefit you see from the music, you don't want to incur a needless cost to clarity that outweighs that benefit. Moving on to novelty, this is another category that has two distinct interpretations. One would be stylistic originality. Back when I created this channel, part of the reason I wrote my own animation tool behind it was to ensure a kind of stylistic originality. Well, the main reason was that it was a fun side project, and having my hands deep in the guts of some tool helped me to feel less constrained in trying to visualize whatever came to mind, but being a forcing function for originality was at least a small part of my reasoning. This means there's at least a little hint of irony in the fact that, if we fast forward to today, so many of the entries in this contest used that tool, Manim, to illustrate their lessons. There's nothing wrong with that. It actually delights me; it's why I made it open source, and I'm very grateful to the Manim community for everything they've done to make the tool more accessible. But I would still encourage people to find their own unique voice and aesthetic, whatever tools they use and whoever they take inspiration from. I don't want to overemphasize that point, because it's the much less important half of novelty. The much more important kind of novelty is when the thing you present would have been very hard to find elsewhere on the internet, either because it's a highly unique topic or because it's a very unique perspective. For example, this video on percolation showed a completely fascinating toy model for studying phase changes, a model where it's easier to make exact proofs, and considering the level of depth and the level of clarity that the authors provided, I think it's fair to say you wouldn't find something like this on YouTube if this group hadn't made it. As to memorability, I'll keep this one quick. Lessons tick off this box when they ask a question that's just so fun to think about, or provide such a satisfying aha moment, that it stays with you long after watching it or reading it. Admittedly, this one is highly personal and subjective. To my taste, for example, this video by Daria Ivanova discusses the question of when it's possible for a single track to have been left by a bicycle, that is, for the back wheel to go through the same path that the front wheel does, which is just so fun to think about. This video by Gurgle Bensic about how involute gears work had a really satisfying way of explaining why a certain gear design pattern works so well, one that for me at least just stuck. So with all of that, who are the chosen winners? In the announcement, I promised that one winner slot would go to an entry that was made as a collaboration, and that one goes to the percolation video. The other four winners are, perhaps unsurprisingly, also ones that I've already mentioned.
They include the post about describing crystal structures with group theory, the video covering the history of calculus, the one about ray tracing and the algorithms to make it faster, and the problem-solving lesson centered around a tricky hat riddle. These entries really do speak for themselves, so rather than telling you too much more here, I encourage you to check them out. To be honest, after I got it down to about 25 entries that I wanted to at least be honorable mentions, it was exceedingly hard to actually choose winners from that, since for each of these I could easily envision a target audience for whom that entry would actually be the best recommendation. It was a game of comparing apples to oranges, but times 25. Below the video, I've left links to the other 20 that I chose as honorable mentions, to a playlist that contains all the video submissions, and also to a blog post containing links to all of the non-video submissions. Thanks to a sponsorship from Brilliant, each winner will get $1,000 as a cash prize, and also, much more importantly I think, a rare-edition golden pi creature. Also, after the initial announcement, RISC Zero and Google Fonts both generously reached out offering additional prize sponsorships, and I'd also like to thank Protocol Labs for another contribution to help us cover the costs of managing the whole event. Thanks to everybody who participated, and to everybody who helped in creating this rising tide for new math channels and new math blogs that we've seen in the last month. It was genuinely inspiring to see just how well this all went.
The medical test paradox, and redesigning Bayes' rule
Some of you may have heard this paradoxical fact about medical tests. It's very commonly used to introduce the topic of Bayes' rule in probability. The paradox is that you could take a test which is highly accurate, in the sense that it gives correct results to a large majority of the people taking it, and yet, under the right circumstances, when assessing the probability that your particular test result is correct, you can still land on a very low number, arbitrarily low, in fact. In short, an accurate test is not necessarily a very predictive test. Now, when people think about math and formulas, they don't often think of it as a design process. I mean, maybe in the case of notation it's easy to see that different choices are possible, but when it comes to the structure of the formulas themselves and how we use them, that's something people typically view as fixed. In this video, you and I will dig into this paradox, but instead of using it to talk about the usual version of Bayes' rule, I'd like to motivate an alternate version, an alternate design choice. Now, what's up on the screen is a little bit abstract, which makes it difficult to justify that there really is a substantive difference here, especially when I haven't explained either one yet. To see what I'm talking about, though, we should really start by spending some time a little more concretely, just laying out what exactly this paradox is. Picture 1,000 women, and suppose that 1% of them have breast cancer. And let's say they all undergo a certain breast cancer screening, and that nine of those with cancer correctly get positive results, and there's one false negative. And then suppose that among the remainder without cancer, 89 get false positives, and 901 correctly get negative results. So if all you know about a woman is that she does this screening and she gets a positive result, you don't have information about symptoms or anything like that, you know that she's either one of these nine true positives or one of these 89 false positives. So the probability that she's in the cancer group, given the test result, is 9 divided by 9 plus 89, which is approximately 1 in 11. In medical parlance, you would call this the positive predictive value of the test, or PPV: the number of true positives divided by the total number of positive test results. You can see where the name comes from. To what extent does a positive test result actually predict that you have the disease? Now hopefully, as I've presented it this way, where we're thinking concretely about a sample population, all of this makes perfect sense. But where it comes across as counterintuitive is if you just look at the accuracy of the test, presented to people as a statistic, and then ask them to make judgments about their test result. Test accuracy is not actually one number, but two. First, you ask how often the test is correct for those with the disease. This is known as the test's sensitivity, as in, how sensitive is it to detecting the presence of the disease? In our example, the test sensitivity is 9 in 10, or 90%, and another way to say the same fact would be to say the false negative rate is 10%. And then a separate, not necessarily related, number is how often it's correct for those without the disease, which is known as the test's specificity. As in, are positive results caused specifically by the disease, or are there confounding triggers giving false positives?
In our example, the specificity is about 91%, or another way to say the same fact would be to say the false positive rate is 9%. So the paradox here is that in one sense, the test is over 90% accurate; it gives correct results to over 90% of the patients who take it. And yet, if you learn that someone gets a positive result without any added information, there's actually only a 1 in 11 chance that that particular result is accurate. This is a bit of a problem, because of all of the places for math to be counterintuitive, medical tests are one area where it matters a lot. In 2006 and 2007, the psychologist Gerd Gigerenzer gave a series of statistics seminars to practicing gynecologists, and he opened with the following example. A 50-year-old woman, no symptoms, participates in a routine mammography screening. She tests positive, is alarmed, and wants to know from you whether she has breast cancer for certain, or what her chances are. Apart from the screening result, you know nothing else about this woman. In that seminar, the doctors were then told that the prevalence of breast cancer for women of this age is about 1%, and then to suppose that the test sensitivity is 90% and that its specificity is 91%. You might notice these are exactly the same numbers from the example that you and I just looked at; this is where I got them. So, having already thought it through, you and I know the answer: it's about 1 in 11. However, the doctors in this session were not primed with the suggestion to picture a concrete sample of 1,000 individuals the way that you and I were; all they saw were these numbers. They were then asked: how many women who test positive actually have breast cancer? What is the best answer? And they were presented with these four choices. In one of the sessions, over half the doctors present said that the correct answer was 9 in 10, which is way off. Only a fifth of them gave the correct answer, which is worse than what it would have been if everybody had randomly guessed. It might seem a little extreme to be calling this a paradox. I mean, it's just a fact; it's not something intrinsically self-contradictory. But as these seminars with Gigerenzer show, people, including doctors, definitely find it counterintuitive that a test with high accuracy can give you such a low predictive value. We might call this a veridical paradox, which refers to facts that are provably true but which nevertheless can feel false when phrased a certain way. It's sort of the softest form of a paradox, saying more about human psychology than about logic. The question is how we can combat this. Where we're going with this, by the way, is that I want you to be able to look at numbers like this and quickly estimate in your head that the predictive value of a positive test should be around 1 in 11. Or, if I changed things and asked what if it were 10% of the population who had breast cancer, you should be able to quickly turn around and say that the final answer would be a little over 50%. Or, if I said to imagine a really low prevalence, something like 0.1% of patients having cancer, you should again quickly estimate that the predictive value of the test is around 1 in 100, that 1 in 100 of those with positive test results in that case would have cancer. Or, let's say we go back to the 1% prevalence, but I make the test more accurate. I tell you to imagine the specificity is 99%.
There, you should be able to relatively quickly estimate that the answer is a little less than 50%. The hope is that you're doing all of this with minimal calculations in your head. Now, the goal of quick calculations might feel very different from the goal of addressing whatever misconception underlies this paradox, but they actually go hand in hand. Let me show you what I mean. On the side of addressing misconceptions, what would you tell the people in that seminar who answered 9 in 10? What fundamental misconception are they revealing? What I might tell them is that in much the same way that you shouldn't think of tests as telling you deterministically whether you have a disease, you shouldn't even think of them as telling you your chances of having a disease. Instead, the healthy view of what tests do is that they update your chances. In our example, before taking the test, a patient's chances of having cancer were 1 in 100. In Bayesian terms, we call this the prior probability. The effect of this test was to update that prior by almost an order of magnitude, up to around 1 in 11. The accuracy of a test is telling us about the strength of this updating; it's not telling us a final answer. What does this have to do with quick approximations? Well, a key number for those approximations is something called the Bayes factor, and the very act of defining this number serves to reinforce this central lesson about reframing what it is that tests do. You see, one of the things that makes test statistics so very confusing is that there are at least four numbers that you'll hear associated with them. For those with the disease, there's the sensitivity and the false negative rate, and then for those without, there's the specificity and the false positive rate, and none of these numbers actually tells you the thing you want to know. Luckily, if you want to interpret a positive test result, you can pull out just one number to focus on from all this. Take the sensitivity divided by the false positive rate. In other words, how much more likely are you to see the positive test result with cancer versus without? In our example, this number is 10. This is the Bayes factor, also sometimes called the likelihood ratio. A very handy rule of thumb is that to update a small prior, or at least to approximate the answer, you simply multiply it by the Bayes factor. So in our example, where the prior was 1 in 100, you would estimate that the final answer should be around 1 in 10, which is in fact slightly above the true correct answer. So, based on this rule of thumb, if I asked you what would happen if the prior from our example were instead 1 in 1,000, you could quickly estimate that the effect of the test should be to update those chances to around 1 in 100. And in fact, take a moment to check yourself by thinking through a sample population. In this case, you might picture 10,000 patients, where only 10 of them really have cancer. Based on that 90% sensitivity, we would expect 9 of those cancer cases to give true positives. And on the other side, a 91% specificity means that 9% of those without cancer get false positives. So we'd expect 9% of the remaining patients, which is around 900, to give false positive results. Here, with such a low prevalence, the false positives really do dominate the true positives, so the probability that a randomly chosen positive case from this population actually has cancer is only around 1%, just like the rule of thumb predicted.
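Here's that rule of thumb as a short sketch, together with the sample-population check just described. The population size and the structure of the check mirror the numbers in the text; the function name is just an illustrative label:

```python
def bayes_factor(sensitivity, false_positive_rate):
    """How much more likely a positive result is with the disease than without."""
    return sensitivity / false_positive_rate

bf = bayes_factor(0.90, 0.09)  # 10, as in the example
for prior in [0.01, 0.001]:
    print(prior * bf)  # rule of thumb: about 1/10 and 1/100

# Exact check with a sample population of 10,000 and a 0.1% prevalence:
true_pos = 10_000 * 0.001 * 0.90   # 9 true positives
false_pos = 10_000 * 0.999 * 0.09  # about 899 false positives
print(true_pos / (true_pos + false_pos))  # about 0.0099, roughly 1 in 100
```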
Now, this rule of thumb clearly cannot work for higher priors. For example, it would predict that a prior of 10% gets updated all the way to 100% certainty. But that can't be right. In fact, take a moment to think through what the answer should be, again using a sample population. Maybe this time we picture 10 out of 100 having cancer. Again, based on the 90% sensitivity of the test, we'd expect 9 of those true cancer cases to get positive results. But what about the false positives? How many do we expect there? 9% of the remaining 90 is about 8. So upon seeing a positive test result, it tells you that you're either one of these 9 true positives or one of the 8 false positives. So this means the chances are a little over 50%, roughly 9 out of 17, or 53%. At this point, having dared to dream that Bayesian updating could look as simple as multiplication, you might temper your hopes and pragmatically acknowledge that sometimes life is just more complicated than that. Except, it's not. This rule of thumb turns into a precise mathematical fact as long as we shift away from talking about probabilities to instead talking about odds. If you've ever heard someone talk about the chances of an event being 1 to 1 or 2 to 1, things like that, you already know about odds. With probability, we're taking the ratio of the number of positive cases out of all possible cases, right? Things like 1 in 5 or 1 in 10. With odds, what you do is take the ratio of all positive cases to all negative cases. You commonly see odds written with a colon to emphasize the distinction, but it's still just a fraction, just a number. So an event with a 50% probability would be described as having 1-to-1 odds, a 10% probability is the same as 1-to-9 odds, an 80% probability is the same as 4-to-1 odds, you get the point. It's the same information; it still describes the chances of a random event, but it's presented a little differently, like a different unit system. Probabilities are constrained between 0 and 1, with even chances sitting at 0.5, but odds range from 0 up to infinity, with even chances sitting at the number 1. The beauty here is that a completely accurate, not-even-approximating-things way to frame Bayes' rule is to say: express your prior using odds, and then just multiply by the Bayes factor. Think about what the prior odds are really saying. It's the number of people with cancer divided by the number without it. Let's just write that down as a normal fraction for a moment, so we can multiply it. When you filter down to just those with positive test results, the number of people with cancer gets scaled down, scaled down by the probability of seeing a positive test result given that someone has cancer. And then, similarly, the number of people without cancer also gets scaled down, this time by the probability of seeing a positive test result given that someone doesn't have cancer. So the ratio between these two counts, the new odds upon seeing the test result, looks just like the prior odds, except multiplied by this term here, which is exactly the Bayes factor. Look back at our example, where the Bayes factor was 10. And as a reminder, this came from the 90% sensitivity divided by the 9% false positive rate: how much more likely are you to see a positive result with cancer versus without? If the prior is 1%, expressed as odds this looks like 1 to 99. So by our rule, this gets updated to 10 to 99, which, if you want, you could convert back to a probability: 10 divided by 10 plus 99, or about 1 in 11.
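And here is the odds-based framing as code, exact rather than approximate. The function names are just illustrative labels:

```python
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(odds):
    return odds / (1 + odds)

def update(prior_prob, bayes_factor):
    """Exact Bayes' rule: posterior odds = prior odds times the Bayes factor."""
    return odds_to_prob(prob_to_odds(prior_prob) * bayes_factor)

print(update(0.01, 10))  # about 0.092, i.e. odds of 10 to 99
print(update(0.10, 10))  # about 0.526, the case that broke the rule of thumb
```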
If instead the prior were 10%, which was the example that tripped up our rule of thumb earlier, expressed as odds this looks like 1 to 9. By our simple rule, this gets updated to 10 to 9, which you can already read off pretty intuitively: it's a little above even chances, a little above 1 to 1. If you prefer, you can convert it back to a probability; you would write it as 10 out of 19, or about 53%. And indeed, that is what we already found by thinking things through with a sample population. Let's say we go back to the 1% prevalence, but I make the test more accurate. What if I told you to imagine that the false positive rate were only 1% instead of 9%? What that would mean is that our Bayes factor is 90 instead of 10; the test is doing more work for us. In this case, with the more accurate test, the prior gets updated to 90 to 99, which is a little less than even chances, something a little under 50%. To be more precise, you could make the conversion back to probability and work out that it's around 48%. But honestly, if you're just going for a gut feel, it's fine to stick with the odds. Do you see what I mean about how just defining this number helps to combat potential misconceptions? For anybody who's a little hasty in connecting test accuracy directly to your probability of having a disease, it's worth emphasizing that you could administer the same test, with the same accuracy, to multiple different patients who all get the same exact result. But if they're coming from different contexts, that result can mean wildly different things. However, the one thing that does stay constant in every case is the factor by which each patient's prior odds get updated. And by the way, this whole time we've been using the prevalence of the disease, which is the proportion of people in a population who have it, as a substitute for the prior, the probability of having it before you see a test. However, that's not necessarily the case. If there are other known factors, things like symptoms, or, in the case of a contagious disease, things like known contacts, those also factor into the prior, and they could potentially make a huge difference. As another side note, so far we've only talked about positive test results, but far more often you'd be seeing a negative test result. The logic there is completely the same, but the Bayes factor that you compute is going to look different. Instead, you look at the probability of seeing this negative test result with the disease versus without the disease. So in our cancer example, this would have been the 10% false negative rate divided by the 91% specificity, or about 1 in 9. In other words, seeing a negative test result in that example would reduce your prior odds by about an order of magnitude. When you write it all out as a formula, here's how it looks. It says your odds of having a disease given a test result equal your odds before taking the test, the prior odds, times the Bayes factor. Now, let's contrast this with the usual way that Bayes' rule is written, which is a bit more complicated. In case you haven't seen it before, it's essentially just what we were doing with sample populations, but you wrap it all up symbolically. Remember how every time we were counting the number of true positives and then dividing it by the sum of the true positives and the false positives? We do just that, except instead of talking about absolute amounts, we talk of each term as a proportion.
So the proportion of true positives in the population comes from the prior probability of having the disease multiplied by the probability of seeing a positive test result in that case. Then we copy that term down again into the denominator, and the proportion of false positives comes from the prior probability of not having the disease times the probability of a positive test in that case. If you want, you could also write this down with words instead of symbols, if terms like sensitivity and false positive rate are more comfortable. And this is one of those formulas where, once you say it out loud, it seems like a bit much, but it really is no different from what we were doing with sample populations. If you wanted to make the whole thing look simpler, you often see this entire denominator written just as the probability of seeing a positive test result overall. While that does make for a really elegant little expression, if you intend to use this for calculations, it's a little disingenuous, because in practice, every single time you do this, you need to break down that denominator into two separate parts, breaking down the cases. So, taking this more honest representation of it, let's compare our two versions of Bayes' rule. And again, maybe it looks nicer if we use the words sensitivity and false positive rate. If nothing else, it helps emphasize which parts of the formula are coming from statistics about the test's accuracy. This actually emphasizes one thing I really like about the framing with odds and a Bayes factor, which is that it cleanly factors out the parts that have to do with the prior and the parts that have to do with the test's accuracy. But over in the usual formula, all of those are very intermingled. And this has a very practical benefit: it's really nice if you want to swap out different priors and easily see their effects. This is what we were doing earlier. But with the other formula, to do that, you have to recompute everything each time; you can't leverage a pre-computed Bayes factor the same way. The odds framing also makes things really nice if you want to do multiple different Bayesian updates based on multiple pieces of evidence. For example, let's say you took not one test but two, or you wanted to think about how the presence of symptoms plays into it. For each piece of new evidence you see, you always ask the question: how much more likely would you be to see that with the disease versus without the disease? Each answer to that question gives you a new Bayes factor, which you multiply into your odds. Even beyond making calculations easier, there's something I really like about attaching a number to test accuracy that doesn't even look like a probability. I mean, if you hear that a test has, for example, a 9% false positive rate, that's just such a disastrously ambiguous phrase. It's so easy to misinterpret it to mean there's a 9% chance that your positive test result is false. But imagine if instead the number that we heard tacked onto test results was that the Bayes factor for a positive test result is, say, 10. There's no room to confuse that for your probability of having a disease. The entire framing of what a Bayes factor is is that it's something that acts on a prior. It forces your hand to acknowledge that the prior is something entirely separate, and highly necessary for drawing any conclusion. All that said, the usual formula is definitely not without its merits.
If you view that usual formula not simply as something to plug numbers into, but as an encapsulation of the sample population idea that we've been using throughout, you could very easily argue that it's actually much better for your intuition. After all, it's what we were routinely falling back on in order to check ourselves that the Bayes factor computation even made sense in the first place. Like any design decision, there is no clear-cut objective best here. But it's almost certainly the case that giving serious consideration to that question will lead you to a better understanding of Bayes' rule. Also, since we're on the topic of kind of paradoxical things, a friend of mine, Matt Cook, recently wrote a book all about paradoxes. I actually contributed a small chapter to it with thoughts on the question of whether math is invented or discovered, and the book as a whole is a really nice collection of thought-provoking paradoxical things ranging from philosophy to math and physics. You can of course find all the details in the description.
The more general uncertainty principle, regarding Fourier transforms
You've probably heard of the Heisenberg uncertainty principle from quantum mechanics: that the more you know about a particle's position, the less certain you can be of its momentum, and vice versa. My goal here is for you to come away from this video feeling like this is utterly reasonable. It'll take some time, but I think you'll agree that digging deep is well worth it. You see, the uncertainty principle is actually one specific example of a much more general trade-off that shows up in a lot of everyday, totally non-quantum circumstances involving waves. The plan here is to see what this means in the context of sound waves, which should feel reasonable, and then Doppler radar, which should again feel reasonable and a little bit closer to the quantum case, and then for particles, which, if you're willing to accept one or two premises of quantum mechanics, hopefully feels just as reasonable as the first two. The core idea here has to do with the interplay between frequency and duration. And I bet you already have an intuitive idea of this principle before we even get into the math or the quantum. If you were to pull up behind a car at a red light, and its turn signal flashed in sync with yours for a few seconds, you might kind of think that they have the same frequency. But at that point, for all you know, they could fall out of sync as more time passes, revealing that they actually had different frequencies. So an observation over a short period of time gave you low confidence about what their frequencies are. But if you were to sit at that red light for a full minute, and the signals continued to click in sync, you would be a lot more confident that the frequencies are actually the same. So that certainty about the frequency information required an observation spread out over time. And this trade-off right here, between how short your observation can be and how confident you can feel about the frequency, is an example of the general uncertainty principle. Similarly, think of a musical note. The shorter it lasts in time, the less certain you can be about what its exact frequency is. In the extreme, I could ask you what the pitch of a clap or a shock wave is, and even someone with perfect pitch would be unable to answer. And on the flip side, a more definite frequency requires a longer duration signal. Or rather than talking about definiteness or certainty, it would be a little more accurate here to say that the short signal correlates highly with a wider range of frequencies, and that a signal correlating strongly with only a narrow range of frequencies must last for a longer time. Though, that's the kind of phrase that's made a little bit clearer when we bring in the actual math. So let's turn now to talking about the Fourier transform, which is the relevant construct for analyzing frequencies. The last video I put out was a visual intuition for this transform, and yes, it probably would be helpful if you've seen it, but I'm going to go ahead and give a quick recap here just to remind ourselves of how it went. Let's say you have a signal, and it plays five beats per second over the course of two seconds. The Fourier transform gives a way to view any signal not in terms of the intensity at each point in time, but instead in terms of the strength of various frequencies within it. The main idea was to take this signal and to kind of wind it around a circle. As in, imagine some rotating vector whose length is determined by the height of the graph at each point in time.
Right now, this little vector is rotating at 0.3 cycles per second. That's the frequency with which we're winding the graph around the circle. And for most frequencies, the signal is kind of just averaged out over the circle. This was the fun part of the last video, don't you think? Just seeing the different patterns that come up as you wind a pure cosine around a circle like this. But the key point is what happens when that winding frequency matches the signal frequency, in this case five cycles per second. As our little vector rotates around and draws, all of the peaks line up on one side and all of the valleys on another side, so the whole weight of the graph is kind of off-center, so to speak. The idea behind the Fourier transform is that if you follow the center of mass of the graph wound up with frequency f, the position of that center of mass encodes the strength of that frequency in the original signal. The distance between that center of mass and the origin captures the strength of that frequency. And this is something I didn't really talk about in the main video, but the angle of that center of mass off the horizontal corresponds to the phase of the given frequency. Now, one way to think of this whole winding mechanism is that it's a way to measure how well your signal correlates with a given pure frequency. So remember, when we say the Fourier transform, we're referring to this new function whose input is that winding frequency, and whose output is the center of mass, thought of as a complex number. Or more technically, it's a certain multiple of that center of mass, but whatever, the overall shape remains the same. And the graph that I'm drawing is just going to be the x-coordinate of that center of mass, the real part of its output. If you wanted, you could also plot the distance between the center of mass and the origin, and maybe that better conveys how strongly each possible frequency correlates with the signal. But a downside is that you lose some of the nice linearity properties that I talked about last video. Anyway, point is, this spike you're looking at here above the winding frequency of 5 is the Fourier transform's way of telling us that the dominant frequency of the signal is 5 beats per second. And equally importantly, the fact that it's a little bit spread out around that 5 is an indication that pure sine waves near 5 beats per second also correlate pretty well with the signal. And that last idea is key for the uncertainty principle. What I want you to do is think about how this spread changes as the signal persists longer or shorter over time. You've already seen this at an intuitive level; all we're doing right now is just illustrating it in the language of Fourier transforms. If the signal persists over a long period of time, then when the winding frequency is even slightly different from 5, the signal goes on long enough to wrap itself around the circle and balance out. So looking at the Fourier plot over here, that corresponds to a super sharp drop-off in the magnitude of the transform as your frequency shifts away from that 5 beats per second. On the other hand, if your signal was really localized to a short period of time, then as you adjust the frequency away from 5 beats per second, the signal doesn't really have as much time to balance itself out around the circle. You have to change the winding frequency to be meaningfully different from 5 before that signal starts to balance out again.
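If you'd like to poke at this winding idea yourself, here is a rough numerical sketch of it, using the two-second, five-beats-per-second signal from the recap; the sampling choices are arbitrary.

    import numpy as np

    # Rough sketch of the winding picture: wrap a signal around a circle at
    # a given winding frequency and take the center of mass of the wound-up
    # graph. A spike in its magnitude near the signal frequency is the
    # Fourier transform's way of flagging that frequency.

    t = np.linspace(0, 2, 2000)            # two seconds of signal
    signal = np.cos(2 * np.pi * 5 * t)     # five beats per second

    def center_of_mass(winding_frequency):
        # Each sample becomes a vector of length signal(t) at angle -2*pi*f*t.
        wound = signal * np.exp(-2j * np.pi * winding_frequency * t)
        return wound.mean()

    for f in [0.3, 4.8, 5.0, 5.2]:
        print(f, abs(center_of_mass(f)))   # magnitude peaks near f = 5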
Over on the frequency plot, that shorter signal corresponds to a much broader peak around the 5 beats per second. And that's the uncertainty principle, just phrased a little bit more mathematically: a signal concentrated in time must have a spread-out Fourier transform, meaning it correlates with a wide range of frequencies, and a signal with a concentrated Fourier transform has to be spread out in time. And one other place where this comes up in a really tangible way is Doppler radar. So with radar, the idea is you send out some radio wave pulse, and the pulse might reflect off of objects, and the time that it takes for this echo signal to return to you lets you deduce how far away those objects are. And you can actually take this one step further and make deductions about the velocities of those objects using the Doppler effect. Think about sending out a pulse with some frequency. If this gets reflected off an object moving towards you, then the beats of that wave get kind of smushed together, so the echo you hear back is going to be at a slightly higher frequency. Fourier transforms give a neat way to view this. The Fourier transform of your original signal tells you the frequencies that go into it, and for simplicity, let's think of that as being dominated by a single pure frequency. Though as you know, if it's a short pulse, that means that our Fourier transform has to be spread out a little bit. And now think about the Doppler-shifted echo. By coming back at a higher frequency, it means that its Fourier transform will just look like a similar plot shifted up a bit. Moreover, if you look at the size of that shift, you can deduce how quickly the object was moving. By the way, there is an important technical point that I'm choosing to gloss over here, and I've outlined it a little more in the video description. What follows is meant to be a distilled, if somewhat oversimplified, description of the Fourier trade-off in this setup. The salient fact is that the time and frequency of that echo signal correspond respectively to the position and the velocity of the object being measured, which is what makes this example much more closely analogous to the quantum mechanical Heisenberg uncertainty principle. You see, there is a very real way in which a radar operator faces a dilemma, where the more certain you can be about the positions of things, the less certain you are about their velocities. Here, imagine sending out a pulse that persists over a long period of time. Then the echo from some object is also spread out over time. And on its own, that might not seem like an issue, but in practice there are all sorts of different objects in the field, so these echoes are all going to start to get overlapped with each other. Combine that with other noise and imperfections, and this can make the locations of multiple objects extremely ambiguous. Instead, a more precise understanding of how far away all these things are would require having a very quick little pulse confined to a small amount of time. But think about the frequency representation of such a short echo. As you saw with the sound example, the Fourier transform of a quick pulse is necessarily more spread out. So for many objects with various velocities, that would mean that the Doppler-shifted echoes, despite having been nicely separated in time, are more likely to overlap in frequency space. So since what you're looking at is the sum of all of these, it can be really ambiguous how you break it down.
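Here is one way to see that trade-off numerically, a sketch of my own with arbitrary pulse lengths: the measured width of the spectrum shrinks as the pulse lasts longer.

    import numpy as np

    # Sketch of the time-frequency trade-off: the spectrum of a short pulse
    # is wider than that of a long one. The durations are arbitrary choices.

    def spectral_width(pulse_duration, carrier=5.0, sample_rate=1000):
        t = np.arange(-2, 2, 1 / sample_rate)
        envelope = np.exp(-(t / pulse_duration) ** 2)     # Gaussian envelope
        pulse = envelope * np.cos(2 * np.pi * carrier * t)
        spectrum = np.abs(np.fft.rfft(pulse))
        freqs = np.fft.rfftfreq(len(pulse), 1 / sample_rate)
        # Standard deviation of the spectrum around its mean frequency:
        mean_f = (freqs * spectrum).sum() / spectrum.sum()
        return np.sqrt(((freqs - mean_f) ** 2 * spectrum).sum() / spectrum.sum())

    print(spectral_width(0.1))   # short pulse: broad peak around 5 Hz
    print(spectral_width(1.0))   # long pulse: much sharper peak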
If you wanted a nice clean, sharp view of the velocities, you would need to have an echo that only occupies a very small amount of frequency space. But for a signal to be concentrated in frequency space, it necessarily has to be spread out in time. This is the Fourier trade-off: you cannot have crisp delineation for both. And this brings us to the quantum case. Do you know who else spent some time immersed in the pragmatic world of radio wave transmissions? A young, otherwise philosophically inclined history major in World War I France, Louis de Broglie. And this was a strangely fitting post, given his predisposition to philosophizing about the nature of waves. Because after the war, as de Broglie switched from the humanities to physics, in his 1924 PhD thesis he proposed that all matter has wave-like properties. And more than that, he concluded that the momentum of any moving particle is going to be proportional to the spatial frequency of that wave, how many times that wave cycles per unit distance. Okay, now that's the kind of phrase that can easily fly into one ear and out the other. Because as soon as you say matter is a wave, it's easy to just throw up your hands and say physics is just weird. But really, think about this. Even if you're willing to grant that particles behave like waves in some way, whatever that means, why on earth should the momentum of those particles, the thing we classically think of as mass times velocity, have anything to do with the spatial frequency of that wave? Now, being more of a math than a physics guy, I asked a number of people with deeper backgrounds in physics about helpful intuitions here, and one thing that became clear is that there is a surprising variety of viewpoints. Personally, one thing I found to be interesting was just going back to the source and seeing how de Broglie framed things in his seminal paper on the topic. You see, there is a sense in which it's not all that different from the Doppler effect, where relative movement corresponds to shifts in frequency. It has a slightly different flavor, since we're not talking about frequency over time; instead, we're talking about frequency over space, and special relativity is going to come into play. But I still think it's an interesting analogy. In his thesis, de Broglie lays out what is, in his own words, a crude comparison for the kind of wave phenomenon he has in mind: imagine many weights hanging from springs, with all of these weights oscillating up and down in sync, and with most of the mass concentrated towards a single point. The energy of these oscillating weights is meant to be a metaphor for the energy of a particle, specifically the E equals mc squared style energy residing in its mass. And de Broglie emphasized how the conception he had in mind involves the particle being dispersed across all of space. The whole premise he was exploring here is that the energy of a particle might have to do with something that oscillates over time, since this was known to be the case for photons, and these oscillating weights are just meant to be a metaphor for whatever that something might be. With Einstein's relatively new theory of relativity in mind, he pointed out that if you view this whole setup while moving relative to it, all of the weights are going to appear to fall out of phase. That's not obvious, and I'm certainly exaggerating the effect in this animation.
It has to do with a core fact from special relativity, that what you consider to be simultaneous events in one reference frame may not be simultaneous in a different reference frame. So even though from one point of view you might see two of these weights as reaching their peaks and their valleys at the same instant, from a different moving point of view, those events might actually be happening at different times. Understanding this more fully requires some knowledge of special relativity, so we'll all just have to wait for Henry Reich's series on that topic to come out. But here, our only goal is to get an inkling for why momentum, that thing we usually think of as mass times velocity, should have anything to do with spatial frequency. And the basic line of reasoning here is: if mass is the same as energy, via E equals mc squared, and if that energy is carried as some kind of oscillating phenomenon, similar to how it is for photons, then this sort of relativistic Doppler effect means that changes to how that mass moves correspond to changes in the spatial frequency. So what does our general Fourier trade-off tell us in this case? Well, if a particle is described as a little wave packet over space, then the Fourier transform, where we're thinking of this as a function over space, not over time, tells us how much various pure frequencies correspond with this wave. So if the momentum is the spatial frequency, up to a constant multiple, then the momentum is also a kind of wave, namely some multiple of the Fourier transform of the original wave. So if that original wave was very concentrated around a single point, as we have seen multiple times now, that means that its Fourier transform must necessarily be more spread out, and hence the wave describing its momentum must be more spread out, and vice versa. Notice, unlike the Doppler radar case, where the ambiguity arose because waves were being used to measure an object with a definite distance and speed, what we're seeing here is that the particle is the wave. So the spread over space and over momentum is not some artifact of imperfect measurement techniques; it's a spread fundamental to what the particle is, analogous to how a musical note being spread out over time is fundamental to what it even means to be a musical note. One pet peeve I have in mainstream references to quantum is that they often treat Heisenberg's uncertainty principle as some fundamental example of things being unknowable in the quantum realm, as if it is a core nugget of the universe's indeterminacy. But really, it's just a trade-off between how concentrated a wave and its frequency representation can be, applied to the premise that matter is some kind of wave, and hence spread out. All of the stuff about randomness and unknowability is still there, but it comes one level deeper, in the way that these waves have come to be interpreted. You see, when you measure these particles, say trying to detect whether one is in a given region, whether or not you find it there appears to be probabilistic, where the probability of finding it is proportional to the strength of the wave in that region. So when one of these waves is concentrated near a point, what that actually means is that we have a higher probability of finding it near that point, that we are more certain of its location. And just to beat this drum one more time, since that concentration implies a more spread-out Fourier transform, then the wave describing its momentum would also be more spread out.
So you wouldn't be able to find a narrow range of momenta that the particle has a high probability of occupying. I quite like how, if you look at the German word for this principle, it might be more directly translated as the unsharpness relation, which I think more faithfully captures the Fourier trade-off at play here without imposing on questions of knowability. When I think of the Heisenberg uncertainty principle, what makes it fascinating is not so much that it's a statement about randomness. I mean, yes, that randomness is very thought-provoking and contentious and just plain weird. But to me, equally fascinating is that underpinning Heisenberg's conclusion is the fact that position and momentum have the same relationship as sound and frequency, as if a particle's momentum is somehow the sheet music describing how it moves through space.
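As a closing numerical check on this unsharpness idea, here is a sketch of my own that builds Gaussian wave packets of various widths and confirms that the product of position spread and spatial-frequency spread stays essentially constant; the widths are arbitrary choices.

    import numpy as np

    # Numerical check of the unsharpness relation for Gaussian wave packets:
    # the product of spread in position and spread in spatial frequency
    # stays pinned near one constant, no matter the packet width.

    def spreads(width, n=4096, extent=50.0):
        x = np.linspace(-extent, extent, n)
        psi = np.exp(-(x / width) ** 2)
        density = psi ** 2 / (psi ** 2).sum()
        sigma_x = np.sqrt((density * x ** 2).sum())
        spectrum = np.abs(np.fft.fft(psi)) ** 2
        k = np.fft.fftfreq(n, d=x[1] - x[0])   # cycles per unit distance
        spectrum /= spectrum.sum()
        sigma_k = np.sqrt((spectrum * k ** 2).sum())
        return sigma_x, sigma_k

    for w in [1.0, 2.0, 5.0]:
        sx, sk = spreads(w)
        print(w, sx * sk)   # roughly constant: squeeze one, the other grows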
Taylor series | Chapter 11, Essence of calculus
When I first learned about Taylor series, I definitely didn't appreciate just how important they are. But time and time again, they come up in math and physics and many fields of engineering, because they're one of the most powerful tools that math has to offer for approximating functions. I think one of the first times this clicked for me as a student was not in a calculus class but in a physics class. We were studying a certain problem that had to do with the potential energy of a pendulum, and for that, you need an expression for how high the weight of the pendulum is above its lowest point. And when you work that out, it comes out to be proportional to 1 minus the cosine of the angle between the pendulum and the vertical. Now, the specifics of the problem we were trying to solve are beside the point here, but what I'll say is that this cosine function made the problem awkward and unwieldy, and it made it less clear how pendulums relate to other oscillating phenomena. But if you approximate cosine of theta as 1 minus theta squared over 2, of all things, everything just fell into place much more easily. Now, if you've never seen anything like this before, an approximation like that might seem completely out of left field. I mean, if you graph cosine of theta along with this function, 1 minus theta squared over 2, they do seem rather close to each other, at least for small angles near 0. But how would you even think to make this approximation? And how would you find that particular quadratic? The study of Taylor series is largely about taking non-polynomial functions and finding polynomials that approximate them near some input. And the motive here is that polynomials tend to be much easier to deal with than other functions: they're easier to compute, easier to take derivatives of, easier to integrate, just all around more friendly. So let's take a look at that function, cosine of x, and really take a moment to think about how you might construct a quadratic approximation near x equals 0. That is, among all of the possible polynomials that look like c0 plus c1 times x plus c2 times x squared, for some choice of these constants c0, c1, and c2, find the one that most resembles cosine of x near x equals 0, whose graph kind of spoons with the graph of cosine x at that point. Well, first of all, at the input 0, the value of cosine of x is 1, so if our approximation is going to be any good at all, it should also equal 1 at the input x equals 0. Plugging in 0 just results in whatever c0 is, so we can set that equal to 1. This leaves us free to choose constants c1 and c2 to make this approximation as good as we can, but nothing we do with them is going to change the fact that the polynomial equals 1 at x equals 0. Now, it would also be good if our approximation had the same tangent slope as cosine x at this point of interest. Otherwise, the approximation drifts away from the cosine graph much faster than it needs to. The derivative of cosine is negative sine, and at x equals 0, that equals 0, meaning the tangent line is perfectly flat. On the other hand, when you work out the derivative of our quadratic, you get c1 plus 2 times c2 times x. At x equals 0, this just equals whatever we choose for c1. So this constant c1 has complete control over the derivative of our approximation around x equals 0. Setting it equal to 0 ensures that our approximation also has a flat tangent line at this point. And this leaves us free to change c2.
But the value and the slope of our polynomial at x equals 0 are locked in place to match those of cosine. The final thing to take advantage of is the fact that the cosine graph curves downward above x equals 0; it has a negative second derivative. Or in other words, even though the rate of change is 0 at that point, the rate of change itself is decreasing around that point. Specifically, since its derivative is negative sine of x, its second derivative is negative cosine of x, and at x equals 0, that equals negative 1. Now, in the same way that we wanted the derivative of our approximation to match that of the cosine, so that their values wouldn't drift apart needlessly quickly, making sure that their second derivatives match will ensure that they curve at the same rate, that the slope of our polynomial doesn't drift away from the slope of cosine x any more quickly than it needs to. Pulling up the same derivative we had before, and then taking its derivative, we see that the second derivative of this polynomial is exactly 2 times c2. So to make sure that this second derivative also equals negative 1 at x equals 0, 2 times c2 has to be negative 1, meaning c2 itself has to be negative 1 half. And this gives us the approximation 1 plus 0x minus 1 half x squared. To get a feel for how good it is, if you estimate, say, cosine of 0.1 using this polynomial, you'd estimate it to be 0.995, which is extremely close to the true value of cosine of 0.1. It's a really good approximation. Take a moment to reflect on what just happened. You had three degrees of freedom with this quadratic approximation, the constants c0, c1, and c2. c0 was responsible for making sure that the output of the approximation matches that of cosine x at x equals 0, c1 was in charge of making sure that the derivatives match at that point, and c2 was responsible for making sure that the second derivatives match up. This ensures that the way your approximation changes as you move away from x equals 0, and the way that the rate of change itself changes, is as similar as possible to the behavior of cosine x, given the amount of control that you have. You could give yourself more control by allowing more terms in your polynomial and matching higher order derivatives. For example, let's say you added on the term c3 times x cubed for some constant c3. Well, in that case, if you take the third derivative of a cubic polynomial, anything that's quadratic or smaller goes to 0. And as for that last term, after three iterations of the power rule, it looks like 1 times 2 times 3 times whatever c3 is. On the other hand, the third derivative of cosine x comes out to sine of x, which equals 0 at x equals 0. So to make sure that the third derivatives match, the constant c3 should be 0. Or in other words, not only is 1 minus 1 half x squared the best possible quadratic approximation of cosine, it's also the best possible cubic approximation. You can, however, make an improvement by adding on a fourth order term, c4 times x to the fourth. The fourth derivative of cosine is itself, which equals 1 at x equals 0. And what's the fourth derivative of our polynomial with this new term? Well, when you keep applying the power rule over and over, with those exponents all hopping down in front, you end up with 1 times 2 times 3 times 4 times c4, which is 24 times c4. So if we want this to match the fourth derivative of cosine x, which is 1, c4 has to be 1 over 24.
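If you want to check these numbers yourself, here is a tiny verification script (my own, not from the video):

    import math

    # Checking the quadratic and quartic approximations derived above.

    def quadratic(x):
        return 1 - x**2 / 2

    def quartic(x):
        return 1 - x**2 / 2 + x**4 / 24

    x = 0.1
    print(math.cos(x))    # 0.9950041652780258
    print(quadratic(x))   # 0.995
    print(quartic(x))     # 0.9950041666666667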
And indeed, the polynomial 1 minus 1 half x squared plus 1 over 24 times x to the fourth, which looks like this, is a very close approximation for cosine x around x equals 0. In any physics problem involving the cosine of a small angle, for example, predictions would be almost unnoticeably different if you substituted this polynomial for cosine of x. Now, take a step back and notice a few things happening with this process. First of all, factorial terms come up very naturally in this process. When you take n successive derivatives of the function x to the n, letting the power rule just keep cascading on down, what you'll be left with is 1 times 2 times 3, on and on up to whatever n is. So you don't simply set the coefficients of the polynomial equal to whatever derivative value you want; you have to divide by the appropriate factorial to cancel out this effect. For example, that x to the fourth coefficient was the fourth derivative of cosine, 1, divided by 4 factorial, which is 24. The second thing to notice is that adding on new terms, like this c4 times x to the fourth, doesn't mess up what the old terms should be. And that's really important. For example, the second derivative of this polynomial at x equals 0 is still equal to 2 times the second coefficient, even after you introduce higher order terms. And it's because we're plugging in x equals 0, so the second derivative of any higher order term, all of which include an x, will just wash away. And the same goes for any other derivative, which is why each derivative of a polynomial at x equals 0 is controlled by one and only one of the coefficients. If instead you were approximating near an input other than 0, like maybe x equals pi, in order to get the same effect, you would have to write your polynomial in terms of powers of x minus pi, or whatever input you're looking at. This makes it look noticeably more complicated, but all we're doing is just making sure that the point pi looks and behaves like 0, so that plugging in x equals pi is going to result in a lot of nice cancellation that leaves only one constant. And finally, on a more philosophical level, notice how what we're doing here is basically taking information about higher order derivatives of a function at a single point, and then translating that into information about the value of the function near that point. You can take as many derivatives of cosine as you want. It follows this nice cyclic pattern: cosine of x, negative sine of x, negative cosine, sine, and then repeat. And the value of each one of these is easy to compute at x equals 0; it gives the cyclic pattern 1, 0, negative 1, 0, and then repeat. And knowing the values of all of those higher order derivatives is a lot of information about cosine of x, even though it only involves plugging in a single number, x equals 0. So what we're doing is leveraging that information to get an approximation around this input. And you do it by creating a polynomial whose higher order derivatives are designed to match up with those of cosine, following the same 1, 0, negative 1, 0 cyclic pattern. And to do that, you just make each coefficient of the polynomial follow that same pattern, but you have to divide each one by the appropriate factorial. Like I mentioned before, this is what cancels out the cascading effect of many power rule applications. The polynomials you get by stopping this process at any point are called Taylor polynomials for cosine of x.
More generally, and hence more abstractly, if we were dealing with some function other than cosine, you would compute its derivative, its second derivative, and so on, getting as many terms as you'd like, and you would evaluate each one of them at x equals 0. Then for the polynomial approximation, the coefficient of each x to the n term should be the value of the nth derivative of the function evaluated at 0, divided by n factorial. This whole rather abstract formula is something you'll likely see in any text or course that touches on Taylor polynomials. And when you see it, I want you to think to yourself that the constant term ensures that the value of the polynomial matches the value of f, the next term ensures that the slope of the polynomial matches the slope of the function at x equals 0, the next term ensures that the rate at which the slope changes is the same at that point, and so on, depending on how many terms you want. And the more terms you choose, the closer the approximation, but the trade-off is that the polynomial you get will be more complicated. And to make things even more general, if you wanted to approximate near some input other than 0, which we'll call a, you would write this polynomial in terms of powers of x minus a, and you would evaluate all the derivatives of f at that input, a. This is what Taylor polynomials look like in their fullest generality. Changing the value of a changes where this approximation is hugging the original function, where its higher order derivatives will be equal to those of the original function. One of the simplest meaningful examples of this is the function e to the x around the input x equals 0. Computing the derivatives is super nice, as nice as it gets, because the derivative of e to the x is itself. So the second derivative is also e to the x, as is its third, and so on. So at the point x equals 0, all of these are equal to 1. And what that means is that our polynomial approximation should look like 1, plus 1 times x, plus 1 over 2 times x squared, plus 1 over 3 factorial times x cubed, and so on, depending on how many terms you want. These are the Taylor polynomials for e to the x. Okay, so with that as a foundation, in the spirit of showing you just how connected all the topics of calculus are, let me turn to something kind of fun: a completely different way to understand this second order term of the Taylor polynomials, but geometrically. It's related to the fundamental theorem of calculus, which I talked about in chapters 1 and 8, if you need a quick refresher. Like we did in those videos, consider a function that gives the area under some graph between a fixed left point and a variable right point. What we're going to do here is think about how to approximate this area function, not the function for the graph itself, like we've been doing before. Focusing on that area is what's going to make the second order term kind of pop out. Remember, the fundamental theorem of calculus is that this graph itself represents the derivative of the area function. And it's because a slight nudge dx to the right bound of the area gives a new bit of area that's approximately equal to the height of the graph times dx, and that approximation is increasingly accurate for smaller and smaller choices of dx.
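That general recipe is easy to express in code; here's a sketch of my own, where the list of derivative values at 0 is supplied by hand:

    import math

    # Sketch of the general recipe: the coefficient of x^n is the nth
    # derivative at 0, divided by n factorial.

    def taylor_polynomial(derivatives_at_zero):
        coefficients = [d / math.factorial(n)
                        for n, d in enumerate(derivatives_at_zero)]
        def p(x):
            return sum(c * x**n for n, c in enumerate(coefficients))
        return p

    # e^x: every derivative at 0 equals 1.
    approx_exp = taylor_polynomial([1] * 10)
    print(approx_exp(1), math.e)             # ~2.71828 for both

    # cos(x): derivatives at 0 cycle through 1, 0, -1, 0.
    approx_cos = taylor_polynomial([1, 0, -1, 0] * 3)
    print(approx_cos(0.5), math.cos(0.5))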
Coming back to the area picture: if you wanted to be more accurate about this change in area, given some change in x that isn't meant to approach zero, you would have to take into account this portion right here, which is approximately a triangle. Let's name the starting input a and the nudged input above it x, so that that change is x minus a. The base of that little triangle is that change, x minus a, and its height is the slope of the graph times x minus a. Since this graph is the derivative of the area function, its slope is the second derivative of the area function, evaluated at the input a. So the area of this triangle, one half base times height, is one half times the second derivative of this area function, evaluated at a, multiplied by x minus a squared. And this is exactly what you would see with a Taylor polynomial. If you knew the various derivative information about this area function at the point a, how would you approximate the area at the point x? Well, you have to include all that area up to a, f of a, plus the area of this rectangle here, which is the first derivative times x minus a, plus the area of that little triangle, which is one half times the second derivative times x minus a squared. I really like this, because even though it looks a bit messy all written out, each one of the terms has a very clear meaning that you can just point to on the diagram. If you wanted, we could call it an end here, and you would have a phenomenally useful tool for approximations with these Taylor polynomials. But if you're thinking like a mathematician, one question you might ask is whether or not it makes sense to never stop, and just add infinitely many terms. In math, an infinite sum is called a series. So even though one of these approximations with finitely many terms is called a Taylor polynomial, adding all infinitely many terms gives what's called a Taylor series. Now, you have to be really careful with the idea of an infinite series, because it doesn't actually make sense to add infinitely many things; you can only hit the plus button on the calculator so many times. But if you have a series where adding more and more of the terms, which makes sense at each step, gets you increasingly close to some specific value, what you say is that the series converges to that value. Or, if you're comfortable extending the definition of equality to include this kind of series convergence, you'd say that the series as a whole, this infinite sum, equals the value that it's converging to. For example, look at the Taylor polynomial for e to the x and plug in some input, like x equals 1. As you add more and more polynomial terms, the total sum gets closer and closer to the value e, so you say that this infinite series converges to the number e, or, what's saying the same thing, that it equals the number e. In fact, it turns out that if you plug in any other value of x, like x equals 2, and look at the value of the higher and higher order Taylor polynomials at this value, they will converge towards e to the x, which in this case is e squared. And this is true for any input, no matter how far away from zero it is, even though these Taylor polynomials are constructed only from derivative information gathered at the input zero. In a case like this, we say that e to the x equals its own Taylor series at all inputs x, which is kind of a magical thing to have happen.
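You can watch that convergence happen with a few lines of code; a quick illustration:

    import math

    # Watching the Taylor series for e^x converge at x = 1.

    total = 0.0
    for n in range(12):
        total += 1 / math.factorial(n)
        print(n, total)   # climbs toward e = 2.718281828...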
And even though this equals-its-own-Taylor-series property also holds for a couple of other important functions, things like sine and cosine, sometimes these series only converge within a certain range around the input whose derivative information you're using. If you worked out the Taylor series for the natural log of x around the input x equals 1, which is built by evaluating the higher order derivatives of the natural log of x at x equals 1, this is what it would look like. When you plug in an input between 0 and 2, adding more and more terms of this series will indeed get you closer and closer to the natural log of that input. But outside of that range, even by just a little bit, the series fails to approach anything. As you add on more and more terms, the sum just bounces back and forth wildly. It does not, as you might expect, approach the natural log of that value, even though the natural log of x is perfectly well defined for inputs above 2. In some sense, the derivative information of ln of x at x equals 1 doesn't propagate out that far. In a case like this, where adding more terms of the series doesn't approach anything, you say that the series diverges. And that maximum distance between the input you're approximating near and points where the outputs of these polynomials actually do converge is called the radius of convergence for the Taylor series. There remains more to learn about Taylor series. There are many use cases, tactics for placing bounds on the error of these approximations, tests for understanding when series do and don't converge. And for that matter, there remains more to learn about calculus as a whole, and the countless topics not touched by this series. The goal with these videos is to give you the fundamental intuitions that make you feel confident and efficient in learning more on your own, and potentially even rediscovering more of the topic for yourself. In the case of Taylor series, the fundamental intuition to keep in mind as you explore more of what there is, is that they translate derivative information at a single point to approximation information around that point. Thank you once again to everybody who supported this series. The next series like it will be on probability, and if you want early access as those videos are made, you know where to go.
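For the record, here is what that convergence and divergence for ln(x) looks like numerically, using the standard series whose terms are (-1)^(n+1) (x-1)^n / n:

    import math

    # Taylor series for ln(x) around x = 1: converges for 0 < x <= 2,
    # diverges just outside that range.

    def partial_sum(x, terms):
        return sum((-1) ** (n + 1) * (x - 1) ** n / n
                   for n in range(1, terms + 1))

    print(partial_sum(1.5, 50), math.log(1.5))   # agrees nicely
    print(partial_sum(2.5, 20))                  # already swinging wildly
    print(partial_sum(2.5, 50))                  # diverges, though ln(2.5) exists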
A quick trick for computing eigenvalues | Chapter 15, Essence of linear algebra
This is a video for anyone who already knows what eigenvalues and eigenvectors are, and who might enjoy a quick way to compute them in the case of 2x2 matrices. If you're unfamiliar with eigenvalues, go ahead and take a look at this video here, which is actually meant to introduce them. You can skip ahead if all you want to do is see the trick, but if possible, I'd like you to rediscover it for yourself. So for that, let's lay out a little background. As a quick reminder, if the effect of a linear transformation on a given vector is to scale that vector by some constant, we call it an eigenvector of the transformation, and we call the relevant scaling factor the corresponding eigenvalue, often denoted with the letter lambda. When you write this as an equation and rearrange a little bit, what you see is that if the number lambda is an eigenvalue of a matrix A, then the matrix A minus lambda times the identity must send some non-zero vector, namely the corresponding eigenvector, to the zero vector, which in turn means that the determinant of this modified matrix must be zero. Okay, that's all a bit of a mouthful to say, but again, I'm assuming that all of this is review for any of you watching. So, the usual way to compute eigenvalues, how I used to do it and how I believe most students are taught to carry it out, is to subtract the unknown value lambda off the diagonal entries, and then solve for when the determinant is equal to zero. Doing this always involves a few extra steps to expand out and simplify to get a clean quadratic polynomial, what's known as the characteristic polynomial of the matrix. The eigenvalues are the roots of this polynomial, so to find them, you have to apply the quadratic formula, which itself typically requires one or two more steps of simplification. Honestly, the process isn't terrible, but at least for 2x2 matrices, there is a much more direct way to get at the answer. And if you want to rediscover this trick, there are only three relevant facts you need to know, each of which is worth knowing in its own right and can help you with other problem solving. Number one: the trace of a matrix, which is the sum of the two diagonal entries, is equal to the sum of the eigenvalues. Or another way to phrase it, more useful for our purposes, is that the mean of the two eigenvalues is the same as the mean of the two diagonal entries. Number two: the determinant of a matrix, our usual ad minus bc formula, is equal to the product of the two eigenvalues. And this should kind of make sense if you understand that eigenvalues describe how much an operator stretches space in a particular direction, and that the determinant describes how much an operator scales areas or volumes as a whole. Now, before getting to the third fact, notice how you can essentially read these first two values out of the matrix without really writing much down. Take this matrix here as an example. Straight away, you can know that the mean of the eigenvalues is the same as the mean of 8 and 6, which is 7. Likewise, most linear algebra students are pretty well practiced at finding the determinant, which in this case works out to be 48 minus 8. So right away, you know that the product of the two eigenvalues is 40. Now take a moment to see if you can derive what will be our third relevant fact: how you can quickly recover two numbers when you know their mean and you know their product. Here, let's focus on this example.
You know that the two values are evenly spaced around the number 7, so they look like 7 plus or minus something; let's call that something d, for distance. You also know that the product of these two numbers is 40. Now, to find d, notice that this product expands really nicely. It works out as a difference of squares. So from there, you can directly find d: d squared is 7 squared minus 40, or 9, which means that d itself is 3. In other words, the two values for this very specific example work out to be 4 and 10. But our goal is a quick trick, and you wouldn't want to think through this each time, so let's wrap up what we just did in a general formula. For any mean m and product p, the distance squared is always going to be m squared minus p. This gives the third key fact, which is that when two numbers have a mean m and a product p, you can write those two numbers as m plus or minus the square root of m squared minus p. This is decently fast to rederive on the fly if you ever forget it, and it's essentially just a rephrasing of the difference of squares formula. But even still, it's a fact worth memorizing so that it's at the tip of your fingers. In fact, my friend Tim from the channel A Capella Science wrote us a nice quick jingle to make it a little more memorable. Let me show you how this works, say for the matrix with entries 3, 1, 4, 1. You start by bringing to mind the formula, maybe saying it all in your head. But when you write it down, you fill in the appropriate values for m and p as you go. So in this example, the mean of the eigenvalues is the same as the mean of 3 and 1, which is 2. So the thing you start writing is 2 plus or minus the square root of 2 squared minus something. Then the product of the eigenvalues is the determinant, which in this example is 3 times 1 minus 1 times 4, or negative 1. So that's the final thing you fill in, which means the eigenvalues are 2 plus or minus the square root of 5. You might recognize that this is the same matrix I was using at the beginning, but notice how much more directly we can get at the answer. Here, try another one. This time, the mean of the eigenvalues is the same as the mean of 2 and 8, which is 5. So again, you start writing out the formula, but this time writing 5 in place of m. And then the determinant is 2 times 8 minus 7 times 1, or 9. So in this example, the eigenvalues look like 5 plus or minus the square root of 16, which simplifies even further to 9 and 1. You see what I mean about how you can basically just start writing down the eigenvalues while you're staring at the matrix? It's typically just the tiniest bit of simplification at the end. Honestly, I find myself using this trick a lot when I'm sketching quick notes related to linear algebra and want to use small matrices as examples. I've been working on a video about matrix exponents, where eigenvalues pop up a lot, and I realized it's just very handy if students can read out the eigenvalues from small examples without losing the main line of thought by getting bogged down in a different calculation. As another fun example, take a look at this set of three different matrices, which comes up a lot in quantum mechanics. They're known as the Pauli spin matrices. If you know quantum mechanics, you'll know that the eigenvalues of matrices are highly relevant to the physics they describe, and if you don't know quantum mechanics, let this just be a little glimpse of how these computations are actually very relevant to real applications.
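Written as code, the whole trick is just a couple of lines; here's a sketch of my own, restricted to real eigenvalues for simplicity:

    import math

    # The mean-product trick for a 2x2 matrix [[a, b], [c, d]]: the
    # eigenvalues are m +/- sqrt(m^2 - p), where m is the mean of the
    # diagonal (half the trace) and p is the determinant. Complex
    # eigenvalues (m^2 < p) would need cmath instead of math.

    def eigenvalues_2x2(a, b, c, d):
        m = (a + d) / 2           # mean of the eigenvalues
        p = a * d - b * c         # product of the eigenvalues
        spread = math.sqrt(m * m - p)
        return m - spread, m + spread

    print(eigenvalues_2x2(3, 1, 4, 1))   # (2 - sqrt(5), 2 + sqrt(5))
    print(eigenvalues_2x2(2, 7, 1, 8))   # (1.0, 9.0)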
For these Pauli matrices, the mean of the diagonal entries in all three cases is 0, so the mean of the eigenvalues in all of these cases is 0, which makes our formula look especially simple. What about the products of the eigenvalues, the determinants of these matrices? For the first one, it's 0 minus 1, or negative 1. The second one also looks like 0 minus 1, but it takes a moment more to see because of the complex numbers. And the final one looks like negative 1 minus 0. So in all cases, the eigenvalues simplify to be plus and minus 1. Although in this case, you really don't need a formula to find two values if you know that they're evenly spaced around 0 and their product is negative 1. If you're curious, in the context of quantum mechanics, these matrices describe observations you might make about a particle's spin in the x, y, or z direction. And the fact that their eigenvalues are plus and minus 1 corresponds with the idea that the values for the spin that you would observe would be either entirely in one direction or entirely in another, as opposed to something continuously ranging in between. Maybe you'd wonder how exactly this works, or why you would use 2x2 matrices that have complex numbers to describe spin in three dimensions. Those are fair questions, just outside the scope of what I want to talk about here. You know, it's funny. I wrote this section because I wanted some case where you have 2x2 matrices that aren't just toy examples or homework problems, ones where they actually come up in practice, and quantum mechanics is great for that. But the thing is, after I made it, I realized that the whole example kind of undercuts the point that I'm trying to make. For these specific matrices, when you use the traditional method, the one with characteristic polynomials, it's essentially just as fast; it might actually be faster. I mean, take a look at the first one. The relevant determinant directly gives you a characteristic polynomial of lambda squared minus 1, and clearly that has roots of plus and minus 1. Same answer when you do the second matrix, lambda squared minus 1. And as for the last matrix, forget about doing any computations, traditional or otherwise. It's already a diagonal matrix, so its diagonal entries are the eigenvalues. However, the example is not totally lost to our cause. Where you will actually feel the speedup is in the more general case, where you take a linear combination of these three matrices and then try to compute the eigenvalues. You might write this as a times the first one, plus b times the second, plus c times the third. In quantum mechanics, this would describe spin observations in the general direction of a vector with coordinates a, b, c. More specifically, you should assume that this vector is normalized, meaning a squared plus b squared plus c squared is equal to 1. When you look at this new matrix, it's immediate to see that the mean of the eigenvalues is still zero. And you might also enjoy pausing for a brief moment to confirm that the product of those eigenvalues is still negative 1, and then from there concluding what the eigenvalues must be. And this time, the characteristic polynomial approach would be, by comparison, a lot more cumbersome, definitely harder to do in your head. To be clear, using the mean-product formula is not fundamentally different from finding roots of the characteristic polynomial. I mean, it can't be; they're solving the same problem.
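If you'd rather let the computer confirm that claim, here is a small check with an arbitrarily chosen unit vector (a, b, c):

    import numpy as np

    # Checking that a*X + b*Y + c*Z, built from the Pauli matrices with a
    # unit vector (a, b, c), still has eigenvalues +1 and -1.

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    a, b, c = 2/3, 1/3, 2/3            # satisfies a^2 + b^2 + c^2 = 1
    M = a * X + b * Y + c * Z

    print(np.trace(M))                 # 0, so the mean of the eigenvalues is 0
    print(np.linalg.det(M))            # -1, the product of the eigenvalues
    print(np.linalg.eigvals(M))        # +1 and -1, in some order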
One way to think about this equivalence, actually, is that the mean-product formula is a nice way to solve quadratics in general, and some viewers of the channel may recognize this. Think about it: when you're trying to find the roots of a quadratic given the coefficients, that's another situation where you know the sum of two values, and you also know their product, but you're trying to recover the original two values. Specifically, if the polynomial is normalized so that the leading coefficient is 1, then the mean of the roots will be negative one half times the linear coefficient, which is negative 1 times the sum of those roots. With the example on the screen, that makes the mean 5. And the product of the roots is even easier: it's just the constant term, no adjustments needed. So from there, you would apply the mean-product formula, and that gives you the roots. On the one hand, you could think of this as a lighter-weight version of the traditional quadratic formula. But the real advantage is not just that it's fewer symbols to memorize; it's that each one of them carries more meaning with it. I mean, the whole point of this eigenvalue trick is that because you can read out the mean and product directly from looking at the matrix, you don't need to go through the intermediate step of setting up the characteristic polynomial. You can jump straight to writing down the roots without ever explicitly thinking about what the polynomial looks like. But to do that, we need a version of the quadratic formula where the terms carry some kind of meaning. I realize this is a very specific trick for a very specific audience, but it's something I wish I had known in college, so if you happen to know any students who might benefit from this, consider sharing it with them. The hope is that it's not just one more thing that you memorize, but that the framing reinforces some other nice facts worth knowing, like how the trace and the determinant are related to eigenvalues. If you want to prove those facts, by the way, take a moment to expand out the characteristic polynomial for a general matrix, and then think hard about the meaning of each of these coefficients. Many thanks to Tim for ensuring that this mean-product formula will stay stuck in all of our heads for at least a few months. If you don't know about A Capella Science, please do check it out. The Molecular Shape of You in particular is one of the greatest things on the internet.
How (and why) to raise e to the power of a matrix | DE6
Let me pull out an old differential equations textbook that I learned from in college, and let's turn to this funny little exercise in here that asks the reader to compute e to the power A t, where A, we're told, is going to be a matrix. And the insinuation seems to be that the result will also be a matrix. It then offers several examples for what you might plug in for A. Now, taken out of context, putting a matrix into an exponent like this probably seems like total nonsense. But what it refers to is an extremely beautiful operation, and the reason it shows up in this book is that it's useful. It's used to solve a very important class of differential equations. In turn, given that the universe is often written in the language of differential equations, you see this pop up in physics all the time too, especially in quantum mechanics, where matrix exponents are littered all over the place; they play a particularly prominent role there. This has a lot to do with Schrödinger's equation, which we'll touch on a bit later. And it may also help in understanding your romantic relationships. But again, all in due time. A big part of the reason I want to cover this topic is that there is an extremely nice way to visualize what matrix exponents are actually doing using flow, which not a lot of people seem to talk about. But for the bulk of this chapter, let's start by laying out what exactly the operation is, and see if we can get a feel for what kinds of problems it helps us to solve. The first thing you should know is that this is not some bizarre way to multiply the constant e by itself multiple times. You would be right to call that nonsense. The actual definition is related to a certain infinite polynomial for describing real number powers of e, what we call its Taylor series. For example, if I took the number 2 and plugged it into this polynomial, then as you add more and more terms, each of which looks like some power of 2 divided by some factorial, the sum approaches a number near 7.389, and this number is precisely e times e. If you increment this input by 1, then somewhat miraculously, no matter where you started from, the effect on the output is always to multiply it by another factor of e. For reasons that you're going to see in a bit, mathematicians became interested in plugging all kinds of things into this polynomial, things like complex numbers, and for our purposes today, matrices, even when those objects do not immediately make sense as exponents. What some authors do is give this infinite polynomial the name exp when you plug in more exotic inputs. It's a gentle nod to the connection this has to exponential functions in the case of real numbers, even though obviously these inputs don't make sense as exponents. However, an equally common convention is to give a much less gentle nod to the connection, and just abbreviate the whole thing as e to the power of whatever object you're plugging in, whether that's a complex number or a matrix, or all sorts of more exotic objects. So while this equation is a theorem for real numbers, it's a definition for more exotic inputs. Cynically, you could call this a blatant abuse of notation. More charitably, you might view it as an example of the beautiful cycle between discovery and invention in math. In either case, plugging in a matrix, even to a polynomial, might seem a little strange, so let's be clear on what we mean here.
The matrix has to have the same number of rows and columns; that way, you can multiply it by itself according to the usual rules of matrix multiplication. This is what we mean by squaring it. Similarly, if you were to take that result and then multiply it by the original matrix again, this is what we mean by cubing the matrix. If you carry on like this, you can take any whole number power of a matrix; it's perfectly sensible. In this context, powers still mean exactly what you would expect, repeated multiplication. Each term in this polynomial is scaled by 1 divided by some factorial, and with matrices, all that means is that you multiply each component by that number. Likewise, it always makes sense to add together two matrices; this is something that you again do term by term. The astute among you might ask how sensible it is to take this out to infinity, which would be a great question, one that I'm largely going to postpone the answer to, but I can show you one pretty fun example here now. Take this 2x2 matrix that has negative pi and pi sitting off its diagonal. Let's see what the sum gives. The first term is the identity matrix; this is actually what we mean, by definition, when we raise a matrix to the zeroth power. Then we add the matrix itself, which gives us the pi off the diagonal terms, and then add half of the matrix squared, and continuing on, I'll have the computer keep adding more and more terms, each of which requires taking one more matrix product to get the new power, and then adding it to a running tally. And as it keeps going, it seems to be approaching a stable value, which is around negative 1 times the identity matrix. In this sense, we say the infinite sum equals that negative identity. By the end of this video, my hope is that this particular fact comes to make total sense to you. For any of you familiar with Euler's famous identity, this is essentially the matrix version of that. It turns out that in general, no matter what matrix you start with, as you add more and more terms, you eventually approach some stable value, though sometimes it can take quite a while before you get there. Just seeing the definition like this in isolation raises all kinds of questions, most notably, why would mathematicians and physicists be interested in torturing their poor matrices this way? What problems are they trying to solve? And if you're anything like me, a new operation is only satisfying when you have a clear view of what it's trying to do, some sense of how to predict the output based on the input before you actually crunch the numbers. How on earth could you have predicted that the matrix with pi off the diagonals results in a negative identity matrix like this? Often in math, you should view the definition not as a starting point, but as a target. Contrary to the structure of textbooks, mathematicians do not start by making definitions, then listing a lot of theorems, proving them, and then showing some examples. The process of discovering math typically goes the other way around. They start by chewing on specific problems, then generalizing those problems, then coming up with constructs that might be helpful in those general cases, and only then do they write down a new definition, or extend an old one. As to what sorts of specific examples might motivate matrix exponents, two come to mind, one involving relationships and the other quantum mechanics. Let's start with relationships.
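First, though, here is that running-tally computation as a short sketch of my own, with the pi matrix from a moment ago:

    import numpy as np

    # Summing terms A^n / n! and watching the running total stabilize.
    # With pi off the diagonal, the sum lands near -1 times the identity.

    def matrix_exponential(A, terms=30):
        total = np.zeros_like(A, dtype=float)
        power = np.eye(len(A))        # A^0, the identity
        factorial = 1.0
        for n in range(terms):
            total += power / factorial
            power = power @ A          # next power of A
            factorial *= (n + 1)
        return total

    A = np.array([[0.0, -np.pi], [np.pi, 0.0]])
    print(matrix_exponential(A))       # close to [[-1, 0], [0, -1]]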
Suppose we have two lovers, let's call them Romeo and Juliet, and let's let x represent Juliet's love for Romeo, and y represent his love for her, both of which are going to be values that change with time. This is an example that we actually touched on in chapter 1. It's based on a Steven Strogatz article, but it's okay if you didn't see that. The way their relationship works is that the rate at which Juliet's love for Romeo changes, the derivative of this value, is equal to negative 1 times Romeo's love for her. So, in other words, when Romeo is expressing cool disinterest, that's when Juliet's feelings actually increase, whereas if he becomes too infatuated, her interest will start to fade. Romeo, on the other hand, is the opposite. The rate of change of his love is equal to the size of Juliet's love. So while Juliet is mad at him, his affections tend to decrease, whereas if she loves him, that's when his feelings grow. Of course, neither one of these numbers is holding still. As Romeo's love increases in response to Juliet, her equation continues to apply and drives her love down. Both of these equations always apply, from each infinitesimal point in time to the next, so every slight change to one value immediately influences the rate of change of the other. This is a system of differential equations. It's a puzzle, where your challenge is to find explicit functions for x of t and y of t that make both of these expressions true. Now, as systems of differential equations go, this one is on the simpler side, enough so that many calculus students could probably just guess at an answer. But keep in mind, it's not enough to find some pair of functions that makes this true. If you want to actually predict where Romeo and Juliet end up after some starting point, you have to make sure that your functions match the initial set of conditions at time t equals zero. More to the point, our actual goal today is to systematically solve more general versions of this equation, without guessing and checking, and it's that question that leads us to matrix exponents. Very often, when you have multiple changing values like this, it's helpful to package them together as coordinates of a single point in a higher dimensional space. So for Romeo and Juliet, think of their relationship as a point in a 2D space, the x-coordinate capturing Juliet's feelings, and the y-coordinate capturing Romeo's. Sometimes it's helpful to picture this state as an arrow from the origin, other times just as a point. All that really matters is that it encodes two numbers, and moving forward we'll be writing that as a column vector. And of course, this is all a function of time. You might picture the rate of change of this state, the thing that packages together the derivative of x and the derivative of y, as a kind of velocity vector in this state space, something that tugs at our point in some direction, and with some magnitude that indicates how quickly it's changing. Remember, the rule here is that the rate of change of x is negative y, and the rate of change of y is x. Set up as vectors like this, we could rewrite the right-hand side of this equation as a product of this matrix with the original vector, x y. The top row encodes Juliet's rule, and the bottom row encodes Romeo's rule. So what we have here is a differential equation telling us that the rate of change of some vector is equal to a certain matrix times itself. In a moment, we'll talk about how matrix exponentiation solves this kind of equation.
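As a sanity check on that setup, here is a crude numerical simulation of the system. The initial feelings and the step size are hypothetical values of my own, and the tiny outward drift is an artifact of the simple Euler stepping, not of the true solution:

```python
import numpy as np

M = np.array([[0.0, -1.0],    # top row: rate of change of x is -y (Juliet's rule)
              [1.0,  0.0]])   # bottom row: rate of change of y is x (Romeo's rule)

state = np.array([2.0, 0.0])  # hypothetical initial condition at time t = 0
dt = 0.001
for _ in range(int(2 * np.pi / dt)):   # run for 2*pi units of time
    state = state + dt * (M @ state)   # the rate of change is M times the state

print(state)  # back near (2, 0): one full revolution, up to small numerical drift
```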
But before that, let me show you a simpler way that we can solve this particular system, one that uses pure geometry, and it helps set the stage for visualizing matrix exponents a bit later. This matrix from our system is a 90 degree rotation matrix. For any of you rusty on how to think about matrices as transformations, there's a video all about it on this channel, a series, really. The basic idea is that when you multiply a matrix by the vector 1, 0, it pulls out the first column. And similarly, if you multiply it by 0, 1, that pulls out the second column. What this means is that when you look at a matrix, you can read its columns as telling you what it does to these two vectors, known as the basis vectors. The way it acts on any other vector is the result of scaling and adding these two basis results by that vector's coordinates. So looking back at the matrix from our system, notice how from its columns, we can tell it takes the first basis vector to 0, 1, and the second to negative 1, 0, hence why I'm calling it the 90 degree rotation matrix. What it means for our equation is that it's saying wherever Romeo and Juliet are in this state space, the rate of change has to look like a 90 degree rotation of this position vector. The only way velocity can permanently be perpendicular to position like this is when you rotate around the origin in circular motion, never growing or shrinking, because the rate of change has no component in the direction of the position. More specifically, since the length of this velocity vector equals the length of the position vector, then for each unit of time, the distance that this covers is equal to 1 radius's worth of arc length along that circle. In other words, it rotates at 1 radian per unit time, so in particular, it would take 2 pi units of time to make a full revolution. If you want to describe this kind of rotation with a formula, we can use a more general rotation matrix, which looks like this. Again, we can read it in terms of the columns. Notice how the first column tells us that it takes that first basis vector to cosine of t, sine of t, and the second column tells us that it takes the second basis vector to negative sine of t, cosine of t, both of which are consistent with rotating by t radians. So, to solve the system, if you want to predict where Romeo and Juliet end up after t units of time, you can multiply this matrix by their initial state. The active viewers among you might also enjoy taking a moment to pause and confirm that the explicit formulas you get out of this for x of t and y of t really do satisfy the system of differential equations that we started with. The mathematician in you might wonder if it's possible to solve not just this specific system, but equations like it for any other matrix, no matter what its coefficients. To ask this question is to set yourself up to rediscover matrix exponents. The main goal for today is for you to understand how this equation lets you intuitively picture the operation which we write as e raised to a matrix, and on the flip side, how being able to compute matrix exponents lets you explicitly solve this equation. A much less whimsical example is Schrödinger's famous equation, which is the fundamental equation describing how systems in quantum mechanics change over time. It looks pretty intimidating, and I mean, it's quantum mechanics, so of course it will, but it's actually not that different from the Romeo-Juliet setup. This symbol here refers to a certain vector.
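As a quick numerical cross-check of that claim, one option is to compare a general-purpose matrix exponential against the rotation matrix; this is my own verification sketch, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm  # general-purpose matrix exponential

M = np.array([[0.0, -1.0],
              [1.0,  0.0]])    # the 90-degree rotation matrix from the system

t = 1.7                        # an arbitrary amount of time
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])  # rotation by t radians

print(np.allclose(expm(M * t), R))  # True: e^(Mt) is exactly rotation by t
```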
It's a vector that packages together all the information you might care about in a system, like the various particles' positions and momenta. It's analogous to our simpler 2D vector that encoded all the information about Romeo and Juliet. The equation says that the rate at which this state vector changes looks like a certain matrix times itself. There are a number of things that make Schrödinger's equation notably more complicated, but in the back of your mind you might think of it as a target point that you and I can build up to, with simpler examples like Romeo and Juliet offering more friendly stepping stones along the way. Actually, the simplest example, which is tied to ordinary real number powers of e, is the one-dimensional case. This is when you have a single changing value, and its rate of change equals some constant times itself. So the bigger the value, the faster it grows. Most people are more comfortable visualizing this with a graph, where the higher the value of the graph, the steeper its slope, resulting in this ever-steepening upward curve. Just keep in mind that when we get to higher-dimensional variants, graphs are a lot less helpful. This is a highly important equation in its own right. It's a very powerful concept when the rate of change of a value is proportional to the value itself. This is the equation governing things like compound interest, or the early stages of population growth before the effects of limited resources kick in, or the early stages of an epidemic while most of the population is susceptible. Calculus students all learn about how the derivative of e to the rt is r times itself. In other words, this self-reinforcing growth phenomenon is the same thing as exponential growth, and e to the rt solves this equation. Actually, a better way to think about it is that there are many different solutions to this equation, one for each initial condition, something like an initial investment size or an initial population, which we'll just call x-naught. Notice, by the way, how the higher the value for x-naught, the higher the initial slope of the resulting solution, which should make complete sense given the equation. The function e to the rt is just a solution when the initial condition is 1, but if you multiply by any other initial condition, you get a new function which still satisfies this property; it still has a derivative which is r times itself, but this time it starts at x-naught, since e to the 0 is 1. This is worth highlighting before we generalize to more dimensions. Do not think of the exponential part as being a solution in and of itself. Think of it as something that acts on an initial condition in order to give a solution. You see, up in the two-dimensional case, when we have a changing vector whose rate of change is constrained to be some matrix times itself, what the solution looks like is also an exponential term acting on a given initial condition, but the exponential part in that case will produce a matrix that changes with time, and the initial condition is a vector. In fact, you should think of the definition of matrix exponentiation as being heavily motivated by making sure that this fact is true. For example, if we look back at the system that popped up with Romeo and Juliet, the claim now is that solutions look like e raised to this 0, negative 1, 1, 0 matrix, all times time, multiplied by some initial condition. But we've already seen the solution in this case. We know it looks like a rotation matrix times the initial condition.
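Here is that one-dimensional picture in code, a minimal sketch with a made-up rate and initial condition, checking the defining property numerically:

```python
import numpy as np

r, x0 = 0.3, 5.0   # hypothetical growth rate and initial condition x-naught

def x(t):
    # The exponential acts on the initial condition to give a solution.
    return np.exp(r * t) * x0

h, t = 1e-6, 2.0
derivative = (x(t + h) - x(t - h)) / (2 * h)   # central-difference estimate
print(np.isclose(derivative, r * x(t)))        # True: dx/dt equals r times x
print(x(0))                                    # 5.0, since e to the 0 is 1
```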
So let's take a moment to roll up our sleeves and compute the exponential term using the definition that I mentioned at the start, and see if it lines up. Remember, writing e to the power of a matrix is a shorthand, a shorthand for plugging it into this long infinite polynomial, the Taylor series for e to the x. I know it might seem pretty complicated to do this, but trust me, it's very satisfying how this particular one turns out. If you actually sit down and you compute successive powers of this matrix, what you'd notice is that they fall into a cycling pattern every four iterations. This should make sense, given that we know it's a 90 degree rotation matrix. So when you add together all infinitely many matrices term by term, each term in the result looks like a polynomial in t with some nice cycling pattern in its coefficients, all of them scaled by the relevant factorial term. Those of you who are savvy with Taylor series might be able to recognize that each one of these components is the Taylor series for either sine or cosine, though in that top right corner's case, it's actually negative sine. So what we get from the computation is exactly the rotation matrix we had from before. To me, this is extremely beautiful. We have two completely different ways of reasoning about the same system, and they give us the same answer. I mean, it's reassuring that they do, but it is wild just how different the mode of thought is when you're chugging through this polynomial versus when you're geometrically reasoning about what a velocity perpendicular to a position must imply. Hopefully the fact that these line up inspires a little confidence in the claim that matrix exponents really do solve systems like this. This explains the computation we saw at the start, by the way, with the matrix that had negative pi and pi off the diagonals, producing the negative identity. This expression is exponentiating a 90 degree rotation matrix times pi, which is another way to describe what the Romeo-Juliet setup does after pi units of time. As we now know, that has the effect of rotating everything 180 degrees in this state space, which is the same as multiplying by negative 1. Also, for any of you familiar with imaginary number exponents, this particular example is probably ringing a ton of bells. It is 100% analogous. In fact, we could have framed the entire example so that Romeo and Juliet's feelings were packaged into a complex number, and the rate of change of that complex number would have been i times itself, since multiplication by i also acts like a 90 degree rotation. The same exact line of reasoning, both analytic and geometric, would have led to this whole idea that e to the power i t describes rotation. These are actually two of many different examples throughout math and physics, where you find yourself exponentiating some object which acts as a 90 degree rotation times time. It shows up with quaternions, or many of the matrices that pop up in quantum mechanics. In all of these cases, we have this really neat general idea that if you take some operation that rotates 90 degrees in some plane, often it's a plane in some high-dimensional space that we can't visualize, then what we get by exponentiating that operation times time is something that generates all other rotations in that same plane. One of the more complicated variations on this same theme is Schrödinger's equation. It's not just that this has the derivative-of-a-state-equals-some-matrix-times-that-state form.
The nature of the relevant matrix here is such that the equation also describes a kind of rotation, though in many applications of Schrödinger's equation, it'll be a rotation in a kind of function space. It's a little more involved, though, because typically there's a combination of many different rotations. It takes time to really dig into this equation, and I would love to do that in a later chapter. But right now, I cannot help but at least allude to the fact that this imaginary unit i, that sits so impishly in such a fundamental equation for all of the universe, is playing basically the same role as the matrix from our Romeo-Juliet example. What this i communicates is that the rate of change of a certain state is, in a sense, perpendicular to that state, and hence that the way things have to evolve over time will involve a kind of oscillation. But matrix exponentiation can do so much more than just rotation. You can always visualize these sorts of differential equations using a vector field. The idea is that this equation tells us the velocity of a state is entirely determined by its position. So what we do is go to every point in the space and draw a little vector indicating what the velocity of a state must be if it passes through that point. For our type of equation, this means that we go to each point v in space and we attach the vector M times v. To intuitively understand how any given initial condition will evolve, you let it flow along this field, with a velocity always matching whatever vector it's sitting on at any given point in time. So if the claim is that solutions to this equation look like e to the Mt times some initial condition, it means you can visualize what the matrix e to the Mt does by letting every possible initial condition flow along this field for t units of time. The transition from start to finish is described by whatever matrix pops out from the computation for e to the Mt. In our main example with the 90 degree rotation matrix, the vector field looks like this, and as we saw, e to the Mt describes rotation in that case, which lines up with flow along this field. As another example, the more Shakespearean Romeo and Juliet might have equations that look a little more like this, where Juliet's rule is symmetric with Romeo's, and both of them are inclined to get carried away in response to one another's feelings. Again, the way the vector field you're looking at has been defined is to go to each point v in space and attach the vector M times v. This is the pictorial way of saying that the rate of change of a state must always equal M times itself. But for this example, flow along the field looks a lot different from how it did before. If Romeo and Juliet start off anywhere in this upper right half of the plane, their feelings will feed off of each other, and they both tend towards infinity. If they're in the other half of the plane, well, let's just say that they stay more true to their Capulet and Montague family traditions. So even before you try calculating the exponential of this particular matrix, you can already have an intuitive sense for what the answer should look like. The resulting matrix should describe the transition from time zero to time t, which, if you look at the field, seems to indicate that it will squish along one diagonal while stretching along another, getting more extreme as t gets larger. Of course, all of this is presuming that e to the Mt times an initial condition actually solves these systems.
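If you'd like to draw one of these fields yourself, a bare-bones version might look like the following; the grid bounds and density are arbitrary choices of mine, assuming NumPy and Matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

M = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # swap in any 2x2 matrix to see its flow

# Go to each point v on a grid and attach the vector M times v.
xs, ys = np.meshgrid(np.linspace(-4, 4, 15), np.linspace(-4, 4, 15))
us = M[0, 0] * xs + M[0, 1] * ys
vs = M[1, 0] * xs + M[1, 1] * ys

plt.quiver(xs, ys, us, vs)
plt.gca().set_aspect("equal")
plt.show()
```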
This is one of those facts that's easiest to believe when you just work it out yourself. But I'll run through a quick rough sketch. Write out the full polynomial that defines e to the Mt, and multiply by some initial condition vector on the right. And then take the derivative of this with respect to t. Because the matrix M is a constant, this just means applying the power rule to each one of the terms. And that power rule really nicely cancels out with the factorial terms. So what we're left with is an expression that looks almost identical to what we had before, except that each term has an extra M hanging onto it. But this can be factored out to the left. So the derivative of the expression is M times the original expression, and hence it solves the equation. This actually sweeps under the rug some details required for rigor, mostly centered around the question of whether or not this thing actually converges, but it does give the main idea. In the next chapter, I would like to talk more about the properties that this operation has, most notably its relationship with eigenvectors and eigenvalues, which leads us to more concrete ways of thinking about how you actually carry out this computation, which otherwise seems insane. Also, time permitting, it might be fun to talk about what it means to raise e to the power of the derivative operator.
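For reference, here is that rough sketch written out symbolically, glossing over the convergence details just as the narration does:

```latex
\frac{d}{dt}\, e^{Mt}\mathbf{v}_0
  = \frac{d}{dt}\sum_{n=0}^{\infty} \frac{t^n M^n}{n!}\,\mathbf{v}_0
  = \sum_{n=1}^{\infty} \frac{n\, t^{n-1} M^n}{n!}\,\mathbf{v}_0
  = M \sum_{n=1}^{\infty} \frac{t^{n-1} M^{n-1}}{(n-1)!}\,\mathbf{v}_0
  = M\, e^{Mt}\,\mathbf{v}_0
```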
Hamming codes part 2, the elegance of it all
I'm assuming that everybody here is coming from part one. We were talking about Hamming codes, a way to create a block of data where most of the bits carry a meaningful message, while a few others act as a kind of redundancy, in such a way that if any bit gets flipped, either a message bit or a redundancy bit, anything in this block, a receiver is going to be able to identify that there was an error, and how to fix it. The basic idea presented there was how to use multiple parity checks to binary search your way down to the error. Now, in that video, the goal was to make Hamming codes feel as hands-on and rediscoverable as possible. But as you start to think about actually implementing this, either in software or hardware, that framing may actually undersell how elegant these codes really are. You might think that you need to write an algorithm that keeps track of all the possible error locations and cuts that group in half with each check, but it's actually way, way simpler than that. If you read out the answers to the four parity checks we did in the last video, all as 1's and 0's instead of yes's and no's, it literally spells out the position of the error in binary. For example, the number 7 in binary looks like 0111, essentially saying that it's 4 plus 2 plus 1. And notice where the position 7 sits. It does affect the first of our parity groups, and the second, and the third, but not the last. So reading the results of those four checks from bottom to top, indeed, does spell out the position of the error. There's nothing special about the example 7; this works in general. And this makes the logic for implementing the whole scheme in hardware shockingly simple. Now, if you want to see why this magic happens, take these 16 index labels for our positions, but instead of writing them in base 10, let's write them all in binary, running from 0000 up to 1111. As we put these binary labels back into their boxes, let me emphasize that they are distinct from the data that's actually being sent. They're nothing more than a conceptual label to help you and me understand where the four parity groups came from. The elegance of having everything that we're looking at be described in binary is maybe undercut by the confusion of having everything that we're looking at be described in binary. It's worth it, though. Focus your attention just on that last bit of all of these labels, and then highlight the positions where that final bit is a 1. What we get is the first of our four parity groups, which means that you can interpret that first check as asking, hey, if there's an error, is the final bit of that error's position a 1? Similarly, if you focus on the second-to-last bit and highlight all the positions where that's a 1, you get the second parity group from our scheme. In other words, that second check is asking, hey, me again, if there's an error, is the second-to-last bit of that position a 1? And so on. The third parity check covers every position whose third-to-last bit is turned on, and the last one covers the last eight positions, those whose highest order bit is a 1. Everything we did earlier is the same as answering these four questions, which in turn is the same as spelling out a position in binary. I hope this makes two things clearer. The first is how to systematically generalize to block sizes that are bigger powers of two.
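To spell out the position-7 example in code, each check reads off one bit of the error's position; this is a small illustration of my own, not code from the video:

```python
position = 7  # suppose the error sits at position 7

# Each parity check asks about one bit of the error's position in binary.
checks = [(position >> k) & 1 for k in range(4)]
print(checks)             # [1, 1, 1, 0]: groups 1, 2 and 4 flag an error; group 8 doesn't
print(f"{position:04b}")  # 0111 -- those same four answers, read top bit first
```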
If it takes more bits to describe each position, like six bits to describe 64 spots, then each of those bits gives you one of the parity groups that we need to check. Those of you who watched the chessboard puzzle I did with Matt Parker might find all this exceedingly familiar. It's the same core logic, but solving a different problem, and applied to a chessboard with 64 squares. The second thing I hope this makes clear is why our parity bits are sitting in the positions that are powers of two, for example, 1, 2, 4, and 8. These are the positions whose binary representation has just a single bit turned on. What that means is each of those parity bits sits inside one and only one of the four parity groups. You can also see this in larger examples, where no matter how big you get, each parity bit conveniently touches only one of the groups. Once you understand that these parity checks that we focused so much of our time on are nothing more than a clever way to spell out the position of an error in binary, well, then we can draw a connection with a different way to think about Hamming codes, one that is arguably a lot simpler and more elegant, and which can basically be written down with a single line of code. It's based on the XOR function. XOR, for those of you who don't know, stands for exclusive or. When you take the XOR of two bits, it's going to return a 1 if either one of those bits is turned on, but not if both are turned on or if both are turned off. Phrased differently, it's the parity of these two bits. As a math person, I prefer to think about it as addition mod 2. We also commonly talk about the XOR of two different bit strings, which basically does this component by component. It's like addition, but where you never carry. Again, the more mathematically inclined might prefer to think of this as adding two vectors and reducing mod 2. If you open up some Python right now and you apply the caret operation between two integers, this is what it's doing, but to the bit representations of those numbers under the hood. The key point for you and me is that taking the XOR of many different bit strings is effectively a way to compute the parities of a bunch of separate groups, like so with the columns, all in one fell swoop. This gives us a rather snazzy way to think about the multiple parity checks from our Hamming code algorithm as all being packaged together into one single operation. Though at first glance, it does look very different. Specifically, write down the 16 positions in binary, like we had before, and now highlight only the positions where the message bit is turned on to a 1, and then collect these positions into one big column and take the XOR. You can probably guess that the four bits sitting at the bottom as a result are the same as the four parity checks that we've come to know and love, but take a moment to actually think about why exactly. This last column, for example, is counting all of the positions whose last bit is a 1, but we're already limited only to the highlighted positions, so it's effectively counting how many highlighted positions came from the first parity group. Does that make sense? Likewise, the next column counts how many positions are in the second parity group, the positions whose second-to-last bit is a 1, and which are also highlighted. And so on. It's really just a small shift in perspective on the same thing that we've been doing. And so you know where it goes from here.
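Here is a tiny demonstration of both points using Python's caret operator; the specific bit strings are arbitrary examples of mine:

```python
from functools import reduce
from operator import xor

a, b = 0b1011, 0b0110
print(bin(a ^ b))   # 0b1101: component-by-component addition with no carrying

# XOR-ing many bit strings at once computes the parity of each column.
print(bin(reduce(xor, [0b1011, 0b0110, 0b1110])))  # 0b11, i.e. the column parities 0011
```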
The sender is responsible for toggling some of the special parity bits to make sure that the sum works out to be 0000. Now, once we have it like this, this gives us a really nice way to think about why these four resulting bits at the bottom directly spell out the position of an error. Let's say some bit in this block gets toggled from a 0 to a 1. What that means is that the position of that bit is now going to be included in the total XOR, which changes the sum from being 0 to instead being this newly included value, the position of the error. Slightly less obviously, the same is true if there's an error that changes a 1 to a 0. You see, if you add a bit string together twice, it's the same as not having it there at all, basically because in this world, 1 plus 1 equals 0. So adding a copy of this position to the total sum has the same effect as removing it. And that effect, again, is that the total result at the bottom here spells out the position of the error. To illustrate how elegant this is, let me show that one line of Python code I referenced before, which will capture almost all of the logic on the receiver's end. We'll start by creating a random array of 16 ones and zeros to simulate the data block, and I'll go ahead and give it the name bits. But of course, in practice, this would be something that we're receiving from a sender, and instead of being random, it would be carrying 11 data bits together with 5 parity bits. If I call the function enumerate(bits), what it does is pair together each of those bits with a corresponding index, in this case running from 0 up to 15. So if we then create a list that loops over all of these pairs, pairs that look like (i, bit), and then we pull out just the i value, just the index, well, that's not that exciting. We just get back those indices 0 through 15. But if we add on the condition to only do this if bit, meaning if that bit is a 1 and not a 0, well, then it pulls out only the positions where the corresponding bit is turned on. In this case, it looks like those positions are 0, 4, 6, 9, etc. Remember, what we want is to collect together all of those positions, the positions of the bits that are turned on, and then XOR them together. To do this in Python, let me first import a couple helpful functions, that way we can call reduce on this list, and use the XOR function to reduce it. This basically eats its way through the list, taking XORs along the way. If you prefer, you can explicitly write out that XOR function without having to import it from anywhere. So at the moment, it looks like if we do this on our random block of 16 bits, it returns 9, which has the binary representation 1001. We won't do it here, but you could write a function where the sender uses that binary representation to set the 4 parity bits as needed, ultimately getting this block to a state where running this line of code on the full list of bits returns a 0. This would be considered a well-prepared block. Now, what's cool is that if we toggle any one of the bits in this list, simulating a random error from noise, then if you run this same line of code, what it does is it prints out that error. Isn't that neat? You could get this block from out of the blue, run this single line on it, and what it'll do is automatically spit out the position of an error, or a 0 if there wasn't any. There's nothing special about the size 16 here; the same line of code would work if you had a list of, say, 256 bits.
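Putting the pieces from this walkthrough together, a sketch of the receiver's logic might look like this; the function name is my own, and the sender-side code for preparing the block is left out, just as in the narration:

```python
import random
from functools import reduce
from operator import xor

bits = [random.randint(0, 1) for _ in range(16)]  # stand-in for a received block

def syndrome(block):
    # XOR together the positions of every bit that is turned on.
    return reduce(xor, [i for i, bit in enumerate(block) if bit], 0)

# A sender would set the parity bits so that syndrome(bits) == 0.
before = syndrome(bits)
bits[9] ^= 1  # simulate noise flipping the bit at position 9
print(syndrome(bits) == (before ^ 9))  # True: the answer shifts by the error's position
```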
Needless to say, there is more code to write here, like doing the meta-parity check to detect two-bit errors. But the idea is that almost all of the core logic from our scheme comes down to a single XOR reduction. Now, depending on your comfort with binary and XORs and software in general, you may either find this perspective a little bit confusing, or so much more elegant and simple that you're wondering why we didn't just start with it from the get-go. Loosely speaking, the multiple parity check perspective is easier to think about when implementing Hamming codes in hardware, very directly, and the XOR perspective is easiest to think about when doing it in software, from kind of a higher level. The first one is easiest to actually do by hand, and I think it does a better job instilling the core intuition underlying all of this, which is that the information required to locate a single error is related to the log of the size of the block, or in other words, it grows one bit at a time as the block size doubles. The relevant fact here is that that information directly corresponds to how much redundancy we need. That's really what runs against most people's knee-jerk reaction when they first think about making a message resilient to errors, where usually copying the whole message is the first instinct that comes to mind. And then, by the way, there is this whole other way that you sometimes see Hamming codes presented, where you multiply the message by one big matrix. It's kind of nice, because it relates it to the broader family of linear codes, but I think that gives almost no intuition for where it comes from, or how it scales. And speaking of scaling, you might notice that the efficiency of this scheme only gets better as we increase the block size. For example, we saw that with 256 bits, you're using only about 3% of that space for redundancy, and it just keeps getting better from there. As the number of parity bits grows one by one, the block size keeps doubling. And if you take that to an extreme, you could have a block with, say, a million bits, where you would quite literally be playing 20 questions with your parity checks, and it uses only 21 parity bits. And if you step back to think about looking at a million bits and locating a single error, that genuinely feels crazy. The problem, of course, is that with a larger block, the probability of seeing more than one or two bit errors goes up, and Hamming codes do not handle anything beyond that. So in practice, what you'd want is to find the right size so that the probability of too many bit flips isn't too high. Also, in practice, errors tend to come in little bursts, which would totally ruin a single block. So one common tactic to help spread out a burst of errors across many different blocks is to interlace those blocks, like this, before they're sent out or stored. Then again, a lot of this is rendered completely moot by more modern codes, like the much more commonly used Reed-Solomon algorithm, which handles burst errors particularly well, and it can be tuned to be resilient to a larger number of errors per block. But that is a topic for another time. In his book, The Art of Doing Science and Engineering, Hamming is wonderfully candid about just how meandering his discovery of this code was. He first tried all sorts of different schemes involving organizing the bits into parts of a higher dimensional lattice, and strange things like this.
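You can verify that scaling claim with a few lines; the formula here, log2 of the block size plus one extra bit for the meta-parity check, is the count implied by the discussion above:

```python
import math

for block_size in [16, 256, 2**20]:
    parity_bits = int(math.log2(block_size)) + 1  # +1 for the meta-parity bit
    print(block_size, parity_bits, f"{parity_bits / block_size:.3%} redundancy")
# 16 -> 5 bits, 256 -> 9 bits, about a million -> 21 bits
```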
The idea that it might be possible to get parity checks to conspire in a way that spells out the position of an error only came to Hamming when he stepped back after a bunch of other analysis and asked, OK, what is the most efficient I could conceivably be about this? He was also candid about how important it was that parity checks were already on his mind, which would have been way less common back in the 1940s than it is today. There are like half a dozen times throughout this book that he references the Louis Pasteur quote, "Luck favors the prepared mind." Ideas often look deceptively simple in hindsight, which makes them easy to underappreciate. Right now, my honest hope is that Hamming codes, or at least the possibility of such codes, feels almost obvious to you. But you shouldn't fool yourself into thinking that they actually are obvious, because they definitely aren't. Part of the reason that clever ideas look deceptively easy is that we only ever see the final result, cleaning up what was messy, never mentioning all of the wrong turns, underselling just how vast the space of explorable possibilities is at the start of a problem-solving process, all of that. But this is true in general. I think for some special inventions, there's a second, deeper reason that we underappreciate them. Thinking of information in terms of bits had only really coalesced into a full theory by 1948, with Claude Shannon's seminal paper on information theory. This was essentially concurrent with when Hamming developed his algorithm. This was the same foundational paper that showed, in a certain sense, that efficient error correction is always possible, no matter how high the probability of bit flips, at least in theory. Shannon and Hamming, by the way, shared an office at Bell Labs, despite working on very different things, which hardly seems coincidental here. Fast forward several decades, and these days, many of us are so immersed in thinking about bits and information that it's easy to overlook just how distinct this way of thinking was. Ironically, the ideas that most profoundly shape the ways that a future generation thinks will end up looking, to that future generation, well, simpler than they really are.
Matrix multiplication as composition | Chapter 4, Essence of linear algebra
Hey everyone, where we last left off, I showed what linear transformations look like, and how to represent them using matrices. This is worth a quick recap, because it's just really important, but of course, if this feels like more than just a recap, go back and watch the full video. Basically speaking, linear transformations are functions with vectors as inputs and vectors as outputs, but I showed last time how we can think about them visually as smushing around space in such a way that grid lines stay parallel and evenly spaced, and so that the origin remains fixed. The key takeaway was that a linear transformation is completely determined by where it takes the basis vectors of the space, which, for two dimensions, means i hat and j hat. This is because any other vector can be described as a linear combination of those basis vectors. A vector with coordinates x, y is x times i hat plus y times j hat. After going through the transformation, this property, that grid lines remain parallel and evenly spaced, has a wonderful consequence: the place where your vector lands will be x times the transformed version of i hat plus y times the transformed version of j hat. This means if you keep a record of the coordinates where i hat lands and the coordinates where j hat lands, you can compute that a vector which starts at x, y must land on x times the new coordinates of i hat plus y times the new coordinates of j hat. The convention is to record the coordinates of where i hat and j hat land as the columns of a matrix, and to define this sum of the scaled versions of those columns by x and y to be matrix-vector multiplication. In this way, a matrix represents a specific linear transformation, and multiplying a matrix by a vector is what it means computationally to apply that transformation to that vector. Alright, recap over, onto the new stuff. Oftentimes, you find yourself wanting to describe the effects of applying one transformation and then another. For example, maybe you want to describe what happens when you first rotate the plane 90 degrees counterclockwise, then apply a shear. The overall effect here, from start to finish, is another linear transformation, distinct from the rotation and the shear. This new linear transformation is commonly called the composition of the two separate transformations we applied. And like any linear transformation, it can be described with a matrix all of its own, by following i hat and j hat. In this example, the ultimate landing spot for i hat after both transformations is one, one, so let's make that the first column of a matrix. Likewise, j hat ultimately ends up at the location negative one, zero, so we make that the second column of the matrix. This new matrix captures the overall effect of applying a rotation then a shear, but as one single action, rather than two successive ones. Here's one way to think about that new matrix. If you were to take some vector and pump it through the rotation, then the shear, the long way to compute where it ends up is to first multiply it on the left by the rotation matrix, then take whatever you get and multiply that on the left by the shear matrix. This is, numerically speaking, what it means to apply a rotation then a shear to a given vector. But whatever you get should be the same as just applying this new composition matrix that we just found to that same vector, no matter what vector you chose, since this new matrix is supposed to capture the same overall effect as the rotation-then-shear action.
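Here is that convention in a few lines of NumPy, a minimal sketch; the matrix uses the composition columns from this example, and the test vector is an arbitrary choice of mine:

```python
import numpy as np

A = np.array([[1.0, -1.0],   # first column: where i-hat lands, (1, 1)
              [1.0,  0.0]])  # second column: where j-hat lands, (-1, 0)
v = np.array([2.0, 3.0])     # an arbitrary vector with coordinates (x, y)

# Matrix-vector multiplication is defined as x * (first column) + y * (second column).
by_columns = v[0] * A[:, 0] + v[1] * A[:, 1]
print(np.allclose(A @ v, by_columns))  # True
```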
Based on how things are written down here, I think it's reasonable to call this new matrix the product of the original two matrices. Don't you? We can think about how to compute that product more generally in just a moment, but it's way too easy to get lost in the forest of numbers. Always remember that multiplying two matrices like this has the geometric meaning of applying one transformation, then another. One thing that's kind of weird here is that this has us reading from right to left. You first apply the transformation represented by the matrix on the right, then you apply the transformation represented by the matrix on the left. This stems from function notation, since we write functions on the left of variables, so every time you compose two functions, you always have to read it right to left. Good news for the Hebrew readers, bad news for the rest of us. Let's look at another example. Take the matrix with columns 1, 1 and negative 2, 0, whose transformation looks like this, and let's call it M1. Next, take the matrix with columns 0, 1 and 2, 0, whose transformation looks like this, and let's call that guy M2. The total effect of applying M1, then M2, gives us a new transformation, so let's find its matrix. But this time, let's see if we can do it without watching the animations, and instead using the numerical entries in each matrix. First, we need to figure out where i hat goes. After applying M1, the new coordinates of i hat, by definition, are given by that first column of M1, namely 1, 1. To see what happens after applying M2, multiply the matrix for M2 by that vector 1, 1. Working it out the way that I described last video, you'll get the vector 2, 1. This will be the first column of the composition matrix. Likewise, to follow j hat, the second column of M1 tells us that it first lands on negative 2, 0. Then, when we apply M2 to that vector, you can work out the matrix-vector product to get 0, negative 2, which becomes the second column of our composition matrix. Let me talk through that same process again, but this time I'll show variable entries in each matrix, just to show that the same line of reasoning works for any matrices. This is more symbol-heavy, and will require some more room, but it should be pretty satisfying for anyone who has previously been taught matrix multiplication the more rote way. To follow where i hat goes, start by looking at the first column of the matrix on the right, since this is where i hat initially lands. Multiplying that column by the matrix on the left is how you can tell where the intermediate version of i hat ends up after applying the second transformation. So the first column of the composition matrix will always equal the left matrix times the first column of the right matrix. Likewise, j hat will always initially land on the second column of the right matrix, and multiplying the left matrix by this second column will give its final location, and hence that's the second column of the composition matrix. Notice there are a lot of symbols here, and it's common to be taught this formula as something to memorize, along with a certain algorithmic process to kind of help remember it. But I really do think that before memorizing that process, you should get into the habit of thinking about what matrix multiplication really represents: applying one transformation after another. Trust me, this will give you a much better conceptual framework that makes the properties of matrix multiplication much easier to understand.
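Here are those exact numbers checked in NumPy, confirming that tracking i-hat and j-hat column by column agrees with the matrix product; this is my own verification sketch, not code from the lesson:

```python
import numpy as np

M1 = np.array([[1.0, -2.0],   # columns (1, 1) and (-2, 0)
               [1.0,  0.0]])
M2 = np.array([[0.0, 2.0],    # columns (0, 1) and (2, 0)
               [1.0, 0.0]])

col1 = M2 @ M1[:, 0]  # i-hat: lands on (1, 1), then gets taken to (2, 1)
col2 = M2 @ M1[:, 1]  # j-hat: lands on (-2, 0), then gets taken to (0, -2)
composition = np.column_stack([col1, col2])
print(np.allclose(composition, M2 @ M1))  # True: same as the matrix product
```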
For example, here's a question: does it matter what order we put the two matrices in when we multiply them? Well, let's think through a simple example, like the one from earlier. Take a shear, which fixes i hat and smushes j hat over to the right, and a 90 degree rotation. If you first do the shear, then rotate, we can see that i hat ends up at 0, 1, and j hat ends up at negative 1, 1; both are generally pointing close together. If you first rotate, then do the shear, i hat ends up over at 1, 1, and j hat is off in a different direction, at negative 1, 0, and they're pointing farther apart. The overall effect here is clearly different, so evidently, order totally does matter. Notice, by thinking in terms of transformations, that's the kind of thing that you can do in your head, by visualizing. No matrix multiplication necessary. I remember when I first took linear algebra, there was this one homework problem that asked us to prove that matrix multiplication is associative. This means that if you have three matrices, A, B, and C, and you multiply them all together, it shouldn't matter if you first compute A times B, then multiply the result by C, or if you first multiply B times C, then multiply that result by A on the left. In other words, it doesn't matter where you put the parentheses. Now, if you try to work through this numerically, like I did back then, it's horrible, just horrible, and unenlightening, for that matter. But when you think about matrix multiplication as applying one transformation after another, this property is just trivial. Can you see why? What it's saying is that if you first apply C, then B, then A, it's the same as applying C, then B, then A. I mean, there's nothing to prove. You're just applying the same three things one after the other, all in the same order. This might feel like cheating, but it's not. This is an honest-to-goodness proof that matrix multiplication is associative. And even better than that, it's a good explanation for why that property should be true. I really do encourage you to play around more with this idea, imagining two different transformations, thinking about what happens when you apply one after the other, and then working out the matrix product numerically. Trust me, this is the kind of play time that really makes the idea sink in. In the next video, I'll start talking about extending these ideas beyond just two dimensions. See you then.
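If you want to see the order dependence numerically rather than visually, here's a quick check with the same shear and rotation described above:

```python
import numpy as np

rotation = np.array([[0.0, -1.0],   # 90-degree counterclockwise rotation
                     [1.0,  0.0]])
shear = np.array([[1.0, 1.0],       # fixes i-hat, pushes j-hat to the right
                  [0.0, 1.0]])

print(shear @ rotation)  # rotate first, then shear (read right to left)
print(rotation @ shear)  # shear first, then rotate -- a different matrix
```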
How pi was almost 6.283185
I'm sure that you're already familiar with the whole pi versus tau debate. A lot of people say that the fundamental circle constant we hold up should be the ratio of a circle's circumference to its radius, which is around 6.28, not the ratio to its diameter, the more familiar 3.14. These days, we often call that larger constant tau, popularized by Michael Hartl's Tau Manifesto. Although personally, I'm quite partial to Robert Palais's proposed notation of a pi with three legs. In either of these manifestos, and in many, many other places on the internet, you can read to no end about how many formulas look a lot cleaner using tau, largely because the number of radians describing a given fraction of a circle is actually that fraction of tau. That dead horse is beat. I'm not here to make that case further. Instead, I'd like to talk about the seminal moment in history when pi, as we know it, became the standard. For this, one fruitful place to look is at the old notes and letters by one of history's most influential mathematicians, Leonhard Euler. Luckily, we now have an official 3blue1brown Switzerland correspondent, Ben Hambrecht, who was able to go to the library in Euler's hometown and get his hands on some of the original documents. And in looking through some of those, it might surprise you to see Euler write, let pi be the circumference of a circle whose radius is 1. That is, the 6.28 constant that we would now call tau. And it's likely he was using the Greek letter pi as a p, for perimeter. So was it the case that Euler, genius of the day, was more notationally enlightened than the rest of the world, fighting the good fight for 6.28? And if so, who's the villain of our story, pushing the 3.1415 constant that's shoved in front of most students today? Well, the work that really established pi as we now know it as the commonly recognized circle constant was an early calculus book from 1748. At the start of chapter 8, in describing the semi-circumference of a circle with radius 1, and after expanding out a full 128 digits of this number, one of them wrong, by the way, the author adds, "which for the sake of brevity I may write pi." Now, there were other texts and letters here and there with varying conventions for the notation of various circle constants, but this book, and this section in particular, was really the one to spread the notation throughout Europe, and eventually the world. So what monster wrote this book with such an unprincipled take toward circle constants? Well, Euler again. In fact, if you look further, you can find instances of Euler using the symbol pi to represent a quarter turn of the circle, what we would call today pi halves, or tau fourths. In fact, Euler's use of the letter pi seems to be much more analogous to our use of the Greek letter theta. It's typical for us to let it represent an angle, but no one angle in particular. Sometimes it's 30 degrees, maybe other times it's 135, and most times it's just a variable for a general statement. It depends on the problem and the context before us. Likewise, Euler let pi represent whatever circle constant best suited the problem before him. Though it's worth pointing out that he typically framed things in terms of unit circles, with radius 1, so the 3.1415 constant would almost always have been thought of as the ratio of a circle's semi-circumference to its radius. None of this circumference-to-its-diameter nonsense.
And I think Euler's use of this symbol carries with it a general lesson about how we should approach math. The thing you have to understand about Euler is that this man solved problems. A lot of problems. I mean, day in, day out, breakfast, lunch, and dinner, he was just churning out puzzles and formulas, and having insights, and creating entire new fields left and right. Over the course of his life, he wrote over 500 books and papers, which amounted to a rate of 800 pages per year. And these are dense math pages. And then, after his death, another 400 publications surfaced. It's often joked that formulas in math have to be named after the second person to prove them, because the first is always going to be Euler. His mind was not focused on what circle constant we should take as fundamental. It was on solving the task sitting in front of him in a particular moment, and writing a letter to the Bernoullis to boast about doing so afterwards. For some problems, the quarter circle constant was most natural to think about. For others, the full circle constant. And for others still, say, at the start of chapter 8 of his famous calculus book, maybe the half circle constant was most natural to think about. Too often in math education, the focus is on which of multiple competing views about a topic is right. Is it correct to say that the sum of all positive integers is negative 1/12? Or is it correct to say that it diverges to infinity? Can the infinitesimal values of calculus be taken literally, or is it only correct to speak in terms of limits? Are you allowed to divide a number by zero? These questions in isolation just don't matter. Our focus should be on specific problems and puzzles, both those of practical application and those of idle pondering for knowledge's own sake. Then, when questions of standards arise, you can answer them with respect to a given context. And inevitably, different contexts will lend themselves to different answers of what seems most natural. But that's okay. Outputting 800 pages a year of dense, transformative insights seems to be more correlated with a flexibility towards conventions than it does with focusing on which standards are objectively right. So on this Pi Day, the next time someone tells you that, you know, we should really be celebrating math on June 28th, see how quickly you can change the topic to one where you're actually talking about a piece of math.
But what is a neural network? | Chapter 1, Deep learning
This is a 3. It's sloppily written, and rendered at an extremely low resolution of 28x28 pixels, but your brain has no trouble recognizing it as a 3. And I want you to take a moment to appreciate how crazy it is that brains can do this so effortlessly. I mean, this, this, and this are also recognizable as 3s, even though the specific values of each pixel are very different from one image to the next. The particular light-sensitive cells in your eye that are firing when you see this 3 are very different from the ones firing when you see this 3. But something in that crazy smart visual cortex of yours resolves these as representing the same idea, while at the same time recognizing other images as their own distinct ideas. But if I told you, hey, sit down and write for me a program that takes in a grid of 28x28 pixels like this, and outputs a single number between 0 and 10, telling you what it thinks the digit is, well, the task goes from comically trivial to dauntingly difficult. Unless you've been living under a rock, I think I hardly need to motivate the relevance and importance of machine learning and neural networks to the present and to the future. But what I want to do here is show you what a neural network actually is, assuming no background, and to help visualize what it's doing, not as a buzzword, but as a piece of math. My hope is just that you come away feeling like the structure itself is motivated, and to feel like you know what it means when you read, or you hear about, a neural network, quote-unquote, learning. This video is just going to be devoted to the structure component of that, and the following one is going to tackle learning. What we're going to do is put together a neural network that can learn to recognize handwritten digits. This is a somewhat classic example for introducing the topic, and I'm happy to stick with the status quo here, because at the end of the two videos, I want to point you to a couple good resources where you can learn more, and where you can download the code that does this and play with it on your own computer. There are many, many variants of neural networks, and in recent years there's been sort of a boom in research towards these variants, but in these two introductory videos, you and I are just going to look at the simplest, plain vanilla form, with no added frills. This is kind of a necessary prerequisite for understanding any of the more powerful modern variants, and trust me, it still has plenty of complexity for us to wrap our minds around. But even in this simplest form, it can learn to recognize handwritten digits, which is a pretty cool thing for a computer to be able to do. And at the same time, you'll see how it does fall short of a couple hopes that we might have for it. As the name suggests, neural networks are inspired by the brain, but let's break that down. What are the neurons, and in what sense are they linked together? Right now, when I say neuron, all I want you to think about is a thing that holds a number, specifically a number between zero and one. It's really not more than that. For example, the network starts with a bunch of neurons corresponding to each of the 28 times 28 pixels of the input image, which is 784 neurons in total. Each one of these holds a number that represents the grayscale value of the corresponding pixel, ranging from zero for black pixels up to one for white pixels. This number inside the neuron is called its activation.
And the image you might have in mind here is that each neuron is lit up when its activation is a high number. So all of these 784 neurons make up the first layer of our network. Now, jumping over to the last layer, this has 10 neurons, each representing one of the digits. The activation in these neurons, again, some number that's between zero and one, represents how much the system thinks that a given image corresponds with a given digit. There's also a couple layers in between, called the hidden layers, which for the time being should just be a giant question mark for how on earth this process of recognizing digits is going to be handled. In this network, I chose two hidden layers, each one with 16 neurons. And admittedly, that's kind of an arbitrary choice. To be honest, I chose two layers based on how I want to motivate the structure in just a moment, and 16, well, that was just a nice number to fit on the screen. In practice, there is a lot of room to experiment with the specific structure here. The way the network operates, activations in one layer determine the activations of the next layer. And of course, the heart of the network as an information processing mechanism comes down to exactly how those activations from one layer bring about activations in the next layer. It's meant to be loosely analogous to how, in biological networks of neurons, some groups of neurons firing cause certain others to fire. Now, the network I'm showing here has already been trained to recognize digits, and let me show you what I mean by that. It means if you feed in an image, lighting up all 784 neurons of the input layer according to the brightness of each pixel in the image, that pattern of activations causes some very specific pattern in the next layer, which causes some pattern in the one after it, which finally gives some pattern in the output layer. And the brightest neuron of that output layer is the network's choice, so to speak, for what digit this image represents. And before jumping into the math for how one layer influences the next, or how training works, let's just talk about why it's even reasonable to expect a layered structure like this to behave intelligently. What are we expecting here? What is the best hope for what those middle layers might be doing? Well, when you or I recognize digits, we piece together various components. A 9 has a loop up top and a line on the right. An 8 also has a loop up top, but it's paired with another loop down low. A 4 basically breaks down into 3 specific lines, and things like that. Now, in a perfect world, we might hope that each neuron in the second-to-last layer corresponds with one of these subcomponents, that anytime you feed in an image with, say, a loop up top, like a 9 or an 8, there's some specific neuron whose activation is going to be close to 1. And I don't mean this specific loop of pixels; the hope would be that any generally loopy pattern towards the top sets off this neuron. That way, going from the third layer to the last one just requires learning which combination of subcomponents corresponds to which digits. Of course, that just kicks the problem down the road, because how would you recognize these subcomponents, or even learn what the right subcomponents should be? And I still haven't even talked about how one layer influences the next, but run with me on this one for a moment. Recognizing a loop can also break down into subproblems. One reasonable way to do this would be to first recognize the various little edges that make it up.
Similarly, a long line, like the kind you might see in the digits 1 or 4 or 7, well, that's really just a long edge, or maybe you think of it as a certain pattern of several smaller edges. So maybe our hope is that each neuron in the second layer of the network corresponds with the various relevant little edges. Maybe when an image like this one comes in, it lights up all of the neurons associated with around 8 to 10 specific little edges, which in turn lights up the neurons associated with the upper loop and a long vertical line, and those light up the neuron associated with a 9. Whether or not this is what our final network actually does is another question, one that I'll come back to once we see how to train the network, but this is a hope that we might have, a sort of goal with the layered structure like this. Moreover, you can imagine how being able to detect edges and patterns like this would be really useful for other image recognition tasks. And even beyond image recognition, there are all sorts of intelligent things you might want to do that break down into layers of abstraction. Parsing speech, for example, involves taking raw audio and picking out distinct sounds, which combine to make certain syllables, which combine to form words, which combine to make up phrases and more abstract thoughts, etc. But getting back to how any of this actually works, picture yourself right now designing how exactly the activations in one layer might determine the activations in the next. The goal is to have some mechanism that could conceivably combine pixels into edges, or edges into patterns, or patterns into digits. And to zoom in on one very specific example, let's say the hope is for one particular neuron in the second layer to pick up on whether or not the image has an edge in this region here. The question at hand is what parameters should the network have? What dials and knobs should you be able to tweak so that it's expressive enough to potentially capture this pattern, or any other pixel pattern, or the pattern that several edges can make a loop, and other such things? Well, what we'll do is assign a weight to each one of the connections between our neuron and the neurons from the first layer. These weights are just numbers. Then take all of those activations from the first layer and compute their weighted sum according to these weights. I find it helpful to think of these weights as being organized into a little grid of their own, and I'm going to use green pixels to indicate positive weights and red pixels to indicate negative weights, where the brightness of that pixel is some loose depiction of the weight's value. Now if we made the weights associated with almost all of the pixels 0, except for some positive weights in this region that we care about, then taking the weighted sum of all the pixel values really just amounts to adding up the values of the pixels just in the region that we care about. And if you really wanted to pick up on whether there's an edge here, what you might do is have some negative weights associated with the surrounding pixels. Then the sum is largest when those middle pixels are bright, but the surrounding pixels are darker. When you compute a weighted sum like this, you might come out with any number, but for this network, what we want is for activations to be some value between 0 and 1. So a common thing to do is to pump this weighted sum into some function that squishes the real number line into the range between 0 and 1.
And a common function that does this is called the sigmoid function, also known as a logistic curve. Basically, very negative inputs end up close to 0, very positive inputs end up close to 1, and it just steadily increases around the input 0. So the activation of the neuron here is basically a measure of how positive the relevant weighted sum is. But maybe it's not that you want the neuron to light up when the weighted sum is bigger than 0. Maybe you only want it to be active when the sum is bigger than, say, 10. That is, you want some bias for it to be inactive. What we'll do then is just add in some other number, like negative 10, to this weighted sum before plugging it through the sigmoid squishing function. That additional number is called the bias. So the weights tell you what pixel pattern this neuron in the second layer is picking up on, and the bias tells you how high the weighted sum needs to be before the neuron starts getting meaningfully active. And that is just one neuron. Every other neuron in this layer is going to be connected to all 784 pixel neurons from the first layer, and each one of those 784 connections has its own weight associated with it. Also, each one has some bias, some other number that you add onto the weighted sum before squishing it with the sigmoid. And that's a lot to think about. With this hidden layer of 16 neurons, that's a total of 784 times 16 weights, along with 16 biases. And all of that is just the connections from the first layer to the second. The connections between the other layers also have a bunch of weights and biases associated with them. All said and done, this network has almost exactly 13,000 total weights and biases; 13,000 knobs and dials that can be tweaked and turned to make this network behave in different ways. So when we talk about learning, what that's referring to is getting the computer to find a valid setting for all of these many, many numbers so that it'll actually solve the problem at hand. One thought experiment that is at once fun and kind of horrifying is to imagine sitting down and setting all of these weights and biases by hand, purposefully tweaking the numbers so that the second layer picks up on edges, the third layer picks up on patterns, etc. I personally find this satisfying, rather than just treating the network as a total black box, because when the network doesn't perform the way you anticipate, if you've built up a little bit of a relationship with what those weights and biases actually mean, you have a starting place for experimenting with how to change the structure to improve. Or when the network does work, but not for the reasons you might expect, digging into what the weights and biases are doing is a good way to challenge your assumptions and really expose the full space of possible solutions. By the way, the actual function here is a little cumbersome to write down, don't you think? So let me show you a more notationally compact way that these connections are represented. This is how you'd see it if you choose to read up more about neural networks. Organize all of the activations from one layer into a column as a vector. Then organize all of the weights as a matrix, where each row of that matrix corresponds to the connections between one layer and a particular neuron in the next layer. What that means is that taking the weighted sum of the activations in the first layer according to these weights corresponds to one of the terms in the matrix-vector product of everything we have on the left here.
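As a side note before the compact notation, here is a minimal sketch of that single-neuron computation and of the parameter count just mentioned. The sizes match the network described here, but the random weights are purely illustrative stand-ins for learned values:

```python
import numpy as np

def sigmoid(x):
    # Squishes the real number line into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
a0 = rng.random(784)          # activations of the 784 input neurons
w = rng.standard_normal(784)  # one weight per connection into this neuron
b = -10.0                     # bias: how high the weighted sum must be

activation = sigmoid(w @ a0 + b)  # this neuron's activation, between 0 and 1

# Total parameters for the 784 -> 16 -> 16 -> 10 network described here:
total = (784 * 16 + 16) + (16 * 16 + 16) + (16 * 10 + 10)
print(total)  # 13002 -- the "almost exactly 13,000" knobs and dials
```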
By the way, so much of machine learning just comes down to having a good grasp of linear algebra, so for any of you who want a nice visual understanding of matrices and what matrix-vector multiplication means, take a look at the series I did on linear algebra, especially chapter three. Back to our expression: instead of talking about adding the bias to each one of these values independently, we represent it by organizing all those biases into a vector, and adding the entire vector to the previous matrix-vector product. Then as a final step, I'll wrap a sigmoid around the outside here, and what that's supposed to represent is that you're going to apply the sigmoid function to each specific component of the resulting vector inside. So once you write down this weight matrix and these vectors as their own symbols, you can communicate the full transition of activations from one layer to the next in an extremely tight and neat little expression. And this makes the relevant code both a lot simpler and a lot faster, since many libraries optimize the heck out of matrix multiplication. Remember how earlier I said these neurons are simply things that hold numbers? Well, of course, the specific numbers that they hold depend on the image you feed in. So it's actually more accurate to think of each neuron as a function, one that takes in the outputs of all the neurons in the previous layer and spits out a number between 0 and 1. Really, the entire network is just a function, one that takes in 784 numbers as an input and spits out 10 numbers as an output. It's an absurdly complicated function, one that involves 13,000 parameters in the form of these weights and biases that pick up on certain patterns, and which involves iterating many matrix-vector products and the sigmoid squishification function. But it's just a function nonetheless. And in a way, it's kind of reassuring that it looks complicated. I mean, if it were any simpler, what hope would we have that it could take on the challenge of recognizing digits? And how does it take on that challenge? How does this network learn the appropriate weights and biases just by looking at data? Well, that's what I'll show in the next video, and I'll also dig a little more into what this particular network we're seeing is really doing. Now is the point where I suppose I should say subscribe to stay notified about when that video or any new videos come out, but realistically, most of you don't actually receive notifications from YouTube, do you? Maybe more honestly, I should say subscribe so that the neural networks that underlie YouTube's recommendation algorithm are primed to believe that you want to see content from this channel get recommended to you. Anyway, stay posted for more. Thank you very much to everyone supporting these videos on Patreon. I've been a little slow to progress in the probability series this summer, but I'm jumping back into it after this project, so patrons, you can look out for updates there. To close things off here, I have with me Lisha Li, who did her PhD work on the theoretical side of deep learning, and who currently works at a venture capital firm called Amplify Partners, who kindly provided some of the funding for this video. So, Lisha, one thing I think we should quickly bring up is this sigmoid function. As I understand it, early networks used this to squish the relevant weighted sum into that interval between zero and one, you know, kind of motivated by this biological analogy of neurons either being inactive or active. Exactly.
But relatively few modern networks actually use sigmoid anymore. Yeah. It's kind of old school, right? Yeah, or rather, ReLU seems to be much easier to train. And ReLU stands for rectified linear unit. Yes, it's this kind of function where you're just taking a max of zero and a, where a is given by what you were explaining in the video. And what this was sort of motivated from, I think, was partially by a biological analogy with how neurons would either be activated or not. And so if it passes a certain threshold, it would be the identity function, but if it did not, then it would just not be activated, so it would be zero. So it's kind of a simplification. Using sigmoids didn't help training, or it was very difficult to train at some point, and people just tried ReLU, and it happened to work very well for these incredibly deep neural networks. All right, thank you, Lisha. For background, Amplify Partners, an early-stage VC, invests in technical founders building the next generation of companies focused on the applications of AI. If you or someone that you know has ever thought about starting a company someday, or if you're working on an early-stage one right now, the Amplify folks would love to hear from you. They even set up a specific email for this video, 3blue1brown@amplifypartners.com, so feel free to reach out to them through that.
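For reference, here is a minimal sketch of the two activation functions that came up in that conversation, written with numpy for easy comparison:

```python
import numpy as np

def sigmoid(a):
    # Old-school squishification: very negative -> ~0, very positive -> ~1.
    return 1.0 / (1.0 + np.exp(-a))

def relu(a):
    # Rectified linear unit: max(0, a), the identity past the threshold 0.
    return np.maximum(0.0, a)

for a in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(a, sigmoid(a), relu(a))
```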
Music And Measure Theory
I have two seemingly unrelated challenges for you. The first relates to music, and the second gives a foundational result in measure theory, which is the formal underpinning for how mathematicians define integration and probability. The second challenge, which I'll get to about halfway through the video, has to do with covering numbers with open sets, and is very counterintuitive, or at least it was when I first saw it; I was confused for a while. Foremost, I'd like to explain what's going on, but I also plan to share a surprising connection that it has with music. Here's the first challenge. I'm going to play a musical note with a given frequency, let's say 220 hertz. Then I'm going to choose some number between 1 and 2, which we'll call r, and play a second musical note whose frequency is r times the frequency of the first note, 220. For some values of r, like 1.5, the two notes will sound harmonious together, but for others, like the square root of 2, they sound cacophonous. Your task is to determine whether a given ratio r will give a pleasant sound or an unpleasant one just by analyzing the number, and without listening to the notes. One way to answer, especially if your name is Pythagoras, might be to say that two notes sound good together when the ratio is a rational number, and bad when it's irrational. For instance, a ratio of 3 halves gives a musical fifth, 4 thirds gives a musical fourth, 8 fifths gives a major sixth, and so on. Here's my best guess for why this is the case. A musical note is made up of beats played in rapid succession, for instance 220 beats per second. When the ratio of frequencies of two notes is rational, there's a detectable pattern in those beats, which when we slow it down, we hear as a rhythm instead of as a harmony. Evidently, when our brains pick up on this pattern, the two notes sound nice together. However, most rational numbers actually sound pretty bad, like 211 over 198, or 1093 divided by 826. The issue, of course, is that these rational numbers are somehow more complicated than the other ones; our ears don't pick up on the pattern of the beats. One simple way to measure complexity of rational numbers is to consider the size of the denominator when it's written in reduced form. So we might edit our original answer to only admit fractions with low denominators, say less than 10. Even still, this doesn't quite capture harmoniousness, since plenty of notes sound good together even when the ratio of their frequencies is irrational, so long as it's close to a harmonious rational number. And it's a good thing, too, because many instruments, such as pianos, are not tuned in terms of rational intervals, but are tuned such that each half-step increase corresponds with multiplying the original frequency by the 12th root of 2, which is irrational. If you're curious about why this is done, Henry at MinutePhysics recently did a video that gives a very nice explanation. This means that if you take a harmonious interval, like a fifth, the ratio of frequencies when played on a piano will not be a nice rational number like you expect, in this case 3 halves. Instead, it will be some power of the 12th root of 2, in this case 2 to the 7 over 12, which is irrational, but very close to 3 halves. Similarly, a musical fourth corresponds to 2 to the 5 over 12, which is very close to 4 thirds.
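A quick numerical check of those last two claims, which you can run in a couple of lines:

```python
# How close are these equal-tempered intervals to the simple fractions?
fifth = 2 ** (7 / 12)   # ~1.4983, versus 3/2 = 1.5
fourth = 2 ** (5 / 12)  # ~1.3348, versus 4/3 = 1.3333...

print(abs(fifth - 3 / 2) / (3 / 2))    # relative error, roughly 0.11%
print(abs(fourth - 4 / 3) / (4 / 3))   # relative error, roughly 0.11%
```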
In fact, the reason it works so well to have 12 notes in the chromatic scale is that powers of the 12th root of 2 have this strange tendency to be within a 1% margin of error of simple rational numbers. So now you might say that a ratio r will produce a harmonious pair of notes if it is sufficiently close to a rational number with a sufficiently small denominator. How close depends on how discerning your ear is, and how small a denominator depends on the intricacy of harmonic patterns your ear has been trained to pick up on. After all, maybe someone with a particularly acute musical sense would be able to hear and find pleasure in the pattern resulting from more complicated fractions, like 23 over 21, or 35 over 43, as well as numbers closely approximating those fractions. This leads me to an interesting question. Suppose there is a musical savant who finds pleasure in all pairs of notes whose frequencies have a rational ratio, even the super complicated ratios that you and I would find cacophonous. Is it the case that she would find all ratios between 1 and 2 harmonious, even the irrational ones? After all, for any given real number, you can always find a rational number arbitrarily close to it, just like 3 halves is really close to 2 to the 7 over 12. Well, this brings us to challenge number 2. Mathematicians like to ask riddles about covering various sets with open intervals, and the answers to these riddles have a strange tendency to become famous lemmas and theorems. By open interval, I just mean the continuous stretch of real numbers strictly greater than some number a, but strictly less than some other number b, where b is of course greater than a. My challenge to you involves covering all of the rational numbers between 0 and 1 with open intervals. When I say cover, all this means is that each particular rational number lies inside at least one of your intervals. The most obvious way to do this is to just use the entire interval from 0 to 1 itself and call it done, but the challenge here is that the sum of the lengths of your intervals must be strictly less than 1. To aid you in this seemingly impossible task, you're allowed to use infinitely many intervals. Even still, the task might feel impossible, since the rational numbers are dense in the real numbers, meaning any stretch, no matter how small, contains infinitely many rational numbers. So how could you possibly cover all of the rational numbers without just covering the entire interval from 0 to 1 itself, which would mean the total length of your open intervals has to be at least the length of the entire interval from 0 to 1? Then again, I wouldn't be asking if there wasn't a way to do it. First, we enumerate the rational numbers between 0 and 1, meaning we organize them into an infinitely long list. There are many ways to do this, but one natural way that I'll choose is to start with 1 half, followed by 1 third and 2 thirds, then 1 fourth and 3 fourths (we don't write down 2 fourths, since it's already appeared as 1 half), then all reduced fractions with denominator 5, all reduced fractions with denominator 6, continuing on and on in this fashion. Every fraction will appear exactly once in this list, in its reduced form, and it gives us a meaningful way to talk about the first rational number, the second rational number, the 42nd rational number, things like that. Next, to ensure that each rational is covered, we're going to assign one specific interval to each rational.
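Here is a minimal sketch of that enumeration, together with the interval assignment that's coming up next. The choice of epsilon and of the halving sum are just one option among many:

```python
from fractions import Fraction
from math import gcd

def rationals_between_0_and_1():
    # Yields 1/2, 1/3, 2/3, 1/4, 3/4, ... each reduced fraction exactly once.
    d = 2
    while True:
        for n in range(1, d):
            if gcd(n, d) == 1:
                yield Fraction(n, d)
        d += 1

epsilon = 0.3
gen = rationals_between_0_and_1()
for k in range(1, 11):
    q = next(gen)
    length = epsilon / 2 ** k  # the k-th rational gets an interval of length epsilon / 2^k
    print(q, (float(q) - length / 2, float(q) + length / 2))

# The lengths sum to epsilon * (1/2 + 1/4 + 1/8 + ...) = epsilon,
# no matter how many intervals we use.
```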
Once we remove the intervals from the geometry of our setup and just think of them in a list, each one responsible for one rational number, it seems much clearer that the sum of their lengths can be less than 1, since each particular interval can be as small as you want and still cover its designated rational. In fact, the sum can be any positive number. Just choose an infinite sum with positive terms that converges to 1, like 1 half plus 1 fourth plus 1 eighth, on and on. Then choose any desired value of epsilon greater than 0, like 0.5, and multiply all of the terms in the sum by epsilon so that you have an infinite sum converging to epsilon. Now scale the nth interval to have a length equal to the nth term in the sum. Notice, this means your intervals start getting really small, really fast, so small that you can't really see most of them in this animation, but it doesn't matter, since each one is only responsible for covering one rational. I've said it already, but I'll say it again because it's so amazing: epsilon can be whatever positive number we want, so not only can our sum be less than 1, it can be arbitrarily small. This is one of those results where even after seeing the proof, it still defies intuition. The discord here is that the proof has us thinking analytically, with the rational numbers in a list, but our intuition has us thinking geometrically, with all the rational numbers as a dense set on the interval, where you can't skip over any continuous stretch, because that would contain infinitely many rationals. So let's get a visual understanding for what's going on. Brief side note here: I had trouble deciding on how to illustrate small intervals, since if I scale the parentheses with the interval, you won't be able to see them at all, but if I just push the parentheses together, they cross over in a way that's potentially confusing. Nevertheless, I decided to go with the ugly chromosomal cross, so keep in mind, the interval this represents is that tiny stretch between the centers of each parenthesis. Okay, back to the visual intuition. Consider when epsilon equals 0.3, meaning if I choose a number between 0 and 1 at random, there's a 70% chance that it's outside those infinitely many intervals. What does it look like to be outside the intervals? The square root of 2 over 2 is among those 70%, and I'm going to zoom in on it. As I do so, I'll draw the first 10 intervals in our list within our scope of vision. As we get closer and closer to the square root of 2 over 2, even though you will always find rationals within your field of view, the intervals placed on top of those rationals get really small, really fast. One might say that for any sequence of rational numbers approaching the square root of 2 over 2, the intervals containing the elements of that sequence shrink faster than the sequence converges. Notice, intervals are really small if they show up late in the list, and rationals show up late in the list when they have large denominators. So the fact that the square root of 2 over 2 is among the 70% not covered by our intervals is, in a sense, a way to formalize the otherwise vague idea that the only rational numbers close to it have a large denominator. That is to say, the square root of 2 over 2 is cacophonous. In fact, let's use a smaller epsilon, say 0.01, and shift our setup to lie on top of the interval from 1 to 2, instead of from 0 to 1. Then which numbers fall among that elite 1% covered by our tiny intervals? Almost all of them are harmonious.
For instance, the harmonious irrational number 2 to the 7 over 12 is very close to 3 halves, which has a relatively fat interval sitting on top of it, and the interval around 4 thirds is smaller, but still fat enough to cover 2 to the 5 over 12. Which numbers of the 1% are cacophonous? Well, the cacophonous rationals, meaning those with high denominators, and irrationals that are very, very, very close to them. However, think of the savant who finds harmonic patterns in all rational numbers. You could imagine that for her, harmonious numbers are precisely those 1% covered by the intervals, provided that her tolerance for error goes down exponentially for more complicated rationals. In other words, the seemingly paradoxical fact that you can have a collection of intervals densely populating a range while only covering 1% of its values corresponds to the fact that harmonious numbers are rare, even for the savant. I'm not saying this makes the result more intuitive. In fact, I find it quite surprising that the savant I defined could find 99% of all ratios cacophonous. But the fact that these two ideas are connected was simply too beautiful not to share.
Exponential growth and epidemics
The phrase exponential growth is familiar to most people, and yet human intuition has a hard time really recognizing what it means sometimes. We can anchor on a sequence of small-seeming numbers and then become surprised when suddenly those numbers look big, even if the overall trend follows an exponential perfectly consistently. This right here is the data for the recorded cases of COVID-19, aka the coronavirus, at the time that I'm writing this. Never one to waste an opportunity for a math lesson, I thought this might be a good time for all of us to go back to the basics on what exponential growth really is, where it comes from, what it implies, and maybe most pressingly, how to know when it's coming to an end. Exponential growth means that as you go from one day to the next, it involves multiplying by some constant. In our data, the number of cases each day tends to be a multiple of about 1.15 to 1.25 of the number of cases the previous day. Now viruses are a textbook example of this kind of growth, because what causes new cases are the existing cases. So if the number of cases on a given day is N, and we say that each individual with the virus is exposed to, on average, E people on a given day, and each one of those exposures has a probability p of becoming a new infection, then the number of new cases on a given day is E times p times N. And the fact that N itself is a factor in its own change is what really makes things go fast, because if N gets big, it means the rate of growth itself is getting big. One way to think about this is that as you add the new cases to get the next day's count, you can factor out the N, so it's just the same as multiplying by some constant that's bigger than 1. This is sometimes easier to see if we put the y-axis of our graph on a logarithmic scale, which means that each step of a fixed distance corresponds to multiplying by a certain factor; in this case, each step is another power of 10. On this scale, exponential growth should look like a straight line. Looking at our data, it seems like it took 20 days to go from 100 to 1,000, and 13 days to go from that to 10,000. And if you do a simple linear regression to find the best-fit line, you can look at the slope of that line to draw a conclusion like "we tend to multiply by 10 every 16 days on average." This regression also lets us be a little more quantitative about exactly how close the exponential fit really is, and to use the technical statistical jargon here, the answer is that it's really freaking close. But it can be hard to digest exactly what that means if true. When you see one country with, say, 6,000 cases and another with 60, it's easy to think that the second is doing 100 times better, and hence fine. But if you're actually in a situation where numbers multiply by 10 every 16 days, another way to view the same fact is that the second country is about a month behind the first. Now, this is of course rather worrying if you draw out the line. I'm recording this on March 6th, and if the present trend continues, it would mean hitting a million cases in 30 days, hitting 10 million in 47 days, 100 million in 64 days, and 1 billion in 81 days. Needless to say, though, you can't just draw out a line like this forever; it clearly has to start slowing down at some point. But the crucial question is when. Is it like the SARS outbreak of 2002, which capped out around 8,000 cases, or the Spanish flu of 1918, which ultimately infected about 27% of the world's population?
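A back-of-the-envelope sketch of that kind of eyeballing, taking the rough 1.15 daily factor from the data above as an illustrative input:

```python
from math import log

daily_factor = 1.15  # cases multiply by roughly this much each day

# How long until cases multiply by 10 at this rate?
days_per_10x = log(10) / log(daily_factor)
print(days_per_10x)  # ~16.5 days, matching the "every 16 days" eyeball

# On the log-scale plot, this is just saying the line has slope
# log(daily_factor) per day.
```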
In general, with no context, just drawing a line through your data is not a great way to make predictions. But remember, there's an actual reason to expect an exponential here. If the number of new cases each day is proportional to the number of existing cases, it necessarily means each day you multiply by some constant, so moving forward D days is the same as multiplying by that constant D times. The only way that stops is if either the number E or p goes down. It's inevitable that this will eventually happen. Even in the most perfectly pernicious model for a virus, which would be where every day, each person with the infection is exposed to a random subset of the world's population, at some point most of the people they're exposed to would already be sick, and so they couldn't become new cases. In our equation, that would mean the probability of an exposure becoming a new infection would have to include some kind of factor to account for the probability that someone you're exposed to is already infected. For a random shuffling model like this, that could mean including a factor like 1 minus the proportion of people in the world who are already infected. Including that factor, and then solving for how N grows, you get what's known in the business as a logistic curve, which is essentially indistinguishable from an exponential at the beginning, but ultimately it levels out once you're approaching the total population size, which is what you would expect. True exponentials essentially never exist in the real world; every one of them is really the start of a logistic curve. Now this point right here, where that logistic goes from curving upward to instead curving downward, is known as the inflection point. There, the number of new cases each day, represented by the slope of this curve, stops increasing and instead stays roughly constant before it starts decreasing. So one number that people often follow with epidemics is the growth factor, which is defined as the ratio between the number of new cases one day and the number of new cases the previous day. So just to be clear, if you were looking at all of the totals from one day to the next, then tracking the changes between those totals, the growth factor is the ratio between two successive changes. While you're on the exponential part, this factor stays consistently above one, whereas as soon as your growth factor looks closer to one, it's a sign that you've hit the inflection. This can make for another counterintuitive fact while following the data. Think about what it would feel like for the number of new cases one day to be about 15% more than the number of new cases the previous day, and contrast that with what it would feel like for it to be about the same. Just looking at the totals they result in, they don't really feel that different. But if the growth factor is one, it could mean you're at the inflection point of a logistic, which would mean the total number of cases is going to max out at about two times wherever you are now. But a growth factor bigger than one, subtle though that might seem, means you're on the exponential part, which could imply there are orders of magnitude of growth still waiting ahead of you. Now, while it's true that in the worst-case situation the saturation point is around the total population, it's of course not at all true that people with a virus are randomly shuffled around the world's population like this. People are clustered in local communities.
However, if you run simulations where there's even a little bit of travel between clusters like this, the growth is actually not that much different. What you end up with is a kind of fractal pattern, where communities themselves function like individuals: each one has some exposure to others, with some probability of spreading the infection, so the same underlying exponential-inducing laws apply. Fortunately, saturating the whole population is not the only thing that can cause the two factors we care about to go down. The amount of exposure can also go down when people stop gathering and traveling, and the infection rate can go down when people just wash their hands more. The other thing that's counterintuitive about exponential growth, this time in a more optimistic sense, is just how sensitive it is to this constant. For example, if it's 15%, like it is as I'm recording this, and we're at 21,000 cases now, that would mean that 61 days from now you hit over 100 million. But if through a bit less exposure and infection that rate drops down to 5%, it doesn't mean the projection also drops down by a factor of 3; it actually drops down to around 400,000. So if people are sufficiently worried, then there's a lot less to worry about. But if no one is worried, that's when you should worry.
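To make both of those points concrete, here's a minimal simulation sketch: the update rule with the saturation factor folded in, followed by a check of the two growth rates just compared. The population figure and seed count are illustrative, not a forecast:

```python
population = 7.8e9  # rough world population, purely for illustration
n = 21_000.0        # cases now
rate = 0.15         # E times p, the daily growth rate

# Logistic-style update: growth is proportional to N, damped by the
# fraction of people who are already infected.
cases = n
for day in range(200):
    cases += rate * cases * (1 - cases / population)
print(cases)  # levels out near the population; early on it looks exponential

# Sensitivity to the constant, 61 days out:
print(n * 1.15 ** 61)  # ~1.1e8 -- over 100 million
print(n * 1.05 ** 61)  # ~4.1e5 -- around 400,000
```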
Why 5⧸3 is a fundamental constant for turbulence
The air around you is in constant and chaotic motion, replete with nearly impossible-to-predict swirls ranging from large to minuscule. What you're looking at right now is a cross section of the flow in a typical room, made visible using a home demo involving a laser, a glass rod, and a fog machine. Predicting the specifics of turbulent motion like this has long evaded mathematicians and physicists, but we are steadily getting closer to understanding some consistent patterns in this chaos. And in a minute, I'll share with you one specific quantitative result describing a certain self-similarity to this motion. To back up a bit, I was recently in San Diego and spent a day with Dianna Cowern, aka Physics Girl, and her frequent collaborator, Dan Walsh, playing around with vortex rings. This is a really surprising fluid flow phenomenon, where a doughnut-shaped region of fluid stays surprisingly stable as it moves through space. If you take some open container with a lip and you fill it with smoke or fog, you can use this to actually see the otherwise invisible ring. Dianna just published a video over on her channel showing much more of that particular demo, including a genuinely fascinating observation about what happens when you change the shape of the opening. The story for you and me starts when her friend Dan had the clever idea to visualize a slice of what's going on with these vortex rings using a planar laser. You know how if you shine a laser through fog, photons will occasionally bounce off of the particles in the fog along that beam towards your eye, thereby revealing the beam of the laser? Well, Dan's thought was to refract that light through a glass rod so that it was relatively evenly spread across an entire plane; then the same phenomenon would reveal the laser light along a thin plane through that fog. The result was awesome! The cross section of such a smoke ring looks like two hurricanes rotating next to each other, and this makes abundantly clear how the surface of these rings rotates as they travel, and also how chaotic they leave the air behind them. And as an added bonus, the setup doubles as a great Death Eater-themed Halloween decoration. If you do want to try this at home, I should say, be super careful with the laser; make sure not to point it near anyone's eyes. This concern is especially relevant when the laser is spread along a full plane. Basically, treat it like a gun. Also, credit where credit is due, I'd like to point out that after we did this, we found that the channel NightHawkInLight, great channel, has a video doing a similar demo, link in the description. Even though our original plan was to illuminate vortex rings, I actually think the most notable part of this visual is how it sheds light on what ordinary airflow in a room looks like, in all of its intricacy and detail. We call this chaotic flow turbulence, and just as vortex rings give an example of unexpected order in the otherwise messy world of fluid dynamics, I'd like to share with you a more subtle instance of order amidst chaos in the math of turbulence. First off, what exactly is turbulence? The term is familiar to many of us as that annoying thing that makes plane rides bumpy, but nailing down a specific definition is a little tricky. It's easiest to describe qualitatively: turbulence involves many swirling eddies, it's chaotic, and it mixes things together. One approach here would be to describe turbulence based on what it's not: laminar flow.
This term stems from the same Latin word that lamination does, lamina, meaning a thin layer of a material, and it refers to smooth flow in a fluid, where the moving particles stay largely confined to distinct layers. Turbulence, in contrast, contains many eddies, points of some vorticity, also known as positive curl, also known as a high swirly-swirly factor, breaking down the notion of distinct layers. However, vorticity does not necessarily imply that a flow is turbulent. Patterns like whirlpools, or even smoke rings, have high vorticity, since the fluid is rotating, but can nevertheless be smooth and predictable. Instead, turbulence is further characterized as being chaotic, meaning small changes to the initial conditions result in large changes to the ensuing patterns. It's also diffusive, in the sense of mixing together different parts of the fluid, and also diffusing the energy and the momentum from isolated parts of the fluid to the rest. Notice how in this clip, over time, the image shifts from having a crisp delineation between fog and air to instead being a murky mixture of both of them. As for something more mathematically precise, there's not really a single, widely agreed-upon, clear-cut criterion, the way there is for most other terms in math. The intricacy of the patterns you're seeing is mirrored by a difficulty in parsing the physics describing all of this, and that can make the notion of a rigorous definition somewhat slippery. You see, the fundamental equations describing fluid dynamics, the Navier-Stokes equations, are famously challenging to understand. We won't go through the full details here, but if you're curious, the main equation is essentially a form of Newton's second law, that the acceleration of a body times its mass equals the sum of the forces acting on it. It's just that writing mass times acceleration looks a bit more complicated in this context, and the force is broken down into the different types of forces acting on a fluid, which again can look a bit intimidating in the context of continuum dynamics. Not only are these hard to solve, in the sense of feeding in some initial state of a fluid and figuring out how the equations predict that fluid will evolve, there are several unsolved problems around the much more modest task of understanding whether or not, quote unquote, reasonable solutions will always exist. Reasonable here means things like not blowing up to the point of having infinite kinetic energy, and that smooth initial states yield smooth solutions, where the word smooth carries with it a very precise meaning in this context. The questions formalizing the idea of these equations predicting reasonable behavior actually have a $1 million prize associated with them. And all of that is just for the case of incompressible fluid flow, where something compressible, like air, makes things trickier still. And the heart of the difficulty, both for the specific solutions and the general theoretical results surrounding them, is that tricky-to-pin-down phenomenon of turbulence. But we're not completely in the dark. The hard work of a lot of smart people throughout history has led us to understanding some of the patterns underlying this chaos, and I'd like to share with you one found by the 20th-century mathematician Andrei Kolmogorov. It has to do with how kinetic energy in turbulent motion is distributed at different length scales. In simpler-to-think-about physics, we often think about kinetic energy at two different length scales.
A macro scale, say the energy carried by your moving car, or a molecular scale, which we call heat. As you apply your brakes, energy is transferred more or less directly from that macro-scale motion to the molecular-scale motion, as your brakes and the surrounding air heat up, meaning all of their molecules start jiggling even faster. Turbulence, on the other hand, is characterized by kinetic energy at a whole spectrum of length scales, from the movement of large eddies to smaller ones, and smaller ones still. Moreover, this energy tends to cascade down the spectrum, where what I mean by that is that the energy of large eddies gets converted into that of smaller eddies, which in turn bring about smaller eddies still. This goes on until it's small enough that the energy dissipates directly to heat in the fluid, which is to say molecular-scale jiggling, due to the fluid's viscosity, which is to say how much the particles tug at each other. Or, as this was all phrased in a poem by Lewis F. Richardson: big whorls have little whorls which feed on their velocity, and little whorls have lesser whorls, and so on to viscosity. Now you might wonder whether more of the kinetic energy of this fluid is carried by all of the larger eddies, say all those with diameter 1 meter, or by all of the smaller ones, say all those with diameter 1 centimeter, counted together. Or, more generally, if you were to look at all of the swirls with a diameter d, about how much of the fluid's total energy do they collectively carry? Is that even an answerable question? Kolmogorov hypothesized that the amount of energy in a turbulent flow carried by eddies with diameter d tends to be proportional to d to the power of 5 thirds, at least within a specific range of length scales known fancifully as the inertial subrange. For air, this range is from about 0.1 centimeters up to 1 kilometer. This fact has since been verified by experiment many times over; it would appear that 5 thirds is a sort of fundamental constant of turbulence. It's an oddly specific fact, I know, but what I love about the existence of a constant like this is that it suggests there's some predictability, however slight, to this whole mess. There is something ironic about talking about this energy cascade while viewing two-dimensional slices of a fluid, because it is a distinctly three-dimensional phenomenon. While fluid flow in two dimensions can have a sort of turbulence, this energy transfer actually tends to go the other way, from the small scales up to larger ones. So keep in mind, while you're looking at this 2D slice of turbulence, it's actually very different in character from turbulence in 2D. One of the mechanisms behind this energy cascade, which could only ever happen in three dimensions, is a process known as vortex stretching. A rotating part of the fluid will tend to stretch out perpendicular to the plane of rotation, resulting in smaller eddies spinning faster. This transition from energy held in a large vortex to instead being held in smaller vortices would be impossible if there weren't another dimension to stretch in. Or, if this vortex were bent around to meet itself in a ring shape, in a way, it's like a vortex which is blocking itself from stretching out this way. And, as mentioned earlier, this is indeed a surprisingly stable configuration for a fluid: order amidst chaos.
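Circling back to Kolmogorov's hypothesis for a moment, here is what that five-thirds scaling says quantitatively, sketched under the stated proportionality. The constant of proportionality is unknown here, so only ratios within the inertial subrange are meaningful:

```python
# Energy carried by eddies of diameter d scales like d^(5/3) within the
# inertial subrange, so only ratios of these values mean anything.
def relative_energy(d):
    return d ** (5 / 3)

# Comparing eddies of diameter 1 meter to eddies of diameter 1 centimeter:
print(relative_energy(1.0) / relative_energy(0.01))  # ~2154x more energy
```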
Interestingly though, when we made these vortex rings in practice and followed them over a long period of time, they do have a tendency to slowly stretch out, albeit at a much longer time scale than the vortex stretching I was just talking about. Which brings us back to Dianna and Dan. Huge thanks to the both of them for getting so much footage and making all of this happen. Make sure to hop over to Physics Girl now to see some of the vortex ring demos, and as I said, you'll also get to learn about something that happens when you change the shape of the hole in this vortex cannon. The result, and its specifics, certainly surprised me, and you'll get to hear about it through Dianna's typical and infectious superhuman level of enthusiasm.
Using topology to solve a counting riddle | The Borsuk-Ulam theorem and stolen necklaces
You know that feeling you get when things that seem completely unrelated turn out to have a key connection? In math especially, there's a certain tingly sensation I get whenever one of those connections starts to fall into place. This is what I have in store for you today. It takes some time to set up; I have to introduce a fair division puzzle from discrete math called the stolen necklace problem, as well as a topological fact about spheres that we'll use to solve it, called the Borsuk-Ulam theorem. But trust me, seeing these two seemingly disconnected pieces of math come together is well worth the setup. Let's start with the puzzle that we're going to solve. You and your friend steal a necklace full of a whole bunch of jewels. Maybe it's got some sapphires, emeralds, diamonds, and rubies. They're all arranged on the necklace in some random order, and let's say that it happens to be an even number of each type of jewel. Right here I have eight sapphires, ten emeralds, four diamonds, and six rubies. You and your friend want to split up the booty evenly, with each of you getting half of each jewel type. That is, four sapphires, five emeralds, two diamonds, and three rubies each. Of course, you could just cut off all of the jewels and divvy them up evenly, but that's boring; there's not a puzzle there. Instead, the challenge is for you to make as few cuts to the necklace as possible, so that you can divvy up the resulting segments between you and your co-conspirator, with each of you getting half of each jewel type. For example, for the arrangement that I'm showing here, I just did it with four cuts. If I give the top three strands to you, and these bottom two strands to your co-conspirator, each of you winds up with four sapphires, five emeralds, two diamonds, and three rubies. The claim, the thing that I want to prove in this video, is that if there are N different jewel types, it's always possible to do this fair division with only N cuts or fewer. So with four jewel types, in this example, no matter what the random ordering of the jewels, it should be possible to cut it in four places and divvy up the five necklace pieces so that each thief has the same number of each jewel type. With five jewel types, you should be able to do it with five cuts, no matter the arrangement, and so on. It's kind of hard to think about, right? You need to keep track of all of these different jewel types, ensuring that they're divided fairly, while making as few cuts as possible. And if you sit down to try this, this is a shockingly hard fact to prove. Maybe the puzzle seems a little contrived, but its core characteristics, like trying to minimize sharding and allocating collections of things in a balanced way, are the kind of optimization issues that actually come up quite frequently in practical applications. For the computer system folks among you, I'm sure you can imagine how this is analogous to certain kinds of efficient memory allocation problems. Also, for the curious among you, I've left a link in the description to an electrical engineering paper that applies this specific problem. Independent of its usefulness, though, it certainly does make for a good puzzle. Can you always find a fair division using only as many cuts as there are types of jewels? So that's the puzzle, remember it. And now we take a seemingly unrelated sidestep to the total opposite side of the mathematical universe: topology.
Imagine taking a sphere in 3D space and squishing it somehow onto the 2D plane, stretching and morphing it however you'd like to do so. The only constraint I'll ask is that you do this continuously, which you can think of as meaning: never cut the sphere or tear it in any way during this mapping. As you do this, many different pairs of points will land on top of each other once they hit the plane, and you know, that's not really a big deal. The special fact that we're going to use, known as the Borsuk-Ulam theorem, is that you will always be able to find a pair of points that started off on the exact opposite sides of the sphere, which land on each other during the mapping. Points on exact opposite sides like this are called antipodes, or antipodal points. For example, if you think of the sphere as Earth, and your mapping as a straight projection of every point directly onto the plane of the equator, the North and the South Pole, which are antipodal, each land on the same point. And in this example, that's the only antipodal pair that lands on the same point, and any other antipodal pair will end up offset from each other somehow. If you tweaked this function a bit, maybe shearing it during the projection, the North and the South Pole don't land on each other anymore. But when the topology gods close a door, they open a window, because the Borsuk-Ulam theorem guarantees that no matter what, there must be some other antipodal pair that now land on top of each other. The classic example to illustrate this idea, which math educators introducing Borsuk-Ulam are required by law to present, is that there must exist some pair of points on opposite sides of the Earth where the temperature and the barometric pressure are both precisely the same. This is because associating each point on the surface of the Earth with a pair of numbers, temperature and pressure, is the same thing as mapping the surface of the Earth onto a two-dimensional coordinate plane, where the first coordinate represents temperature and the second represents pressure. The implicit assumption here is that temperature and pressure each vary continuously as you walk around the Earth, so this association is a continuous mapping from the sphere onto a plane, some non-tearing way to squish that surface into two dimensions. So what Borsuk-Ulam implies is that no matter what the weather patterns on Earth, or any other planet for that matter, two antipodal points must land on top of each other, which means they map to the same temperature-pressure pair. Since you're watching this video, you're probably a mathematician at heart, so you want to see why this is true, not just that it's true. So let's take a little sidestep through topology proof land, and I think you'll agree that this is a really satisfying line of reasoning. First, rephrasing what it is we want to show slightly more symbolically: if you have some function f that takes in a point p of the sphere and spits out some pair of coordinates, you want to show that no matter what crazy choice of function this is, as long as it's continuous, you'll be able to find some point p so that f of p equals f of negative p, where negative p is the antipodal point on the other side of the sphere. The key idea here, which might seem small at first, is to rearrange this and say f of p minus f of negative p equals (0, 0), and focus on a new function g of p that's defined to be this left-hand side here, f of p minus f of negative p.
This way, what we need to show is that g maps some point of the sphere onto the origin in 2D space. So rather than finding a pair of colliding points, which could land anywhere, this helps limit our focus to just one point of the output space, the origin. This function g has a pretty special property, which is going to help us out: that g of negative p is equal to negative g of p; basically, negating the input involves swapping these terms. In other words, going to the antipodal point of the sphere results in reflecting the output of g through the origin of the output space, or maybe you think of it as rotating that output 180 degrees around the origin. Notice what this means if you were to continuously walk around the equator and look at the outputs of g. What happens when you go halfway around? Well, the output needs to have wandered to the reflection of the starting point through the origin. Then as you continue walking around the other half, the second half of your output path must be the reflection of the first half, or equivalently, it's the 180-degree rotation of that first path. Now there's a slim possibility that one of these points happens to pass through the origin, in which case you've lucked out and are done early. But otherwise, what we have here is a path that winds around the origin at least once. And now look at that path on the equator, and imagine continuously deforming it up to the North Pole, cinching that loop tight. As you do this, the resulting path in the output space is also continuously deforming to a point, since the function g is continuous. Now because it wound around the origin, at some point during this process it must cross the origin. And this means there is some point p on the sphere where g of p has the coordinates (0, 0), which means f of p minus f of negative p equals (0, 0). f of p is the same as f of negative p, the antipodal collision that we're looking for. Isn't that clever? And it's a pretty common style of argument in the context of topology. It doesn't matter what particular continuous function from the sphere to the plane you define, this line of reasoning will always zero in on an antipodal pair that lands on top of each other. At this point, maybe you're thinking, yeah, yeah, lovely math and all, but we've strayed pretty far away from the necklace problem. But just you wait, here's where things start getting clever. First, answer me this: what is a sphere, really? Well, points in 3D space are represented with three coordinates; in some sense, that's what 3D space is, to a mathematician at least: all possible triplets of numbers. And the simplest sphere to describe with coordinates is the standard unit sphere, centered at the origin, the set of all points a distance 1 from the origin, meaning all triplets of numbers so that the sum of their squares is 1. So the geometric idea of a sphere is related to the algebraic idea of a set of positive numbers that add up to 1. That might sound simple, but tuck that away in your mind. If you have one of these triplets, the point on the opposite side of the sphere, the corresponding antipodal point, is whatever you get by flipping the sign of each coordinate, right? So let's just write out what the Borsuk-Ulam theorem is saying symbolically. Trust me, this will help with getting back to the necklace problem. For any function that takes in points on the sphere, triplets of numbers whose squares sum to 1, and spits out some point in 2D space, some pair of coordinates like temperature and pressure,
as long as the function is continuous, there will be some input so that flipping all of its signs doesn't change the output. With that in mind, look back at the necklace problem. Part of the reason these two things feel so very unrelated is that the necklace problem is discrete, while the Borsuk-Ulam theorem is continuous. So our first step is to translate the stolen necklace problem into a continuous version, seeking the connection between necklace divisions and points on the sphere. For right now, let's limit ourselves to the case where there are only two jewel types, say sapphires and emeralds, and we're hoping to make a fair division of this necklace after only two cuts. As an example, just to have up on the screen, let's say there are eight sapphires and ten emeralds on the necklace. As a reminder, this means the goal is to cut the necklace in two different spots and divvy up those three segments so that each thief ends up with half of the sapphires and half of the emeralds. Notice, the top and the bottom each have four sapphires and five emeralds. For our continuous-ification, think of the necklace as a line with length one, with the jewels sitting evenly spaced on it, and divide up that line into eighteen evenly sized segments, one for each jewel. And rather than thinking of each jewel as a discrete, indivisible entity on each segment, remove the jewel itself and just paint that segment the color of the jewel. So in this case, eight eighteenths of the line would be painted sapphire, and ten eighteenths would be painted emerald. The continuous variant of the puzzle is now to ask if we can find two cuts anywhere on this line, not necessarily on the one-eighteenth interval marks, that let us divide up the pieces so that each thief has an equal length of each color. In this case, each thief should have a total of four eighteenths of sapphire-colored segments and five eighteenths of emerald-colored segments. An important but somewhat subtle point here is that if you can solve the continuous variant, you can also solve the original discrete version. To see this, let's say you did find a fair division whose cuts didn't happen to fall cleanly between the jewels; maybe it cuts only partway through an emerald segment. Well, because this is a fair division, the length of emerald in both top and bottom has to add up to five total emerald segments, a whole number multiple of the segment lengths. So even if the division cut partially into an emerald segment on the left, it has to cut partially into an emerald segment on the right, and more specifically, in such a way that the total length adds up to a whole number multiple of the segment lengths. What that means is that you can adjust each cut without affecting the division, so that they ultimately do line up on the one-eighteenth marks. Now, why are we doing all this? Well, in the continuous case, where you can cut wherever you want on this line, think about all of the choices going into dividing the necklace and allocating the pieces. First, you choose two locations to cut the interval, but another way to think about that is to choose three positive numbers that add up to one. For example, maybe you choose one-sixth, one-third, and one-half, which correspond to these two cuts. Any time you find three positive numbers that add up to one, it gives you a way to cut the necklace, and vice versa. After that, you have to make a binary choice for each of these pieces, for whether it goes to Thief 1 or Thief 2.
Now compare that to if I asked you to choose some arbitrary point on a sphere in three-dimensional space, a point with coordinates x, y, z so that x squared plus y squared plus z squared equals one. Well, you might start off by choosing three positive numbers that add to one. Maybe you want x squared to be one-sixth, y squared to be one-third, and z squared to be one-half. Then, you have to make a binary choice for each one of them, choosing whether to take the positive square root or the negative square root, in a way that's completely parallel to dividing the necklace and allocating the pieces. Alright, hang with me now, because this is the key observation of the whole video. It gives a correspondence between points on the sphere and necklace divisions. For any point x, y, z on the sphere, because x squared plus y squared plus z squared is one, you can cut the necklace so that the first piece has a length x squared, the second has a length y squared, and the third has a length z squared. For that first piece, if x is positive, give it to Thief 1, otherwise give it to Thief 2. Do the same for the second piece based on the sign of y, and likewise give the third piece to Thief 1 if z is positive, and to Thief 2 if z is negative. And you could go the other way around. Any way that you divide up the necklace and divvy up the pieces gives us a unique point on the sphere. It's as if the sphere is a weirdly perfect way to encapsulate the idea of all possible necklace divisions, just with a geometric object. And here we are tantalizingly close. Think of the meaning of antipodal points under this association. If the point x, y, z on the sphere corresponds to some necklace allocation, what does the point negative x, negative y, negative z correspond to? Well, the squares of these three coordinates are the same, so each one corresponds to making the same cuts on the necklace. The difference is that every piece switches which thief it belongs to. So jumping to an antipodal point on the opposite side of the sphere corresponds with exchanging the pieces. Now remember what it is that we're actually looking for. We want the total length of each jewel type belonging to Thief 1 to equal that for Thief 2. Or in other words, in a fair division, performing this antipodal swap doesn't change the amount of each jewel belonging to each thief. Your brain should be burning with the thought of Borsuk-Ulam at this point. Specifically, you might construct a function that takes in a given necklace allocation and spits out two numbers, the total length of sapphire belonging to Thief 1 and the total length of emerald belonging to Thief 1. We want to show that there must exist a way to divide the necklace with two cuts and divvy up the pieces so that these two numbers are the same as what they would be for Thief 2. Or said differently, where swapping all of the pieces wouldn't change those two numbers. Because of this back and forth between necklace allocations and the points of the sphere, and because pairs of numbers correspond with points on the xy-plane, this is, in effect, a map from the sphere onto the plane. And the animation you're looking at right now is that literal map for the necklace I was showing. So the Borsuk-Ulam theorem guarantees that some antipodal pair of points on the sphere land on each other in the plane, which means there must be some necklace division using two cuts that gives a fair division between the thieves. That, my friends, is what beautiful math feels like.
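If you'd like to make that map tangible, here is a minimal sketch of the function just described, for a hypothetical 18-jewel necklace; the jewel arrangement is randomly generated, and the overlap bookkeeping is just one way to do it:

```python
import math
import random

random.seed(0)
jewels = list("S" * 8 + "E" * 10)   # 8 sapphires, 10 emeralds
random.shuffle(jewels)              # some arbitrary arrangement

def thief_1_totals(p, jewels):
    """Map a sphere point (x, y, z) to the length of each jewel color
    that Thief 1 receives under the cut-and-allocate correspondence."""
    x, y, z = p
    n = len(jewels)
    bounds = [0.0, x**2, x**2 + y**2, 1.0]    # the two cuts, on [0, 1]
    totals = {"S": 0.0, "E": 0.0}
    for piece, sign in enumerate((x, y, z)):
        if sign < 0:                          # this piece goes to Thief 2
            continue
        a, b = bounds[piece], bounds[piece + 1]
        for k, color in enumerate(jewels):
            lo, hi = k / n, (k + 1) / n       # the k-th painted segment
            totals[color] += max(0.0, min(b, hi) - max(a, lo))
    return totals

# A fair division is a sphere point where Thief 1's totals match Thief 2's,
# meaning exactly 4/18 of sapphire and 5/18 of emerald. This particular p
# hands everything to Thief 1, so it is not a fair division.
p = (1 / math.sqrt(3), 1 / math.sqrt(3), 1 / math.sqrt(3))
print(thief_1_totals(p, jewels))
```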
Alright, and if you're anything like me, you're just basking in the glow of what a clever proof that is, and it might be easy to forget that what we actually want to solve is the more general stolen necklace problem, with any number of jewel types. Luckily, we've now done 95% of the work; generalizing is pretty brief. The main thing to mention is that there is a more general version of the Borsuk-Ulam theorem, one that applies to higher dimensional spheres. As an example, Borsuk-Ulam applies to mapping a hypersphere in 4D space into three dimensions. And what I mean by a hypersphere is the set of all possible lists of four coordinates where the sum of their squares equals one. Those are the points in 4D space a distance one from the origin. Borsuk-Ulam says that if you try to map that set, all those special quadruplets of numbers, into three-dimensional space, continuously associating each one with some triplet of numbers, there must be some antipodal collision, an input x1, x2, x3, x4 where flipping all of the signs wouldn't change the output. I'll leave it to you to pause and ponder, and think about how this could apply to the three-jewel case, and about what the general statement of Borsuk-Ulam might be, and how it applies to the general necklace problem. And maybe, just maybe, this gives you an inkling of why mathematicians care about things like higher dimensional spheres, regardless of whether or not they exist in physical reality. It's not always about the sphere per se, it's about what other problems in math they can be used to encode.
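For reference, the fully general statement being gestured at here is standard, and can be written compactly:

```latex
% General Borsuk-Ulam theorem
f : S^n \to \mathbb{R}^n \ \text{continuous}
\;\Longrightarrow\;
\exists\, x \in S^n \ \text{such that} \ f(x) = f(-x)
```

And if you want to see the two-dimensional case in action, a crude numerical search tends to work; the function f below is a hypothetical stand-in for something like temperature and pressure, and any continuous choice would do:

```python
import numpy as np

def f(p):
    # an arbitrary continuous map from 3D points to pairs of numbers
    x, y, z = p
    return np.array([np.exp(x) + y, z**3 + x])

def g(p):
    return f(p) - f(-p)   # odd function: g(-p) equals -g(p)

# crude grid search over the sphere for a point where g nearly vanishes
best_p, best_norm = None, np.inf
for t in np.linspace(0, np.pi, 200):
    for s in np.linspace(0, 2 * np.pi, 400):
        p = np.array([np.sin(t) * np.cos(s), np.sin(t) * np.sin(s), np.cos(t)])
        n = np.linalg.norm(g(p))
        if n < best_norm:
            best_p, best_norm = p, n

print(best_p, best_norm)   # best_norm shrinks toward 0 as the grid refines
```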
2021 Summer of Math Exposition results
This summer, James Schloss and I ran a contest that's meant to encourage more people to make online math explainers. And here, I want to share with you some of my favorites. I wrote up a much fuller blog post about the event and the process for selecting winners, but let's open here with some of the key points. First of all, it was extremely open-ended. It's meant for math explainers of any kind, any medium, any target audience, any topic, pretty much just as long as it's about math. The promise was to select around 4 or 5 to be winners, which I would then feature in a video, and here we are now. And after I made the initial announcement, Brilliant came in and kindly offered cash prizes to those winners. Now before anything, let me just emphasize that this notion of a winner should really be taken with a heavy grain of salt. The spirit of the event, and hopefully this is obvious, is to get more people sharing their knowledge, and preferably to have them do so in a way that's as engaging and empathetic and novel as possible. The way I see it, there's some benefits to framing it as a contest, like we have a clear deadline, and that can be a great cure for perfectionism and stagnation. Also, having some small stakes involved hopefully means that the lessons are put together with a little bit more care, and with a little bit more attention to the kind of principles that I wanted to emphasize with the whole event. The downside, which is not to be taken lightly, is that it gives the false impression that success looks like winning, but it does not. Success means having more material out there that explains math and inspires people to engage with it. Ten years from now, no one is going to remember my arbitrary selections, but every one of the pieces made will still exist and will still potentially help someone trying to learn. We received over 1200 submissions, and as a first pass for the judgment, we had a peer review process to help filter that down to a little over 100 that James and I could give a really close look at. We also had around half a dozen guest judges, people in the space of math exposition, to help us narrow down those final judgments. Many, many thanks to those guests for being willing to volunteer some of their time, and generally helping to ensure that any of my subjective quirks weren't leaking too much into the final decision. Also, a million thanks to James. He really was the core organizer behind everything here, and if you participated in the event and found it helpful, he is absolutely the one you should be thanking. As I said, there's a blog post with more details for any of you who are curious, but right now, without further ado, let me tell you about some outstanding math explainers made this summer that I really think you're going to enjoy. This entry, from a channel called paralogical, opens by asking why the light reflected at the bottom of a mug seems to form this characteristic cardioid shape. The core mathematical idea that this video teaches is that of envelopes, which in short is a way to describe one curve using a family of other curves. And what really makes the video special is not just how clearly he explains that subject, but how tangible and well chosen the examples are, all delivered with a tone that's just plain friendly and enjoyable to listen to. The key formula he builds up to is really well motivated, and one of those things that would not be obvious if you just saw it out of context.
And he gives the tools for any curious viewer to pause and work through for themselves a more detailed understanding, while still leaving room for someone watching to get the general idea and the core points without necessarily being bogged down in that algebra. On the topic of fun ways that light gets redirected, another one of my picks is just an absolutely mind-blowing piece of engineering wizardry. This one is a blog post, written by Matt Ferraro, where he explains how he made this acrylic square. At first glance, it might look like nothing more than a transparent square, maybe suspiciously wavy. But when you look at its shadow, you can see that it's been carefully crafted to redirect light in such a way as to form a highly deliberate image. The post walks through all of the math and the algorithms involved in pulling this off, including certain false starts in the discovery process, which I love, and the author skillfully draws the reader's attention to which details are important and deserve more of your focus, and which ones are more tangential minutiae. I found myself a little surprised about what parts of the process turned out to be hard, and which ones were easy, and by the end, the whole task felt a lot less mysterious, while still commanding no shortage of respect for the fact that someone was actually able to pull it off. The next pick that I have on my list here, I'll be honest, I felt a little bit torn about. I wouldn't be surprised if many of you have already seen this video; it's called The Beauty of Bézier Curves, by Freya Holmér. Now, the spirit of the contest is to encourage people to share their knowledge without necessarily getting intimidated by a need for high production quality, you know, to focus on the quality of the lesson more so than the medium used to express it. This video was so beautifully produced, I almost felt like choosing it risked sending the wrong message. Also, part of the goal here is to shine a light on comparatively unknown creators, and by the time I was watching this one, it had around 400,000 views. But the thing is, Freya's video really is a fantastic piece of exposition, and it would be a little bit silly of me to fault it for also being beautiful, and evidently also being appreciated by many people in both respects. I do want to be clear, the reason that I'm choosing it is not because of the smooth graphics. It's that here we have someone who uses a certain mathematical tool regularly in her work, and she has the ability to clearly motivate why you should care too, and to go into the details of how it works, the many different facets of how she uses it, and how she thinks about it. What makes it visually great is not so much the smoothness of the graphics or the aesthetic appeal. It's that they're clean and to the point, serving to aid what's the core value in the whole piece: a series of well-chosen intuitions and applications of a topic in math that deserves to be known by more people. After that one, I really did think that with my other picks, I could help direct the audience of this channel to some excellent lessons that you might not have seen yet. Like with the next one I chose: when I saw it, it had a couple hundred views, and I really loved it, and I was excited to share it with you all. But then, it looks like the internet kind of beat me to it. This thing is quickly approaching a million views. Well deserved, by the way. If you haven't seen it, it really is delightful.
It's not a traditional math lesson in the sense of explaining some topic that you might need to learn for a course, or even one that exists in a clear field, for that matter. But it absolutely captures the spirit of mathematical discovery. The question posed in the video seems a little bit silly at first, which is: what's the most complicated passcode for those sorts of swiping-pattern passcodes that some phones have? The video opens by making that question rigorous, giving it a solid definition, and then proceeds with a really engaging story of problem solving that involves very real math lessons along the way. Things about number theory, about induction, about generalizing a result even after you've solved a subproblem. Definitely take a look. The final pick I have on my list here, which again is in no particular order, is one that multiple different guest judges singled out as being especially good, and also easily underappreciated. The video describes a really clever and memorable proof of this cute fact from geometry known as Pick's theorem, and more than that, the author has some really nice thoughts about the role of different kinds of proofs in math, thoughts which more students, and for that matter more teachers, would really benefit from hearing and thinking about. It's no fancier than it needs to be, but the core idea is just so good that, to me at least, the video has a lot more staying power than many of the professionally produced educational videos that I've seen from established channels out there. So, there you go, those are my five picks for this summer of math exposition. But the thing is, if you had seen the submissions that I have seen, you would agree that choosing just five winners is ridiculous to the point of absurdity. Again, you know, the point is not the winners; the event is about encouraging people to follow through with projects, things about math exposition they might have been thinking about doing, all of that. But just reiterating that point feels a little bit hollow, because there are at least, I don't know, 20 in here where I feel like it is a genuine crime not to have chosen them. And, well, you know, it's my contest, my rules, so if you'll indulge me, let me quickly tell you about some others that I just loved. One that I think viewers of this channel would especially enjoy is almost an hour; it's about Dirac's belt trick, by Noah Miller. In a recent event that I was doing with Steven Strogatz for the MoMath Museum, we had a call that was ostensibly meant to prepare for that event, but instead we spent much of it just both gushing over how much we liked this one particular video. Any of you who have flirted with this topic will probably know how tricky it can be to understand the link between points on a 4D sphere and rotations in 3D space, and why any of that has to do with quantum mechanics, but this animated hour-long lecture does a really good job laying out the full story. Another one which is long but good is about the unsolvability of the quintic, by Carl Turner. For me it was a bit bizarre seeing it, because when I did, I was actively working on a video that was not just about the same theorem, but about the same comparatively esoteric proof that it describes there. I thought the video did such a wonderful job that I just kind of set aside the project.
I'll still probably cover it at some point, but now I'm motivated to do it in a different way. In the meantime, any of you curious about why quintic polynomials are in a certain sense unsolvable will absolutely love this video. So many entries here were just really solid explainers, plain and simple. This includes the best overview I've seen of the two envelope problem, a great explanation for how fonts get turned into pixels, a wonderful article on spinors, a comic-style blog post about e, and a video about an ancient Babylonian algorithm for multiplying numbers, which has unexpected usefulness for certain programming tasks. There were a number of videos in Chinese on the site Bilibili, including one I really liked about a fundamental theorem for symmetric polynomials, perfectly understandable with the English subtitles. And one absolutely fantastic video in here was about Lehmer factor stencils, which I had never heard of, and I learned a ton watching this and found it absolutely fascinating. Many of the entries I saw had excellent aha moments, like this one here, with a mildly click-baity title about a graph that will blow your mind; but the thing is, at least in my experience, that title is 100% accurate. There's a video explaining why pi shows up in the Buffon needle problem with a really elegant shift in perspective, and what I especially appreciate is that the author also has an appendix video going through some of the more technical details not covered in the main video. A few entries were highly interesting, if for no other reason, just from a technological standpoint alone, including a very well-executed interactive video by Rob Schlob, which, let me tell you, is not easy to pull off, as well as a great interactive article introducing complex numbers and the fundamental theorem of algebra on the site Trina. And then some of the entries, setting aside all the explanatory value, were just plain beautiful, like if I had a category here for greatest style, I think my pick would be this one about recreating curves from a children's toy. But setting aside style, and even the core point of all of this, which is the explanatory quality, there's one feature of online explainers that can easily be underappreciated, which is the role of narrative and storylines. And a couple entries I think did a great job exemplifying that component. This includes not one, but two entries on this game called Hackenbush, a story about how a Lights Out puzzle can lead you to Gaussian elimination, a nice exploration of the most efficient way to choose a random point in a circle uniformly, a great puzzle about tiles which carries with it explanations of core facts about Fibonacci numbers, and one really nicely done video about why the Sierpinski triangle shows up in three seemingly completely unrelated contexts. Again, the list goes on for quite a while. As I look at some of these now, I'm really pleased to see that a lot of them have picked up some traction on YouTube, but there still remain many which are very underappreciated. I highly encourage you to go to the playlist including all the video submissions, and to check out the blog post featuring all the other submissions. As you look at that playlist, I would not read into the order of it too much; it was generated programmatically, but I did try to go through and curate the first few ones with videos that I think you might especially like. Honestly, though, you can happily scroll down that playlist and find hidden gems all throughout.
Like, really, go check it out right now. If you do, I can almost guarantee that you have hours of edification waiting ahead of you. Not to mention hours of just pure delight. Thanks again to everyone who participated. Let me just say one more time, I really was blown away by the quality here.
Eigenvectors and eigenvalues | Chapter 14, Essence of linear algebra
Eigenvectors and eigenvalues is one of those topics that a lot of students find particularly unintuitive. Things like why are we doing this and what does this actually mean are too often left just floating away in an unanswered sea of computations. And as I've put out the videos of this series, a lot of you have commented about looking forward to visualizing this topic in particular. I suspect that the reason for this is not so much that eigen-things are particularly complicated or poorly explained. In fact, it's comparatively straightforward, and I think most books do a fine job explaining it. The issue is that it only really makes sense if you have a solid visual understanding for many of the topics that precede it. Most important here is that you know how to think about matrices as linear transformations. But you also need to be comfortable with things like determinants, linear systems of equations, and change of basis. Confusion about eigen-stuffs usually has more to do with a shaky foundation in one of these topics than it does with eigenvectors and eigenvalues themselves. To start, consider some linear transformation in two dimensions, like the one shown here. It moves the basis vector i-hat to the coordinates 3, 0, and j-hat to 1, 2. So it's represented with a matrix whose columns are 3, 0 and 1, 2. Focus in on what it does to one particular vector, and think about the span of that vector, the line passing through the origin and its tip. Most vectors are going to get knocked off their span during the transformation. I mean, it would seem pretty coincidental if the place where the vector landed also happened to be somewhere on that line. But some special vectors do remain on their own span, meaning the effect that the matrix has on such a vector is just to stretch it or squish it, like a scalar. For this specific example, the basis vector i-hat is one such special vector. The span of i-hat is the x-axis, and from the first column of the matrix, we can see that i-hat moves over to 3 times itself, still on that x-axis. What's more, because of the way linear transformations work, any other vector on the x-axis is also just stretched by a factor of 3, and hence remains on its own span. A slightly sneakier vector that remains on its own span during this transformation is negative 1, 1. It ends up getting stretched by a factor of 2. And again, linearity is going to imply that any other vector on the diagonal line spanned by this guy is just going to get stretched out by a factor of 2. And for this transformation, those are all the vectors with this special property of staying on their span: those on the x-axis getting stretched out by a factor of 3, and those on this diagonal line getting stretched by a factor of 2. Any other vector is going to get rotated somewhat during the transformation, knocked off the line that it spans. As you might have guessed by now, these special vectors are called the eigenvectors of the transformation, and each eigenvector has associated with it what's called an eigenvalue, which is just the factor by which it's stretched or squished during the transformation. Of course, there's nothing special about stretching versus squishing, or the fact that these eigenvalues happen to be positive. In another example, you could have an eigenvector with eigenvalue negative 1/2, meaning that the vector gets flipped and squished by a factor of 1/2. But the important part here is that it stays on the line that it spans out, without getting rotated off of it.
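If you'd like to double-check those numbers yourself, a couple of lines of numpy will do it; this is just a sketch for verification, not something from the video:

```python
import numpy as np

# the transformation from the example: columns are where i-hat and j-hat land
A = np.array([[3, 1],
              [0, 2]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)         # [3. 2.] (order may vary)
print(eigenvectors[:, 0])  # proportional to (1, 0), the x-axis direction
print(eigenvectors[:, 1])  # proportional to (-1, 1), the diagonal direction

# the defining property: A times v equals lambda times v for each pair
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```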
For a glimpse of why this might be a useful thing to think about, consider some three-dimensional rotation. If you can find an eigenvector for that rotation, a vector that remains on its own span, what you have found is the axis of rotation. And it's much easier to think about a 3D rotation in terms of some axis of rotation, and an angle by which it's rotating, rather than thinking about the full 3x3 matrix associated with that transformation. In this case, by the way, the corresponding eigenvalue would have to be 1, since rotations never stretch or squish anything, so the length of the vector would remain the same. This pattern shows up a lot in linear algebra. With any linear transformation described by a matrix, you could understand what it's doing by reading off the columns of this matrix as the landing spots for basis vectors. But often, a better way to get at the heart of what the linear transformation actually does, less dependent on your particular coordinate system, is to find the eigenvectors and eigenvalues. I won't cover the full details on methods for computing eigenvectors and eigenvalues here, but I'll try to give an overview of the computational ideas that are most important for a conceptual understanding. Symbolically, here's what the idea of an eigenvector looks like. A is the matrix representing some transformation, with v as the eigenvector, and lambda is a number, namely the corresponding eigenvalue. What this expression is saying is that the matrix-vector product, A times v, gives the same result as just scaling the eigenvector v by some value lambda. So finding the eigenvectors and their eigenvalues of the matrix A comes down to finding the values of v and lambda that make this expression true. It's a little awkward to work with at first, because that left-hand side represents matrix-vector multiplication, but the right-hand side here is scalar-vector multiplication. So let's start by rewriting that right-hand side as some kind of matrix-vector multiplication, using a matrix which has the effect of scaling any vector by a factor of lambda. The columns of such a matrix will represent what happens to each basis vector, and each basis vector is simply multiplied by lambda. So this matrix will have the number lambda down the diagonal, with zeros everywhere else. The common way to write this guy is to factor that lambda out, and write it as lambda times I, where I is the identity matrix with ones down the diagonal. With both sides looking like matrix-vector multiplication, we can subtract off that right-hand side and factor out the v. So what we now have is a new matrix, A minus lambda times the identity, and we're looking for a vector v such that this new matrix times v gives the zero vector. Now, this will always be true if v itself is the zero vector, but that's boring. What we want is a non-zero eigenvector. And if you watched chapters 5 and 6, you'll know that the only way it's possible for the product of a matrix with a non-zero vector to become zero is if the transformation associated with that matrix squishes space into a lower dimension. And that squishification corresponds to a zero determinant for the matrix. To be concrete, let's say your matrix A has columns 2, 1 and 2, 3, and think about subtracting off a variable amount, lambda, from each diagonal entry. Now imagine tweaking lambda, turning a knob to change its value. As that value of lambda changes, the matrix itself changes, and so the determinant of the matrix changes.
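You can watch that knob-turning symbolically, too. Here's a minimal sketch with sympy, using the matrix from this example (columns 2, 1 and 2, 3):

```python
import sympy as sp

lam = sp.symbols("lambda")
A = sp.Matrix([[2, 2],
               [1, 3]])                   # columns (2, 1) and (2, 3)

det = (A - lam * sp.eye(2)).det()         # determinant of A minus lambda I
print(sp.expand(det))                     # lambda**2 - 5*lambda + 4
print(sp.factor(det))                     # (lambda - 1)*(lambda - 4)
print(sp.solve(det, lam))                 # [1, 4]: where the determinant is zero
```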
The goal here is to find a value of lambda that will make this determinant zero, meaning the tweaked transformation squishes space into a lower dimension. In this case, the sweet spot comes when lambda equals 1. Of course, if we had chosen some other matrix, the eigenvalue might not necessarily be 1; the sweet spot might be hit at some other value of lambda. So this is kind of a lot, but let's unravel what this is saying. When lambda equals 1, the matrix A minus lambda times the identity squishes space onto a line. That means there's a non-zero vector v such that A minus lambda times the identity, times v, equals the zero vector. And remember, the reason we care about that is because it means A times v equals lambda times v, which you can read off as saying that the vector v is an eigenvector of A, staying on its own span during the transformation A. In this example, the corresponding eigenvalue is 1, so v would actually just stay fixed in place. Pause and ponder if you need to, to make sure that that line of reasoning feels good. This is the kind of thing I mentioned in the introduction. If you didn't have a solid grasp of determinants, and why they relate to linear systems of equations having non-zero solutions, an expression like this would feel completely out of the blue. To see this in action, let's revisit the example from the start, with a matrix whose columns are 3, 0 and 1, 2. To find if a value lambda is an eigenvalue, subtract it from the diagonals of this matrix and compute the determinant. Doing this, we get a certain quadratic polynomial in lambda, 3 minus lambda times 2 minus lambda. Since lambda can only be an eigenvalue if this determinant happens to be zero, you can conclude that the only possible eigenvalues are lambda equals 2 and lambda equals 3. To figure out what the eigenvectors are that actually have one of these eigenvalues, say lambda equals 2, plug in that value of lambda to the matrix, and then solve for which vectors this diagonally altered matrix sends to zero. If you computed this the way you would any other linear system, you'd see that the solutions are all the vectors on the diagonal line spanned by negative 1, 1. This corresponds to the fact that the unaltered matrix, 3, 0, 1, 2, has the effect of stretching all those vectors by a factor of 2. Now, a 2D transformation doesn't have to have eigenvectors. For example, consider a rotation by 90 degrees. This doesn't have any eigenvectors, since it rotates every vector off of its own span. If you actually try computing the eigenvalues of a rotation like this, notice what happens. Its matrix has columns 0, 1 and negative 1, 0. Subtract off lambda from the diagonal elements, and look for when the determinant is zero. In this case, you get the polynomial lambda squared plus 1. The only roots of that polynomial are the imaginary numbers, i and negative i. The fact that there are no real number solutions indicates that there are no eigenvectors. Another pretty interesting example worth holding in the back of your mind is a shear. This fixes i-hat in place and moves j-hat one over, so its matrix has columns 1, 0 and 1, 1. All of the vectors on the x-axis are eigenvectors with eigenvalue 1, since they remain fixed in place. In fact, these are the only eigenvectors. When you subtract off lambda from the diagonals and compute the determinant, what you get is (1 minus lambda) squared, and the only root of this expression is lambda equals 1. This lines up with what we see geometrically, that all of the eigenvectors have eigenvalue 1.
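Again, these last two examples are easy to check numerically; a quick sketch:

```python
import numpy as np

rotation = np.array([[0, -1],
                     [1,  0]])        # 90-degree rotation: columns (0, 1) and (-1, 0)
print(np.linalg.eig(rotation)[0])     # [0.+1.j 0.-1.j]: imaginary, so no real eigenvectors

shear = np.array([[1, 1],
                  [0, 1]])            # shear: columns (1, 0) and (1, 1)
print(np.linalg.eig(shear)[0])        # [1. 1.]: the only eigenvalue is 1
```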
Keep in mind though, it's also possible to have just one eigenvalue, but with more than just a line full of eigenvectors. A simple example is a matrix that scales everything by 2. The only eigenvalue is 2, but every vector in the plane gets to be an eigenvector with that eigenvalue. Now is another good time to pause and ponder some of this before I move on to the last topic. I want to finish off here with the idea of an eigenbasis, which relies heavily on ideas from the last video. Take a look at what happens if our basis vectors just so happen to be eigenvectors. For example, maybe i-hat is scaled by negative 1, and j-hat is scaled by 2. Writing their new coordinates as the columns of a matrix, notice that those scalar multiples, negative 1 and 2, which are the eigenvalues of i-hat and j-hat, sit on the diagonal of our matrix, and every other entry is a 0. Anytime a matrix has 0s everywhere other than the diagonal, it's called, reasonably enough, a diagonal matrix. And the way to interpret this is that all the basis vectors are eigenvectors, with the diagonal entries of this matrix being their eigenvalues. There are a lot of things that make diagonal matrices much nicer to work with. One big one is that it's easier to compute what will happen if you multiply this matrix by itself a whole bunch of times. Since all one of these matrices does is scale each basis vector by some eigenvalue, applying that matrix many times, say 100 times, is just going to correspond to scaling each basis vector by the 100th power of the corresponding eigenvalue. In contrast, try computing the 100th power of a non-diagonal matrix. Really, try it for a moment. That's a nightmare. Of course, you'll rarely be so lucky as to have your basis vectors also be eigenvectors. But if your transformation has a lot of eigenvectors, like the one from the start of this video, enough so that you can choose a set that spans the full space, then you could change your coordinate system so that these eigenvectors are your basis vectors. I talked about change of basis last video, but I'll go through a super quick reminder here of how to express a transformation currently written in our coordinate system into a different system. Take the coordinates of the vectors that you want to use as a new basis, which in this case means our two eigenvectors, then make those coordinates the columns of a matrix, known as the change of basis matrix. When you sandwich the original transformation, putting the change of basis matrix on its right and the inverse of the change of basis matrix on its left, the result will be a matrix representing that same transformation, but from the perspective of the new basis vectors' coordinate system. The whole point of doing this with eigenvectors is that this new matrix is guaranteed to be diagonal, with its corresponding eigenvalues down that diagonal. This is because it represents working in a coordinate system where what happens to the basis vectors is that they get scaled during the transformation. A set of basis vectors which are also eigenvectors is called, again, reasonably enough, an eigenbasis. So if, for example, you needed to compute the 100th power of this matrix, it would be much easier to change to an eigenbasis, compute the 100th power in that system, then convert back to our standard system. You can't do this with all transformations. A shear, for example, doesn't have enough eigenvectors to span the full space. But if you can find an eigenbasis, it makes matrix operations really lovely.
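Here is what that sandwiching looks like in code, again just a sketch using the example matrix from earlier:

```python
import numpy as np

A = np.array([[3, 1],
              [0, 2]])

eigenvalues, C = np.linalg.eig(A)      # columns of C form an eigenbasis
C_inv = np.linalg.inv(C)

# sandwiching: C^{-1} A C is the same transformation in the eigenbasis,
# and it comes out diagonal, with the eigenvalues down the diagonal
D = C_inv @ A @ C
print(np.round(D, 10))                 # [[3. 0.] [0. 2.]]

# the payoff: the 100th power is easy in the eigenbasis, then convert back
A_100 = C @ np.diag(eigenvalues**100) @ C_inv
print(A_100[0, 0])                     # roughly 3**100, up to floating point error
```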
For those of you willing to work through a pretty neat puzzle to see what this looks like in action, and how it can be used to produce some surprising results, I'll leave up a prompt here on the screen. It takes a bit of work, but I think you'll enjoy it. The next and final video of this series is going to be on abstract vector spaces. See you then.
What does area have to do with slope? | Chapter 9, Essence of calculus
Here, I want to discuss one common type of problem where integration comes up. Finding the average of a continuous variable. This is a perfectly useful thing to know in its own right, but what's really neat is that it can give us a completely different perspective for why integrals and derivatives are inverses of each other. To start, take a look at the graph of sine of x between 0 and pi, which is half of its period. What is the average height of this graph on that interval? It's not a useless question. All sorts of cyclic phenomena in the world are modeled using sine waves. For example, the number of hours that the sun is up per day as a function of what day of the year it is follows a sine wave pattern. So if you wanted to predict, say, the average effectiveness of solar panels in summer months versus winter months, you'd want to be able to answer a question like this. What is the average value of that sine function over half of its period? Whereas a case like this is going to have all sorts of constants mucking up the function, you and I are just going to focus on a pure unencumbered sine of x function. But the substance of the approach would be totally the same in any other application. It's kind of a weird question to think about though, isn't it? The average of a continuous variable. Usually with averages, we think of a finite number of variables, where you can add them all up and divide that sum by how many there are. But there are infinitely many values of sine of x between 0 and pi, and it's not like we can just add up all of those numbers and divide by infinity. Now this sensation actually comes up a lot in math, and it's worth remembering, where you have this vague sense that what you want to do is add together infinitely many values associated with a continuum, even though that doesn't really make sense. And almost always, when you get that sense, the key is going to be to use an integral somehow. And to think through exactly how, a good first step is usually to just approximate your situation with some kind of finite sum. In this case, imagine sampling a finite number of points evenly spaced along this range. Since it's a finite sample, you can find the average by just adding up all of the heights sine of x at each one of these, and then dividing that sum by the number of points that you sampled, right? And presumably, if the idea of an average height among all infinitely many points is going to make any sense at all, the more points we sample, which would involve adding up more and more heights, the closer the average of that sample should be to the actual average of the continuous variable. And this should feel at least somewhat related to taking an integral of sine of x between zero and pi, even if it might not be exactly clear how the two ideas match up. For that integral, remember, you also think of a sample of inputs on this continuum, but instead of adding the height sine of x at each one and dividing by how many there are, you add up sine of x times dx, where dx is the spacing between the samples. That is, you're adding up little areas, not heights. And technically, the integral is not quite this sum. It's whatever that sum approaches as dx approaches zero. But it is actually quite helpful to reason with respect to one of these finite iterations, where we're looking at a concrete size for dx and some specific number of rectangles. 
So what you want to do here is reframe this expression for the average, this sum of the heights divided by the number of sampled points, in terms of dx, the spacing between samples. And now, if I tell you that the spacing between these points is, say, zero point one, and you know that they range from zero to pi, can you tell me how many there are? Well, you can take the length of that interval, pi, and divide it by the length of the space between each sample. If it doesn't go in perfectly evenly, you would have to round down to the nearest integer, but as an approximation, this is completely fine. So if we write that spacing between samples as dx, the number of samples is pi divided by dx. And when we substitute that into our expression up here, you can rearrange it, putting that dx up top and distributing it into the sum. But think about what it means to distribute that dx up top. It means that the terms you're adding up will look like sine of x times dx, for the various inputs x that you're sampling. So that numerator looks exactly like an integral expression. And so for larger and larger samples of points, this average will approach the actual integral of sine of x between zero and pi, all divided by the length of that interval, pi. In other words, the average height of this graph is this area divided by its width. On an intuitive level, and just thinking in terms of units, that feels pretty reasonable, doesn't it? Area divided by width gives you an average height. So with this expression in hand, let's actually solve it. As we saw last video, to compute an integral, you need to find an anti-derivative of the function inside the integral, some other function whose derivative is sine of x. And if you're comfortable with derivatives of trig functions, you know that the derivative of cosine is negative sine. So if you just negate that, negative cosine is the function we want, the anti-derivative of sine. And to gut-check yourself on that, look at this graph of negative cosine. At zero, the slope is zero, and then it increases up to some maximum slope at pi halves, and then goes back down to zero at pi. And in general, its slope does indeed seem to match the height of the sine graph at every point. So what do we have to do to evaluate the integral of sine between zero and pi? Well, we evaluate this anti-derivative at the upper bound and subtract off its value at the lower bound. More visually, that is the difference in the height of this negative cosine graph above pi and its height at zero. And as you can see, that change in height is exactly 2. That's kind of interesting, isn't it? That the area under this sine graph turns out to be exactly 2. So the answer to our average height problem, this integral divided by the width of the region, evidently turns out to be 2 divided by pi, which is around 0.64. I promised at the start that this question of finding the average of a function offers an alternate perspective on why integrals and derivatives are inverses of each other, why the area under one graph has anything to do with the slope of another graph. Notice how finding this average value, 2 divided by pi, came down to looking at the change in the anti-derivative, negative cosine x, over the input range, divided by the length of that range. And another way to think about that fraction is as the rise-over-run slope between the point of the anti-derivative graph above zero and the point of that graph above pi.
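As a quick numerical sanity check of that 0.64, here's a small sketch (not from the video) comparing the finite-sample average with the integral answer:

```python
import numpy as np

xs = np.linspace(0, np.pi, 1_000)      # a fine sample of evenly spaced inputs
print(np.mean(np.sin(xs)))             # sample average of the heights: ~0.636

# the integral answer: area under the graph divided by the width
area = -np.cos(np.pi) - (-np.cos(0))   # change in the anti-derivative: exactly 2
print(area / np.pi)                    # 2/pi = 0.63661...
```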
And now think about why it might make sense that this slope would represent an average value of sine of x on that region. Well by definition sine of x is the derivative of this anti-derivative graph. It gives us the slope of negative cosine at every point. So another way to think about the average value of sine of x is as the average slope over all tangent lines here between zero and pi. And when you view things like that, doesn't it make a lot of sense that the average slope of a graph over all of its points in a certain range should equal the total slope between the start and end points? To digest this idea, it helps to think about what it looks like for a general function. For any function, f of x, if you want to find its average value on some interval, say between a and b, what you do is take the integral of f on that interval divided by the width of that interval, b minus a. You can think of this as the area under the graph divided by its width. Or more accurately, it is the signed area of that graph, since any area below the x-axis is counted as negative. And it's worth taking a moment to remember what this area has to do with the usual notion of a finite average, where you add up many numbers and divide by how many there are. When you take some sample of points spaced out by dx, the number of samples is about equal to the length of the interval divided by dx. So if you add up the values of f of x at each sample and divide by the total number of samples, it's the same as adding up the product f of x times dx and dividing by the width of the entire interval. The only difference between that and the integral is that the integral asks what happens as dx approaches zero. But that just corresponds with samples of more and more points that approximate the true average increasingly well. Now, for any integral, evaluating it comes down to finding an anti-derivative of f of x, commonly denoted capital F of x. What we want is the change to this anti-derivative between a and b, capital F of b minus capital F of a, which you can think of as the change in height of this new graph between the two bounds. I've conveniently chosen an anti-derivative that passes through zero at the lower bound here, but keep in mind, you can freely shift this up and down, adding whatever constant you want to it, and it would still be a valid anti-derivative. So the solution to the average problem is the change in the height of this new graph divided by the change to the x value between a and b. In other words, it is the slope of the anti-derivative graph between the two endpoints. And again, when you stop to think about it, that should make a lot of sense, because little f of x gives us the slope of the tangent line to this graph at each point. After all, it is by definition the derivative of capital F. So why are anti-derivatives the key to solving integrals? Well, my favorite intuition is still the one that I showed last video, but a second perspective is that when you reframe the question of finding an average of a continuous value, as instead finding the average slope of a bunch of tangent lines, it lets you see the answer just by comparing endpoints, rather than having to actually tally up all of the points in between. In the last video, I described a sensation that should bring integrals to your mind. Namely, if you feel like the problem you're solving could be approximated by breaking it up somehow and adding up a large number of small things. 
And here, I want you to come away recognizing a second sensation that should also bring integrals to your mind. If ever there's some idea that you understand in a finite context, and which involves adding up multiple values, like taking the average of a bunch of numbers. And if you want to generalize that idea to apply to an infinite, continuous range of values, try seeing if you can phrase things in terms of an integral. It's a feeling that comes up all the time, especially in probability, and it's definitely worth remembering. My thanks, as always, go to those making these videos possible.
Why do colliding blocks compute pi?
Last video, I left you with a puzzle. The setup involves two sliding blocks in a perfectly idealized world where there's no friction, and all collisions are perfectly elastic, meaning no energy is lost. One block is sent towards another smaller one, which starts off stationary, and there's a wall behind it, so that the smaller block bounces back and forth until it redirects the big block's momentum enough to fully turn around, sailing away from the wall. If that first block has a mass which is a power of 100 times the mass of the second, for example a million times as much, an insanely surprising fact pops out. The total number of collisions, including those between the second mass and the wall, has the same starting digits as pi. In this example, that's 3,141 collisions. If that first block was a trillion times the mass, it would be 3,141,592 collisions before this happens, most all of which happen in one huge unrealistic burst. And speaking of unexpectedly big bursts, in the short time since that video went out, lots of people have been sharing solutions and attempts and simulations, which is awesome. So why does this happen? Why should pi show up in such an unexpected place, and in such an unexpected manner? Foremost, this is a lesson about using phase space, also commonly called configuration space, to solve problems. So rest assured that you're not just learning about some esoteric algorithm for pi; this tactic here is core to many other fields, and it's a useful tool to keep in your belt. To start, when one block hits another, how do you figure out the velocity of each one after the collision? The key is to use the conservation of energy together with the conservation of momentum. Let's go ahead and call their masses m1 and m2, and their velocities v1 and v2, which will be the variables changing throughout the process. At any given point, the total kinetic energy is 1/2 m1 times v1 squared, plus 1/2 m2 times v2 squared. So even though v1 and v2 will be changing as the blocks get bumped around, the value of this expression must remain constant. The total momentum of the two blocks is m1 times v1, plus m2 times v2. This also has to remain constant when the blocks hit each other, but it can change as the second block bounces off the wall. In reality, the second block would transfer its momentum to the wall during this collision, but we're being idealistic, say, thinking of that wall as having infinite mass, so such a momentum transfer won't actually move the wall. So here we have two equations and two unknowns. To put these to use, try drawing a picture to represent the equations. You might start by focusing on the energy equation. Since v1 and v2 are changing, maybe you think to represent the equation on a coordinate plane, where x is equal to v1 and y is equal to v2. So individual points on this plane encode the pair of velocities of our blocks. In that case, the energy equation represents an ellipse, where each point of this ellipse gives you a pair of velocities, all of which correspond to the same total kinetic energy. In fact, let's actually change our coordinates a little bit to make this into a perfect circle, since we know that we're on a hunt for pi. Instead of having the x-coordinate represent v1, let it be the square root of m1 times v1, which for the example shown stretches the figure in the x-direction by the square root of 10. Likewise, have the y-coordinate represent the square root of m2 times v2.
That way, when you look at the conservation of energy equation, what it's saying is 1/2 times (x squared plus y squared) equals some constant, which is the equation for a circle. Which specific circle depends on the total energy, but that actually doesn't matter for our problem. At the beginning, when the first block is sliding to the left and the second one is stationary, we're at the leftmost point on the circle, right? Where the x-coordinate is negative and the y-coordinate is zero. What about right after the collision? How do we know what happens? Conservation of energy tells us that we must jump to some other point of the circle, but which one? Well, use the conservation of momentum. This tells us that before and after the collision, the value of m1 times v1 plus m2 times v2 must stay constant. In our rescaled coordinates, that looks like saying the square root of m1 times x, plus the square root of m2 times y, equals some constant. And that's the equation for a line, specifically a line with a slope of negative square root of m1 over m2. You might ask which specific line, and that depends on what the constant momentum is. But we know that it must pass through our first point, and that locks us into one choice. So just to be clear about what all this is saying: all other pairs of velocities which would give the same momentum live on this line, in just the same way that all other pairs of velocities that give the same energy live on this circle. So notice, this gives us one and only one other point that we could jump to. And it should make sense that it's something where the x-coordinate gets a little less negative and the y-coordinate becomes negative, since that corresponds to the big block slowing down a little while the little block zooms off towards the wall. From here, it's quite fun to see how things play out. When the second block bounces off the wall, its speed stays the same, but it goes from negative to positive, right? So in this diagram, that corresponds to reflecting about the x-axis, since the y-coordinate gets multiplied by negative 1. Then once more, the next collision corresponds to a jump along a line with slope negative square root of m1 over m2, since staying on such a line is what conservation of momentum looks like in this diagram. And from here, you can fill in the rest for how the block collisions correspond to hopping around the circle in our picture, where we keep going like this until the velocity of that smaller block is both positive and smaller than the velocity of the big one, meaning they'll never touch again. That corresponds to this triangular region in the upper right of the diagram, so in our process, we keep bouncing until we land in that region. What we've drawn here is called a phase diagram, which is a simple but powerful idea in math, where you encode the state of some system, in this case the velocities of our sliding blocks, as a single point in some abstract space. What's powerful here is that it turns questions about dynamics into questions about geometry. In this case, the dynamical idea of all possible pairs of velocities that conserve energy corresponds to the geometric idea of a circle, and counting the total number of collisions turns into counting the total number of hops along these lines, alternating between vertical and diagonal. But our question remains: why is it that when that mass ratio is a power of 100, the total number of steps shows the digits of pi?
Well, if you stare at this picture, maybe, just maybe, you'd notice that all of the arc lengths between the points on this circle seem to be about the same. It's not immediately obvious that this should be true, but if it is, it means that computing the value of one such arc length should be enough to figure out how many total collisions it takes to get us into that end zone. The key here is to use the ever-helpful inscribed angle theorem, which says that whenever you're forming an angle using three points on a circle, p1, p2, and p3, like this, it will be exactly half of the angle formed by p1, the circle's center, and p3. p2 can be anywhere on the circle, anywhere except between p1 and p3, and this lovely little fact will be true. So now look back at our phase space, and focus specifically on three points like these. Remember that first vertical hop corresponds to the second block bouncing off the wall, and that second hop, along a slope of negative square root of m1 over m2, corresponds to a momentum-conserving block collision. Let's call the angle between this momentum line and the vertical line theta. And now maybe you see it: using the inscribed angle theorem, this arc length between those two bottom points, measured in radians, will be 2 theta. And importantly, since the momentum line has the same slope for all of those jumps from the top of the circle to the bottom, the same reasoning means that all of these arc lengths must also be 2 theta. So for each hop, if we drop down a new arc like so, then after each collision, we cover another 2 theta radians of the circle. We stop once we're in that end zone on the right, which, remember, corresponds to both blocks moving to the right with the smaller one going slower. But you can also think of this as stopping at the point when adding one more arc of 2 theta would overlap with the previous one. In other words, how many times do you have to add 2 theta to itself before it covers more than the whole circle, more than 2 pi radians? The answer to this will be the same as the number of collisions between our blocks. Or to say the same thing more compactly, what's the largest integer multiple of theta that doesn't surpass pi? For example, if theta was 0.01 radians, then multiplying it by as much as 314 would keep you below pi, but multiplying by 315 would bring you over that value. So the answer would be 314, meaning if our mass ratio was one such that the angle theta in our diagram was 0.01, then the blocks would collide 314 times. So now you know what we need to do. Let's go ahead and actually compute the value theta, say when the mass ratio is 100 to 1. Remember, the rise-over-run slope of that constant momentum line was negative square root of m1 over m2, which in this example is negative 10. That would mean that the tangent of this angle theta, opposite over adjacent, is the run over the negative rise, so to speak, which is 1 divided by 10 in this example. So theta is going to be the arctan of 1/10. Speaking more generally, it'll be the inverse tangent of the square root of the small mass over the square root of the big mass. If you go ahead and plug these into a calculator, what you'd notice is that the inverse tangent of such a small value is actually quite close to the value itself. For example, arctan of 1 over 100, corresponding to a big mass of 10,000 kilograms, is extremely close to 0.01. In fact, it's so close that for the sake of our central question, it might as well be 0.01.
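If you want to check that claim on a computer, a tiny sketch does it; the collision count is just the largest integer multiple of theta staying below pi:

```python
import math

def predicted_collisions(mass_ratio):
    """Largest integer multiple of theta below pi, with theta = arctan(sqrt(m2/m1))."""
    theta = math.atan(math.sqrt(1 / mass_ratio))
    return math.floor(math.pi / theta)

for power in range(1, 5):
    print(predicted_collisions(100**power))   # 31, 314, 3141, 31415
```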
And what I mean by that is analogous to what we saw a moment ago: adding this to itself as many as 314 times won't surpass pi, but the 315th time would. And remember, unraveling why we're doing all this, that's a way of counting how many jumps on the phase diagram get us into the end zone, which in turn is a way of counting how many times the blocks collide until they're sailing off, never to touch again. So that, my friends, is why a mass ratio of 10,000 gives 314 collisions. Likewise, a mass ratio of a million to one will give an angle theta equal to the inverse tangent of 1 over 1,000. This is extremely close to 0.001. And again, if we ask about the largest integer multiple of this angle that doesn't surpass pi, it's the same as it would be for a precise value of 0.001: namely, 3,141. These are the first four digits of pi, because that is, by definition, what the digits of a number mean. This explains why, when the mass ratio is a million, the number of collisions is 3,141. And you might notice that all of this relies on the hope that the inverse tangent of a small value is sufficiently close to the value itself, which is another way of saying that the tangent of a small value is approximately that value itself. Intuitively, there's a really nice reason this is true. If you look at a unit circle, the tangent of any given angle is the height of this little triangle I've drawn, divided by its width. And when that angle is really small, the width is basically 1, the radius of your circle, and the height is basically the same as the arc length along that circle. And by definition, that arc length is theta. To be more precise about it, the Taylor series expansion of tangent of theta shows that this approximation will have only a cubic error term. So, for example, the tangent of 1/100 differs from 1/100 itself by something on the order of one millionth. So even if we were to consider 314 steps with this angle, the error between the actual value of arctan of 1 over 100 and the approximation 0.01 just won't have a chance to accumulate high enough to be as big as an integer. So let's zoom out and sum up. When blocks collide, you can figure out their new velocities by slicing a line through a circle in a velocity phase diagram, where each of these curves represents a conservation law. Most notably, the conservation of energy is what plants that circular seed that ultimately blossoms into the pi that we find in the final count. Specifically, due to some inscribed angle geometry, the points that we hit on this circle are spaced out evenly, separated by an angle that we were calling 2 theta. This lets us rephrase the question of counting collisions as instead asking how many times must we add 2 theta to itself before it surpasses 2 pi. If theta looks something like 0.001, the answer to that question has the same first digits as pi. And when the mass ratio is some power of 100, because arctan of x is so well approximated by x for small values, theta is sufficiently close to this value that it gives the same final count. I'll emphasize again what this phase space allowed us to do, because, as I said, this is a lesson useful for all sorts of math, like differential equations and chaos theory and other flavors of dynamics. By representing the relevant state of your system as a single point in an abstract space, it lets you translate problems of dynamics into problems of geometry. I repeat myself because I don't want you to come away just remembering a neat puzzle where pi shows up unexpectedly.
I want you to remember this surprise appearance of pi as a distilled remnant of the deeper relationship at play. And if this solution leaves you feeling satisfied, it shouldn't, because there is another perspective, more clever and pretty than this one, due to Galperin in his original paper on this phenomenon, which invites us to draw a striking parallel between the dynamics of these blocks and that of a beam of light bouncing between two mirrors. I'm looking forward to seeing you again in the next video.
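To tie the numbers above together, here is a minimal Python sketch, my own illustration rather than anything from the video; the function name is made up for this example. It counts collisions as the largest multiple of theta staying below pi, and checks how little the arctan approximation error accumulates.

```python
import math

def collision_count(mass_ratio):
    """Collisions for a big block `mass_ratio` times the small one.

    theta is the angle between the constant-momentum line and the
    vertical in the rescaled phase space; each collision sweeps out
    another 2*theta of the circle, so we count how many multiples of
    theta fit strictly below pi.
    """
    theta = math.atan(1 / math.sqrt(mass_ratio))  # arctan(sqrt(m2 / m1))
    n = math.floor(math.pi / theta)
    if n * theta >= math.pi:   # exact-multiple edge case (e.g. equal masses)
        n -= 1
    return n

for power in range(1, 5):
    print(100 ** power, collision_count(100 ** power))
# 100 -> 31, 10000 -> 314, 1000000 -> 3141, 100000000 -> 31415

# The cubic error term in arctan(x) ~ x is far too small to matter:
x = 0.01                                     # mass ratio 10,000 to 1
error_per_arc = x - math.atan(x)             # on the order of x**3 / 3
print(error_per_arc, 314 * error_per_arc)    # ~3.3e-07 and ~1e-04, both << 1
```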
Euler's Formula and Graph Duality
In my video on the Circle Division problem, I referenced Euler's characteristic formula, and here, I would like to share a particularly nice proof of this fact. It's very different from the inductive proof typically given, but I'm not trying to argue that this is somehow better or easier to understand than other proofs. Instead, I chose this topic to illustrate one example of the incredible notion of duality and how it can produce wonderfully elegant math. First, let's go over what the theorem states. If you draw some dots and some lines between them, that is, a graph, and if none of these lines intersect, which is to say you have a planar graph, and if your drawing is connected, then Euler's formula tells us that the number of dots, minus the number of lines, plus the number of regions these lines cut the plane into, including that outer region, will always be two. As Euler was originally talking about 3D polyhedra when he found this formula, which was only later reframed in terms of planar graphs, instead of saying dots, we say vertices, instead of saying lines, we say edges, and instead of saying regions, we say faces. Hence, we write Euler's discovery as V minus E plus F equals two. Before describing the proof, I need to go through three pieces of graph theory terminology: cycles, spanning trees, and dual graphs. If you are already familiar with some of these topics and don't care to see how I describe them, feel free to click the appropriate annotation and skip ahead. Imagine a tiny creature sitting on one of the vertices. Let's name him Randolph. If we think of edges as something Randolph might travel along from one vertex to the next, we can sensibly talk about a path as being a sequence of edges that Randolph could travel along, where we don't allow him to backtrack on the same edge. A cycle is simply a path that ends on the same vertex where it begins. You might be able to guess how cycles will be important for our purposes, since they will always enclose a set of faces. Now, we imagine that Randolph wants access to all other vertices, but edges are expensive, so he'll only buy access to an edge if it gives him a path to an untouched vertex. This frugality will leave him with a set of edges without any cycles, since the edge finishing off a cycle would always be unnecessary. In general, a connected graph without cycles is called a tree, so named because we can move things around and make it look like a system of branches, and any tree inside a graph which touches all the vertices is called a spanning tree. Before defining the dual graph, which runs the risk of being confusing, it's important to remember why people actually care about graphs in the first place. I was actually lying earlier when I said a graph is a set of dots and lines; really, it's a set of anything with any notion of connection, but we typically represent those things with dots and those connections with lines. For instance, Facebook stores an enormous graph where vertices are accounts and edges are friendships. Although we could use drawings to represent this graph, the graph itself is the abstract set of accounts and friendships, completely distinct from the drawing. All sorts of things are undrawn graphs: the set of English words, considered connected when they differ by one letter;
mathematicians, considered connected if they have written a paper together; neurons, connected by synapses; or, maybe, for those of us reasoning about the actual drawing of a graph on the plane, the set of faces this graph cuts the plane into, considering two of them connected if they share an edge. In other words, if you can draw a graph on the plane without intersecting edges, you automatically get a second, as of yet undrawn, graph, whose vertices are the faces and whose edges are, well, edges of the original graph. We call this the dual of the original graph. If you want to represent the dual graph with dots and lines, first put a dot inside each one of the faces. I personally like to visualize the dot for that outer region as being a point somewhere at infinity, where you can travel in any direction to get there. Next, connect these new dots with new lines that pass through the centers of the old lines, where lines connected to that point at infinity can go off the screen in any direction, as long as it's understood that they all meet up at the same one point. But keep in mind, this is just the drawing of the dual graph, just like the representation of Facebook accounts and friendships with dots and lines is just a drawing of the social graph. The dual graph itself is the collection of faces and edges. The reason I stress this point is to emphasize that edges of the original graph and edges of the dual graph are not just related, they're the same thing. You see, what makes the dual graph all kinds of awesome is the many ways that it relates to the original graph. For example, cycles in the original graph correspond to connected components of the dual graph, and likewise, cycles in the dual graph correspond to connected components in the original graph. Now, for the cool part: suppose our friend Randolph has an alter ego, Mortimer, living in the dual graph, traveling from face to face instead of from vertex to vertex, passing over edges as he does so. Let's say Randolph has bought all the edges of a spanning tree, and that Mortimer is forbidden from crossing those edges. It turns out, the edges that Mortimer has available to him are guaranteed to form a spanning tree of the dual graph. To see why, we only need to check the two defining properties of spanning trees: they must give Mortimer access to all faces, and there can be no cycles. The reason he still has access to all faces is that it would take a cycle in Randolph's spanning tree to insulate him from a face, but trees cannot have cycles. And the reason Mortimer cannot traverse a cycle in the dual graph feels completely symmetric: if he could, he would separate one set of Randolph's vertices from the rest, so the spanning tree from which he is banned could not have spanned the whole graph. So not only does the planar graph have a dual graph, any spanning tree within that graph always has a dual spanning tree in the dual graph. Here's the kicker: the number of vertices in any tree is always one more than the number of edges. To see this, note that after you start with the root vertex, each new edge gives exactly one new vertex. Alternatively, within our narrative, you can think of Randolph as starting with one vertex and gaining exactly one more for each edge that he buys in what will become a spanning tree. Since this tree covers all vertices in our graph, the number of vertices is one more than the number of edges owned by Randolph.
Moreover, since the remaining edges make up a spanning tree for Mortimer's dual graph, the number of edges he gets is one less than the number of vertices in the dual graph, which are the faces of the original graph. Putting this together, it means the total number of edges is two less than the number of vertices plus the number of faces, which is exactly what Euler's formula states.
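To see the counting in that argument concretely, here is a small Python sketch, my own illustration rather than anything from the video; the graph and function names are made up for this example. It builds Randolph's frugal spanning tree with a breadth-first search over the cube graph, which has 8 vertices, 12 edges, and 6 faces when drawn in the plane.

```python
from collections import deque

def spanning_tree_edges(adjacency, root):
    """Randolph's purchase: buy an edge only when it reaches an untouched vertex."""
    visited = {root}
    tree = []
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in visited:
                visited.add(v)
                tree.append((u, v))
                queue.append(v)
    return tree

# The cube graph: vertices 0-3 form the bottom square, 4-7 the top.
cube = {
    0: [1, 3, 4], 1: [0, 2, 5], 2: [1, 3, 6], 3: [0, 2, 7],
    4: [0, 5, 7], 5: [1, 4, 6], 6: [2, 5, 7], 7: [3, 4, 6],
}
tree = spanning_tree_edges(cube, root=0)
print(len(cube), len(tree))   # 8 vertices, 7 tree edges: one more vertex than edges
# Mortimer is left with 12 - 7 = 5 edges, one less than the 6 faces,
# and indeed V - E + F = 8 - 12 + 6 = 2.
```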
The other way to visualize derivatives | Chapter 12, Essence of calculus
Picture yourself as an early calculus student about to begin your first course. The months ahead of you hold within them a lot of hard work: some neat examples, some not-so-neat examples, beautiful connections to physics, not-so-beautiful piles of formulas to memorize, plenty of moments of getting stuck and banging your head into a wall, a few nice aha moments sprinkled in as well, and some genuinely lovely graphical intuition to help guide you through it all. But if the course ahead of you is anything like my first introduction to calculus, or any of the first courses that I've seen in the years since, there's one topic that you will not see, but which I believe stands to greatly accelerate your learning. You see, almost all of the visual intuitions from that first year are based on graphs. The derivative is the slope of a graph, the integral is a certain area under that graph. But as you generalize calculus beyond functions whose inputs and outputs are simply numbers, it's not always possible to graph the function that you're analyzing. There are all sorts of different ways that you end up visualizing these things. So if all your intuitions for the fundamental ideas, like derivatives, are rooted too rigidly in graphs, it can make for a very tall and largely unnecessary conceptual hurdle between you and the more, quote-unquote, advanced topics, like multivariable calculus, complex analysis, differential geometry. Now, what I want to share with you is a way to think about derivatives, which I'll refer to as the transformational view, that generalizes more seamlessly into some of those more general contexts where calculus comes up. And then we'll use this alternate view to analyze a certain fun puzzle about repeated fractions. But first off, I just want to make sure that we're all on the same page about what the standard visual is. If you were to graph a function, which simply takes real numbers as inputs and outputs, one of the first things you learn in a calculus course is that the derivative gives you the slope of this graph. But what we mean by that is that the derivative of the function is a new function, which for every input x returns that slope. Now, I'd encourage you not to think of this derivative-as-slope idea as being the definition of a derivative. Instead, think of it as being more fundamentally about how sensitive the function is to tiny little nudges around the input. And the slope is just one way to think about that sensitivity, relevant only to this particular way of viewing functions. I have not just another video, but a full series on this topic if it's something you want to learn more about. Now, the basic idea behind the alternate visual for the derivative is to think of this function as mapping all of the input points on the number line to their corresponding outputs on a different number line. In this context, what the derivative gives you is a measure of how much the input space gets stretched or squished in various regions. That is, if you were to zoom in around a specific input and take a look at some evenly spaced points around it, the derivative of the function at that input is going to tell you how spread out or contracted those points become after the mapping. Here, a specific example helps. Take the function x squared. It maps 1 to 1, 2 to 4, 3 to 9, and so on. And you can also see how it acts on all of the points in between.
And if you were to zoom in on a little cluster of points around the input 1, and then see where they land around the relevant output, which for this function also happens to be 1, you'd notice that they tend to get stretched out. In fact, it roughly looks like stretching out by a factor of 2, and the closer you zoom in, the more this local behavior looks just like multiplying by a factor of 2. This is what it means for the derivative of x squared at the input x equals 1 to be 2. That's what that fact looks like in the context of transformations. If you looked at a neighborhood of points around the input 3, they would get roughly stretched out by a factor of 6. This is what it means for the derivative of this function at the input 3 to equal 6. Around the input 1/4, a small region actually tends to get contracted, specifically by a factor of 1/2, and that's what it looks like for a derivative to be smaller than 1. Now, the input 0 is interesting. Zooming in by a factor of 10, it doesn't really look like a constant stretching or squishing; for one thing, all of the outputs end up on the right, positive side of things. And as you zoom in closer and closer, by 100x or by 1000x, it looks more and more like a small neighborhood of points around 0 just gets collapsed into 0 itself. And this is what it looks like for the derivative to be 0. The local behavior looks more and more like multiplying the whole number line by 0. It doesn't have to completely collapse everything to a point at a particular zoom level; instead, it's a matter of what the limiting behavior is as you zoom in closer and closer. It's also instructive to take a look at the negative inputs here. Things start to feel a little cramped, since they collide with where all the positive input values go, and this is one of the downsides of thinking of functions as transformations. But for derivatives, we only really care about the local behavior anyway: what happens in a small range around a given input. Here, notice that the inputs in a little neighborhood around, say, negative 2, don't just get stretched out, they also get flipped around. Specifically, the action on such a neighborhood looks more and more like multiplying by negative 4 the closer you zoom in. This is what it looks like for the derivative of a function to be negative. I think you get the point. This is all well and good, but let's see how this is actually useful in solving a problem. A friend of mine recently asked me a pretty fun question about the infinite fraction 1 plus 1 divided by 1 plus 1 divided by 1 plus 1 divided by 1, on and on and on. And clearly you watch math videos online, so maybe you've seen this before. But my friend's question actually cuts to something that you might not have thought about before, relevant to the view of derivatives that we're looking at here. The typical way that you might evaluate an expression like this is to set it equal to x, and then notice that there's a copy of the full fraction inside itself. So you can replace that copy with another x, and then just solve for x. That is, what you want is to find a fixed point of the function 1 plus 1 divided by x. But here's the thing: there are actually two solutions for x, two special numbers where 1 plus 1 divided by that number gives you back the same thing. One is the golden ratio, phi, around 1.618, and the other is negative 0.618, which happens to be negative 1 divided by phi.
I like to call this other number phi's little brother, since just about any property that phi has, this number also has. And this raises a question: would it be valid to say that that infinite fraction we saw is somehow also equal to phi's little brother, negative 0.618? Maybe you initially say, obviously not: everything on the left-hand side is positive, so how could it possibly equal a negative number? Well, first we should be clear about what we actually mean by an expression like this. One way that you could think about it, though it's not the only way, there's freedom for choice here, is to imagine starting with some constant, like 1, and then repeatedly applying the function 1 plus 1 divided by x, and then asking what this approaches as you keep going. I mean, certainly, symbolically, what you get looks more and more like our infinite fraction, so maybe, if you want it to equal a number, you should ask what this sequence of numbers approaches. And if that's your view of things, maybe you start off with a negative number, so it's not so crazy for the whole expression to end up negative. After all, if you start with negative 1 divided by phi, then, applying this function 1 plus 1 over x, you get back the same number, negative 1 divided by phi. So no matter how many times you apply it, you're staying fixed at this value. But even then, there is one reason that you should probably view phi as the favorite brother in this pair. Here, try this: pull up a calculator of some kind, start with any random number, and plug it into this function 1 plus 1 divided by x, then plug that number into 1 plus 1 over x, and then again, and again, and again. No matter what constant you start with, you eventually end up at 1.618. Even if you start with a negative number, even one that's really, really close to phi's little brother, eventually it shies away from that value and jumps back over to phi. So what's going on here? Why is one of these fixed points favored above the other? Maybe you can already see how the transformational understanding of derivatives is going to be helpful for understanding this setup. But for the sake of having a point of contrast, I want to show you how a problem like this is often taught using graphs. If you were to plug in some random input to this function, the y value tells you the corresponding output, right? So, to think about plugging that output back into the function, you might first move horizontally until you hit the line y equals x, and that's going to give you a position where the x value corresponds to your previous y value, right? So then, from there, you can move vertically to see what output this new x value has, and then you repeat: you move horizontally to the line y equals x to find a point whose x value is the same as the output that you just got, and then you move vertically to apply the function again. Now personally, I think this is kind of an awkward way to think about repeatedly applying a function, don't you? I mean, it makes sense, but you kind of have to pause and think about it to remember which way to draw the lines. And you can, if you want, think through what conditions make this spiderweb process narrow in on a fixed point versus propagating away from it. And in fact, go ahead, pause right now and try to think it through as an exercise. It has to do with slopes. (If you'd rather see it play out numerically, there's a short code sketch at the end of this transcript.)
Or, if you want to skip the exercise for something that I think gives a much more satisfying understanding, think about how this function acts as a transformation. So I'm going to go ahead and start here by drawing a whole bunch of arrows to indicate where the various sample input points will go. And side note: don't you think this gives a really neat emergent pattern? I wasn't expecting this, but it was cool to see it pop up when animating. I guess the action of 1 divided by x gives this nice emergent circle, and then we're just shifting things over by 1. Anyway, I want you to think about what it means to repeatedly apply some function, like 1 plus 1 over x, in this context. Well, after letting it map all of the inputs to the outputs, you could consider those as the new inputs, and then just apply the same process again, and then again, and do it however many times you want. Notice, in animating this with a few dots representing the sample points, it doesn't take many iterations at all before all of those dots kind of clump in around 1.618. Now remember, we know that 1.618..., and its little brother, negative 0.618..., stay fixed in place during each iteration of this process. But zoom in on a neighborhood around phi. During the map, points in that region get contracted around phi, meaning that the function 1 plus 1 over x has a derivative with a magnitude that's less than 1 at this input. In fact, this derivative works out to be around negative 0.38. So what that means is that each repeated application scrunches the neighborhood around this number smaller and smaller, like a gravitational pull towards phi. So now tell me what you think happens in the neighborhood of phi's little brother. Over there, the derivative actually has a magnitude larger than 1, so points near the fixed point are repelled away from it. And when you work it out, you can see that they get stretched by more than a factor of 2 in each iteration. They also get flipped around, because the derivative is negative here, but the salient fact for the sake of stability is just the magnitude. Mathematicians would call this right value a stable fixed point, and the left one an unstable fixed point. Something is considered stable if, when you perturb it just a little bit, it tends to come back towards where it started, rather than going away from it. So what we're seeing is a very useful little fact: the stability of a fixed point is determined by whether the magnitude of its derivative is bigger or smaller than 1. And this explains why phi always shows up in the numerical play, where you're just hitting enter on your calculator over and over, but phi's little brother never does. Now, as to whether or not you want to consider phi's little brother a valid value of the infinite fraction, well, that's really up to you. Everything we just showed suggests that if you think of this expression as representing a limiting process, then, because every possible seed value other than phi's little brother gives you a sequence converging to phi, it does feel kind of silly to put them on equal footing with each other. But maybe you don't think of it as a limit. Maybe the kind of math you're doing lends itself to treating this as a purely algebraic object, like the solutions of a polynomial, which simply has multiple values. Anyway, that's beside the point. And my point here is not that viewing derivatives as this change in density is somehow better than the graphical intuition on the whole.
In fact, picturing an entire function this way can be kind of clunky and impractical as compared to graphs. My point is that it deserves more of a mention in most of the introductory calculus courses, because it can help make a student's understanding of the derivative a little bit more flexible. But like I mentioned, the real reason that I'd recommend you carry this perspective with you as you learn new topics is not so much for what it does with your understanding of single variable calculus, it's for what comes after.
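As a numerical companion to that transcript, here is a minimal Python sketch, my own illustration rather than anything from the video; the function names are made up. It measures the local stretch factor of x squared at a few inputs, runs the calculator experiment for 1 plus 1 over x, and checks the stability of the two fixed points.

```python
def local_stretch(f, x, h=1e-6):
    """Ratio of output spacing to input spacing near x: a centered-difference
    approximation of the derivative, i.e. the local stretch factor."""
    return (f(x + h) - f(x - h)) / (2 * h)

square = lambda x: x * x
for x in [1, 3, 0.25, -2]:
    print(x, round(local_stretch(square, x), 4))   # ~2, ~6, ~0.5, ~-4

# The calculator experiment: any seed is pulled toward phi...
x = 0.5
for _ in range(30):
    x = 1 + 1 / x
print(x)                                           # ~1.6180339887

# ...even a seed within a hundred-millionth of phi's little brother escapes.
x = -0.618034
for _ in range(60):
    x = 1 + 1 / x
print(x)                                           # ~1.6180339887 again

# Stability check: d/dx (1 + 1/x) = -1/x**2 at each fixed point.
phi = (1 + 5 ** 0.5) / 2
for p in (phi, 1 - phi):                           # 1 - phi equals -1/phi
    slope = -1 / p ** 2
    print(round(slope, 3), "stable" if abs(slope) < 1 else "unstable")
```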
Vectors | Chapter 1, Essence of linear algebra
The fundamental, root-of-it-all building block for linear algebra is the vector, so it's worth making sure that we're all on the same page about what exactly a vector is. You see, broadly speaking, there are three distinct but related ideas about vectors, which I'll call the physics student perspective, the computer science student perspective, and the mathematician's perspective. The physics student perspective is that vectors are arrows pointing in space. What defines a given vector is its length and the direction it's pointing, but as long as those two facts are the same, you can move it all around and it's still the same vector. Vectors that live in the flat plane are two-dimensional, and those sitting in the broader space that you and I live in are three-dimensional. The computer science perspective is that vectors are ordered lists of numbers. For example, let's say you were doing some analytics about house prices, and the only features you cared about were square footage and price. You might model each house with a pair of numbers, the first indicating square footage and the second indicating price. Notice, the order matters here. In the lingo, you'd be modeling houses as two-dimensional vectors, where, in this context, vector is pretty much just a fancy word for list, and what makes it two-dimensional is the fact that the length of that list is two. The mathematician, on the other hand, seeks to generalize both these views, basically saying that a vector can be anything where there's a sensible notion of adding two vectors and multiplying a vector by a number, operations that I'll talk about later on in this video. The details of this view are rather abstract, and I actually think it's healthy to ignore it until the last video of this series, favoring a more concrete setting in the interim. But the reason I bring it up here is that it hints at the fact that the ideas of vector addition and multiplication by numbers will play an important role throughout linear algebra. But before I talk about those operations, let's just settle in on a specific thought to have in mind when I say the word vector. Given the geometric focus that I'm shooting for here, whenever I introduce a new topic involving vectors, I want you to first think about an arrow, and specifically, think about that arrow inside a coordinate system, like the xy-plane, with its tail sitting at the origin. This is a little bit different from the physics student perspective, where vectors can freely sit anywhere they want in space. In linear algebra, it's almost always the case that your vector will be rooted at the origin. Then, once you understand a new concept in the context of arrows in space, we'll translate it over to the list-of-numbers point of view, which we can do by considering the coordinates of the vector. Now, while I'm sure that many of you are already familiar with this coordinate system, it's worth walking through explicitly, since this is where all of the important back and forth happens between the two perspectives of linear algebra. Focusing our attention on two dimensions for the moment, you have a horizontal line, called the x-axis, and a vertical line, called the y-axis. The place where they intersect is called the origin, which you should think of as the center of space and the root of all vectors. After choosing an arbitrary length to represent one, you make tick marks on each axis to represent this distance.
When I want to convey the idea of 2D space as a whole, which you'll see comes up a lot in these videos, I'll extend these tick marks to make grid lines, but right now they'll actually get a little bit in the way. The coordinates of a vector are a pair of numbers that basically give instructions for how to get from the tail of that vector, at the origin, to its tip. The first number tells you how far to walk along the x-axis, positive numbers indicating rightward motion, negative numbers indicating leftward motion, and the second number tells you how far to walk parallel to the y-axis after that, positive numbers indicating upward motion and negative numbers indicating downward motion. To distinguish vectors from points, the convention is to write this pair of numbers vertically with square brackets around them. Every pair of numbers gives you one and only one vector, and every vector is associated with one and only one pair of numbers. What about in three dimensions? Well, you add a third axis, called the z-axis, which is perpendicular to both the x and y axes, and in this case, each vector is associated with an ordered triplet of numbers. The first tells you how far to move along the x-axis, the second tells you how far to move parallel to the y-axis, and the third one tells you how far to then move parallel to this new z-axis. Every triplet of numbers gives you one unique vector in space, and every vector in space gives you exactly one triplet of numbers. All right, so back to vector addition and multiplication by numbers. After all, every topic in linear algebra is going to center around these two operations. Luckily, each one is pretty straightforward to define. Let's say we have two vectors, one pointing up and a little to the right, and the other one pointing right and down a bit. To add these two vectors, move the second one so that its tail sits at the tip of the first one. Then, if you draw a new vector from the tail of the first one to where the tip of the second one now sits, that new vector is their sum. This definition of addition, by the way, is pretty much the only time in linear algebra where we let vectors stray away from the origin. Now, why is this a reasonable thing to do? Why this definition of addition and not some other one? Well, the way I like to think about it is that each vector represents a certain movement, a step with a certain distance and direction in space. If you take a step along the first vector, then take a step in the direction and distance described by the second vector, the overall effect is just the same as if you moved along the sum of those two vectors to start with. You could think about this as an extension of how we think about adding numbers on a number line. One way that we teach kids to think about this, say with 2 plus 5, is to think of moving two steps to the right, followed by another five steps to the right. The overall effect is the same as if you just took seven steps to the right. In fact, let's see how vector addition looks numerically. The first vector here has coordinates 1, 2, and the second one has coordinates 3, negative 1. When you take the vector sum using this tip-to-tail method, you can think of a four-step path from the origin to the tip of the second vector: walk one to the right, then two up, then three to the right, then one down. Reorganizing these steps so that you first do all of the rightward motion, then do all the vertical motion, you can read it as saying, first move 1 plus 3 to the right, then move 2 minus 1 up.
So the new vector has coordinates 1 plus 3 and 2 plus negative 1. In general, vector addition in this list-of-numbers conception looks like matching up their terms and adding each pair together. The other fundamental vector operation is multiplication by a number. Now, this is best understood just by looking at a few examples. If you take the number 2 and multiply it by a given vector, it means you stretch out that vector so that it's two times as long as when you started. If you multiply that vector by, say, 1/3, it means you squish it down so that it's 1/3 the original length. When you multiply it by a negative number, like negative 1.8, then the vector first gets flipped around, then stretched out by that factor of 1.8. This process of stretching or squishing or sometimes reversing the direction of a vector is called scaling, and whenever you catch a number like 2 or 1/3 or negative 1.8 acting like this, scaling some vector, you call it a scalar. In fact, throughout linear algebra, one of the main things that numbers do is scale vectors, so it's common to use the word scalar pretty much interchangeably with the word number. Numerically, stretching out a vector by a factor of, say, 2 corresponds with multiplying each of its components by that factor, 2. So in the conception of vectors as lists of numbers, multiplying a given vector by a scalar means multiplying each one of those components by that scalar. You'll see in the following videos what I mean when I say that linear algebra topics tend to revolve around these two fundamental operations: vector addition and scalar multiplication. And I'll talk more in the last video about how and why the mathematician thinks only about these operations, independent of and abstracted away from however you choose to represent vectors. In truth, it doesn't matter whether you think about vectors as fundamentally being arrows in space, like I'm suggesting you do, that happen to have a nice numerical representation, or fundamentally as lists of numbers that happen to have a nice geometric interpretation. The usefulness of linear algebra has less to do with either one of these views than it does with the ability to translate back and forth between them. It gives the data analyst a nice way to conceptualize many lists of numbers in a visual way, which can seriously clarify patterns in data and give a global view of what certain operations do. And on the flip side, it gives people like physicists and computer graphics programmers a language to describe space and the manipulation of space using numbers that can be crunched and run through a computer. When I do mathy animations, for example, I start by thinking about what's actually going on in space, and then get the computer to represent things numerically, thereby figuring out where to place the pixels on the screen, and doing that usually relies on a lot of linear algebra understanding. So there are your vector basics, and in the next video, I'll start getting into some pretty neat concepts surrounding vectors, like span, bases, and linear dependence. See you then.
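To make those two operations concrete in the list-of-numbers view, here is a minimal Python sketch, my own illustration rather than anything from the video; the function names are made up.

```python
def add(v, w):
    """Tip-to-tail addition: match up the terms and add each pair."""
    return [vi + wi for vi, wi in zip(v, w)]

def scale(s, v):
    """Scaling: multiply every component by the scalar s."""
    return [s * vi for vi in v]

print(add([1, 2], [3, -1]))    # [4, 1], i.e. 1 plus 3 and 2 plus negative 1
print(scale(2, [3, 1]))        # [6, 2], twice as long
print(scale(-1.8, [1, 2]))     # [-1.8, -3.6], flipped and stretched
```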
Oh, wait, actually the best Wordle opener is not “crane”…
Last week I put up this video about solving the game Wordle, or at least trying to solve it, using information theory. And I wanted to add a quick, uh, what should we call this? An addendum? A confession? Basically, I just want to explain a place where I made a mistake. It turns out there was a very slight bug in the code that I was running to recreate Wordle and then run all of the algorithms to solve it and test their performance. And it's one of those bugs that affects a very small percentage of cases, so it was easy to miss, and it has only a very slight effect that for the most part doesn't really matter. Basically, it had to do with how you assign a color to a guess that has the same letter repeated in it. For example, if you guess speed and the true answer is abide, how should you color those two E's from the guess? Well, the way that it works with the Wordle conventions is that the first E would be colored yellow, and the second one would be colored gray. You might think of that first one as matching up with something from the true answer, and the grayness is telling you there is no second E. By contrast, if the answer was something like erase, both of those E's would be colored yellow, telling you that there is a first E in a different location, and there's a second E, also in a different location. Similarly, if one of the E's hits and it's green, then that second one would be gray in the case where the true answer has no second E, but it would be yellow in the case where there is a second E and it's just in a different location. Long story short, somewhere along the way, I accidentally introduced behavior that differs from these conventions slightly. And honestly, it was really dumb. Basically, at some point in the middle of the project, I wanted to speed up some of the computations, and I was trying a little trick for how it computed the value for this pattern between any given pair of words, and, you know, I just didn't really think it through, and it introduced this slight change. The ironic part is that, in the end, the actual way to make things fastest is to precompute all those patterns so that everything is just a lookup, and so it wouldn't matter how long it takes to do each one, especially if you're writing hard-to-read, buggy code to make it happen. You know, you live and you learn. As far as how this affects the actual video, I mean, very little of substance really changes. Of course, the main lesson about what is information, what is entropy, all that stays the same. Every now and then, if I'm showing on screen some distribution associated with a given word, that distribution might actually be a little bit off, because some of the buckets associated with various patterns should include either more or fewer true answers. Even then, it doesn't really come up, because it was very rare that I would be showing a word with repeated letters that also hit this edge case. But one of the very few things of substance that does change, and that arguably does matter a fair bit, was the final conclusion, around how, if we want to find the optimal possible score for the Wordle answer list, what opening guess does such an algorithm use? In the video, I said the best performance that I could find came from opening with the word crane, which was true only in the sense that the algorithms were playing a very slightly different game. After fixing it and rerunning it all, there is a different answer for what the theoretically optimal first guess is for this particular list.
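For the record, here is a minimal sketch of the coloring convention just described, my own reconstruction rather than the channel's actual code; the function and variable names are made up, and it assumes five-letter words. Greens claim their letters first, and then yellows consume whatever copies remain, left to right, which is exactly the duplicate-letter edge case the bug got wrong.

```python
def wordle_pattern(guess, answer):
    """Colors for each guess letter: 'G' green, 'Y' yellow, '-' gray."""
    pattern = ["-"] * 5
    leftover = []
    # Greens claim their answer letters first.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            pattern[i] = "G"
        else:
            leftover.append(a)
    # Yellows then consume unclaimed copies, left to right.
    for i, g in enumerate(guess):
        if pattern[i] != "G" and g in leftover:
            pattern[i] = "Y"
            leftover.remove(g)
    return "".join(pattern)

print(wordle_pattern("speed", "abide"))  # --Y-Y: first E yellow, second gray
print(wordle_pattern("speed", "erase"))  # Y-YY-: both E's yellow
```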
And look, I know that you know that the point of the video is not to find some technically optimal answer to some random online game. The point of the video is to shamelessly hop on the bandwagon of an internet trend to sneak-attack people with an information theory lesson. And that's all good, I stand by that part. But I know how the internet works, and for a lot of people, the one main takeaway was: what is the best opener for the game Wordle? And I get it, I walked into that, because I put it in the thumbnail, but presumably you can forgive me if I want to add a little correction here. And a more meaningful reason to circle back to all this, actually, is that I never really talked about what went into that final analysis, and it's interesting as a sub-lesson in its own right, so that's worth doing here. Now, if you'll recall, most of our time last video was spent on the challenge of trying to write an algorithm to solve Wordle that did not use the official list of all possible answers, since that feels a bit like overfitting to a test set, and what's more fun is building something that's resilient. This is why we went through the whole process of looking at relative word frequencies in the English language to come up with some notion of how likely each one would be to be included as a final answer. However, for what we're doing here, where we're just trying to find an absolute best performance, period, I am incorporating that official list and just shamelessly overfitting to the test set, which is to say we know with certainty whether a word is included or not, and we can assign a uniform probability to each one. If you'll remember, the first step in all of this was to say, for a particular opening guess, maybe something like my old favorite, crane, how likely is it that you would see each of the possible patterns? And in this context, where we are shamelessly overfitting to the Wordle answer list, all that involves is counting how many of the possible answers give each one of these patterns. And then, of course, most of our time was spent on this kind of funny-looking formula to quantify the amount of information that you would get from this guess. That basically involves going through each one of those buckets and asking how much information you would gain. It has this log expression that is a fanciful way of saying how many times you would cut your space of possibilities in half if you observed a given pattern. We take a weighted average of all of those, and it gives us a measure of how much we expect to learn from this first guess. (Both this formula and the two-step search described below are sketched in code at the end of this transcript.) In a moment, we'll go deeper than this, but if you simply search through all 13,000 different words that you could start with, and you ask which one has the highest expected information, it turns out the best possible answer is soare, which doesn't really look like a real word, but I guess it's an obsolete term for a baby hawk. The top 15 openers by this metric happen to look like this, but these are not necessarily the best opening guesses, because they're only looking one step in, with the heuristic of expected information to try to estimate what the true score will be. But there's few enough patterns that we can do an exhaustive search two steps in. For example, let's say you opened with soare, and the pattern you happen to see was the most likely one, all grays. Then you can run identical analysis from that point.
For a given proposed second guess, something like kitty, what's the distribution across all patterns in that restricted case, where we're restricted only to the words that would produce all grays for soare? And then we measure the flatness of that distribution using this expected information formula, and we do that for all 13,000 possible words that we could use as a second guess. Doing this, we can find the optimal second guess in that scenario, and the amount of information we would expect to get from it. And if we wash, rinse, and repeat, and do this for all of the different possible patterns that you might see, we get a full map of all the best possible second guesses, together with the expected information of each. From there, if you take a weighted average of all those second-step values, weighted according to how likely you are to fall into that bucket, it gives you a measure of how much information you're likely to gain from the guess soare after the second step. When we use this two-step metric as our new means of ranking, the list gets shaken up a bit. soare is no longer in first place; it falls back to 14th, and instead what rises to the top is slane. Again, it doesn't feel very real, and it looks like it is a British term for a spade that's used for cutting turf. Alright, but as you can see, it is a really tight race among all of these top contenders for who gains the most information after those two steps. And even still, these are not necessarily the best opening guesses, because information is just the heuristic; it's not telling us the actual score if you actually play the game. What I did is I ran the simulation of playing all 2,315 possible Wordle games, with all possible answers, on the top 250 from this list. And by doing this, seeing how they actually perform, the one that ends up very marginally with the best possible score turns out to be salet, which is, let's see, salet, an alternate spelling for sallet, which is a light medieval helmet. Alright, if that feels a little bit too fake for you, which it does for me, you'll be happy to know that trace and crate give almost identical performance. Each of them has the benefit of obviously being a real word, so there is one day when you get it right on the first guess, since both are actual Wordle answers. This move from sorting based on the best two-step entropies to sorting based on the lowest average score also shakes up the list, but not nearly as much. For example, salet was previously in third place before it bubbles to the top, and crate and trace were fourth and fifth. If you're curious, you can get slightly better performance from here by doing a little brute-forcing. There's a very nice blog post by Jonathan Olson, if you're curious about this, where he also lets you explore what the optimal following guesses are for a few of the starting words, based on these optimal algorithms. Stepping back from all this, though, I'm told by some people that it, quote, ruins the game to overanalyze it like this and try to find an optimal opening guess. You know, it feels kind of dirty if you use that opening guess after learning it, and it feels inefficient if you don't. But the thing is, I don't actually think this is the best opener for a human playing the game. For one thing, you would need to know what the optimal second guess is for each one of the patterns that you see. And more importantly, all of this is in a setting where we are absurdly overfit to the official Wordle answer list.
The moment that, say, the New York Times chooses to change what that list is under the hood, all of this would go out the window. The way that we humans play the game is just very different from what any of these algorithms are doing. We don't have the word list memorized, we're not doing exhaustive searches; we get intuition from things like, what are the vowels, and how are they placed? I would actually be most happy if those of you watching this video promptly forgot what happens to be the technically best opening guess, and instead came away remembering things like how to quantify information, or the fact that you should look out for when a greedy algorithm falls short of the globally best performance that you would get from a deeper search. For my taste at least, the joy of writing algorithms to try to play games actually has very little bearing on how I like to play those games as a human. The point of writing algorithms for all this is not to affect the way that we play the game; it's still just a fun word game. It's to hone our muscles for writing algorithms in more meaningful contexts elsewhere.
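As promised, here is a minimal sketch of the expected-information formula and the two-step search described above, my own reconstruction under a uniform probability over the known answer list; it assumes the wordle_pattern function sketched earlier in this document is in scope, and every name here is made up for illustration.

```python
import math
from collections import Counter, defaultdict

def expected_information(guess, answers):
    """Weighted average of log2(1/p) over the pattern buckets: the number
    of times a guess is expected to cut the space of possibilities in half."""
    buckets = Counter(wordle_pattern(guess, a) for a in answers)
    total = len(answers)
    return sum((n / total) * math.log2(total / n) for n in buckets.values())

def two_step_information(opener, answers, guess_pool):
    """First-step bits plus, for each pattern bucket, the bits of the best
    second guess, weighted by how likely you are to land in that bucket."""
    buckets = defaultdict(list)
    for a in answers:
        buckets[wordle_pattern(opener, a)].append(a)
    total = len(answers)
    score = expected_information(opener, answers)
    for remaining in buckets.values():
        best = max(expected_information(g, remaining) for g in guess_pool)
        score += (len(remaining) / total) * best
    return score

# Ranking openers would then look like:
#   max(guess_pool, key=lambda g: two_step_information(g, answers, guess_pool))
# (A real run over ~13,000 guesses would precompute all patterns first.)
```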
The Wallis product for pi, proved geometrically
Alright, I think you're gonna like this. I want to show you a beautiful result that reveals a surprising connection between a simple series of fractions and the geometry of circles. But unlike some other results like this that you may have seen before, this one involves multiplying things instead of adding them up. Now, the video you're about to watch is particularly exciting for us at 3blue1brown, because it came about a little differently from most of the videos that we've done. If you step back and think about it, the value of any kind of math presentation comes from a combination of the underlying math and then all of the choices that go into how to communicate it. And for almost all of the content on this channel, the underlying math is something that's well known in the field, either based on general theory or some particular paper, and my hope is for the novelty to come from the communication half. And with this video, the result we're discussing, a very famous infinite product for pi known as the Wallis product, is indeed well-known math. However, what we'll be presenting is, to our knowledge, a more original proof of this result. For context, after watching our video on the Basel problem, Sridhar, the new 3blue1brown member, who some of you may remember from the video about color and winding numbers, well, he spent some time thinking about the approach taken in that video, as well as thinking about the connection between the Basel problem and the Wallis product, and he stumbled into a new proof of the relationship between the Wallis product and pi. I mean, I'll leave open the possibility that an argument of this style is hidden somewhere in the literature beyond what our searching pulled up, but I can at least say that it was found independently, and that, if it does exist out there, it has done a fantastic job hiding itself from the public view. So without further ado, let's dive into the math. Consider the product 2 over 1, times 4 over 3, times 6 over 5, on and on and on, where what we're doing is including all the even numbers as the numerators and odd numbers as the denominators. Of course, all the factors here are bigger than 1, so as you go through the series, multiplying in each new factor one by one, the result keeps getting bigger and bigger. In fact, it turns out that it eventually gets bigger than any finite limit, so in that sense it's not super interesting; it just blows up to infinity. And on the other hand, if you shift things over slightly, looking at 2 divided by 3, times 4 divided by 5, times 6 divided by 7, on and on, all of those factors are less than 1, so the result keeps getting smaller and smaller, and this time, the series turns out to approach 0. But what if we mixed the two? If you looked at 2 over 1, times 2 over 3, times 4 over 3, times 4 over 5, on and on like this, where now the partial products along the way keep going up, and then down, and then up a little bit, and then down a little bit less, until all of these jumps and falls amount to almost no change at all, well, then it must be converging to some kind of positive, finite value. But what is that value? Believe it or not, we'll discover that this equals pi divided by 2. And to understand the connection between this product, apparently unrelated to circles, and pi, we're going to need to take a slight digression through a few geometric tools. It's a productive digression, though, since these are some useful ideas to have in your problem-solving tool belt for all kinds of other math.
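Before the geometry, it's easy to watch all three of these products numerically; here is a minimal Python sketch, my own illustration rather than anything from the video.

```python
import math

up, down, mixed = 1.0, 1.0, 1.0
for n in range(1, 100_001):
    up *= (2 * n) / (2 * n - 1)      # 2/1 * 4/3 * 6/5 * ...  grows without bound
    down *= (2 * n) / (2 * n + 1)    # 2/3 * 4/5 * 6/7 * ...  shrinks toward 0
    mixed *= (2 * n) / (2 * n - 1) * ((2 * n) / (2 * n + 1))  # interleaved

print(up, down)            # roughly ~560 and ~0.0028 after 100,000 steps each
print(mixed, math.pi / 2)  # both ~1.5708: the Wallis product
```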
The setup here involves a circle with many different points evenly spaced around it, and then one additional special point. This is similar to what we had in the video on the Basel problem, where we pictured these evenly spaced points as lighthouses and thought of that special point as an observer. Now, back then, the quantity we cared about involved looking at the distance between the observer and each lighthouse, then taking the inverse square of each of those distances and adding them all up. This is why we had the whole narrative with lighthouses in the first place, since the inverse square law gave a really nice physical interpretation to this quantity: it was the total amount of light received by that observer. But despite that nice physical interpretation, there's nothing magical about adding inverse square distances; that just happened to be what was useful for that particular problem. Now, to tackle our new problem of 2 over 1, times 2 over 3, times 4 over 3, times 4 over 5, and so on, we're going to do something similar but different in the details. Instead of using the inverse square distances, just look at the distances themselves, directly. And instead of adding them up, we'll be multiplying them, giving a quantity that I'll be referring to as the distance product for the observer. That'll be important. And even though this distance product no longer has a nice physical analogy, I still kind of want to illustrate it with lighthouses and an observer, because, well, I don't know, it's pretty, and also it's just more fun than abstract geometric points. Now, for this proof of the Wallis product, we're going to need two key facts about this distance product, two little lemmas. First, if the observer is positioned halfway between two lighthouses on the circle, this distance product, the thing that you get by multiplying together the lengths of all these lines, works out to be exactly 2, no matter how many lighthouses there are. And second, if you remove one of those lighthouses and put the observer in its place, the distance product from all of the remaining lighthouses happens to equal the number of lighthouses that you started with. Again, no matter how many lighthouses there are. And if those two facts seem crazy, I agree. I mean, it's not even obvious that the distance product here should work out to be an integer in either case. And also, it seems super tricky to actually compute all of the distances and then multiply them together like this. But it turns out there is a, well, a trick to this tricky calculation that makes it quite simple. The main idea is that the geometric property of these points being evenly spaced around a circle corresponds to a really nice algebraic property, if we imagine this to be the unit circle in the complex plane, with each of those lighthouses now sitting on some specific complex number. Some of you might recognize these as the roots of unity, but let me quickly walk through this idea in case any of you are unfamiliar. Think about squaring one of these numbers. It has a magnitude of 1, so that's going to stay the same, but the angle it makes with the horizontal will double. That's how squaring complex numbers works. Similarly, cubing this number is going to triple the angle that it makes with the horizontal. And in general, raising it to the nth power multiplies the angle by n. So, for example, on screen right now there are seven evenly spaced points around the unit circle, which I'll call L0, L1, L2, and so on.
And they're rotated in such a way that L0 is sitting at the number 1 on that right-hand side. So, because the angle that each one of these makes with the horizontal is an integer multiple of 1 seventh of a turn, raising any one of these numbers to the seventh power rotates you around to landing on the number 1. In other words, these are all solutions to the polynomial equation x to the seventh minus 1 equals 0. But, on the other hand, we could construct a polynomial that has these numbers as roots a totally different way: by taking x minus L0, times x minus L1, on and on, up to x minus L6. I mean, you plug in any one of these numbers, and that product will have to equal 0. And because these two degree-seven polynomials have the same seven distinct roots and the same leading term, it's just x to the seventh in both cases, they are in fact one and the same. Now, take a moment to appreciate just what a marvelous fact that is. This right-hand side looks like it would be an absolute nightmare to expand. Not only are there a lot of terms, but writing down what exactly each of those complex numbers is is going to land us in a whole mess of sines and cosines. But because of the symmetry of the setup, we know that when all of the algebraic dust settles, it's going to simplify down to just being x to the seventh minus 1. All of the other terms will cancel out. And of course, there's nothing special about seven here. If you have n points evenly spaced around a circle like this, they are the roots of x to the n minus 1 equals 0. And now you might see why this would give a nice simplifying trick for computing the distance product that we defined a moment ago. If you consider the observer to be any other complex number, not necessarily on the circle, and then you plug in that number for x, that right-hand side there is giving you some new complex number, whose magnitude is the product of the distances between the observer and each lighthouse. But look at that left-hand side: it is a dramatically simpler way to understand what that product is ultimately going to simplify down to. Surprisingly, this means that if our observer sits on the same circle as the lighthouses, the actual number of lighthouses won't be important. It's only the fraction of the way between adjacent lighthouses that describes our observer which will come into play. If this fraction is f, then observer to the power n lands f of the way around a full circle. So the magnitude of the complex number observer to the n minus 1 is the distance between the number 1 and a point f of the way around a full unit circle. For example, on screen right now we have seven lighthouses, and the observer is sitting one third of the way between the first and the second. So when you raise the complex number associated with that observer to the seventh power, it ends up one third of the way around the full circle. So the magnitude of observer to the seventh minus 1 would be the length of this chord right here, which for one third of the way around the circle happens to be about 1.73. And remember, this value is, quite remarkably, the same as the full distance product that we care about. We could increase or decrease the number of lighthouses, and no matter what, so long as that observer is one third of the way between lighthouses, we would always get the length of this same chord as our distance product.
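All of this is easy to sanity-check numerically; here is a minimal Python sketch, my own illustration rather than anything from the video, verifying the polynomial identity and both of the key facts about the distance product.

```python
import cmath, math

def roots_of_unity(n):
    return [cmath.exp(2j * math.pi * k / n) for k in range(n)]

def distance_product(observer, lighthouses):
    prod = 1.0
    for L in lighthouses:
        prod *= abs(observer - L)
    return prod

n = 7
lights = roots_of_unity(n)

# The product of distances really is |observer**n - 1|:
obs = cmath.exp(2j * math.pi * (1 / 3) / n)   # one third of the way to L1
print(distance_product(obs, lights))          # ~1.732, the chord length
print(abs(obs ** n - 1))                      # the same number

# Key fact one: halfway between two lighthouses, the product is exactly 2.
halfway = cmath.exp(2j * math.pi * 0.5 / n)
print(distance_product(halfway, lights))      # ~2.0, for any n

# Key fact two: standing on L0 and ignoring its zero factor gives n.
print(distance_product(1 + 0j, lights[1:]))   # ~7.0
```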
In general, let's define a special function for ourselves, chord of f, which will mean, for any fraction f, the length of a chord corresponding to that fraction of a unit circle. So, for example, what we just saw was chord of 1/3. Actually, it's not so hard to see that chord of f amounts to the same thing as 2 times the sine of f halves times 2 pi, which is 2 times the sine of f times pi. But sometimes it's easier to just think of it as chord of f. So the result we've just shown is that, for an observer f of the way between two lighthouses, the total distance product, as complicated as that might seem, works out to be exactly chord of f, no matter how many lighthouses there are. So, in particular, think about chord of 1/2. This is the distance between two points on opposite ends of a unit circle, which is 2. So we see that no matter how many lighthouses there are, equally spread around the unit circle, putting an observer exactly halfway along the circle between two of them results in a distance product of precisely 2. And that's our first key fact, so just tuck that away. For the next key fact, imagine putting the observer right on one of the lighthouses. Well, then, of course, the distance product is 0; the distance-zero lighthouse ends up annihilating all other factors. But suppose we just got rid of that one troublesome lighthouse and considered only the contributions from all of the other ones. What would that distance product work out to be? Well, now, instead of considering the polynomial observer to the n minus 1, which has a root at all of these n roots of unity, we're looking at the polynomial observer to the n minus 1, divided by observer minus 1, which has a root at all of the roots of unity except for the number 1 itself. And a little algebra shows that this fraction is the same thing as 1, plus observer, plus observer squared, on and on and on, up to observer to the n minus 1. And so, if you plug in observer equals 1, since that's the number he's sitting on, what do you get? All of the terms here become 1, so it works out to be n, which means the total distance product for this setup equals the number of original lighthouses. Now, this does depend on the total number of lighthouses, but only in a very simple way. I mean, think about this, this is incredible: the total distance product that an observer sitting at one of the lighthouses receives from all other lighthouses is precisely n, where n is the total number of lighthouses, including the ignored one. That is our second key fact. And by the way, proving geometric facts with complex polynomials like this is pretty standard in math, and if you went up to your local mathematician and showed him or her these two facts, or other facts like these, they'd quickly recognize both that these facts are true and how to prove them, using the methods we just showed. And now, so can you. So, next, with both these facts in our back pocket, let's see how to use them to understand the product that we're interested in, and how it relates to pi. Take this setup with n lighthouses evenly spaced around a unit circle, and imagine two separate observers, what I'll call the keeper and the sailor. Put the keeper directly on one of the lighthouses, and put the sailor halfway between that point and the next lighthouse. The idea here will be to look at the distance product for the keeper, divided by the distance product for the sailor, and then we're going to compute this ratio in two separate ways.
From the first key fact, we know that the total distance product for the sailor is 2. And the distance product for the keeper? Well, it's zero, since he's standing right on top of one. But if we got rid of that lighthouse, then by our second key fact, the remaining distance product for that keeper is n. And of course, by getting rid of that lighthouse, we've also gotten rid of its contribution to the sailor's distance product, so that denominator now has to be divided by the distance between the two observers. And simplifying this just a little bit, it means that the ratio between the keeper's distance product and the sailor's is n times the distance between the two observers, all divided by 2. But we could also compute this ratio in a different way, by considering each lighthouse individually. For each lighthouse, think about its contribution to the keeper's distance product, meaning just its distance to the keeper, divided by its contribution to the sailor's distance product, its distance to the sailor. And when we multiply all of these factors up over each lighthouse, we have to get the same ratio in the end, n times the distance between the observers, all divided by 2. Now, that might seem like a super messy calculation. But as n gets larger, this actually gets simpler for any particular lighthouse. For example, think about the first lighthouse after the keeper, in the sense of counterclockwise from him. This is a bit closer to the sailor than it is to the keeper. Specifically, the angle from this lighthouse to the keeper is exactly twice the angle from this lighthouse to the sailor. Now those angles aren't exactly proportional to these straight-line distances, but as n gets larger and larger, the correspondence gets better and better. And for a very large n, the distance from the lighthouse to the keeper is very nearly twice the distance from that lighthouse to the sailor. And in the same way, looking at the second lighthouse after the keeper, it has an angle-to-keeper divided by angle-to-sailor ratio of exactly four thirds, which is very nearly the same as the distance-to-keeper divided by distance-to-sailor ratio as n gets large. And that third lighthouse, L3, is going to contribute a fraction that gets closer and closer to six fifths as n approaches infinity. Now, for this proof, we're going to want to consider all the lighthouses on the bottom of the circle a little bit differently, which is why I've enumerated them negative 1, negative 2, negative 3, and so on. If you look at that first lighthouse before the keeper, it has a distance-to-keeper over distance-to-sailor ratio that approaches two thirds as n approaches infinity. And then the second lighthouse before it, L negative 2 here, contributes a ratio that gets closer and closer to four fifths. And the third lighthouse, L negative 3, contributes a fraction closer and closer to six sevenths, and so on. Combining this over all of the lighthouses, we get the product 2 over 1 times 2 over 3 times 4 over 3 times 4 over 5 times 6 over 5 times 6 over 7, on and on and on. This is the product that we're interested in studying. And in this context, each one of those terms reflects what the contribution for a particular lighthouse is as n approaches infinity. And when I say contribution, I mean the contribution to this ratio of the keeper's distance product to the sailor's distance product, which we know at every step has to equal n times the distance between the observers divided by 2. So what does that value approach as n approaches infinity?
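Before answering that, here is a rough check on those per-lighthouse ratios, under the labeling assumed above, where lighthouse k sits at an angle of k over n of a full turn, and the sailor sits half a step from the keeper:

```python
import math

# Ratio of (distance to keeper) / (distance to sailor) for lighthouse k,
# with the keeper at angle 0 and the sailor half a lighthouse-gap away.
def ratio(k, n):
    to_keeper = 2 * math.sin(math.pi * abs(k) / n)
    to_sailor = 2 * math.sin(math.pi * abs(2 * k - 1) / (2 * n))
    return to_keeper / to_sailor

for k in (1, 2, 3, -1, -2, -3):
    print(k, [round(ratio(k, n), 4) for n in (10, 100, 10000)])
# k = 1, 2, 3 approach 2/1, 4/3, 6/5; k = -1, -2, -3 approach 2/3, 4/5, 6/7.
```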
Well, the distance between the observers is half of 1 over n of a full turn around the circle. And since this is a unit circle, its total circumference is 2 pi. So the distance between the observers approaches pi divided by n. And therefore, n times this distance divided by 2 approaches pi divided by 2. So there you have it. Our product, 2 over 1 times 2 over 3 times 4 over 3 times 4 over 5, on and on and on, must approach pi divided by 2. This is a truly marvelous result. And it's known as the Wallis product, named after 17th-century mathematician John Wallis, who first discovered this fact in a way more convoluted way. And also, a little bit of trivia, this is the same guy who discovered, or rather invented, the infinity symbol. And actually, if you look back at this argument, we've pulled a little bit of sleight of hand in the informality here, which the particularly mathematically sophisticated among you might have caught. What we have here is a whole bunch of factors, which we knew multiplied together to get n times the distance between the observers divided by 2. And then we looked at the limit of each factor individually as n went to infinity, and concluded that the product of all of those limiting terms had to equal whatever the limit of n times the distance between the observers divided by 2 is. But what that assumes is that the product of limits is equal to the limit of products, even when there's infinitely many factors. And this kind of commuting of limits in infinitary arithmetic? Well, it's not always true. It often holds, but it sometimes fails. Here, let me show you a simple example of a case where this kind of commuting of limits doesn't actually work out. So we've got a grid here where every row has a single seven and then a whole bunch of ones. So if you were to take the infinite product of each row, you just get seven for each one of them. So since every one of these products is seven, the limit of those products is also seven. But look at what happens if you take the limits first. If you look at each column, the limit of a given column is going to be one, since at some point it's nothing but ones. But then, if you're taking the product of those limits, you're just taking the product of a bunch of ones, so you get a different answer, namely one. Luckily, mathematicians have spent a lot of time thinking about this phenomenon, and they've developed tools for quickly seeing certain conditions under which this exchanging of the limits actually works. In this case, a particular standard result known as dominated convergence quickly assures us that the argument we just showed will go through in full rigor. For those of you who are interested, Sridhar has written up a supplemental blog post to this video, which covers those details, along with many more things. And I should also say, we need to be a little careful about how to interpret a product like this. Remember, we have contributions from lighthouses counterclockwise from the keeper, as well as lighthouses clockwise from the keeper. And what we did was interleave these in order to get our product. Now, the lighthouses counterclockwise from the keeper contribute 2 over 1, 4 over 3, 6 over 5, on and on, and the ones clockwise from the keeper contribute 2 over 3, 4 over 5, 6 over 7. And like I said before, if you play around with those individual series, you'll find that the first one gets larger and larger and blows up to infinity, and the second one gets smaller and smaller, approaching 0.
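The interleaved product itself is easy to watch converge. A quick sketch, bundling the counterclockwise and clockwise factors one for one, as the argument requires:

```python
import math

# Partial Wallis products: (2/1)(2/3) * (4/3)(4/5) * (6/5)(6/7) * ...
product = 1.0
for k in range(1, 100001):
    product *= (2 * k) / (2 * k - 1)   # counterclockwise lighthouse factor
    product *= (2 * k) / (2 * k + 1)   # clockwise lighthouse factor
print(product, math.pi / 2)            # the two agree to several digits
```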
So it's actually pretty delicate to make sense out of this overall product in terms of computing the two halves separately and then combining them. And indeed, we'll find that if you intermix these two halves differently, for example taking twice as many factors from one of them for each factor from the other, you could get a different result for the overall product. It's only when you specifically combine them in this one-for-one manner that you get a product that converges to pi halves. This is something that falls out of the way that dominated convergence justifies us in computing limits the way we did. And again, for more details, see the supplemental post. Still, those are just technicalities. The conceptual gist for what's going on here is exactly what we just showed. And in fact, after doing all that work, it would be a shame not to take a quick moment to talk about one more neat result that falls out of this argument. Arguably, this is the coolest part of the whole proof. You see, we can generalize this whole discussion. Think back to when we discovered our first key fact, where we saw that you could not only consider placing the sailor precisely halfway between lighthouses, but any fraction f of the way between adjacent lighthouses. In that more general setting, the distance product for the sailor wasn't necessarily 2, but it was chord of f, where f is that fraction of the way between lighthouses. And if we go through the same reasoning that we just did with the sailor at this location instead, and change nothing else, what we'll find is that the ratio of the keeper's distance product to the sailor's distance product is now n times the distance between them divided by chord of f, which approaches f times 2 pi divided by chord of f as n gets larger. And in the same way as before, you could alternatively calculate this by considering the contributions from each individual lighthouse. If you take the time to work this out, the k-th lighthouse after the keeper will contribute a factor of k divided by k minus f to this ratio. And all the lighthouses before the keeper, they contribute the same thing, but you're just plugging in negative values for k. If you combine all those contributions over all non-zero integers k, where, in the same way as before, you have to be careful about how you bundle the positive and negative k terms together, what you'll get is that the product of k divided by k minus f, over all non-zero integers k, is going to equal f times 2 pi divided by chord of f. Put another way, since chord of f is 2 times the sine of f pi, this product is the same as f times 2 pi divided by 2 times sine of f pi, which is f pi over sine of f pi. Now, rewriting this just a little bit more, what you get is a pretty interesting fact. Sine of f times pi is equal to f pi times this really big product, the product of 1 minus f over k, over all non-zero integers k. So what we found is a way to express sine of x as an infinite product, which is really cool if you think about it. So not only does this proof give us the Wallis product, which is incredible in its own right, it also generalizes to give us the product formula for the sine. And what's neat about that is that it connects to how Euler originally solved the Basel problem, the sum that we saw in the previous video. He was looking at this very infinite product for sine. I mean, connecting these formulas for pi to circles is one thing, but connecting them to each other is another thing entirely.
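For anyone who wants to see that sine product in action, here is a short numerical sketch. It bundles each positive k with its negative counterpart, so that each pair of factors contributes 1 minus f squared over k squared; the test value f = 0.3 is arbitrary:

```python
import math

# sin(f*pi) = f*pi * product over nonzero k of (1 - f/k),
# pairing k with -k so each pair gives (1 - f/k)(1 + f/k) = 1 - f^2/k^2.
f = 0.3
product = f * math.pi
for k in range(1, 100001):
    product *= (1 - f / k) * (1 + f / k)
print(product, math.sin(f * math.pi))
```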
And once again, if you want more details on all of this, check out the supplementary blog post.
The impossible chessboard puzzle
You walk alone into a room, and you find a chessboard. Each of the 64 squares has a coin sitting on top of it. And taking a step back, this is going to be one of those classic prisoner puzzles, where a strangely math-obsessed warden offers you and a fellow inmate a chance for freedom, but only if the two of you solve some elaborate scheme that they've laid out. In this case, what they've done is carefully turned over each of the coins to be heads or tails according to whatever pattern they want it to be, and then they show you a key. They put that key inside one of the chessboard squares, each square is a secret compartment or something like that, so you know where the key is. And the goal is going to be to get prisoner number two to also know where the key is, but the only thing that the warden allows you to do before you leave the room is to turn over one and only one of these coins. At that point, you walk out, your fellow prisoner walks in, and with no information other than the set of heads and tails that they're looking at, which you've only barely tweaked, they're supposed to deduce where the key is hidden, potentially winning freedom for the both of you. As is typical with these puzzles, the two of you can strategize ahead of time if you want, but you won't know what the specific layout of heads and tails is. And moreover, the warden can listen in on your strategy and do their absolute best to thwart it with some adversarial arrangement of the coins and the key. So I first heard about this puzzle over dinner conversation, actually, at a wedding, and it just totally sucked me in. I remember the drive home was maybe three hours, and I think my mind was glued to the topic of flipping coins and encoding state that whole time. But the puzzle sticks with you even after that. After I solved it, I fell into these two surprisingly interesting rabbit holes. One was to prove that the challenge is actually impossible if you vary the setup a little bit, maybe making it a 6x6 chessboard, or maybe removing one of the squares. And to give you a little sense for where that rabbit hole leads, this video is going to end with an especially pleasing way to paint the corners of a four-dimensional cube. The other rabbit hole was to work out how closely you can connect the solution of this puzzle with error correction, which is a super important topic in computer science and information theory. The idea is that when computers send and store data, the messiness of the real world inevitably flips a bit now and then, and that can completely change how the data is read. So error-correcting codes are a way to add a shockingly small amount of information to a message that makes it possible for the receiver to identify both when there is an error, and more impressively, precisely how to fix it. It turns out that the intuition for solving this puzzle is essentially the same as the intuition behind these things called Hamming codes, which are one of the earliest examples of highly efficient error correction. Which is all to say, time spent mulling over this problem is not as useless as you might think it is. Now, you and I aren't actually going to go through the solution here. Instead, I filmed a video all about that on Stand-up Maths with Matt Parker, who I'm sure many of you recognize from his combined YouTube and stand-up and book fame. We each talk through our thought process in solving it, and it's good fun, because there are multiple ways of looking at it.
Instead, what I want to do with you here is take a more global view of every possible strategy for this puzzle, and bring you with me down that first rabbit hole of proving why certain variations necessarily leave room for the warden to thwart you, no matter how clever you are. The proof itself is one of those satisfying moments where you shift perspective and it reveals the solution, and the whole context leading up to it is a nice chance to practice reasoning about higher-dimensional objects as a way to draw conclusions about information and data. Plus, it does more to help you appreciate the solution to the original puzzle, when you can see how it is, in a sense, almost impossible. Where to start? What we want is some kind of visualization for what it even means to solve this puzzle, and to build up to the general case, let's knock things down to the simplest case that we can that still has any kind of meaning to it: two squares, two coins, and two possibilities for where the key is. One way that you could solve this is to simply let the second coin communicate where the key is. If it's tails, it means the key is in the left square; if it's heads, it means the key is in the right square. Not a big deal, right? It's one bit of information, so when you need to change that coin, you can flip it, but if you don't need to change it, you can just flip the other coin (there's a tiny simulation of this strategy sketched after this paragraph). First things first, let's stop thinking about these as heads and tails, and start thinking of them as ones and zeros. That's much easier to do math with. Then you can think of these pairs of coins as a set of coordinates, where each of the four possible states that the board can be in sits at a corner of a unit square, like this. This might feel like a silly thing to do when we already know how to solve this case, but it's a good warm-up for turning the larger cases into a kind of geometry. Notice, flipping one of the coins moves you along an edge of the square, since it's only changing one of the coordinates. Our strategy of letting that second coin encode the key location could be drawn by associating the bottom two corners, where the y-coordinate is zero, with the "key is under square zero" state, which means those top two corners are associated with the "key is under square one" state. So think about what it means for our solution to actually work. It means that no matter where you start, if you're forced to take a step along an edge, that is, forced to flip one of the coins, you can always guarantee that you end up in whichever of these two regions you want to. Now the question is, what does it look like for a bigger chessboard? The next simplest case would be three squares, three coins, and three possibilities for where the key is. This gives us eight possible states that the coins can be in. Playing the same game that we did before, interpreting these states as coordinates, it brings us up into three-dimensional space, with each state sitting at the corner of a unit cube. The usefulness in a picture like this is that it gives a very vivid meaning to the idea of turning over one of the coins. Every time you flip a coin, you're walking along the edge of a cube. Now, what would it mean for you and your fellow inmate to have a strategy for this puzzle? Whenever prisoner two walks into that room, they need to be able to associate the state that they're looking at, three bits, basically, with one of three possible squares.
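Here is that promised simulation, a minimal sketch of the two-square strategy, with tails and heads written as 0 and 1, and the function names chosen purely for illustration:

```python
# Coin 1 encodes the key's location; coin 0 soaks up the flip when no change
# is needed. This is just the simple strategy described above.
def prisoner_one(coins, key):
    flip = 1 if coins[1] != key else 0
    coins[flip] ^= 1           # turn over exactly one coin
    return coins

def prisoner_two(coins):
    return coins[1]            # read the key's location off coin 1

# Check every warden setup: both coin layouts, both key locations.
for c0 in (0, 1):
    for c1 in (0, 1):
        for key in (0, 1):
            assert prisoner_two(prisoner_one([c0, c1], key)) == key
print("the two-square strategy wins every time")
```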
We're already thinking very visually, so let's associate those squares with colors, maybe red for square zero, green for square one, and blue for square two. In this conception, coming up with a strategy, any possible strategy, is the same thing as coloring each of the eight corners of the cube either red, green, or blue. So for example, let's say you colored the whole cube red. Well, I don't know if you'd call this a strategy exactly, but it would correspond with always guessing that the key is under square zero. Let's say instead your strategy was to add the first two coins together and use that as an encoding for the key location. Well then, the cube would look like this. What's kind of fun is we can count how many total strategies exist. With three choices for the color of each vertex and eight total vertices, we get three to the power eight. Or, if you're comfortable letting your mind stray to the thought of painting a 64-dimensional cube, you can have fun thinking about the sense in which there are 64 to the two to the 64 total possible strategies for the original puzzle. This is the size of the haystack when you're looking for the needle. Another attempt for the three-square case might look like taking zero times coin zero, plus one times coin one, plus two times coin two, and then reducing that sum mod three if you need to. Over on Stand-up Maths, Matt and I both talk about trying a version of this for the 64-square case, and why it works decently well for a random arrangement of coins, but why it's ultimately doomed. From our view over here, it just looks like one more way to color the cube, but it's worth taking a moment to walk through some of those corners. Let's say that you get into the room, and all three coins are set to tails, so it's like you're starting at the corner zero, zero, zero. If you were to flip coin zero, that doesn't change the sum, so it takes you to another red corner. If you flipped coin one, it increases the sum by one, so it takes you to a green corner. And flipping coin two takes you up to two, which looks like a blue corner. The fact that you always have access to whichever color you want is a reflection of the fact that this strategy will always win if this is the corner that you're starting on. On the other hand, let's say that you started at zero, one, zero. In that case, flipping coin zero takes you to another green corner, since it doesn't change the sum. But flipping either coin one or coin two happens to take you to a red corner. There's simply no way to get to a blue corner. Basically, what's happening here is that you have the option to subtract one by turning off coin one, or to add two by turning on coin two. And if you're working mod three, those are both actually the same operation. But that means that there's no way to change the sum to be two. An adversarial warden who knows your strategy could start with this configuration, put the key under square two, and call it done. But even without thinking about sums mod three or anything like that, whatever the implementation details, you can see this in our picture manifested as a corner that has two neighbors of the same color. If you don't have a bird's-eye view of all possible strategies, when you find that any specific one of them just doesn't work, you're left to wonder: okay, maybe there's a sneaky clever strategy that I just haven't thought of yet. But when we're thinking about colors on the cube, you're naturally led to an interesting combinatorial question.
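Before getting to that question, it's worth seeing this weighted-sum strategy's failure written out concretely. A sketch, with red, green, and blue encoded as 0, 1, and 2:

```python
from itertools import product

# Color each corner of the cube by (0*c0 + 1*c1 + 2*c2) mod 3, then check
# which colors are reachable from each corner with a single coin flip.
def color(corner):
    return sum(i * bit for i, bit in enumerate(corner)) % 3

for corner in product((0, 1), repeat=3):
    reachable = set()
    for i in range(3):
        neighbor = list(corner)
        neighbor[i] ^= 1
        reachable.add(color(tuple(neighbor)))
    if reachable != {0, 1, 2}:
        print(corner, "can only reach colors", sorted(reachable))
# (0, 1, 0) shows up as one of the stuck corners: blue (color 2) is unreachable.
```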
Is there some way that you can paint this so that the three neighbors of any given vertex always represent red, green, and blue? Maybe it seems bizarre, even convoluted, to go from a puzzle with chessboards and coins to talking about painting corners of a cube. But this is actually a much more natural step than you might expect. I've talked with a lot of people about this puzzle, and what I love is that many of the experienced problem solvers immediately jump, unprompted, to talking about coloring the corners of a cube, as if it's a kind of de facto language for this puzzle. And it really is: thinking about binary strings as vertices of a high-dimensional cube, with bit flips corresponding to edges, actually comes up a lot, especially in coding theory, like the error correction stuff I referenced earlier. But it's more than that: you often hear mathematicians talk about coloring things as a way to describe partitioning them into distinct sets. If you've ever heard of that hilariously enormous number Graham's number, for example, the problem where that came up was also phrased in terms of assigning colors to a high-dimensional cube, though in that case colors were given to pairs of vertices instead of individual ones. The point is, analyzing how to color a high-dimensional cube is more of a transferable skill than you might expect. So, to ask our question again: can you make it so that every vertex has a red, a green, and a blue neighbor? Remember, this is the same thing as having an encoding for key locations so that you're always one flip away from communicating whichever location you want to. It would actually be enlightening if you paused the video and tried this now. It's like a weird three-dimensional variant of a Sudoku. Very similar to Sudokus, in fact, in the sense that you want certain subsets to be filled with all three possible states. For example, you might start by painting one of the corners an arbitrary color, let's say red. And so you know that its three neighbors need to be red, green, and blue; it doesn't really matter how you do it. And then maybe we move to the red neighbor and say that the other two adjacencies need to be green and blue. Maybe we do it like this. But, at least how I've drawn it here, you're stuck. You are unable to choose a correct color for the next two. Can you see why? What I'd like to share is a lovely little argument that explains not only why this will never work in three dimensions, but also why it can't work in any dimension that's not a power of two. The idea is that the symmetry in the property that we're looking at will end up implying that there have to be an equal number of red, green, and blue vertices. But that would mean that there are eight thirds of each, which is not possible. And before I go on, pause, and see if you can think of a way to solidify that intuition. It's a fun exercise in turning a vague instinct into a solid proof. Alright, you ready? One way to do this is to imagine a process where you go through each corner and you count how many of its neighbors are a particular color, say red. At each step here, we're looking at the three neighbors of a given vertex, counting up the red ones, and adding that to a total tally. For this specific coloring, that count came out to be 12. But if we had the property that we wanted, every corner would have exactly one red neighbor, so that count should be eight. On the other hand, every red corner is counted exactly three times, once for each instance where it's somebody's neighbor.
So that final tally has to be three times the total number of red corners. So, you know, it's simple: just find a coloring where eight thirds of the corners are red. Isn't that nice? Counting how many times some corner has a red neighbor is the same as counting how many times a red corner has some neighbor, and that's actually enough to get us a contradiction. What's also nice is that this argument immediately generalizes to higher dimensions. Think about solving the chessboard puzzle with n squares. Again, the puzzle is to associate each arrangement of coins with some state, some possible location for the key. And the goal is to make it so that the arrangements that you can get to with one flip of a coin represent all possible states, all possible places the warden might have hidden that key. Even if you can't visualize most higher-dimensional cubes, we can still talk about things like vertices of such a cube and their neighbors, basically as a way to describe bit strings and the ones which are one bit flip away. Really, there are just two relevant facts you need to know. If you're standing at one of these vertices, you have n distinct neighbors. And the total number of vertices is 2 to the n, one for each bit string of length n. And from here, you can play the same game that we did in three dimensions. You can go through each corner and count how many red neighbors it has. If it's possible to do the coloring we want, this sum should be 2 to the n, one for each vertex. On the other hand, each red corner is counted once for each of its neighbors. So that means that we need to end up with n times the total number of red corners. Since that left-hand side is a power of 2, the right-hand side also has to be a power of 2, which could only ever happen if n itself is some smaller power of 2. So for example, if we were in four dimensions, or 64 dimensions, there is no contradiction. It's at the very least possible to evenly divide the vertices among the different colors. To be clear, that is not the same thing as saying there necessarily is a solution for the power of 2 case; it's just that it can't be ruled out yet. To me, this is completely delightful. Just by imagining coloring the corners of a cube and then counting how many there are, you can conclude that no possible strategy, no matter how clever you are, can work in all of the cases for this chessboard puzzle if the number of squares isn't a power of 2. So even though it might seem to make it easier if you knock off a couple squares or reduce the size of the board, it actually makes the task hopeless. It also means that the solution to this puzzle, which I'll point you to in a moment, can be viewed as a particularly symmetric way to color the corners of a high-dimensional cube in a way that's disallowed in most dimensions. And if you're curious, I just couldn't resist showing this explicitly for a four-dimensional cube. So in the same way that you can take a 3D cube and kind of squish it down into two dimensions, maybe with a little perspective, and get the same graph structure for how the vertices and edges are all connected, we can do the same thing projecting a four-dimensional cube into three-dimensional space and still get a complete view of how all of the vertices and edges are hooked together. If you wanted to try your hand at a weird sort of four-dimensional cousin of a Sudoku, you could pause right now and try to figure out how to color these vertices in such a way that each of the four neighbors of any one represents all four different colors.
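And for the three-dimensional case specifically, the conclusion can be confirmed by brute force, since there are only 3 to the 8 colorings to try. A sketch:

```python
from itertools import product

# Try all 3^8 colorings of the cube's corners, looking for one where every
# vertex has a red, a green, and a blue neighbor. The counting argument says
# there should be none.
corners = list(product((0, 1), repeat=3))
found = 0
for coloring in product((0, 1, 2), repeat=8):
    colors = dict(zip(corners, coloring))
    if all(
        {colors[c[:i] + (c[i] ^ 1,) + c[i + 1:]] for i in range(3)} == {0, 1, 2}
        for c in corners
    ):
        found += 1
print(found)  # 0: no valid coloring exists in three dimensions
```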
Using essentially the same computation that solves the chessboard puzzle for the four-square case, I can get the computer to explicitly draw that out for us. And at this point, when you're hopefully burning to know what the actual solution is, I'd like you to hop on over to Stand-up Maths, where Matt and I show you how it works. If any of you are somehow not yet familiar with Stand-up Maths, it is one of my favorite channels, run by one of my favorite people, so please do immediately subscribe when you land over there. I promise, you're in for quite a few delights with everything else he has to offer. Before explaining it, he and I simply walk through what it looks like for us to actually perform the solution, and as we do, I really want you to try thinking of the solution yourself, and to predict what it is that we're doing before we tell you. And if you're curious about the connection with Hamming codes and error correction, I'm definitely game to make a video on that, just let me know in the comments. I've been told that as far as motivating puzzles go, not everyone is as interested in symmetrical ways to paint a 64-dimensional cube as I am. But reliable data transmission? Come on, I think we can all agree that that's universally sexy.
Thinking outside the 10-dimensional box
Math is sometimes a real tease. It seduces us with the beauty of reasoning geometrically in two and three dimensions, where there's this really nice back and forth between pairs or triplets of numbers and spatial stuff that our visual cortex is good at processing. For example, if you think about a circle with radius 1 centered at the origin, you are, in effect, conceptualizing every possible pair of numbers x and y that satisfy a certain numerical property, that x squared plus y squared is 1. And the usefulness here is that a lot of facts that look opaque in a purely analytic context become quite clear geometrically, and vice versa. Honestly, this channel has been the direct beneficiary of this back and forth, since it offers such a rich library of that special category of cleverness that involves connecting two seemingly disparate ideas. And I don't just mean the general back and forth between pairs or triplets of numbers and spatial reasoning. I mean this specific one between sums of squares and circles and spheres. It's at the heart of the video I made showing how pi is connected to number theory and primes, and the one showing how to visualize all possible Pythagorean triples. It also underlies the video on the Borsuk-Ulam theorem being used to solve what was basically a counting puzzle by using topological facts about spheres. There is no doubt that the ability to frame analytic facts geometrically is very useful for math. But it's all a tease, because when you start asking questions about quadruplets or quintuplets or 100-tuples of numbers, it's frustrating. The constraints on our physical space seem to have constrained our intuitions about geometry, and we lose this back and forth. I mean, it is completely reasonable to imagine that there are problems out there that would have clever and illuminating solutions, if only we knew how to conceptualize, say, lists of 10 numbers as individual points in some space. For mathematicians or computer scientists or physicists, problems that are framed in terms of lists of numbers, lists of more than three numbers, are a regular part of the job. And the standard approach to actually doing math in higher dimensions is to use two and three dimensions for analogy, but to fundamentally reason about things just analytically. Somewhat analogous to a pilot relying primarily on instruments and not sight while flying through the clouds. Now what I want to offer here is a hybrid between the purely geometric and the purely analytic views, a method for making the analytic reasoning a little more visual in a way that generalizes to arbitrarily high dimensions. And to drive home the value of a tactic like this, I want to share with you a very famous example where analogies with two and three dimensions cannot help, because of something extremely counterintuitive that only happens in higher dimensions. The hope, though, is that what I show you here helps to make that phenomenon more intuitive. The focus throughout will be on higher-dimensional spheres. For example, when we talk about a four-dimensional sphere, say with radius one centered at the origin, what that actually is is the set of all quadruplets of numbers x, y, z, w, where the sum of the squares of these numbers is one. What I have pictured here now is multiple three-dimensional slices of a 4D sphere projected back into three dimensions. But it's confusing, and even if you do wrap your head around it, it just pushes the question back to how you would think about a five or a six or a seven-dimensional sphere.
And more importantly, squinting your eyes to understand a projection like this is not very reflective of what doing math with a 4D sphere actually entails. Instead, the basic idea here will be to get very literal about it, and to think about four separate numbers. I like to picture four vertical number lines with sliders to represent each number. Each configuration of these sliders is a point in 4D space, a quadruplet of numbers. And what it means to be on a 4D unit sphere centered at the origin is that the sum of the squares of these four values is one. Our goal is to understand which movements of these sliders correspond to movements on the sphere. To do that, it helps if we knock things down to two dimensions, where we can actually see the circle. So ask yourself, what's a nice way to think about this relation, that x squared plus y squared is one? Well, I like to think of the value of x squared as the real estate belonging to x, and likewise, the value of y squared as the real estate belonging to y, and that they have a total of one unit of real estate to share between them. So moving around on the circle corresponds to a constant exchange of real estate between the two variables. Part of the reason I choose this term is that it lets us make a very useful analogy, that real estate is cheap near zero, and more expensive away from zero. To see this, consider starting off in a position where x equals one and y is zero, meaning x has all of the real estate to itself, which in our usual geometric picture means we're on the rightmost point of the circle. If you move x down just a bit to 0.9, the value of x squared changes to 0.81, so it has, in effect, given up 0.19 units of real estate. But for y squared to increase by that same amount, y has to move an entire 0.44 units away from zero, more than four times the amount that x moved. In other words, x changed a little to give up expensive real estate so that y could move a lot and gain the same value of cheap real estate. In terms of the usual circle drawing, this corresponds to the steep slope near the right side. A small nudge in x allows for a very big change to y. Moving forward, let's add some tick marks to these lines to indicate what 0.05 units of real estate looks like at each point. That is, how much would x have to change so that the value of x squared changes by 0.05? As you walk around the circle, the trade-off in value between x squared and y squared gives this piston-looking dance motion, where the sliders are moving more slowly away from 0, because real estate is more expensive in those regions. There are just more tick marks to cover per unit distance. Also, a nice side effect of the term real estate is that it aligns naturally with the fact that it comes in units of distance squared, so the square root of the total real estate among all coordinates gives us the distance from the origin. For a unit sphere in three dimensions, the set of all triplets x, y, z where the sum of their squares is 1, all we have to do is add a third slider for z. But these three sliders still only have the one unit of real estate to share between them. To get a feel for this, imagine holding x in place at 0.5, where it occupies 0.25 units of real estate. What this means is that y and z can move around in the same piston-dance motion we saw before, as they trade off the remaining 0.75 units of real estate.
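Backing up to the circle for a moment, those expensive-versus-cheap numbers are easy to reproduce in a couple of lines:

```python
# x gives up a sliver of expensive real estate near 1; y must travel much
# farther to claim the same amount of cheap real estate near 0.
x_old, x_new = 1.0, 0.9
given_up = x_old**2 - x_new**2   # 0.19 units of real estate
y_new = given_up**0.5            # y must pick up those 0.19 units
print(given_up)                  # 0.19
print(y_new)                     # ~0.436, over four times x's move of 0.1
```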
In terms of our typical way of visualizing a sphere, this constrained piston dance between y and z corresponds to slicing the sphere along the plane where x is 0.5, and looking at the circle formed by all of the choices for y and z on that sphere. As you increase the value of x, the amount of real estate left over for y and z is smaller, and this more constrained piston dance is what it feels like for the circular slice to be smaller. Eventually, once x reaches the value 1, there's no real estate left over, so you reach this singularity point, where y and z are both forced to be 0. The feeling here is a bit like being a bug on the surface of the sphere. You are unable to see the whole sphere all at once. Instead, you're just sitting on a single point, and you have some sense for what local movements are allowed. In four dimensions and higher, we lose the crutch of the global view that a spatial visual offers, but the fundamental rules of this real estate exchange remain the same. If you fix one slider in place and watch the other three trade off, this is basically what it means to take a slice of the 4D sphere to get a small 3D sphere, in much the same way that fixing one of the sliders for the three-dimensional case gives us a circular slice when the remaining two were free to vary. Now, watching these sliders move about and thinking about the real estate exchange is pretty fun, but it runs the risk of being aimless, unless we have an actual high-dimensional puzzle to sink our teeth into. So let's set aside the sliders for just a moment, and bring in a very classic example of something that seems reasonable and even dull in two and three dimensions, but which is totally out of whack in higher dimensions. To start, take a two-by-two box centered at the origin. Its corners are on the vertices (1, 1), (1, negative 1), (negative 1, 1), and (negative 1, negative 1). Draw four circles, each with radius 1, centered at these four vertices, so each one is tangent to two of its neighbors. Now I want you to think of the circle centered at the origin which is just large enough to be touching those corner circles, tangent to each one of them. What we want to do, for this setup and for its analogies in higher dimensions, is find the radius of that inner circle. Here in two dimensions, we can use the Pythagorean theorem to see that the distance from the origin to the corner of the box is the square root of two, which is around 1.414. Then you can subtract off this portion here, the radius of the corner circle, which by definition is one, and that means the radius of the inner circle is the square root of two minus one, or about 0.414. No surprises here, that seems pretty reasonable. Now let's do something analogous in three dimensions. Draw a two-by-two-by-two cube, whose corners sit at the vertices (1, 1, 1), (1, 1, negative 1), on and on and on. Then we're going to take eight different spheres, each of which has radius 1, and center them on these vertices, so that each one is tangent to three of its neighbors. Now again, think about the sphere centered at the origin which is just large enough to be barely touching those eight corner spheres. As before, we can start by thinking about the distance from the origin to the corner of the box, say the corner at (1, 1, 1). By the way, I guess I still haven't yet explicitly said that the way distances work in higher dimensions is always to add up the squares of the components in each direction and take the square root.
If you've never seen why this follows from the Pythagorean theorem just in the two-dimensional case, it's actually a really fun puzzle to think about, and I've left the relevant image up on the screen for any of you who want to pause and ponder on it. Anyway, in our case, the distance between the origin and the corner (1, 1, 1) is the square root of 1 squared plus 1 squared plus 1 squared, or the square root of 3, which is about 1.73. So the radius of that inner sphere is going to be this quantity minus the radius of a corner sphere, which by definition is 1. And again, 0.73 seems like a reasonable radius for that inner sphere. But what happens to that inner radius as you increase dimensions? Obviously, the reason I bring this up is that something surprising will happen, and some of you might see where this is going. But I actually don't want it to feel like a surprise. As fun as it is to wow people with counterintuitive facts in math, the goal here is genuine understanding, not shock. For higher dimensions, we'll be using sliders to get a gut feel for what's going on, but since it's kind of a different way of viewing things, it helps to get a running start by looking back at how to analyze the two and three-dimensional cases in the context of sliders. First things first, how do you think about a circle centered at a corner, like (1, negative 1)? Well, previously, for a circle centered at the origin, the amount of real estate belonging to both x and y was dependent on their distance from the number 0. And it's the same basic idea here as you move around the center; it's just that the real estate might be dependent on the distance between each coordinate and some other number. So for this circle, centered at (1, negative 1), the amount of real estate belonging to x is the square of its distance from 1. Likewise, the real estate belonging to y is the square of its distance from negative 1. Other than that, the look and feel with this piston-dance trade-off is completely the same. For simplicity, we'll only focus on one of these circles, the one centered at (1, 1). Now ask yourself, what does it mean to find a circle centered at the origin large enough to be tangent to this guy, when we're thinking just in terms of sliders? Well, notice how this point of tangency happens when the x and y coordinates are both the same. Or, phrased differently, at the point of this corner circle closest to the origin, the real estate is shared evenly between x and y. This will be important for later, so let's really dig in and think about why it's true. Imagine perturbing that point slightly, maybe moving x a little closer to 0, which means y would have to move a little away from 0. The change in x would have to be a little smaller than the change in y, since the real estate it gains by moving farther away from 1 is more expensive than the real estate that y loses by getting closer to 1. But from the perspective of the origin point (0, 0), that trade-off is reversed. The resulting change to x squared is smaller than the resulting change to y squared, since, when real estate is measured with respect to (0, 0), that move of y towards 1 is the more expensive one. What this means is that any slight perturbation away from this point where real estate is shared evenly results in an increasing distance from the origin. The reason we care is that this point is tangent to the inner circle, so we can also think about it as being a point of the inner circle. And this will be very useful for higher dimensions.
It gives us a reference point to understanding the radius of that inner circle. Specifically, you can ask how much real estate is shared between x and y at this point, when real estate measurements are done with respect to the origin, (0, 0). For example, down here in two dimensions, both x and y dip below 0.5 in this configuration. So the total value x squared plus y squared is going to be less than 0.5 squared plus 0.5 squared. Comparing to this halfway point is really going to come in handy for wrapping our mind around what happens in higher dimensions. Taking things one step at a time, let's bump it up to three dimensions. Consider the corner sphere with radius one centered at (1, 1, 1). The point on that sphere that's closest to the origin corresponds to the configuration of sliders where x, y, and z are all reaching down toward zero and equal to each other. Again, they all have to go a little beyond that halfway point, because the position 0.5 only accounts for 0.5 squared, or 0.25 units of real estate. So with all three coordinates getting a third of a unit of real estate, they need to be farther out. And again, since this is a point where the corner sphere is tangent to the inner sphere, it's also a point of the inner sphere. So with reference to the origin, (0, 0, 0), think about the amount of real estate shared between x, y, and z in this position corresponding to the tangent point. It's definitely less than 0.75, since all three of these are smaller than 0.5, so each one has less than 0.25 units of real estate. And again, we sit back and feel comfortable with this result, right? The inner sphere is smaller than the corner spheres. But things get interesting when we move up into four dimensions. Our 2 by 2 by 2 by 2 box is going to have 16 vertices, at (1, 1, 1, 1), (1, 1, 1, negative 1), and so on, with all possible binary combinations of 1 and negative 1. What this means is that there are 16 unit spheres centered at these corners, each one tangent to four of its neighbors. As before, we'll just be focusing on one of them, the one centered at (1, 1, 1, 1). The point of this sphere closest to the origin corresponds to the configuration of sliders where all four coordinates reach exactly halfway between 1 and 0. And that's because when one of the coordinates is 0.5 units away from 1, it has 0.25 units of real estate with respect to the point 1. We do the same trick as before, thinking of this now as a point of the inner sphere and measuring things with respect to the origin. But you can already see what's cool about four dimensions. As you switch to thinking of real estate with respect to (0, 0, 0, 0), it's still the case that each of these four coordinates has 0.25 units of real estate, making for a total of 1 shared between the four coordinates. In other words, that inner sphere is precisely the same size as the corner spheres. This matches with what you see numerically, by the way, where you can compute the distance between the origin and the corner (1, 1, 1, 1) is the square root of 4, which is 2. And then when you subtract off the radius of one of the corner spheres, what you get is 1. But there's something much more satisfying about seeing it, rather than just computing it. In particular, here's a cool aspect of the fact that that inner sphere has radius 1. Move things around so that all of the real estate goes to the coordinate x, and you'll end up at the point (1, 0, 0, 0). This point is actually touching the 2 by 2 by 2 by 2 box.
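Since the pattern is emerging, here is a sketch tabulating that inner radius dimension by dimension. The corner of the box always sits at a distance of the square root of n from the origin, so the inner radius is the square root of n minus 1:

```python
import math

# Inner sphere radius in n dimensions: sqrt(n) - 1.
for n in range(2, 11):
    print(n, round(math.sqrt(n) - 1, 3))
# 2 -> 0.414, 3 -> 0.732, 4 -> 1.0 (matching the corner spheres),
# 5 -> 1.236, ..., 10 -> 2.162
```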
And when you're stuck thinking in the two or three-dimensional cases, this fact that the inner sphere has radius 1, the same size as the corner spheres, and that it touches the box, well, it just seems too big. But it's important to realize, this is fundamentally a four-dimensional phenomenon, and you just can't cram it down into smaller dimensions. But things get weirder. Let's knock it up to five dimensions. In this case, we have quite a few corner spheres, 32 in total. But again, for simplicity, we'll only be thinking about the one centered at (1, 1, 1, 1, 1). Think about the point of the sphere closest to the origin, where all five coordinates are equally splitting the one unit of shared real estate. This time, each coordinate is a little higher than 0.5. If they reached down to 0.5, each one would have 0.25 units of real estate, giving a total of 1.25, which is too much. But the tables are turned when you view this as a point on the inner sphere, because with respect to the origin, this configuration has much more than one unit of real estate. Not only is every coordinate more than 0.5 units away from 0, but the larger number of dimensions means that there's more total real estate when you add it all up. Specifically, you can compute that the radius of that inner sphere is about 1.24. The intuitive feel for what that means is that the sliders can roam over more territory than what just a single unit of real estate would allow. One fun way to see what this means is to adjust everything so that all of the real estate goes to just one coordinate. Because this coordinate can reach beyond one, what you are seeing is that this five-dimensional inner sphere actually pokes outside the box. But to really get a feel for how strange things become, as a last example, I want to jump up into 10 dimensions. Remember, all this means is that points have 10 coordinates. For a sphere with radius 1, a single unit of real estate must be shared among all 10 of those coordinates. As always, the point of this corner sphere closest to the origin is the one where all 10 coordinates split the real estate evenly. And here, you can really see just how far away this feels from the origin. Or, phrased differently, that inner sphere is allowed to have a very large amount of real estate. In fact, you can compute that the radius of the inner sphere is about 2.16. And viewed from this perspective, where you have 10 full dimensions to share that real estate, doesn't it actually feel somewhat reasonable that the inner sphere should have a radius more than twice as big as all those corner spheres? To get a sense for just how big this inner sphere is, look back at two dimensions and imagine a 4x4 box bounding all four circles from the outside. Or go to three dimensions, and imagine a 4x4x4 box bounding all of those corner spheres from the outside. Way up here in 10 dimensions, that quote-unquote inner sphere is actually large enough to poke outside of that outer bounding box, since it has a diameter bigger than 4. I know that seems crazy, but you have to realize that the face of the box is always two units away from the origin, no matter how high the dimension is. And fundamentally, it's because it only involves moving along a single axis. But the point (1, 1, 1, 1, 1, 1, 1, 1, 1, 1), which determines the inner sphere's radius, is actually really far away from the center all the way up here in 10 dimensions. And it's because all 10 of those dimensions add a full unit of real estate for that point.
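That bounding-box claim is just arithmetic: the faces of the outer 4-wide box are always 2 units from the origin, so the inner sphere escapes exactly when its diameter, 2 times (the square root of n minus 1), exceeds 4, that is, once n is bigger than 9:

```python
import math

# Compare the inner sphere's diameter against the 4-wide outer bounding box.
for n in (3, 9, 10, 16):
    diameter = 2 * (math.sqrt(n) - 1)
    verdict = "pokes outside" if diameter > 4 else "fits inside"
    print(n, round(diameter, 3), verdict)
```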
And of course, as you keep upping the dimensions, that inner sphere just keeps growing without bound. Not only is it poking outside of these boxes, but the proportion of the inner sphere lying inside the box decreases exponentially towards zero as the dimension keeps increasing. So, taking a step back, one of the things I like about using this slider method for teaching is that when I shared it with a few friends, the way they started to talk about higher dimensions became a little less metaphysical, and started to sound more like how you would hear a mathematician talk about the topic. Rather than skeptically asking whether or not 10-dimensional space is a real thing, recognizing that it's exactly as real as numbers are, people would actually probe at what other properties high-dimensional spheres have, and what other shapes feel like in terms of sliders. This box situation is just one of a number of things that feel very crazy about higher-dimensional spheres, and it's really fun to think about the others in the context of sliders and real estate. It's obviously limited. I mean, you're a bug on the surface of these objects, only getting a feel for one point at a time, and for the rules of movement. Also, geometry can be quite nice when it's coordinate-free, and this is the opposite of that. But it does give a foothold into thinking about high-dimensional shapes a little more concretely. Now, you could say that viewing things with sliders is no different from thinking about things purely analytically. I mean, it's honestly little more than representing each coordinate literally. It's kind of the most obvious thing you might do. But this small move makes it much more possible to play with the thought of a high-dimensional point. And even little things, like thinking about the squares of coordinates as real estate, can shed light on some seemingly strange aspects of high dimensions, like just how far away the corner of a box is from its center. If anything, the fact that it's such a direct representation of a purely analytic description is exactly what makes it such a faithful reflection of what genuinely doing math in higher dimensions entails. We're still flying in the clouds, trusting the instruments of analytic reasoning. But this is a redesign of those instruments, one which better takes advantage of the fact that such a large portion of our brains goes towards image processing. I mean, just because you can't visualize something doesn't mean you can't still think about it visually.
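As a closing numerical aside, that claim about proportions can be estimated with a rough Monte Carlo sketch. The assumptions here: the box is the outer 4-wide bounding box, points are drawn uniformly from the inner sphere's surface by normalizing Gaussian vectors, and the sampled dimensions are arbitrary:

```python
import math
import random

# Estimate the fraction of the inner sphere's surface lying inside the
# outer box, whose faces sit 2 units from the origin in every direction.
def fraction_inside(n, samples=20000):
    radius = math.sqrt(n) - 1
    inside = 0
    for _ in range(samples):
        g = [random.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(v * v for v in g))
        if all(abs(radius * v / norm) <= 2 for v in g):
            inside += 1
    return inside / samples

for n in (10, 30, 60, 100):
    print(n, fraction_inside(n))   # the fraction drops as n grows
```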
Visualizing the chain rule and product rule | Chapter 4, Essence of calculus
In the last videos, I talked about the derivatives of simple functions, and the goal was to have a clear picture or intuition to hold in your mind that actually explains where these formulas come from. But of course, most of the functions you deal with in modeling the world involve somehow mixing or combining or tweaking these simple functions in some other way. So our natural next step is to understand how you take derivatives of more complicated combinations. And again, I don't want these to be something to memorize. I want you to have a clear picture in mind for where each one comes from. Now, this really boils down to three basic ways to combine functions. You can add them together, you can multiply them, and you can throw one inside the other, known as composing them. Sure, you could say subtracting them, but really, that's just multiplying the second by negative one and adding them together. And likewise, dividing functions doesn't really add anything, because that's the same as plugging one inside the function one over x and then multiplying the two together. So really, most functions you come across just involve layering together these three different types of combinations, so there's not really a bound on how monstrous things can become. But as long as you know how derivatives play with just those three combination types, you'll always be able to take it step by step and peel through the layers for any kind of monstrous expression. So the question is, if you know the derivative of two functions, what is the derivative of their sum, of their product, and of the function composition between them? The sum rule is easiest, if somewhat tongue-twisting to say out loud: the derivative of a sum of two functions is the sum of their derivatives. But it's worth warming up with this example by really thinking through what it means to take a derivative of a sum of two functions, since the derivative patterns for products and for function composition won't be so straightforward, and they're going to require this kind of deeper thinking. For example, let's think about this function f of x equals sine of x plus x squared. It's a function where, for every input, you add together the values of sine of x and x squared at that point. For example, let's say at x equals 0.5, the height of the sine graph is given by this vertical bar, and the height of the x squared parabola is given by this slightly smaller vertical bar, and their sum is the length you get by just stacking them together. Now for the derivative, you want to ask what happens as you nudge that input slightly, maybe increasing it up to 0.5 plus dx. The difference in the value of f between those two places is what we call df. And when you picture it like this, I think you'll agree that the total change in the height is whatever the change to the sine graph is, what we might call d sine of x, plus whatever the change to x squared is, dx squared. Now, we know that the derivative of sine is cosine, and remember what that means. It means that this little change, d sine of x, is about cosine of x times dx. It's proportional to the size of our initial nudge dx, and the proportionality constant equals cosine of whatever input we happen to start at. Likewise, because the derivative of x squared is 2x, the change in the height of the x squared graph is going to be about 2 times x times whatever dx was.
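Numerically, you can watch those two nudges stack. A tiny sketch at the input x = 0.5 from the example:

```python
import math

# Compare the nudge-based ratio df/dx for f(x) = sin(x) + x^2
# against the claimed derivative cos(x) + 2x.
x, dx = 0.5, 1e-6
f = lambda t: math.sin(t) + t**2
print((f(x + dx) - f(x)) / dx)   # ~1.8776
print(math.cos(x) + 2 * x)       # ~1.8776
```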
So rearranging, df divided by dx, the ratio of the tiny change to the sum function to the tiny change in x that caused it, is indeed cosine of x plus 2x, the sum of the derivatives of its parts. But like I said, things are a bit different for products, so let's think through why, again in terms of tiny nudges. In this case, I don't think graphs are our best bet for visualizing things. Pretty commonly in math, at a lot of levels of math really, if you're dealing with a product of two things, it helps to understand it as some kind of area. In this case, maybe you try to configure some mental setup of a box where the side lengths are sine of x and x squared. But what would that mean? Well, since these are functions, you might think of those sides as adjustable, dependent on the value of x, which maybe you think of as this number that you can just freely adjust up and down. So, getting a feel for what this means, focus on that top side there, which changes as the function sine of x. As you change this value of x up from zero, it increases up to a length of 1, as sine of x moves up towards its peak. And after that, it starts to decrease as sine of x comes down from 1. And in the same way, that height there is always changing as x squared. So f of x, defined as the product of these two functions, is going to be the area of this box. And for the derivative, let's think about how a tiny change to x by dx influences that area. What is that resulting change in area, df? Well, the nudge dx caused that width to change by some small d sine of x, and it caused that height to change by some dx squared. And this gives us three little snippets of new area: a thin rectangle on the bottom, whose area is its width, sine of x, times its thin height, dx squared. And there's this thin rectangle on the right, whose area is its height, x squared, times its thin little width, d sine of x. And there's also this little bit in the corner, but we can ignore that. Its area is ultimately going to be proportional to dx squared (the square of the tiny nudge dx), and as we've seen before, that becomes negligible as dx goes to zero. I mean, this whole setup is very similar to what I showed last video, with the x squared diagram. And just like then, keep in mind that I'm using somewhat beefy changes here to draw things, just so that we can actually see them. But in principle, dx is something very, very small, and that means that dx squared and d sine of x are also very, very small. So, applying what we know about the derivative of sine and of x squared, that tiny change dx squared is going to be about 2x times dx, and that tiny change d sine of x, well, that's going to be about cosine of x times dx. As usual, we divide out by that dx to see that the ratio we want, df divided by dx, is sine of x times the derivative of x squared, plus x squared times the derivative of sine. And nothing we've done here is specific to sine or to x squared. This same line of reasoning would work for any two functions, g and h. And sometimes people like to remember this pattern with a certain mnemonic that you kind of sing in your head: left d-right, right d-left. In this example, where we have sine of x times x squared, left d-right means you take that left function, sine of x, times the derivative of the right, in this case 2x. Then you add on right d-left: that right function, x squared, times the derivative of the left one, cosine of x. Now, out of context, presented as a rule to remember, I think this would feel pretty strange, don't you?
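Strange-looking or not, the box picture translates directly into arithmetic. This sketch builds df out of the two thin rectangles and compares against the true change in area; the input x = 0.7 is an arbitrary test point:

```python
import math

# df for f(x) = sin(x) * x^2, assembled from the two thin rectangles:
# left d-right + right d-left, with the corner bit ignored.
x, dx = 0.7, 1e-6
d_sin = math.cos(x) * dx                 # nudge to the width, sin(x)
d_sq = 2 * x * dx                        # nudge to the height, x^2
df = math.sin(x) * d_sq + x**2 * d_sin   # the two new strips of area
true_df = math.sin(x + dx) * (x + dx)**2 - math.sin(x) * x**2
print(df / dx, true_df / dx)             # both ~ sin(x)*2x + x^2*cos(x)
```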
But when you actually think of this adjustable box, you can see what each of those terms represents. Left d right is the area of that little bottom rectangle, and right d left is the area of that rectangle on the side. By the way, I should mention that if you multiply by a constant, say 2 times sine of x, things end up a lot simpler. The derivative is just the same as the constant multiplied by the derivative of the function, in this case 2 times cosine of x. I'll leave it to you to pause and ponder and just kind of verify that that makes sense. Aside from addition and multiplication, the other common way to combine functions, and believe me, this one comes up all the time, is to shove one inside the other, function composition. For example, maybe we take the function x squared, and we just shove it inside sine of x to get this new function, sine of x squared. What do you think the derivative of that new function is? To think this one through, I'm going to choose yet another way to visualize things, just to emphasize that in creative math, we've got lots of options. I'll put up three different number lines. The top one is going to hold the value of x, the second one is going to hold the value of x squared, and that third line is going to hold the value of sine of x squared. That is, the function x squared gets you from line 1 to line 2, and the function sine gets you from line 2 to line 3. As I shift around this value of x, maybe moving it up to the value 3, that second value stays pegged to whatever x squared is, in this case moving up to 9. And that bottom value, being sine of x squared, is going to go to whatever sine of 9 happens to be. So for the derivative, let's again start by just nudging that x value by some little dx. And I always think that it's helpful to think of x as starting at some actual concrete number, maybe 1.5 in this case. The resulting nudge to that second value, the change in x squared caused by such a dx, is d(x squared). And we could expand this like we have before, as 2x times dx, which for our specific input would be 2 times 1.5 times dx. But it actually helps to keep things written as d(x squared), at least for now. And in fact, I'm going to go one step further. I'm going to give a new name to this x squared, maybe h, so that instead of writing d(x squared) for this nudge, we write dh. And this makes it easier to think about that third value, which is now pegged at sine of h. Its change is d sine of h, the tiny change caused by the nudge dh. And by the way, the fact that it's moving to the left while the dh bump is going to the right just means that this change d sine of h is going to be some kind of negative number. And once again, we can use our knowledge of the derivative of the sine. This d sine of h is going to be about cosine of h times dh. That's what it means for the derivative of sine to be cosine. And unfolding things, we can just replace that h with x squared again, so we know that that bottom nudge is going to have a size of cosine of x squared times d(x squared). And in fact, let's unfold things even further. That intermediate nudge d(x squared) is going to be about 2x times dx. And it's always a good habit to remind yourself of what an expression like this actually means. In this case, where we started at x equals 1.5 up top, this whole expression is telling us that the size of the nudge on that third line is going to be about cosine of 1.5 squared times 2 times 1.5 times whatever the size of dx was.
It's proportional to the size of dx, and this derivative is giving us that proportionality constant. Notice what we came out with here. We have the derivative of the outside function, still taking in the unaltered inside function, and then we're multiplying it by the derivative of that inside function. Again, there is nothing special about sine of x or x squared. If you have any two functions, g of x and h of x, the derivative of their composition, g of h of x, is going to be the derivative of g evaluated on h, multiplied by the derivative of h. This pattern right here is what we usually call the chain rule. Notice, for the derivative of g, I'm writing it as dg/dh instead of dg/dx. On the symbolic level, this is a reminder that the thing you plug into that derivative is still going to be that intermediary function h. But more than that, it's an important reflection of what this derivative of the outer function actually represents. Remember, in our three-line setup, when we took the derivative of the sine on that bottom, we expanded the size of that nudge d sine as cosine of h times dh. This was because we didn't immediately know how the size of that bottom nudge depended on x. That's kind of the whole thing we were trying to figure out. But we could take the derivative with respect to that intermediate variable, h. That is, figure out how to express the size of that nudge on the third line as some multiple of dh, the size of the nudge on the second line. And it was only after that that we unfolded further, by figuring out what dh was. So in this chain rule expression, we're saying, look at the ratio between a tiny change in g, the final output, to a tiny change in h that caused it, h being the value that we plug into g. Then multiply that by the tiny change in h divided by the tiny change in x that caused it. Notice, those dh's cancel out, and they give us a ratio between the change in that final output and the change to the input that, through a certain chain of events, brought it about. And that cancellation of dh is not just a notational trick. That is a genuine reflection of what's going on with the tiny nudges that underpin everything we do with derivatives. So those are the three basic tools to have in your belt to handle derivatives of functions that combine a lot of smaller things: you've got the sum rule, the product rule, and the chain rule. And I'll be honest with you, there is a big difference between knowing what the chain rule is and what the product rule is, and actually being fluent with applying them in even the most hairy of situations. Watching videos, any videos, about the mechanics of calculus is never going to substitute for practicing those mechanics yourself, and building up the muscles to do these computations yourself. I really wish that I could offer to do that for you, but I'm afraid the ball is in your court, my friend, to seek out the practice. What I can offer, and what I hope I have offered, is to show you where these rules actually come from. To show that they're not just something to be memorized and hammered in, but natural patterns, things that you too could have discovered just by patiently thinking through what a derivative actually means.
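And to round out the set, a sketch of the same nudge test for the chain rule, using the composition from this chapter and the concrete starting input x = 1.5 (editorial, like the checks above):

import math

def f(x):
    return math.sin(x**2)        # the composition, sine of x squared

x = 1.5
dx = 1e-6

numeric = (f(x + dx) - f(x)) / dx
# derivative of the outside, evaluated on the unaltered inside, times the derivative of the inside
chain_rule = math.cos(x**2) * 2 * x
print(numeric, chain_rule)       # both around -1.88, negative just as the three-line picture suggested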
What's so special about Euler's number e? | Chapter 5, Essence of calculus
I've introduced a few derivative formulas, but a really important one that I left out was exponentials. So here, I want to talk about the derivatives of functions like 2 to the x, 7 to the x, and also to show why e to the x is arguably the most important of the exponentials. First of all, to get an intuition, let's just focus on the function 2 to the x. And let's think of that input as a time, t, maybe in days, and the output, 2 to the t, as a population size, perhaps of a particularly fertile band of pi creatures which doubles every single day. And actually, instead of population size, which grows in discrete little jumps with each new baby pi creature, maybe let's think of 2 to the t as the total mass of the population. I think that better reflects the continuity of this function, don't you? So for example, at time t equals 0, the total mass is 2 to the 0 equals 1, for the mass of 1 creature. At t equals 1 day, the population has grown to 2 to the 1 equals 2 creature masses. At day t equals 2, it's 2 squared, or 4. And in general, it just keeps doubling every day. For the derivative, we want dm dt, the rate at which this population mass is growing, thought of as a tiny change in the mass divided by a tiny change in time. And let's start by thinking of the rate of change over a full day, say between day 3 and day 4. Well, in this case, it grows from 8 to 16, so that's 8 new creature masses added over the course of 1 day. And notice, that rate of growth equals the population size at the start of the day. Between day 4 and day 5, it grows from 16 to 32, so that's a rate of 16 new creature masses per day, which again equals the population size at the start of the day. And in general, this rate of growth over a full day equals the population size at the start of that day. So it might be tempting to say that this means the derivative of 2 to the t equals itself, that the rate of change of this function at a given time t is equal to the value of that function. And this is definitely in the right direction, but it's not quite correct. What we're doing here is making comparisons over a full day, considering the difference between 2 to the t plus 1 and 2 to the t. But for the derivative, we need to ask what happens for smaller and smaller changes. What's the growth over the course of a tenth of a day? A hundredth of a day? One one-billionth of a day? This is why I had to think of the function as representing population mass, since it makes sense to ask about a tiny change in mass over a tiny fraction of a day, but it doesn't make as much sense to ask about the tiny change in a discrete population size per second. More abstractly, for a tiny change in time, dt, we want to understand the difference between 2 to the t plus dt and 2 to the t, all divided by dt. The change in the function per unit time, but now we're looking very narrowly around a given point in time, rather than over the course of a full day. And here's the thing. I would love if there was some very clear geometric picture that made everything that's about to follow just pop out. Some diagram where you could point to one value and say, see, that part, that is the derivative of 2 to the t. And if you know of one, please let me know. And while the goal here, as with the rest of the series, is to maintain a playful spirit of discovery, the type of play that follows will have more to do with finding numerical patterns rather than visual ones. So start by just taking a very close look at this term 2 to the t plus dt.
A core property of exponentials is that you can break this up as 2 to the t times 2 to the dt. That really is the most important property of exponents. If you add two values in that exponent, you can break up the output as a product of some kind. This is what lets you relate additive ideas, things like tiny steps in time, to multiplicative ideas, things like rates and ratios. I mean, just look at what happens here. After that move, we can factor out the term 2 to the t, which is now just multiplied by 2 to the dt minus 1, all divided by dt. And remember, the derivative of 2 to the t is whatever this whole expression approaches as dt approaches 0. And at first glance, that might seem like an unimportant manipulation. But a tremendously important fact is that this term on the right, where all of the dt stuff lives, is completely separate from the t term itself. It doesn't depend on the actual time where we started. You can go off to a calculator and plug in very small values for dt here. For example, maybe typing in 2 to the 0.001, minus 1, divided by 0.001. What you'll find is that for smaller and smaller choices of dt, this value approaches a very specific number, around 0.6931. Don't worry if that number seems mysterious. The central point is that this is some kind of constant. Unlike derivatives of other functions, all of the stuff that depends on dt is separate from the value of t itself. So the derivative of 2 to the t is just itself, but multiplied by some constant. And that should kind of make sense, because earlier, it felt like the derivative of 2 to the t should be itself, at least when we were looking at changes over the course of a full day. And evidently, the rate of change of this function over much smaller time scales is not quite equal to itself, but it's proportional to itself, with this very peculiar proportionality constant of 0.6931. And there's not too much special about the number 2 here. If instead we had dealt with the function 3 to the t, the exponential property would also have led us to the conclusion that the derivative of 3 to the t is proportional to itself, but this time with a proportionality constant of 1.0986. And for other bases of your exponent, you can have fun trying to see what the various proportionality constants are, maybe seeing if you can find a pattern in them. For example, if you plug in 8 to the power of a very tiny number, minus 1, and divide by that same tiny number, what you'd find is that the relevant proportionality constant is around 2.079. And maybe, just maybe, you would notice that this number happens to be exactly 3 times the constant associated with the base 2. So these numbers certainly aren't random; there is some kind of pattern, but what is it? What does 2 have to do with the number 0.6931? And what does 8 have to do with the number 2.079? Well, a question that is ultimately going to explain these mystery constants is whether there's some base where that proportionality constant is 1, where the derivative of a constant to the power t is not just proportional to itself, but actually equal to itself. And there is! It's the special constant e, around 2.71828. In fact, it's not just that the number e happens to show up here. This is, in a sense, what defines the number e. If you ask why e, of all numbers, has this property, it's a little like asking why pi, of all numbers, happens to be the ratio of the circumference of a circle to its diameter. This is, at its heart, what defines this value.
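To play along numerically, here is a tiny editorial Python sketch that computes this proportionality constant for a few bases; the last column prints the natural log of each base, which, as the next passage explains, is exactly what these mystery constants turn out to be:

import math

dt = 1e-8    # a very small nudge in time
for base in [2, 3, 8, math.e]:
    constant = (base**dt - 1) / dt            # the mystery proportionality constant
    print(base, round(constant, 4), round(math.log(base), 4))

# prints roughly 0.6931 for base 2, 1.0986 for base 3, 2.0794 for base 8, and 1.0 for base e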
All exponential functions are proportional to their own derivative, but e alone is the special number such that that proportionality constant is 1, meaning e to the t actually equals its own derivative. One way to think of that is that if you look at the graph of e to the t, it has the peculiar property that the slope of a tangent line to any point on this graph equals the height of that point above the horizontal axis. The existence of a function like this answers the question of the mystery constants, because it gives a different way to think about functions that are proportional to their own derivative. The key is to use the chain rule. For example, what is the derivative of e to the 3t? Well, you take the derivative of the outermost function, which due to this special nature of e is just itself, and then multiply by the derivative of that inner function, 3t, which is the constant 3. Or rather than just applying a rule blindly, you could take this moment to practice the intuition for the chain rule that I talked through last video, thinking about how a slight nudge to t changes the value of 3t, and how that intermediate change nudges the final value of e to the 3t. Either way, the point is that the derivative of e to the power of some constant times t is equal to that same constant times itself. And from here, the question of those mystery constants really just comes down to a certain algebraic manipulation. The number 2 can also be written as e to the natural log of 2. There's nothing fancy here; this is just the definition of the natural log. It asks the question, e to the what equals 2? So the function 2 to the t is the same as the function e to the power of the natural log of 2 times t. And from what we just saw, combining the fact that e to the t is its own derivative with the chain rule, the derivative of this function is proportional to itself, with a proportionality constant equal to the natural log of 2. And indeed, if you go plug in the natural log of 2 to a calculator, you'll find that it's 0.6931, the mystery constant that we ran into earlier. And the same goes for all of the other bases. The mystery proportionality constant that pops up when taking derivatives is just the natural log of the base, the answer to the question, e to the what equals that base. In fact, throughout applications of calculus, you rarely see exponentials written as some base to a power t. Instead, you almost always write the exponential as e to the power of some constant times t. It's all equivalent. I mean, any function like 2 to the t or 3 to the t can also be written as e to some constant times t. At the risk of staying over-focused on the symbols here, I really want to emphasize that there are many, many ways to write down any particular exponential function, and when you see something written as e to some constant times t, that's a choice that we make to write it that way. The number e is not fundamental to that function itself. What is special about writing exponentials in terms of e like this is that it gives that constant in the exponent a nice, readable meaning. Here, let me show you what I mean. All sorts of natural phenomena involve some rate of change that's proportional to the thing that's changing. For example, the rate of growth of a population actually does tend to be proportional to the size of the population itself, assuming there isn't some limited resource slowing things down.
And if you put a cup of hot water in a cool room, the rate at which the water cools is proportional to the difference in temperature between the room and the water. Or, said a little differently, the rate at which that difference changes is proportional to itself. If you invest your money, the rate at which it grows is proportional to the amount of money there at any time. In all of these cases, where some variable's rate of change is proportional to itself, the function describing that variable over time is going to look like some kind of exponential. And even though there are lots of ways to write any exponential function, it's very natural to choose to express these functions as e to the power of some constant times t, since that constant carries a very natural meaning: it's the proportionality constant between the size of the changing variable and its rate of change. And as always, I want to thank those who have made this series possible.
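Before moving on, one last editorial sketch for this chapter: if you simulate any quantity whose rate of change is proportional to itself, the curve you get matches e to the power of that constant times t, which is why this way of writing exponentials is so natural. Here c = 0.3 is just a made-up proportionality constant:

import math

c = 0.3        # hypothetical growth rate: rate of change per unit of the quantity
dt = 1e-4
value = 1.0

for step in range(20000):          # simulate up to t = 2.0 in tiny steps
    value += c * value * dt        # the rate of change is proportional to the value itself

print(value, math.exp(c * 2.0))    # both approximately 1.822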
Some light quantum mechanics (with minutephysics)
You guys know Henry from MinutePhysics, right? Well, he and I just made a video on a certain quantum mechanical topic: Bell's inequalities. It's a really mind-warping topic that not enough people know about, and even though it's a quantum thing, it's based on some surprisingly simple math, and you should definitely check it out. For this video, we have in mind those viewers who actually want to learn some quantum mechanics more deeply. And obviously, it's a huge topic, nowhere near the scope of a single video, but the question we asked was, what topic could we present that's not meant to be some eye-catching piece of quantum weirdness, but which actually lays down some useful foundations for anyone who, you know, wants to learn this field? What topic would set the right intuitions for someone before they dove into, say, the Feynman lectures? Well, a natural place to start, where quantum mechanics itself started, is light. Specifically, if you want to learn quantum, you have to have an understanding of waves and how they're described mathematically. And what we'd like to build to here is the relationship between the energy in a purely classical wave and the probabilities that govern quantum behavior. In fact, we'll actually spend most of the time talking through the pre-quantum understanding of light, since that sets up a lot of the relevant wave mechanics. The thing is, a lot of ideas from quantum mechanics, like describing states as superpositions with various amplitudes and phases, come up in the context of classical waves in a way that doesn't involve any of the quantum weirdness people might be familiar with. This also helps to appreciate what's actually different in quantum mechanics, namely certain restrictions on how much energy these waves can have, how they behave when measured, and quantum entanglement, though we won't cover entanglement in this video. So we'll start with the late 1800s understanding of light as waves in the electromagnetic field. Here, let's break that down a bit. The electric field is a vector field, and that means every point in space has some arrow attached to it, indicating the direction and strength of the field. Now, the physical meaning of those arrows is that if you have some charged particle in space, there's going to be a force on that particle in the direction of the arrow, and it's proportional to the length of the arrow and the specific charge of the particle. Likewise, the magnetic field is another vector field, where now the physical meaning of each arrow is that when a charged particle is moving through that space, there's going to be a force perpendicular to both its direction of motion and to the direction of the magnetic field. And the strength of that force is proportional to the charge of the particle, its velocity, and the length of the magnetic field arrow. For example, a wire with a current of moving charges next to a magnet is either pushed or pulled by that magnetic field. A kind of culmination of the 19th-century physics understanding of how these two fields work is Maxwell's equations, which among other things describe how each of these fields can cause a change to the other. Specifically, what Maxwell's equations tell us is that when the electric field arrows seem to be forming a loop around some region, the magnetic field will be increasing inside that region, perpendicular to the plane of the loop.
And symmetrically, such a loop in the magnetic field corresponds to a change in the electric field within it, perpendicular to the plane of the loop. Now, the specifics of how exactly these equations work is really beautiful, and worth a full video on its own, but all you need to know for now is that one natural consequence of this mutual interplay, of how changes to one field cause changes to the other in its neighboring regions, is that you get these propagating waves, where the electric and magnetic fields are oscillating perpendicular to each other, and perpendicular to the direction of propagation. When you hear the term electromagnetic radiation, which refers to things like radio waves and visible light, this is what it's talking about: propagating waves in both the electric and magnetic fields. Of course, it's now almost mainstream to know of light as electromagnetic radiation, but it's neat to think about just how surprising this was in Maxwell's time. That these fields that have to do with forces on charged particles and magnets not only have something to do with light, but that what light is is a propagating wave, as these two fields dance with each other, causing this mutual oscillation of increasing and decreasing field strength. With this as a visual, let's take a moment to lay down the math used to describe waves. It'll still be purely classical, but ideas that are core to quantum mechanics, like superposition, amplitudes, and phases, all come up in this context, and I would argue with a clearer motivation for what they actually mean. Take this wave, and think of it as directed straight out of the screen, towards your face. And let's go ahead and ignore the magnetic field right now, just looking at how the electric field oscillates. And also, we're only going to focus on one of these vectors, oscillating in the plane of the screen, which we'll think of as the xy-plane. If it oscillates horizontally like this, we say that the light is horizontally polarized. So the y-component of this electric field is zero at all times, and we might write the x-component as something like cosine of 2 pi times f t, where f represents some frequency and t is time. So if f was 1, for example, that means it takes exactly 1 second for this cosine function to go through a full cycle. For a lower frequency, that would mean it takes more time for the cosine to go through its full cycle; as the value t increases, the inside of this cosine function increases more slowly. Also, we're going to include another term in here, phi, called the phase shift, which tells us where this vector is in its cycle at time t equals zero. You'll see why that matters in just a moment. Now by default, cosine only oscillates between negative 1 and 1, so let's put another term in front, a, that gives us the amplitude of this wave. One more thing, just to make things look a little more like they often do in quantum mechanics: instead of writing it as a column vector like this, I'm going to separate it out into two different components using these symbols called kets. This ket here indicates a unit vector in the horizontal direction, and this ket over here represents a unit vector in the vertical direction. If the light is vertically polarized, meaning the electric field is wiggling purely in the up and down direction, its equation might look like this, where the horizontal component is now zero, and the vertical component is a cosine with some frequency, amplitude, and phase shift.
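In code form, the components just described look something like this editorial Python sketch, where the amplitude a, frequency f, and phase shift phi are made-up example values:

import math

a, f, phi = 1.0, 2.0, 0.0      # amplitude, frequency, and phase shift: example values

def horizontally_polarized(t):
    x = a * math.cos(2 * math.pi * f * t + phi)   # oscillates along the horizontal ket
    y = 0.0                                        # the vertical component is zero at all times
    return (x, y)

print(horizontally_polarized(0.25))    # (-1.0, 0.0): a quarter second into a 2-cycle-per-second wave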
Now if you have two distinct waves, two ways of wiggling through space over time that solve Maxwell's equations, then adding both of these together gives another valid wave, at least in a vacuum. That is, at each point in time, add these two vectors tip to tail to get a new vector. Doing this at all points in space and all points in time gives a new valid solution to Maxwell's equations. This is because Maxwell's equations in a vacuum are what's called linear equations. They're essentially a combination of derivatives acting on the electric and magnetic fields to give zero. So if one field F1 satisfies this equation, and another field F2 satisfies it, then their sum F1 plus F2 also satisfies it, since derivatives are linear. So the sum of two or more solutions to Maxwell's equations is also a solution to Maxwell's equations. This new wave is called a superposition of the first two. And here, superposition essentially just means sum, or in some contexts, weighted sum, since if you include some kind of amplitude and phase shift in each of these components, it can still be called a superposition of the two original vectors. Now, right now, the resulting superposition is a wave wiggling in the diagonal direction. But if the horizontal and vertical components were out of phase with each other, which might happen if you increase the phase shift in one of them, their sum might instead trace out some sort of ellipse. And in the case where the phases are exactly 90 degrees out of sync with each other, and the amplitudes are both equal, this is what we call circularly polarized light. This, by the way, is why it's important to keep track not just of the amplitude in each direction, but also of the phase: it affects the way the two waves add together. That's also an important idea that carries over to quantum, and underlies some of the things that look confusing at first. And here's another important idea. We're describing waves by adding together the horizontal and vertical components, but we could also choose to describe everything with respect to different directions. I mean, you could describe waves as some superposition of the diagonal and the anti-diagonal directions. In that case, vertically polarized light would actually be a superposition of these two diagonal wiggling directions, at least when both are in phase with each other and they have the same magnitude. Now, the choice of which directions you write things in terms of is called a basis, and which basis is nicest to work with typically depends on what you're actually doing with the light. For example, if you have a polarizing filter, like one from a set of polarized sunglasses, the way these work is by absorbing the energy from electromagnetic oscillations in some particular direction. A vertically oriented polarizer, for example, would absorb all of the energy from these waves along the horizontal direction. At least classically, that's how you might think about it. So if you're analyzing light and it's passing through a filter like this, it's nice to describe it with respect to the horizontal and vertical directions. That way, what you can say is that whatever light passes through the filter is just the vertical component of the original wave. But if you had a filter oriented, say, diagonally, well, then it would be convenient to describe things as a superposition of that diagonal direction and its perpendicular anti-diagonal direction.
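To make the superposition idea concrete, here is an editorial sketch that adds a horizontal and a vertical component tip to tail at each moment; with equal amplitudes and a 90 degree phase offset, the values chosen below, the tip traces out circularly polarized light:

import math

a_x, a_y = 1.0, 1.0                  # equal amplitudes: example values
f = 1.0
phi_x, phi_y = 0.0, math.pi / 2      # phases exactly 90 degrees out of sync

def superposition(t):
    x = a_x * math.cos(2 * math.pi * f * t + phi_x)
    y = a_y * math.cos(2 * math.pi * f * t + phi_y)
    return (x, y)

for i in range(4):                   # sample a few moments across one cycle
    x, y = superposition(i / 4)
    print(round(math.hypot(x, y), 6))    # always 1.0: the tip stays on a circle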
These ideas will carry over almost word for word to the quantum case. Quantum states, much like this wiggling direction of our wave, are described as a superposition of multiple base states, where you have many choices for what base states to use. And just like with classical waves, the components of such a superposition will have both an amplitude and a phase of some kind. And by the way, for those of you who do read more into quantum mechanics, you'll find that these components are actually given using a single complex number, rather than a cosine expression like this one. One way to think of this is that complex numbers are just a very convenient and natural mathematical way to encode an amplitude and a phase with a single value. That can make things a little confusing, because it's hard to visualize a pair of complex numbers, which is what would describe a superposition of two base states. But you can think about the use of complex numbers throughout quantum mechanics as a result of its underlying wavy nature, and this need to encapsulate the amplitude and the phase for each direction. OK, just one quick point before getting into the quantum. Look at one of these waves, and focus just on the electric field portion, like we were before. Classically, we think about the energy of a wave like this as being proportional to the square of its amplitude. And I want you to notice how well this lines up with the Pythagorean theorem. If you were to describe this wave as a superposition of a horizontal component with amplitude A x and a vertical component with amplitude A y, then its energy density is proportional to A x squared plus A y squared. And you can think of this in two different ways: either it's because you're adding up the energies of each component in the superposition, or it's just that you're figuring out the new amplitude using the Pythagorean theorem and taking the square. Isn't that nice? In the classical understanding of light, you should be able to dial this energy up and down continuously, however you want, by changing the amplitude of the wave. But what physicists started to notice in the late 19th and early 20th centuries was that this energy actually seems to come in discrete amounts. Specifically, the energy of one of these electromagnetic waves always seems to come as an integer multiple of a specific constant times the frequency of that wave. We now call this constant Planck's constant, commonly denoted with the letter h. Physically, what this means is that whenever this wave trades its energy with something else, like an electron, the amount of energy it trades off is always an integer multiple of h times its frequency. Importantly, this means there is some minimal non-zero energy level for waves of a given frequency: h times f. If you have an electromagnetic wave with this frequency and energy, you cannot make it smaller without eliminating it entirely. That feels weird when the conception of a wave is a nice, continuously oscillating vector field. But that's not how the universe works, as late 19th and early 20th century experiments started to expose. In fact, I've done a video about this, called the origin of quantum mechanics. However, it's worth noting that this phenomenon is actually common in waves when they're constrained in certain ways, like in pipes or instrument strings, and it's called harmonics. What's weird is that electromagnetic waves do this in free space, even when they're not constrained.
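Looking back at the Pythagorean point from a moment ago, here is a two-line editorial check with made-up component amplitudes, showing that the two ways of thinking about the energy agree:

import math

a_x, a_y = 0.6, 0.8
print(a_x**2 + a_y**2)            # the component energies added up: 1.0
print(math.hypot(a_x, a_y)**2)    # the square of the combined amplitude: also 1.0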
And what do we call an electromagnetic wave with this minimal possible energy? A photon. But like I said, the math used to describe classical electromagnetic waves carries over to describing a photon. It might have, say, a 45-degree diagonal polarization, which can be described as a superposition of a purely horizontal state and a purely vertical state, where each one of these components has some amplitude and phase. And with a different choice in basis, that same state might be described as a superposition of two other directions. All of this is stuff that you would see if you started reading more into quantum mechanics. But this superposition has a different interpretation than before, and it has to. Let's say you were thinking of this diagonally polarized photon kind of classically, and you said it has an amplitude of 1, in some appropriately chosen unit system. Well, that would make the hypothetical amplitudes of its horizontal and vertical components each the square root of one half. And like Henry said, the energy of a photon is this special constant h times its frequency. And because in a classical setting, energy is proportional to the square of the amplitude of this wave, it's tempting to think of half of the energy as being in the horizontal component and half of it as being in the vertical component. But waves of this frequency cannot have half the energy of a photon. I mean, the whole novelty of quantum here is that energy comes in these discrete, indivisible chunks. So these components, with an imagined amplitude of one over the square root of two, could not exist in isolation, and you might wonder what exactly they mean. Well, let's get experimental about it. If you were to take a vertically oriented polarizing filter and shoot this diagonally polarized photon right at it, what do you think would happen? Classically, the way you'd interpret this superposition is that the half of its energy in the horizontal direction would be absorbed. But because energy comes in these discrete photon packets, it either has to pass through with all of its energy, or get absorbed entirely. And if you actually did this experiment, about half the time the photon goes through entirely, and about half the time it gets absorbed entirely. And it appears to be random whether a given photon passes through or not. If it does pass through, forcing it to make a decision like this actually changes it, so that its polarization is oriented along the filter's direction. This is analogous to the classic Schrodinger's cat setup. We have something that's in a superposition of two states, but once you make a measurement of that superposition, forcing it to interact with an observer in a way where each of those two states would behave differently, then from the perspective of that observer, this superposition collapses to be entirely in one state or entirely in another: dead or alive, horizontal or vertical. One pretty neat way to see this in action, which Henry and I talk about in the other video, is to take several pairs of polarized sunglasses, or some other form of polarizing filters, and start by holding two of them between you and some light source. If you rotate them to be 90 degrees off from each other, the light source is blacked out completely, or at least with perfect filters it would be, because all of the photons passing through that first one are polarized vertically, so they actually have a 0% chance of passing a filter oriented horizontally.
But if you insert a third filter, oriented at a 45-degree angle between the two, it actually lets more light through. And what's going on here is that 50% of the photons passing that vertical filter will also pass through the diagonal filter. And once they do, they're going to be changed to have a purely diagonal polarization. And then once they're in that state, they have a 50-50 chance of passing through the filter oriented at 90 degrees. So even though 0% of the photons passing through the first filter would pass through that last one if nothing was in between, by introducing another filter, 25% of them now pass through all three. Now that's something that you could not explain unless that middle filter forces the photons to change their states. And that experiment, by the way, becomes all the weirder when you dig into the specific probabilities for angles between 0 and 45 degrees. And that's actually what we talk about in the other video. For example, one specific value we focus on there is the probability that a photon whose polarization is 22.5 degrees off the direction of a filter is going to end up passing through that filter. Again, it's helpful to think of this wave as having an amplitude of 1. Then you'd think of the horizontal component as having amplitude sine of 22.5 degrees, which is around 0.38, and the vertical component as having amplitude cosine of 22.5 degrees, which is around 0.92. Classically, you might think of its horizontal component as having energy proportional to 0.38 squared, which is around 0.15. Likewise, you might think of the vertical component as having energy proportional to 0.92 squared, which comes out to be around 0.85. And like we said before, classically this would mean that if you pass it through a vertical filter, 15% of its energy is absorbed in the horizontal direction. But because the energy of light comes in these discrete quanta that cannot be subdivided, instead what you observe is that 85% of the time the photon passes through entirely, and 15% of the time it gets completely blocked. Now, I want to emphasize that the wave equations don't change. The photon is still described as a superposition of two oscillating components, each with some phase and amplitude, and these are often encoded using a single complex number. The difference is that classically, the squares of the amplitudes of each component tell you the amount of that wave's energy in each direction. But with quantized light at this minimal non-zero energy level, the squares of those amplitudes tell you the probability that a given photon is going to be found to have all of its energy in one direction or not. Also, these components could still have some kind of phase difference. Just like with classical waves, photons can be circularly polarized, and there exist polarizing filters that only let through photons that are polarized circularly, say in the clockwise direction. Or rather, they let through all photons probabilistically, where the probabilities are determined by describing each one of those photons as a superposition of the clockwise and counterclockwise states, and then the square of the amplitude of the clockwise component gives you the desired probability. Photons are, of course, just one quantum phenomenon, one where we initially understood it as a wave, thanks to Maxwell's equations, and then as individual particles, or quanta, hence the name quantum mechanics.
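The arithmetic in this passage is easy to reproduce; in this editorial Python sketch, the probabilities come from squaring the amplitudes, just as the passage describes:

import math

angle = math.radians(22.5)
print(round(math.cos(angle)**2, 2))    # 0.85: probability of passing the filter
print(round(math.sin(angle)**2, 2))    # 0.15: probability of being blocked

# the three-filter setup: vertical, then 45 degrees, then horizontal
p_diagonal = math.cos(math.radians(45))**2      # 0.5 of the vertically polarized photons pass
p_horizontal = math.cos(math.radians(45))**2    # 0.5 again, starting from the new diagonal state
print(p_diagonal * p_horizontal)                # 0.25 of them pass all three filters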
But as many of you well know, there's a flip side to this, where many things that were understood to come in discrete little packets, like electrons, were discovered to be governed by similar wavy quantum mechanics. In cases way more general than this one photon polarization example, quantum mechanical states are described as some superposition of multiple base states, and the superposition depends on what basis you choose. Each component in this superposition is given with an amplitude and a phase, often encoded as a single complex number, and the need for this phase arises from the wave nature of these objects. As with the photon example, the choice of how to measure these objects can determine a set of base states, where the probability of measuring a particle to be in one of these base states is proportional to the square of the amplitude of the corresponding number. It's funny to think, though, that if the wavy nature of electrons and other particles was discovered first, we might instead refer to the whole subject as harmonic mechanics or something like that, since the weirdness there is not that waves come in discrete units, but that particles are governed by wave equations. This video was supported in part by Brilliant, and as viewers of this channel know, what I like about Brilliant is that they're a great complement to passively watching educational videos. All of you here want to learn more math, or physics, or the math that prepares you for physics, and the only way to actually learn this stuff is to actively grapple with puzzles and problem solving. Brilliant offers many really well-curated sequences of problems that help you to master all sorts of technical subjects. You all like physics, clearly, so I think that you would enjoy their courses on classical mechanics and gravitational physics, and honestly, group theory would give you a really good foundation. But there are many other great courses too, especially in math. If you go to brilliant.org/3b1b, that lets them know that you came from here, and also the first 200 people that go to that link are going to get 20% off the annual Brilliant Premium subscription. That's the subscription I've been using, and it's actually really fun to have a bank of these puzzles and problems. But of course, for those of you who want some more passive viewing, don't forget that Henry and I just put out a video on Bell's inequalities over on MinutePhysics. If for some reason you haven't been following MinutePhysics these days, and I don't know why you wouldn't have been, the videos there have been really top notch, so definitely take a moment to poke around the rest of his channel.
Winding numbers and domain coloring
There are two things here: the main topic and the meta topic. So the main topic is going to be this really neat algorithm for solving two-dimensional equations, things that have two unknown real numbers, or also those involving a single unknown which is a complex number. So for example, if you wanted to find the complex roots of a polynomial, or maybe some of those million-dollar zeros of the Riemann zeta function, this algorithm would do it for you. And this method is super pretty, since a lot of colors are involved. And more importantly, the core underlying idea applies to all sorts of math well beyond this algorithm for solving equations, including a bit of topology, which I'll talk about afterwards. But what really makes this worth 20 minutes or so of your time is that it illustrates a lesson much more generally useful throughout math, which is: try to define constructs that compose nicely with each other. You'll see what I mean by that as the story progresses. To motivate the case with functions that have 2D inputs and 2D outputs, let's start off simpler, with functions that just take in a real number and spit out a real number. If you want to know when a function f of x equals some other function g of x, you might think of this as searching for when the graphs of those functions intersect, right? I mean, that gives you an input where both functions have the same output. To take a very simple example, imagine f of x is x squared, and g of x is the constant function 2. In other words, you want to find the square root of 2. Even if you know almost nothing about finding square roots, you can probably see that 1 squared is less than 2, and 2 squared is bigger than 2, so you realize, ah, there's going to be some solution in between those two values. And then if you wanted to narrow it down further, maybe you'd try squaring the halfway point, 1.5. And this comes out to be 2.25, a little bit too high, so you would focus on the region between 1 and 1.5. And so on; you can probably see how this would keep going: you keep computing the midpoint and then chopping your search space in half. Now another way to think about this, which is going to make it easier for us once we get up to higher dimensions, is to instead focus on the equivalent question of when the difference between these two functions is zero. In those terms, we found a region of inputs where that difference was negative on one end and positive on the other end. Then we split it into two, and the half that we narrowed our attention to was the one where the outermost points had varying signs. And like this, we were able to keep going forever, taking each region with varying signs on the border, finding a smaller such region among its halves, knowing that ultimately we have to be narrowing in on a point which is going to be exactly zero. So in short, solving equations can always be framed as finding when a certain function is equal to zero. And to do that, we have this heuristic: if f is positive at one point and negative at another point, you can find some place in between where it's zero, at least if everything changes smoothly, with no sudden jumps. Now the amazing thing that I want to show you is that you can extend this kind of thinking into two-dimensional equations, equations between functions whose inputs and outputs are both two-dimensional. For example, complex numbers are 2D, and this tool that we're developing is perfect for finding solutions to complex equations.
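That one-dimensional heuristic fits in a few lines of code; here is a hypothetical editorial Python sketch of the bisection idea, finding the square root of 2 as a zero of x squared minus 2:

def bisect(f, lo, hi, steps=40):
    # assumes f(lo) and f(hi) have different signs
    for _ in range(steps):
        mid = (lo + hi) / 2
        # keep the half whose outermost points still have varying signs
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(bisect(lambda x: x**2 - 2, 1.0, 2.0))    # approximately 1.41421356...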
Now, since we're going to be talking about these 2D functions so much, let's take a brief side step and consider how we illustrate these. I mean, graphing a function with a 2D input and a 2D output would require four dimensions, and that's not really going to work so well in our 3D world on our 2D screens, but we still have a couple of good options. One is to just look at both the input space and the output space side by side. Each point in the input space moves to a particular point in the output space, and I can show how moving around that input point corresponds to certain movements in the output space. All of the functions we consider will be continuous, in the sense that small little changes to the input only correspond to small little changes in the output, with no sudden jumps. Now, another option we have is to imagine the arrow from the origin of the output space to that output point, and to attach a miniature version of that arrow to the input point. This can give us a sense at a glance for where a given input point goes, or where many different input points go, by drawing the full vector field. Unfortunately, when you do this at a lot of points, it can get pretty cluttered, so here, let me make all of the arrows the same size. What this means is we can get a sense just of the direction of each output point. But perhaps the prettiest way to illustrate two-dimensional functions, and the one we'll be using for most of this video, is to associate each point in that output space with a color. Here, we've used hues, that is, where the color falls along a rainbow or a color wheel, to correspond to the direction away from the origin, and we're using darkness or brightness to correspond to the distance from the origin. For example, focusing just on this ray of outputs, all of these points are red, but the ones closer to the origin are a little darker, and the ones farther away are a little brighter. And focusing just on this ray of outputs, all of the points are green, and again, closer to the origin means darker, farther away means lighter. And so on. All we're doing here is assigning a specific color to each direction, all changing continuously. And you might notice the darkness and brightness differences here are pretty subtle, but for this video, all we care about is the direction of outputs, not the magnitudes; the hues, not the brightness. The one important thing about brightness for you to notice is that near the origin, which has no particular direction, all of the colors fade to black. So for thinking about functions, now that we've decided on a color for each output, we can visualize 2D functions by coloring each point in the input space based on the color of the point where it lands in the output space. I like to imagine many different points from that input space hopping over to their corresponding outputs in the output space, then getting painted based on the color of the point where they land, and then hopping back to where they came from in the input space. Doing this for every point in the input space, you can get a sense, just by looking at that input space, for roughly where the function takes each point. For example, this stripe of pink points on the left tells us that all of those points get mapped somewhere in the pink direction, at the lower left of the output space. And those three points which are black, with lots of colors around them, are the ones that go to zero.
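As an editorial sketch of that coloring scheme, this Python snippet maps an output's direction to a hue and its distance from the origin to brightness; the standard-library colorsys module converts the hue and brightness pair to RGB:

import cmath
import colorsys

def color_of(output):
    # the direction away from the origin sets the hue...
    hue = (cmath.phase(output) / (2 * cmath.pi)) % 1.0
    # ...and the distance from the origin sets the brightness, fading to black at zero
    brightness = min(1.0, abs(output))
    return colorsys.hsv_to_rgb(hue, 1.0, brightness)

f = lambda z: z**5 - z - 1                 # the polynomial that shows up later in the video
print(color_of(f(complex(1.2, 0.5))))      # the RGB color painted at that input point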
Alright, so just like the 1D case, solving equations of two-dimensional functions can always be reframed by asking when a certain function is equal to zero. So that's our challenge right now: create an algorithm that finds which input points of a given 2D function go to zero. Now, you might point out that if you're looking at a color map like this, by seeing those black dots, you already know where the zeros of the function are. So, does that count? Well, keep in mind that to create a diagram like this, we've had the computer compute the function at all of the pixels on the plane. But our goal is to find a more efficient algorithm that only requires computing the function at as few points as possible, only having a limited view of the colors, so to speak. And also, from a more theoretical standpoint, it'd be nice to have a general construct that tells us the conditions for whether or not a zero exists inside a given region. Now remember, in one dimension, the main insight was that if a continuous function is positive at one point and negative at another, then somewhere in between, it must be zero. So how do we extend that into two dimensions? We need some sort of analog of talking about signs. Well, one way to think about what signs even are is directions. Positive means you're pointing to the right along the number line, and negative means you're pointing to the left. Two-dimensional quantities also have direction, but for them the options are much wider: they can point anywhere along a whole circle of possibilities. So, in the same way that in one dimension we were asking whether a given function is positive or negative on the boundary of a range, which is just two points, for 2D functions we're going to be looking at the boundary of a region, which is a loop, and ask about the direction of the function's output along that boundary. For example, we see that along this loop around this zero, the output goes through every possible direction, all of the colors of the rainbow, red, yellow, green, blue, and back to red, and everything in between along the way. But along this loop over here, with no zeros inside of it, the output doesn't go through every color. It goes through some of the orangeish ones, but never, say, green or blue. And this is promising. It looks a lot like how things worked in one dimension. Maybe, in the same way that if a 1D function takes both possible signs on the boundary of a 1D region there must be a zero somewhere inside, we might hypothesize that if a 2D function hits outputs of all possible directions, all possible colors, along the boundary of a 2D region, then somewhere inside that region, it must go to zero. So that's our guess. Take a moment to think about whether this should be true, and if so, why. If we start thinking about a tiny loop around some input point, we know that since everything is continuous, our function takes it to some tiny loop near the corresponding output. But look: for most tiny loops, the output barely varies in color. If you pick any output point other than zero and draw a sufficiently tight loop near it, the loop's colors are all going to be about the same color as that point. A tight loop over here is all blueish; a tight loop over here is going to be all yellowish. You certainly aren't going to get every color of the rainbow. The only point you can tighten loops around while still getting all of the colors is the colorless origin, zero itself.
So it is indeed the case that if you have loops going through every color of the rainbow, tightening and tightening, narrowing in on a point, then that point must in fact be a zero. And so let's set up a 2D equation solver, just like our one-dimensional equation solver: when we find a large region whose border goes through every color, split it into two, and then look at the colors on the boundary of each half. In the example shown here, the border of the left half doesn't actually go through all colors. There are no points that map to the orangeish-yellowish directions, for example. So I'll gray out this area, as a way of saying we don't want to search it any further. Now, the right half does go through all of the colors. It spends a lot of time in the green direction, then passes through yellow, orange, and red, as well as blue, violet, and pink. Now remember, what that means is that points of this boundary get mapped to outputs of all possible directions. So we'll explore it further, subdividing again and checking the boundary for each region. And the boundary of the top is all green, so we'll stop searching there. But the bottom is colorful enough to deserve a subdivision. And just continue like this: check which subregion has a boundary covering all possible colors, meaning points of that boundary get mapped to all possible directions, and keep chopping those regions in half, like we did for the one-dimensional case. Eventually leading us to a zero of the func... well, hmm, actually, hang on a second. What happened here? Neither of those last subdivisions on the bottom right passed through all the colors. So our algorithm stopped, because it didn't want to search through either of those, but it also didn't find a zero. Okay, clearly something's wrong here. And that's okay; being wrong is a regular part of doing math. If we look back, we had this hypothesis, and it led us to this proposed algorithm, so we were mistaken somewhere. And being good at math is not about being right the first time. It's about having the resilience to carefully look back, understand the mistakes, and understand how to fix them. Now, the problem here is that we had a region whose border went through every color, but when we split it in the middle, neither subregion's border went through every color. We had no options for where to keep searching next, and that broke the zero finder. Now, in one dimension, this sort of thing never happened. Anytime you had an interval whose endpoints have different signs, if you split it up, you know that you're guaranteed to get some subinterval whose endpoints also have different signs. Or, put another way, anytime you have two intervals whose endpoints don't change signs, if you combine them, you'll get a bigger interval whose endpoints also don't change sign. But in two dimensions, it's possible to find two regions whose borders don't go through every color, but whose borders combine to give a region whose border does go through every color. And in just this way, our proposed zero-finding algorithm broke. In fact, if you think about it, you can find a big loop going through every possible color, without there being a zero inside of it. Now, that's not to say that we were wrong in our claims about tiny loops, when we said that a forever-narrowing loop going through every color has to be narrowing in on a zero.
But what made a mess of things for us is that this "does my border go through every color or not" property doesn't combine in a nice, predictable way when you combine regions. But don't worry. It turns out we can modify this slightly, to a more sophisticated property that does combine to give us what we want. The idea is that instead of simply asking whether we can find a color at some point along the loop, let's keep more careful track of how these colors change as we walk around that loop. Let me show you what I mean with an example. I'll keep a little color wheel up here in the corner, to help us keep track. When the colors along a path of inputs move through the rainbow in a specific direction, from red to yellow, yellow to green, green to blue, or blue to red, the output is swinging clockwise. But on the other hand, if the colors move the other way through the rainbow, from blue to green, green to yellow, yellow to red, or red to blue, the output is swinging counterclockwise. So walking along this short path here, the colors wind a fifth of the way clockwise through the color wheel. And walking along this path here, the colors wind another fifth of the way clockwise through the color wheel. And of course, that means that if you go through both paths, one after the other, the colors wind a total of two-fifths of a full turn clockwise. The total amount of winding just adds up, and this is going to be key. This is the kind of straightforward combining that will be useful to us. And when I say total amount of winding, I want you to imagine an old-fashioned odometer that ticks forward as the arrow spins clockwise, but backwards as the arrow spins counterclockwise. So counterclockwise winding counts as negative clockwise winding. The outputs may turn a lot, but if some of that turning is in the opposite direction, it cancels out. For example, if you move forward along this path, and then move backwards along that same path, the total amount of winding ends up just being zero. The backwards movement literally rewinds through the previously seen colors, reversing all the previous winding and returning the odometer back to where it started. For our purposes, we'll care most about looking at the winding along loops. For example, let's say we walk around this entire loop clockwise. The outputs that we come across wind around a total of three full clockwise turns. The colors swung through the rainbow, ROYGBIV, in order, from red to red again, and then again, and again. In the jargon mathematicians use, we say that along this loop, the total winding number is three. Now, for other loops, it could be any other whole number. Maybe a larger one, if the output swings around many times as the input walks around a single loop. Or it could be a smaller number, if the output only swings around once or twice. Or that winding number could even be a negative integer, if the output was swinging counterclockwise as we walk clockwise around the loop. But along any loop, this total amount of winding has to be a whole number. I mean, by the time you get back to where you started, you'll have the same output that you started with. Incidentally, if a path actually contains a point where the output is precisely zero, then technically you can't define a winding number along it, since the output has no particular direction. Now, this isn't going to be a problem for us, because our whole goal is to find zeros, so if this ever comes up, we just lucked out early.
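Here's how that total winding might be computed along a sampled loop, in a hypothetical editorial Python sketch; it adds up the small changes in the output's angle, each wrapped to the nearest equivalent angle, which is safe as long as neighboring samples are close together. Note that it counts counterclockwise turns as positive, the opposite of the video's clockwise convention:

import cmath
import math

def winding_number(f, points):
    # points: samples along a closed loop of inputs, in order
    total = 0.0
    angles = [cmath.phase(f(p)) for p in points]
    for a0, a1 in zip(angles, angles[1:] + angles[:1]):
        # wrap each step so we always track the short way around the color wheel
        total += (a1 - a0 + math.pi) % (2 * math.pi) - math.pi
    return round(total / (2 * math.pi))    # a whole number on any closed loop

# example: f(z) = z winds exactly once around a loop enclosing its zero at the origin
loop = [cmath.exp(2j * math.pi * k / 100) for k in range(100)]
print(winding_number(lambda z: z, loop))    # 1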
Alright, so the main thing to notice about these winding numbers is that they add up nicely when you combine paths into bigger paths. But what we really want is for the winding numbers along the borders of regions to add up nicely when we combine regions to make bigger regions. So do we have that property? Well, take a look. The winding number as we go clockwise around this region on the left is the sum of the winding numbers from these four paths. And the winding as we go clockwise around this region on the right is the sum of the winding numbers from these four paths. And when we combine those two regions into a bigger one, most of those paths become part of the clockwise border of the bigger region. And as for those two paths that don't, well, they cancel out perfectly. One of them is just the reverse, the rewinding, of the other one, like we saw before. So the winding numbers along borders of regions add up in just the way that we want them to. Also, side note, this reasoning about oriented borders adding up nicely like this comes up a lot in mathematics, and it often goes under the name Stokes' theorem. Those of you who've studied multivariable calculus might recognize it from that context. So now, finally, with winding numbers in hand, we can get back to our equation-solving goals. The problem with the region we saw earlier is that even though its border passed through all possible colors, the winding number was actually zero. The outputs wound around halfway, through yellow to red, then started going counterclockwise back in the other direction, continuing through blue and hitting red from the other way, all in such a way that the total winding netted out to be zero. But if you find a loop which not only hits every color, but has the stronger condition of a nonzero winding number, then if you were to split it in half, you're guaranteed that at least one of those halves has a nonzero winding number as well, since things add up nicely in the way we want them to. So in this way, you can keep going, narrowing in further and further onto one point. And as you narrow in onto a point, you'll be doing so with tiny loops that have nonzero winding numbers, which implies they go through all possible colors. And therefore, like I said before, the point they're narrowing in on must be a zero. And that's it. We have now created a two-dimensional equation solver, and this time, I promise, there are no bugs. Winding numbers are precisely the tool we need to make this work. We can now solve equations that look like "where does f of x equal g of x" in two dimensions, just by considering how the difference between f and g winds around. Whenever we have a loop whose winding number isn't zero, we can run this algorithm on it, and we're guaranteed to find a solution somewhere within it. And what's more, just like in one dimension, this algorithm is incredibly efficient. We keep cutting the size of our region in half each round, thus quickly narrowing in on the zeros. And all the while, we only have to check the value of the function along points of these loops, rather than checking it on the many, many points of the interior. So in some sense, the overall work done is proportional only to the search space's perimeter, not the full area, which is amazing. Now once you understand what's going on, it is weirdly mesmerizing to just watch this in action, giving it some function and letting it search for zeros. Like I said before, complex numbers, they're two-dimensional.
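Putting the pieces together, a sketch of the repaired solver might look like the following; it reuses the winding_number function from the previous sketch, and the rectangle subdivision is just one illustrative way to implement the halving.

```python
def rect_loop(x0, y0, x1, y1):
    """Parametrize the boundary of a rectangle counterclockwise,
    for t running from 0 to 1."""
    def loop(t):
        t *= 4
        if t < 1: return complex(x0 + (x1 - x0) * t, y0)
        if t < 2: return complex(x1, y0 + (y1 - y0) * (t - 1))
        if t < 3: return complex(x1 - (x1 - x0) * (t - 2), y1)
        return complex(x0, y0 + (y1 - y0) * (4 - t))
    return loop

def find_zero(f, x0, y0, x1, y1, tol=1e-6):
    """Repeatedly halve a rectangle whose boundary has nonzero winding
    number; assumes winding_number() from the previous sketch. A robust
    version would adapt the sampling when a zero sits near a boundary."""
    while max(x1 - x0, y1 - y0) > tol:
        if x1 - x0 > y1 - y0:                      # split the longer side
            xm = (x0 + x1) / 2
            halves = [(x0, y0, xm, y1), (xm, y0, x1, y1)]
        else:
            ym = (y0 + y1) / 2
            halves = [(x0, y0, x1, ym), (x0, ym, x1, y1)]
        for h in halves:
            if round(winding_number(f, rect_loop(*h))) != 0:
                x0, y0, x1, y1 = h                 # descend into this half
                break
        else:
            raise RuntimeError("no half had nonzero winding; sampling too coarse?")
    return complex((x0 + x1) / 2, (y0 + y1) / 2)

# Example: f(z) = z^2 + 1 has a zero at i inside this rectangle.
print(find_zero(lambda z: z * z + 1, -1.3, 0.1, 2.1, 2.7))   # close to 1j
```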
So let's apply it to some equation with complex numbers. For example, here's the algorithm finding the zeros of the function x to the fifth minus x minus 1 over the complex plane. It started by considering a very large region around the origin, which ended up having a winding number of 5. Each time you find a loop with a nonzero winding number, you split it in half and figure out the winding number of the two smaller loops. Either one or both of them is guaranteed to have a nonzero winding number, and when you see this, you know that there's a zero somewhere inside that smaller loop. So you keep going in the same way, searching the smaller space. We also stop exploring a region if the path that we're computing along happens to stumble across a zero, which actually happened once for this example on the right half here. Those rare occurrences interfere with our ability to compute winding numbers, but hey, we got a zero. As for loops whose winding number is zero, you just don't explore those further. Maybe they have a solution inside, maybe they don't, we have no guarantees. And letting our equation solver continue in the same way, it eventually converges to lots of zeros for this polynomial. By the way, it is no coincidence that the total winding number in this example happened to be 5. With complex numbers, the operation x to the n directly corresponds to walking around the output's origin n times as you walk around the input's origin once. So with the polynomial, for large enough inputs, every term other than the leading term becomes insignificant in comparison. So any complex polynomial whose leading term is x to the n has a winding number of n around a large enough loop. And in that way, our winding number technology actually guarantees that every complex polynomial has a zero. This is such an important fact that mathematicians call it the fundamental theorem of algebra. Having an algorithm for finding numerical solutions to equations like this is extremely practical. But the fundamental theorem of algebra is a good example of how these winding numbers are also quite useful on a theoretical level, guaranteeing the existence of a solution to a broad class of equations under suitable conditions, which is much more the kind of thing mathematicians like thinking about. I'll show you a couple more amazing applications of this in the context of topology in a follow-up video, which includes correcting a mistake from an old 3blue1brown video. Which one? Well, watch all of the videos, everything on this channel, and see if you can spot the error first. The primary author of this video is one of the newest 3blue1brown team members, Sridhar Ramesh.
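As a quick check of that degree claim, here's what it might look like to measure the winding of this exact polynomial around a large loop, reusing the earlier winding_number sketch; the radius 10 is an arbitrary "large enough" choice.

```python
import cmath

# Assuming winding_number() from the sketch above:
p = lambda z: z**5 - z - 1
big_circle = lambda t: 10 * cmath.exp(2j * cmath.pi * t)
print(round(winding_number(p, big_circle)))   # 5, the degree of the polynomial
```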
Linear combinations, span, and basis vectors | Chapter 2, Essence of linear algebra
In the last video, along with the ideas of vector addition and scalar multiplication, I described vector coordinates, where there's this back and forth between, for example, pairs of numbers and two-dimensional vectors. Now, I imagine that vector coordinates were already familiar to a lot of you, but there's another kind of interesting way to think about these coordinates, which is pretty central to linear algebra. When you have a pair of numbers that's meant to describe a vector, like (3, -2), I want you to think about each coordinate as a scalar, meaning think about how each one stretches or squishes vectors. In the xy coordinate system, there are two very special vectors: the one pointing to the right with length 1, commonly called i-hat, or the unit vector in the x direction, and the one pointing straight up with length 1, commonly called j-hat, or the unit vector in the y direction. Now, think of the x-coordinate of our vector as a scalar that scales i-hat, stretching it by a factor of 3, and the y-coordinate as a scalar that scales j-hat, flipping it and stretching it by a factor of 2. In this sense, the vector that these coordinates describe is the sum of two scaled vectors. That's a surprisingly important concept, this idea of adding together two scaled vectors. Those two vectors i-hat and j-hat have a special name, by the way; together they're called the basis of a coordinate system. What this means, basically, is that when you think about coordinates as scalars, the basis vectors are what those scalars actually scale. There's also a more technical definition, but I'll get to that later. By framing our coordinate system in terms of these two special basis vectors, it raises a pretty interesting and subtle point: we could have chosen different basis vectors and gotten a completely reasonable, new coordinate system. For example, take some vector pointing up and to the right, along with some other vector pointing down and to the right, in some way. Take a moment to think about all the different vectors that you can get by choosing two scalars, using each one to scale one of the vectors, then adding together what you get. Which two-dimensional vectors can you reach by altering the choices of scalars? The answer is that you can reach every possible two-dimensional vector, and I think it's a good puzzle to contemplate why. A new pair of basis vectors like this still gives us a valid way to go back and forth between pairs of numbers and two-dimensional vectors. But the association is definitely different from the one that you get using the more standard basis of i-hat and j-hat. This is something I'll go into much more detail on later, describing the exact relationship between different coordinate systems. But for right now, I just want you to appreciate the fact that any time we describe vectors numerically, it depends on an implicit choice of what basis vectors we're using. So, any time that you're scaling two vectors and adding them like this, it's called a linear combination of those two vectors. Where does this word linear come from? Why does this have anything to do with lines? Well, this isn't the etymology, but one way I like to think about it is that if you fix one of those scalars and let the other one change its value freely, the tip of the resulting vector draws a straight line. Now, if you let both scalars range freely and consider every possible vector that you can get, there are two things that can happen.
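As a small illustration of reading coordinates as scalars, here's a hedged NumPy sketch; the alternative basis vectors are arbitrary numbers picked for the example.

```python
import numpy as np

i_hat = np.array([1.0, 0.0])   # unit vector in the x direction
j_hat = np.array([0.0, 1.0])   # unit vector in the y direction

# The coordinates (3, -2) read as: scale i-hat by 3, scale j-hat by -2
# (the flip-and-stretch described above), then add the results.
v = 3 * i_hat + (-2) * j_hat
print(v)   # [ 3. -2.]

# Any other pair of basis vectors gives a different association between
# pairs of numbers and vectors; same scalars, different resulting vector:
b1 = np.array([1.0, 2.0])      # points up and to the right
b2 = np.array([3.0, -1.0])     # points down and to the right
print(3 * b1 + (-2) * b2)      # [-3.  8.]
```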
For most pairs of vectors, you'll be able to reach every possible point in the plane; every two-dimensional vector is within your grasp. However, in the unlucky case where your two original vectors happen to line up, the tip of the resulting vector is limited to just a single line passing through the origin. Actually, technically there's a third possibility too: both your vectors could be zero, in which case you'd just be stuck at the origin. Here's some more terminology. The set of all possible vectors that you can reach with a linear combination of a given pair of vectors is called the span of those two vectors. So, restating what we just saw in this lingo, the span of most pairs of 2D vectors is all vectors of 2D space, but when they line up, their span is all vectors whose tips sit on a certain line. Remember how I said that linear algebra revolves around vector addition and scalar multiplication? Well, the span of two vectors is basically a way of asking, what are all the possible vectors you can reach using only these two fundamental operations, vector addition and scalar multiplication? This is a good time to talk about how people commonly think about vectors as points. It gets really crowded to think about a whole collection of vectors sitting on a line, and more crowded still to think about all two-dimensional vectors all at once, filling up the plane. So, when dealing with collections of vectors like this, it's common to represent each one with just a point in space, the point at the tip of that vector, where, as usual, I want you thinking about that vector with its tail on the origin. That way, if you want to think about every possible vector whose tip sits on a certain line, just think about the line itself. Likewise, to think about all possible two-dimensional vectors all at once, conceptualize each one as the point where its tip sits. So, in effect, what you'll be thinking about is the infinite flat sheet of two-dimensional space itself, leaving the arrows out of it. In general, if you're thinking about a vector on its own, think of it as an arrow, and if you're dealing with a collection of vectors, it's convenient to think of them all as points. So, for our span example, the span of most pairs of vectors ends up being the entire infinite sheet of two-dimensional space, but if they line up, their span is just a line. The idea of span gets a lot more interesting if we start thinking about vectors in three-dimensional space. For example, if you take two vectors in 3D space that are not pointing in the same direction, what does it mean to take their span? Well, their span is the collection of all possible linear combinations of those two vectors, meaning all possible vectors you get by scaling each of the two of them in some way and then adding them together. You can kind of imagine turning two different knobs to change the two scalars defining the linear combination, adding the scaled vectors and following the tip of the resulting vector. That tip will trace out some kind of flat sheet cutting through the origin of three-dimensional space. This flat sheet is the span of the two vectors. Or, more precisely, the set of all possible vectors whose tips sit on that flat sheet is the span of your two vectors. Isn't that a beautiful mental image? So, what happens if we add a third vector and consider the span of all three of those guys? A linear combination of three vectors is defined pretty much the same way as it is for two.
You'll choose three different scalars, scale each of those vectors, and then add them all together. And again, the span of these vectors is the set of all possible linear combinations. Two different things could happen here. If your third vector happens to be sitting on the span of the first two, then the span doesn't change; you're sort of trapped on that same flat sheet. In other words, adding a scaled version of that third vector to the linear combination doesn't really give you access to any new vectors. But if you just randomly choose a third vector, it's almost certainly not sitting on the span of those first two. Then, since it's pointing in a separate direction, it unlocks access to every possible three-dimensional vector. One way I like to think about this is that as you scale that new third vector, it moves the sheet spanned by the first two around, sweeping it through all of space. Another way to think about it is that you're making full use of the three freely changing scalars that you have at your disposal to access the full three dimensions of space. Now, in the case where the third vector was already sitting on the span of the first two, or the case where two vectors happen to line up, we want some terminology to describe the fact that at least one of these vectors is redundant, not adding anything to our span. Whenever this happens, where you have multiple vectors and you could remove one without reducing the span, the relevant terminology is to say that they are linearly dependent. Another way of phrasing that would be to say that one of the vectors can be expressed as a linear combination of the others, since it's already in the span of the others. On the other hand, if each vector really does add another dimension to the span, they're said to be linearly independent. So with all of that terminology, and hopefully with some good mental images to go with it, let me leave you with a puzzle before we go. The technical definition of a basis of a space is a set of linearly independent vectors that span that space. Now, given how I described a basis earlier, and given your current understanding of the words span and linearly independent, think about why this definition would make sense. In the next video, I'll get into matrices and transforming space. See you then!
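One way to make the span and linear-dependence ideas concrete is through matrix rank; this is a sketch with arbitrarily chosen example vectors, not a method described above.

```python
import numpy as np

def spans_full_space(*vectors):
    """True when the vectors are linearly independent enough to span the
    whole space they live in (rank equals the dimension)."""
    m = np.column_stack(vectors)
    return np.linalg.matrix_rank(m) == m.shape[0]

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 1.0])
w_dep = 2 * u - 3 * v              # sits on the span of u and v: redundant
w_ind = np.array([1.0, 1.0, 0.0])  # pokes out of that flat sheet

print(spans_full_space(u, v, w_dep))  # False: linearly dependent
print(spans_full_space(u, v, w_ind))  # True: spans all of 3D space
```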
Beyond the Mandelbrot set, an intro to holomorphic dynamics
Today, I'd like to tell you about a piece of math known as holomorphic dynamics. This is the field which studies things like the Mandelbrot set, and in fact, one of my main goals today is to show you how this iconic shape, the poster child of math, pops up in a more general way than the initial definition might suggest. Now this field is also intimately tied to what we talked about in the last video, with Newton's fractal, and another goal of ours towards the end of this video will be to help tie up some of the loose ends that we had there. So first of all, this word holomorphic might seem a little weird. It refers to functions that have complex number inputs and complex number outputs, and which you can also take a derivative of. Basically, what it means to have a derivative in this context is that when you zoom in to how the function behaves near a given point, to the point and its neighbors, it looks roughly like scaling and rotating, like multiplying by some complex constant. We'll talk more about that in just a bit, but for now, know that it includes most of the ordinary functions that you could write down, things like polynomials, exponentials, trig functions, all of that. The relevant dynamics in the title here comes from asking what happens when you repeatedly apply one of these functions over and over, in the sense of evaluating it on some input, then evaluating the same function on whatever you just got out, and then doing that again, and again and again and again. Sometimes the pattern of points emerging from this gets trapped in a cycle. Other times, the sequence will just approach some kind of limiting point. Or maybe the sequence gets bigger and bigger and it flies off to infinity, which mathematicians also kind of think of as approaching a limit point, just the point at infinity. And other times still, they have no pattern at all, and they behave chaotically. What's surprising is that for all sorts of functions that you might write down, when you try to do something to visualize when these different possible behaviors arise, it often results in some insanely intricate fractal pattern. Those of you who watched the last video have already seen one neat example of this. There's this algorithm called Newton's method, which finds the root of some polynomial P, and the way it works is to basically repeatedly iterate the expression x minus P of x divided by P prime of x, P prime being the derivative. When your initial seed value is in the loose vicinity of a root to that polynomial, a value where P of x equals 0, this procedure produces a sequence of values that really quickly converges to that root. This is what makes it a useful algorithm in practice. But then we tried to do this in the complex plane, looking at the many possible seed values, and asking which root in the complex plane each one of these seed values might end up on. Then we associated a color with each one of the roots, and colored each pixel of the plane based on which root a seed value starting at that pixel would ultimately land on. The results we got were some of these insanely intricate pictures, with these rough fractal boundaries between the colors. Now in this example, if you look at the function that we're actually iterating, say for some specific choice of a polynomial, like z cubed minus 1, you can rewrite the whole expression to look like one polynomial divided by another. Mathematicians call these kinds of functions rational functions.
And if you forget the fact that this arose from Newton's method, you could reasonably ask what happens when you iterate any other rational function. And in fact, this is exactly what the mathematicians Pierre Fatou and Gaston Julia did in the years immediately following World War I, and they built up a surprisingly rich theory of what happens when you iterate these rational functions, which is particularly impressive given that they had no computers to visualize any of this the way that you and I can. I think this distinguishes Julia as easily being one of the greatest mathematicians of all time who had no nose. Now remember those two names, they'll come up a bit later. By far, the most popularized example of a rational function that you might study like this, and the fractals that can ensue, is one of the simplest functions, z squared plus c, where c is some constant. I'm going to guess that this is at least somewhat familiar to many of you, but it certainly doesn't hurt to quickly summarize the story here, since it can help set the stage for what comes later. For this game, we're going to think of c as a value that can be changed, and it'll be visible as this movable yellow dot. For the actual iterative process, we will always start with an initial value of z equals zero. So after iterating this function once, doing z squared plus c, you get c. If you iterate a second time, plugging in that value to the function, you get c squared plus c. And as I change around the value c here, you can kind of see how the second value moves in lockstep. Then we can plug in that second value to get z3, then that third value to get z4, and continue on like this, visualizing our chain of values. So if we keep doing this for the first many values, for some choices of c, this process remains bounded; you can still see it all on the screen. And other times, it looks like it blows up, and you can actually show that if it ever gets as big as 2, it'll blow up to infinity. If you color the points of the plane where it stays bounded black, and you assign some other gradient of colors to the divergent values, based on how quickly the process rushes off to infinity, you get one of the most iconic images in all of math, the Mandelbrot set. Now this interactive dot-dancing visualization of the trajectory, by the way, is heavily inspired by Ben Sparks' illustration in the Numberphile video he did about the Mandelbrot set, which is great, you should watch it. I honestly thought it was just too fun not to re-implement here. I would also highly recommend the interactive article on acko.net about all of this stuff, for any of you who haven't had the pleasure of reading that yet. What's nice about the Ben Sparks illustration is how it illuminates what each different part of the Mandelbrot set actually represents. This largest cardioid section includes the values of c such that the process eventually converges to some limit. The big circle on the left represents the values where the process gets trapped in a cycle between two values. And then the top and bottom circles show values where the process gets trapped in a cycle of three values, and so on like this; each one of these little islands kind of has its own meaning. Also notice, there's an important difference between how this Mandelbrot set and the Newton fractals we were looking at before are each constructed, beyond just a different underlying function.
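The bounded-versus-blows-up test is easy to sketch in code; the escape threshold of 2 is the one mentioned above, while the step cap and example values of c are arbitrary illustrative choices.

```python
def mandelbrot_escape(c, max_steps=100):
    """Iterate z -> z^2 + c from z = 0. Returns the step at which |z|
    first exceeds 2 (after which it is guaranteed to blow up), or
    max_steps if the orbit stayed bounded that long."""
    z = 0
    for n in range(max_steps):
        z = z * z + c
        if abs(z) > 2:
            return n            # diverged: gradient-colored by this count
    return max_steps            # (probably) bounded: colored black

print(mandelbrot_escape(-1))   # 100: the orbit 0, -1, 0, -1, ... stays bounded
print(mandelbrot_escape(1))    # 2: the orbit 0, 1, 2, 5, ... escapes almost immediately
```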
For the Mandelbrot set, we have a consistent seed value, z equals zero, but the thing we're tweaking is the parameter c, changing the function itself. So what you're looking at is what we might call a parameter space. But with Newton's fractal, we have a single unchanging function, but what we associate with each pixel is a different seed value for the process. Of course, we could play the same game with the map z squared plus c. We could fix c at some constant and then let the pixels represent the different possible initial values, z0. So whereas each pixel of the Mandelbrot set corresponds to a unique function, the images on the right each correspond to just a single function. As we change the parameter c, it changes the entire image on the right. And again, just to be clear, the rule being applied is that we color pixels black if the process remains bounded, and then apply some kind of gradient to the ones that diverge away to infinity, based on how quickly they diverge to infinity. In principle, and it's kind of mind-warping to think about, there is some four-dimensional space of all combinations of c and z0, and what we're doing here is kind of looking through individual two-dimensional slices of that unimaginable pattern. You'll often hear or read the images on the right being referred to as Julia sets or Julia fractals. And when I first learned about all this stuff, I'll admit that I was kind of left with the misconception that this is what the term Julia set refers to, specifically the z squared plus c case, and moreover that it's referring to the black region on the inside. However, the term Julia set has a much more general definition, and it would refer just to the boundaries of these regions, not the interior. To set the stage for a more specific definition, and to also make some headway towards the first goal that I mentioned at the start, it's worth stepping back and really just picturing yourself as a mathematician right now, discovering all of this. What would you actually do to construct a theory around this? It's one thing to look at some pretty pictures, but what sorts of questions would you ask if you actually want to understand it all? In general, if you want to understand something complicated, a good place to start is to ask if there are any parts of the system that have some simple behavior, preferably the simplest possible behavior. And in our example, that might mean asking when does the process just stay fixed in place, meaning f of z is equal to z. That's a pretty boring set of dynamics, I think you'd agree. We call a value with this property a fixed point of the function. In the case of the functions arising from Newton's method, by design, they have a fixed point at the roots of the relevant polynomial. You can verify for yourself: if p of z is equal to zero, then the entire expression is simply equal to z. That's what it means to be a fixed point. If you're into exercises, you may enjoy pausing for a moment and computing the fixed points of this Mandelbrot set function, z squared plus c. More generally, any rational function will always have fixed points, since asking when this expression equals z can always be rearranged as finding the roots of some polynomial expression. And from the fundamental theorem of algebra, this must have solutions, typically as many solutions as the highest degree in this expression.
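For comparison, the per-pixel test for the pictures on the right only swaps which quantity is fixed; a small sketch, where the particular value of c is an arbitrary choice (roughly the "Douady rabbit" parameter).

```python
def julia_bounded(z0, c, max_steps=100):
    """Same iteration as before, but with c held fixed and the seed z0
    varying per pixel: a point test for the filled-in Julia set of z^2 + c."""
    z = z0
    for _ in range(max_steps):
        z = z * z + c
        if abs(z) > 2:
            return False       # escaped: gradient-colored in the pictures
    return True                # stayed bounded: the black region

c = -0.123 + 0.745j            # one fixed function; every pixel uses this c
print(julia_bounded(0.0, c))   # True: this seed stays bounded
print(julia_bounded(1.5, c))   # False: this seed blows up right away
```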
Incidentally, this means that you could also find those fixed points using Newton's method, but maybe that's a little bit too meta for us right now. Now just asking about fixed points is maybe easy, but a key idea for understanding the full dynamics, and hence the diagrams that we're looking at, is to understand stability. We say that a fixed point is attracting if nearby points tend to get drawn in towards it, and repelling if they're pushed away. And this is something that you can actually compute explicitly, using the derivative of the function. Symbolically, when you take derivatives of complex functions, it looks exactly the same as it would for real functions, so something like z squared has a derivative of 2 times z. But geometrically, there's a really lovely way to interpret what this means. For example, at the input 1, the derivative of this particular function evaluates to be 2. And what that's telling us is that if you look at a very small neighborhood around that input, and you follow what happens to all the points in that little neighborhood as you apply the function, in this case z squared, then it looks just like you're multiplying by 2. This is what a derivative of 2 means. To take another example, let's look at the input i. We know that this function moves that input to the value negative 1, that's i squared. But the added information that its derivative at this value is 2 times i gives us the added picture that when you zoom in around that point, and you look at the action of the function on this tiny neighborhood, it looks like multiplication by 2i, which in this case is saying it looks like a 90 degree rotation combined with an expansion by a factor of 2. For the purposes of analyzing stability, the only thing we care about here is the growing and shrinking factor; the rotational part doesn't matter. So if you compute the derivative of a function at its fixed point, and the absolute value of this result is less than 1, it tells you that the fixed point is attracting, that nearby points tend to come in towards it. If that derivative has an absolute value bigger than 1, it tells you the fixed point is repelling; it pushes away its neighbors. For example, if you work out the derivative of our Newton's map expression, and you simplify a couple things a little bit, here's what you would get out. So if z is a fixed point, which in this context means that it's one of the roots of the polynomial p, this derivative is not only smaller than 1, it's equal to 0. These are sometimes called superattracting fixed points, since it means that a neighborhood around these points doesn't merely shrink, it shrinks a lot. And again, this is kind of by design, since the intent of Newton's method is to produce iterations that fall towards a root as quickly as they can. Pulling up our z squared plus c example, if you did the first exercise to find its fixed points, the next step would be to ask, when is at least one of those fixed points attracting? For what values of c is this going to be true? And then, if that's not enough of a challenge, try using the result that you find to show that this condition corresponds to the main cardioid shape of the Mandelbrot set. This is something you can compute explicitly; it's pretty cool. A natural next step would be to ask about cycles, and this is where things really start to get interesting. If f of z is not z, but some other value, and then that value comes back to z, it means that you've fallen into a two-cycle.
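The fixed-point exercise can also be checked numerically; this sketch solves z squared minus z plus c equals 0 with the quadratic formula and applies the |f'| test from above, with the two sample values of c chosen arbitrarily to land on either side of the cardioid.

```python
import cmath

def fixed_points_and_stability(c):
    """Fixed points of f(z) = z^2 + c solve z^2 - z + c = 0, so the
    quadratic formula gives them directly. Since f'(z) = 2z, the test is:
    |2z| < 1 means attracting, |2z| > 1 means repelling."""
    root = cmath.sqrt(1 - 4 * c)
    for z in ((1 + root) / 2, (1 - root) / 2):
        kind = "attracting" if abs(2 * z) < 1 else "repelling"
        print(f"fixed point {z:.4f}: |f'| = {abs(2 * z):.4f} ({kind})")

fixed_points_and_stability(0.2)   # inside the main cardioid: one attracting
fixed_points_and_stability(0.3)   # outside it: both fixed points repelling
```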
You could explicitly find these kinds of two-cycles by evaluating f of f of z, and then setting it equal to z. For example, with the z squared plus c map, f of f of z expands out to look like this. A little messy, but it's not too terrible. The main thing to highlight is that it boils down to solving some degree-four equation. You should note, though, that the fixed points will also be solutions to this equation, so technically the two-cycles are the solutions to this minus the solutions to the original fixed point equation. And likewise, you can use the same idea to look for n-cycles, by composing f with itself n different times. The explicit expressions that you would get quickly become insanely messy, but it's still elucidating to ask how many cycles you would expect based on this hypothetical process. If we stick with our simple z squared plus c example, as you compose it with itself, you get a polynomial with degree four, and then one with degree eight, and then degree sixteen, and so on and so on, exponentially growing the order of the polynomial. So in principle, if I asked you how many cycles are there with a period of one million, you can know that it's equivalent to solving some just absolutely insane polynomial expression with a degree of two to the one million. So again, fundamental theorem of algebra, you would expect to find something on the order of two to the one million points in the complex plane which cycle in exactly this way. And more generally, for any rational map, you'll always be able to find values whose behavior falls into a cycle with period n, and it ultimately boils down to solving some probably insane polynomial expression. And just like with this example, the number of such periodic points will grow exponentially with n. I didn't really talk about this in the last video about Newton's fractal, but it's sort of strange to think that there are infinitely many points that fall into some kind of cycle, even for a process like this. In almost all cases though, these points are somewhere on the boundary between those colored regions, and they don't really come up in practice, because the probability of landing on one of them is zero. What matters for actually falling into one of these is if one of the cycles is attracting, in the sense that a neighborhood of points around a value from that cycle would tend to get pulled in towards that cycle. A highly relevant question for someone interested in numerical methods is whether or not this Newton's map process ever has an attracting cycle, because if there is one, it means there's a non-zero chance that your initial guess gets trapped in that cycle, and it never finds a root. The answer here is actually yes. More explicitly, if you try to find the roots of z cubed minus 2z plus 2, and you're using Newton's method, watch what happens to a small cluster that starts around the value zero. It sort of bounces back and forth. And, well, okay, in this case the cluster we started with was a little bit too big, so some of the outer points get sprayed away, but here's what it looks like if we start with a smaller cluster. Notice how all of the points genuinely do shrink in towards the cycle between zero and one. It's not likely that you hit this with a random seed, but it definitely is possible. The exercise that you could do to verify that a cycle like this is attracting, by the way, would be to compute the derivative of f of f of z, and check that at the input zero, this derivative has a magnitude less than 1.
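That last exercise is easy to approximate numerically; here's a sketch that watches the zero-to-one cycle and estimates the chain-rule derivative with finite differences, where the seed, step count, and step size h are arbitrary choices.

```python
def newton_map(z):
    """One Newton's-method step for p(z) = z^3 - 2z + 2."""
    p  = z**3 - 2 * z + 2
    dp = 3 * z**2 - 2
    return z - p / dp

# A seed near zero falls into the 2-cycle 0 -> 1 -> 0 -> ..., not a root.
z = 0.01
for _ in range(20):
    z = newton_map(z)
print(z)   # lands on (or extremely near) 0 or 1, never a root of p

# Stability check: by the chain rule, (f o f)'(0) = f'(f(0)) * f'(0).
# Estimate each factor with a tiny finite difference.
h = 1e-7
df = lambda z: (newton_map(z + h) - newton_map(z)) / h
print(abs(df(newton_map(0)) * df(0)))   # well below 1: the cycle attracts
```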
The thing that blew my mind a little is what happens when you try to visualize which cubic polynomials have attracting cycles at all. Hopefully, if Newton's method is going to be at all decent at finding roots, those attracting cycles should be rare. First of all, to better visualize the one example we're looking at, we could draw the same fractal that we had before, coloring each point based on what root the seed value starting at that point will tend to. But this time, we'll add one more condition: if the seed value never gets close enough to a root at all, we color the pixel black. Notice, if I tweak the roots, meaning that we're trying out different cubic polynomials, it's actually really hard to find any place to put them so that we see any black pixels at all. I can find this one little sweet spot here, but it's definitely rare. Now what I want is some kind of way to visualize every possible cubic polynomial at once with a single image, in a way that shows which ones have attracting cycles. Luckily, it turns out that there is a really simple way to test whether or not one of these polynomials has an attracting cycle. All you have to do is look at the seed value which sits at the average of the three roots, this center of mass here. It turns out, and this is not at all obvious, that if there's an attracting cycle, you can guarantee that this seed value will fall into that attracting cycle. In other words, if there are any black points, this will be one of them. If you want to know where this magical fact comes from, it stems from a theorem of our good friend Fatou. He showed that if one of these rational maps has an attracting cycle, you can look at the values where the derivative of your iterated function equals zero, and at least one of those values has to fall into the cycle. That might seem like a little bit of a weird fact, but the loose intuition is that if a cycle is going to be attracting, at least one of its values should have a very small derivative; that's where the shrinking will come from. And this in turn means that that value in the cycle sits near some point where the derivative is not merely small but equal to zero, and that point ends up being close enough to get sucked into the cycle. This fact also justifies why, with the Mandelbrot set, where we were only using the one seed value z equals zero, it's still enough to get us a very full and interesting picture. If there's a stable cycle to be found, that one seed value is definitely going to find it. I feel like maybe I'm assigning a little too much homework and exercises today, but if you're into that, yet another pleasing one would be to look back at the derivative expression that we found for our function that arises from Newton's method, and use this wonderful theorem of Fatou's to show our magical fact about cubic polynomials, that it suffices to just check this midpoint of the roots. Honestly though, all of those are details that you don't really have to worry about. The upshot is that we can perform a test for whether or not one of these polynomials has an attracting cycle by looking at just a single point, not all of them. And because of this, we can actually generate a really cool diagram. The way this will work is to fix two roots in place, let's say putting them at z equals negative one and z equals positive one, and then we'll move around that third root, which I'll call lambda. Remember, the key feature that we're looking for is when the point at the center of mass is black.
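Here's how that single-seed test might look in code, as a rough sketch; the tolerance and iteration cap are arbitrary, and the roots in the second call are hand-computed approximations to the roots of z cubed minus 2z plus 2, the example from earlier.

```python
import numpy as np

def has_attracting_cycle(r1, r2, r3, steps=500, tol=1e-9):
    """Fatou-based test sketch: for p(z) = (z - r1)(z - r2)(z - r3), run
    Newton's method from the average of the roots. If that one seed fails
    to land on a root, an attracting cycle is trapping it."""
    roots = np.array([r1, r2, r3], dtype=complex)
    z = roots.mean()                 # the center of mass of the roots
    for _ in range(steps):
        p  = np.prod(z - roots)
        dp = sum(np.prod(z - np.delete(roots, i)) for i in range(3))
        if abs(dp) < 1e-30:
            break                    # derivative vanished; give up
        z = z - p / dp
        if abs(z - roots).min() < tol:
            return False             # found a root: no trap detected
    return True

print(has_attracting_cycle(1, -1, 0.5))   # False: a typical cubic behaves
print(has_attracting_cycle(-1.7693, 0.8846 + 0.5897j, 0.8846 - 0.5897j))
# True: approximately z^3 - 2z + 2, whose centroid seed falls into the 2-cycle
```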
So what we'll do is draw a second diagram on the right, where each pixel corresponds to one possible choice of lambda. What we're going to do is color that pixel based on the color of this midpoint of the three roots. If this feels a little bit confusing, that's totally okay; there are kind of a lot of layers at play here. Just remember, each pixel on the right corresponds to a unique polynomial, as determined by this parameter lambda. In fact, you might call this a parameter space. Sound familiar? Points in this parameter space are colored black if and only if the Newton's method process for the corresponding polynomial produces an attracting cycle. Again, it's okay if that takes a little moment to digest. Now, at first glance, it might not look like there are any black points at all on this diagram. And this is good news; it means that in most cases, Newton's method will not get sucked into cycles like this. But, and I think I've previewed this enough that you know exactly where this is going, if we zoom in, we can find a black region, and that black region looks exactly like a Mandelbrot set. Yet again, asking a question where we tweak a parameter for one of these functions yields this iconic cardioid-and-bubbles shape. The upshot is that this shape is not as specific to the z squared plus c example as you might think. It seems to relate to something more general and universal about parameter spaces with processes like this. Still, one pressing question is why we get fractals at all. In the last video, I talked about how the diagrams for Newton's method have this very peculiar property, where if you draw a small circle around the boundary of a colored region, that circle must actually include all available colors from the picture. And this is true more generally for any rational map. If you were to assign colors to regions based on which limiting behavior points fall into, like which limit point, which limit cycle, or whether it tends to infinity, then tiny circles that you draw either contain points with just one of those limiting behaviors, or they contain points with all of them; it's never anything in between. So in the case where there's at least three colors, this property implies that our boundary could never be smooth, since along a smooth segment, you can draw a small enough circle that touches just two colors, not all of them. And empirically, this is what we see. No matter how far you zoom in, these boundaries are always rough. And furthermore, you might notice that as we zoom in, you can always see all available colors within the frame. This doesn't explain rough boundaries in the context where there's only two limiting behaviors, but still, it's a loose end that I left in that video worth tying up. And it's a nice excuse to bring in two important bits of terminology: Julia sets and Fatou sets. If a point eventually falls into some stable, predictable pattern, we say that it's part of the Fatou set of our iterated function. And for all the maps that we've seen, this includes almost everything. The Julia set is everything else, which in the pictures we've seen would be the rough boundaries between the colored regions: what happens as you transition from one stable attractor to another. For example, the Julia set will include all of the repelling cycles and the repelling fixed points. A typical point from the Julia set, though, will not be a cycle; it'll bounce around forever with no clear pattern.
Now, if you look at a point in the Fatou set and you draw a small enough disk around it, as you follow the process, that small disk will eventually shrink as you fall into whatever the relevant stable behavior is. Unless you're going to infinity, but you could kind of think of that as the disk shrinking around infinity; maybe that just confuses matters. By contrast, if you draw a small disk around a point on the Julia set, it tends to expand over time, as the points from within that circle go off and kind of do their own things. In other words, points of the Julia set tend to behave chaotically. Their nearby neighbors, even very nearby, will eventually fall into qualitatively different behaviors. But it's not merely that this disk expands. A pretty surprising result, key to the multi-color property mentioned before, is that if you let this process play out, that little disk eventually expands so much that it hits every single point on the complex plane, with at most two exceptions. This is known as the stuff-goes-everywhere principle of Julia sets. Okay, it's not actually called that; in the source I was reading from, it's mentioned as a corollary to something known as Montel's theorem, but it should be called that. In some sense, what this is telling us is that the points of the Julia set are not merely chaotic; they're kind of as chaotic as they possibly can be. Here, let me show you a little simulation using the Newton's map, with a cluster of a few thousand points all starting from within a tiny, tiny distance, one one-millionth, from a point on the Julia set. Of course, the stuff-goes-everywhere principle is about the uncountably infinitely many points that would lie within that distance, and says that they eventually expand out to hit everything on the plane, except possibly two points. But this little cluster should still give the general idea: a small finite sample from that tiny disk gets sprayed all over the place, in seemingly all directions. What this means for our purposes is that if there's some attractive behavior of our map, something like an attracting fixed point or an attracting cycle, you can be guaranteed that the values from that tiny disk around the point on the Julia set, no matter how tiny it was, will eventually fall into that attracting behavior. If we have a case with three or more attracting behaviors, this gives us some explanation for why the Julia set is not smooth, why it has to be complicated. Even still, this might not be entirely satisfying, because it kicks the can one more step down the road, raising the question of why this stuff-goes-everywhere principle is true in the first place. Like I mentioned, it comes from something called Montel's theorem, and I'm choosing not to go into the details there, because honestly, it's a lot to cover. The proof I could find ends up leaning on something known as the j-function, which is a whole intricate story in its own right. I will of course leave links and resources in the description for any of you who are hungry to learn more. And if you know of a simpler way to see why this principle is true, I'm definitely all ears. I should also say, as a brief side note, that even though the pictures we've seen so far have a Julia set with an area of zero, kind of the boundary between these regions, there are examples where the Julia set is the entire plane, where everything behaves chaotically, which is kind of wild. The main takeaway for this particular section is the link between the chaos and the fractal.
At first it seems like these are merely analogous to each other, you know, Newton's method turns out to be a kind of messy process for some seed values, and this messiness is visible one way by following the trajectory of a particular point, and another way by the complexity of our diagrams, but those feel like qualitatively different kinds of messiness. Maybe it makes for a nice metaphor, but nothing more. However, what's neat here is that when you quantify just how chaotic some of the points are, well, that quantification leads us to an actual explanation for the rough fractal shape via this boundary property. Quite often you see chaos and fractals sort of married together in math, and to me at least it's satisfying whenever that marriage comes with a logical link to it, rather than as two phenomena that just happened to coincide.
Snell's law proof using springs
So in my video with Steve Strogatz about the brachistochrone, we referenced this thing called Snell's law. It's the principle in physics that tells you how light bends as it travels from one medium into another where its speed changes. Our conversation did talk about this in detail, but it was a little bit too much detail, so I ended up cutting it out of the video. So what I want to do here is just show you a condensed version of that, because it references a pretty clever argument by Mark Levi, and it also gives a sense of completion to the brachistochrone solution as a whole. Consider when light travels from air into water. The speed of light is a little bit slower in water than it is in air, and this results in the beam of light bending as it enters the water. Why? There are many ways that you can think about this, but a pretty neat one is to use Fermat's principle. We talked about this in detail in the brachistochrone video, but in short, it tells you that if light goes from some point to another, it will always do it in the fastest way possible. Consider some point A on its trajectory in the air, and some point B on its trajectory in the water. First you might think that the straight line between them is the fastest path. The problem with that strategy, though, is that even though it's the shortest path, you may be spending a long time in the water. Light is slower in the water, so the path can become faster if we shift things to favor spending more time in the air. You might even try to minimize the time spent in the water by shifting it all the way to the right. However, that's not actually the best thing to do either. As with the brachistochrone problem, we find ourselves trying to balance these two competing factors. It's a problem that you can write down with geometry, and if this were a calculus class, we would set up the appropriate equation with a single variable x and find where its derivative is zero. But we've got something better than calculus, a Mark Levi solution. He recognized that optics is not the only time that nature seeks out a minimum. It does so with energy as well. Any mechanical setup will stabilize when the potential energy is at a minimum. So for this light-in-two-media problem, he imagines putting a rod on the border between the air and the water, and placing a ring on the rod, which is free to slide left and right. Now, attach a spring from the point A to the ring, and a second spring between the ring and point B. You can think of the layout of the springs as a potential path that light could take between A and B. To finagle things so that the potential energy in the springs equals the amount of time that light would take on that path, you just need to make sure that each spring has a constant tension, which is inversely proportional to the speed of light in its medium. The only problem with this is that constant-tension springs don't actually exist. That's right, they're unphysical springs. But there's still this aspect of the system wanting to minimize its total energy; that physical principle will hold even though these springs don't exist in the world as we know it. The reason springs make the problem simpler, though, is that we can find the stable state just by balancing forces. The leftward component of the force in the top spring has to cancel out with the rightward component of the force in the bottom spring.
In this case, the horizontal component in each spring is just the total force times the sine of the angle that that spring makes with the vertical. And from that, out pops this thing called Snell's law, which many of us learned in our first physics class. This law says that sine of theta, divided by the speed of light in the medium, stays constant when light travels from one medium to another, where theta is the angle that the beam of light makes with a line perpendicular to the interface between the two media. So there you go, no calculus necessary.
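To see the force-balance result emerge numerically, here's a sketch that minimizes the travel time directly and then reads off the two Snell ratios; the endpoints, speeds, and brute-force grid search are all arbitrary illustrative choices.

```python
import math

A, B = (0.0, 1.0), (3.0, -1.0)        # A in the air, B in the water
V_AIR, V_WATER = 1.0, 0.7             # illustrative speeds in each medium

def travel_time(x):
    """Total time from A to B via the crossing point (x, 0), moving in a
    straight line within each medium."""
    t_air   = math.hypot(x - A[0], A[1]) / V_AIR
    t_water = math.hypot(B[0] - x, B[1]) / V_WATER
    return t_air + t_water

# Brute-force search over crossing points; crude, but self-contained.
x_best = min((3.0 * k / 100000 for k in range(100001)), key=travel_time)

# Angles each segment makes with the vertical at the crossing point.
theta_air   = math.atan2(x_best - A[0], A[1])
theta_water = math.atan2(B[0] - x_best, -B[1])
print(math.sin(theta_air) / V_AIR)      # these two ratios match:
print(math.sin(theta_water) / V_WATER)  # sin(theta) / v is constant
```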
Linear transformations and matrices | Chapter 3, Essence of linear algebra
Hey everyone, if I had to choose just one topic that makes all of the others in linear algebra start to click, and which too often goes unlearned the first time a student takes linear algebra, it would be this one: the idea of a linear transformation and its relation to matrices. For this video, I'm just going to focus on what these transformations look like in the case of two dimensions and how they relate to the idea of matrix vector multiplication. In particular, I want to show you a way to think about matrix vector multiplication that doesn't rely on memorization. To start, let's just parse this term linear transformation. Transformation is essentially a fancy word for function; it's something that takes in inputs and spits out an output for each one. Specifically, in the context of linear algebra, we like to think about transformations that take in some vector and spit out another vector. So why use the word transformation instead of function if they mean the same thing? Well, it's to be suggestive of a certain way to visualize this input-output relation. You see, a great way to understand functions of vectors is to use movement. If a transformation takes some input vector to some output vector, we imagine that input vector moving over to the output vector. Then to understand the transformation as a whole, we might imagine watching every possible input vector move over to its corresponding output vector. It gets really crowded to think about all of the vectors all at once, each one as an arrow, so as I mentioned last video, a nice trick is to conceptualize each vector not as an arrow, but as a single point, the point where its tip sits. That way, to think about a transformation taking every possible input vector to some output vector, we watch every point in space moving to some other point. In the case of transformations in two dimensions, to get a better feel for the whole shape of the transformation, I like to do this with all of the points on an infinite grid. I also sometimes like to keep a copy of the grid in the background, just to help keep track of where everything ends up relative to where it starts. The effect of various transformations moving around all of the points in space is, you've got to admit, beautiful. It gives the feeling of squishing and morphing space itself. As you can imagine, though, arbitrary transformations can look pretty complicated, but luckily linear algebra limits itself to a special type of transformation, ones that are easier to understand, called linear transformations. Visually speaking, a transformation is linear if it has two properties: all lines must remain lines, without getting curved, and the origin must remain fixed in place. For example, this right here would not be a linear transformation, since the lines get all curvy. And this one right here, although it keeps the lines straight, is not a linear transformation, because it moves the origin. This one here fixes the origin, and it might look like it keeps lines straight, but that's just because I'm only showing the horizontal and vertical grid lines. When you see what it does to a diagonal line, it becomes clear that it's not at all linear, since it turns that line all curvy. In general, you should think of linear transformations as keeping grid lines parallel and evenly spaced. Some linear transformations are simple to think about, like rotations about the origin. Others are a little trickier to describe with words. So how do you think you could describe these transformations numerically?
If you were, say, programming some animations to make a video teaching the topic, what formula do you give the computer so that if you give it the coordinates of a vector, it can give you the coordinates of where that vector lands? It turns out that you only need to record where the two basis vectors, i-hat and j-hat, each land, and everything else will follow from that. For example, consider the vector v with coordinates negative 1, 2, meaning that it equals negative 1 times i-hat plus 2 times j-hat. If we play some transformation and follow where all three of these vectors go, the property that grid lines remain parallel and evenly spaced has a really important consequence: the place where v lands will be negative 1 times the vector where i-hat landed, plus 2 times the vector where j-hat landed. In other words, it started off as a certain linear combination of i-hat and j-hat, and it ends up as that same linear combination of where those two vectors landed. This means you can deduce where v must go based only on where i-hat and j-hat each land. This is why I like keeping a copy of the original grid in the background. For the transformation shown here, we can read off that i-hat lands on the coordinates 1, negative 2, and j-hat lands on the x-axis over at the coordinates 3, 0. This means that the vector represented by negative 1 i-hat plus 2 times j-hat ends up at negative 1 times the vector 1, negative 2, plus 2 times the vector 3, 0. Adding that all together, you can deduce that it has to land on the vector 5, 2. This is a good point to pause and ponder, because it's pretty important. Now, given that I'm actually showing you the full transformation, you could have just looked to see that v has the coordinates 5, 2. But the cool part here is that this gives us a technique to deduce where any vector lands, so long as we have a record of where i-hat and j-hat each land, without needing to watch the transformation itself. Write the vector with more general coordinates x and y, and it will land on x times the vector where i-hat lands, 1, negative 2, plus y times the vector where j-hat lands, 3, 0. Working out that sum, you see that it lands at 1x plus 3y, negative 2x plus 0y. I give you any vector, and you can tell me where that vector lands using this formula. What all of this is saying is that a two-dimensional linear transformation is completely described by just four numbers: the two coordinates for where i-hat lands, and the two coordinates for where j-hat lands. Isn't that cool? It's common to package these coordinates into a 2x2 grid of numbers, called a 2x2 matrix, where you can interpret the columns as the two special vectors where i-hat and j-hat each land. If you're given a 2x2 matrix describing a linear transformation and some specific vector, and you want to know where that linear transformation takes that vector, you can take the coordinates of the vector, multiply them by the corresponding columns of the matrix, and add together what you get. This corresponds with the idea of adding the scaled versions of our new basis vectors. Let's see what this looks like in the most general case, where your matrix has entries A, B, C, D. And remember, this matrix is just a way of packaging the information needed to describe a linear transformation. Always remember to interpret that first column, A, C, as the place where the first basis vector lands, and that second column, B, D, as the place where the second basis vector lands. When we apply this transformation to some vector x, y, what do you get?
Well, it'll be x times A, C plus y times B, D. Putting this together, you get a vector A x plus B y, C x plus D y. You could even define this as matrix vector multiplication, when you put the matrix on the left of the vector like it's a function. Again, you could make high schoolers memorize this without showing them the crucial part that makes it feel intuitive. But isn't it more fun to think about these columns as the transformed versions of your basis vectors, and to think about the result as the appropriate linear combination of those vectors? Let's practice describing a few linear transformations with matrices. For example, if we rotate all of space 90 degrees counterclockwise, then i-hat lands on the coordinates 0, 1, and j-hat lands on the coordinates negative 1, 0. So the matrix we end up with has columns 0, 1 and negative 1, 0. To figure out what happens to any vector after a 90 degree rotation, you could just multiply its coordinates by this matrix. Here's a fun transformation with a special name, called a shear. In it, i-hat remains fixed, so the first column of the matrix is 1, 0, but j-hat moves over to the coordinates 1, 1, which becomes the second column of the matrix. And at the risk of being redundant here, figuring out how a shear transforms a given vector comes down to multiplying this matrix by that vector. Let's say we want to go the other way around, starting with a matrix, say with columns 1, 2 and 3, 1, and we want to deduce what its transformation looks like. Pause and take a moment to see if you can imagine it. One way to do this is to first move i-hat to 1, 2, then move j-hat to 3, 1, always moving the rest of space in such a way that keeps grid lines parallel and evenly spaced. If the vectors that i-hat and j-hat land on are linearly dependent, which, if you recall from last video, means that one is a scaled version of the other, it means that the linear transformation squishes all of 2D space onto the line where those two vectors sit, also known as the one-dimensional span of those two linearly dependent vectors. To sum up, linear transformations are a way to move around space such that grid lines remain parallel and evenly spaced, and such that the origin remains fixed. Delightfully, these transformations can be described using only a handful of numbers: the coordinates of where each basis vector lands. Matrices give us a language to describe these transformations, where the columns represent those coordinates, and matrix vector multiplication is just a way to compute what that transformation does to a given vector. The important takeaway here is that every time you see a matrix, you can interpret it as a certain transformation of space. Once you really digest this idea, you're in a great position to understand linear algebra deeply. Almost all of the topics coming up, from matrix multiplication to determinants, change of basis, eigenvalues, all of these will become easier to understand once you start thinking about matrices as transformations of space. Most immediately, in the next video, I'll be talking about multiplying two matrices together. See you then!
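Here's that column-wise reading of matrix-vector multiplication in a short NumPy sketch, using the transformation, rotation, and shear examples from above; the helper name is my own.

```python
import numpy as np

def apply_transformation(matrix, v):
    """Read matrix-vector multiplication column-wise: x scales where i-hat
    lands (first column), y scales where j-hat lands (second column), and
    the two results are added."""
    x, y = v
    return x * matrix[:, 0] + y * matrix[:, 1]

# The transformation from earlier: i-hat lands on (1, -2), j-hat on (3, 0).
m = np.array([[1, 3],
              [-2, 0]])
v = np.array([-1, 2])
print(apply_transformation(m, v))    # [5 2], matching the worked example
print(m @ v)                         # the built-in product gives the same thing

# The 90-degree rotation and the shear described above:
rotation = np.array([[0, -1], [1, 0]])
shear    = np.array([[1, 1], [0, 1]])
print(apply_transformation(rotation, v), apply_transformation(shear, v))
```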
Why slicing a cone gives an ellipse
Suppose you love math, and you had to choose just one proof to show someone to explain why it is that math is beautiful. Something that can be appreciated by anyone from a wide range of backgrounds, while still capturing the spirit of progress and cleverness in math. What would you choose? Well, after I put out a video on Feynman's Lost Lecture about why planets orbit in ellipses, published as a guest video over on MinutePhysics, someone on Reddit asked about why the definition of an ellipse given in that video, the classic two thumbtacks and a piece of string construction, is the same as the definition involving slicing a cone. Well, my friend, you've asked about one of my all-time favorite proofs, a lovely bit of 3D geometry which, despite requiring almost no background, still captures the spirit of mathematical inventiveness. For context, and to make sure we're all on the same page, there are at least three main ways that you could define an ellipse geometrically. One is to say you take a circle and you just stretch it out in one dimension. For example, maybe you consider all of the points as xy coordinates, and what you do is multiply just the x coordinate by some special factor for all the points. Another is the classic two thumbtacks and a piece of string construction, where you loop a string around two thumbtacks stuck into a piece of paper, pull it taut with a pencil, and then trace around, keeping the string taut the whole time. What you're drawing by doing this is the set of all points so that the sum of the distances from each pencil point to the two thumbtack points stays constant. Those two thumbtack points are each called a focus of the ellipse, and what we're saying here is that this constant focal sum property can be used to define what an ellipse even is. And yet another way to define an ellipse is to slice a cone with a plane at an angle, an angle that's smaller than the slope of the cone itself. The curve of points where this plane and the cone intersect forms an ellipse, which is why you'll often hear ellipses referred to as a conic section. Now of course, an ellipse is not just one curve, it's a family of curves, ranging from a perfect circle up to something that's infinitely stretched. The specific shape of an ellipse is typically quantified with a number called its eccentricity, which I sometimes just read in my head as squishification. A circle has eccentricity zero, and the more squished the ellipse is, the closer its eccentricity is to the number one. For example, Earth's orbit has an eccentricity of 0.0167, very low squishification, meaning it's really close to just being a circle, while Halley's comet has an orbit with eccentricity 0.9671, very high squishification. In the thumbtack definition of an ellipse, based on the constant sum of the distances from each point to the two foci, this eccentricity is determined by how far apart the two thumbtacks are. Specifically, it's the distance between the foci divided by the length of the longest axis of the ellipse. For slicing a cone, the eccentricity is determined by the slope of the plane that you used for the slicing. And you might justifiably ask, especially if you're a certain Reddit user, why on Earth should these three definitions have anything to do with each other? I mean, sure, it kind of makes sense that each should produce some vaguely oval-looking stretched-out loop. But why should the family of curves produced by these three totally different methods be precisely the same shapes?
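For reference, the eccentricity description above can be written as a formula. Writing the distance between the foci as 2c and the length of the longest axis as 2a (standard labels, not used in the narration):

$$e \;=\; \frac{\text{distance between foci}}{\text{length of major axis}} \;=\; \frac{2c}{2a} \;=\; \frac{c}{a}$$

So for Earth's orbit, with e approximately 0.0167, the foci sit only about 1.7 percent of the major axis apart.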
In particular, when I was younger, I remember feeling really surprised that slicing a cone would produce such a symmetric shape. You might think that the part of the intersection farther down would kind of bulge out and produce a more lopsided egg shape. But nope, the intersection curve is an ellipse, the same evidently symmetric curve you'd get by just stretching a circle, or tracing around two thumbtacks. But of course, math is all about proofs, so how do you give an airtight demonstration that these three families of curves are actually the same? For example, let's focus our attention on just one of these equivalences, namely that slicing a cone will give us a curve that could also be drawn using the thumbtack construction. What you need to show here is that there exist two thumbtack points somewhere inside that slicing plane, such that the sum of the distances from any point of the intersection curve to those two points remains constant, no matter where you are on that intersection curve. I first saw the trick to showing why this is true in Paul Lockhart's magnificent book Measurement, which I would highly recommend to anyone, young or old, who needs a reminder of the fact that math is a form of art. The stroke of genius comes in the very first step, which is to introduce two spheres into this picture, one above the plane and one below it. Each one of them is sized just right, so as to be tangent to the cone along a circle of points, and tangent to the plane at just a single point. Why you would think to do this, of all things, is a tricky question to answer, and one that we'll turn back to. Right now, let's just say that you have a particularly playful mind that loves engaging with how different geometric objects all fit together. But once these spheres are sitting here, I actually bet that you could prove the target result yourself. Here, I'll help you step through it, but at any point, if you feel inspired, please do pause and just try to carry on without me. First off, these spheres have introduced two special points inside the curve, the points where they're tangent to the plane. So a reasonable guess might be that these two tangency points are the focus points. That means that you're going to want to draw lines from these foci to some point along the ellipse, and ultimately the goal is to understand what the sum of the distances of those two lines is, or at the very least, to understand why that sum doesn't depend on where you are along the ellipse. Keep in mind, what makes these lines special is that each one does not simply touch one of the spheres, it's actually tangent to that sphere at the point where it touches. And in general, for any math problem, you want to use the defining features of all of the objects involved. Another example here is what even defines the spheres. It's not just the fact that they're tangent to the plane, but that they're also tangent to the cone, each one at some circle of tangency points. So you're going to need to use those two circles of tangency points in some way. But how exactly? Well, one thing you might do is just draw a line straight from the top circle down to the bottom one along the cone. And there's something about doing this that feels vaguely reminiscent of the constant-sum thumbtack property, and hence promising. You see, it passes through the ellipse, and so by snipping that line at the point where it crosses the ellipse, you can think of it as the sum of two line segments, each one hitting the same point on the ellipse.
And you can do this through various different points of the ellipse, depending on where you are around the cone, always getting two line segments with a constant sum, namely whatever the straight-line distance from the top circle to the bottom circle is. So you see what I mean about it being vaguely analogous to the thumbtack property, in that every point of the ellipse gives us two distances whose sum is a constant. Granted, these lengths are not to the focal points, they're to the big and the little circle, but maybe that leads you to making the following conjecture: the distance from a given point on this ellipse, this intersection curve, straight down to the big circle is, you conjecture, equal to the distance to the point where that big sphere is tangent to the plane, our first proposed focus point. Likewise, perhaps the distance from that point on the ellipse to the small circle is equal to the distance from that point to the second proposed focus point, where the small sphere touches the plane. So, is that true? Well, yes! Here, let's give a name to that point that we have on the ellipse: Q. The key is that the line from Q to the first proposed focus is tangent to the big sphere, and the line from Q straight down along the cone is also tangent to the big sphere. Here, let's look at a different picture for some clarity. If you have multiple lines drawn from a common point to a sphere, all of which are tangent to that sphere, you can probably see just from the symmetry of the setup that all of these lines have to have the same length. And in fact, I encourage you to try proving this yourself, or to otherwise pause and ponder the proof that I've left on the screen. But looking back at our cone-slicing setup, your conjecture would be correct. The two lines extending from the point Q on the ellipse, tangent to the big sphere, have the same length. Similarly, the line from Q to the second proposed focus point is tangent to the little sphere, as is the line from Q straight up along the cone, so those two also have the same length. And so, the sum of the distances from Q to the two proposed focus points is the same as the straight-line distance from the little circle down to the big circle along the cone, passing through Q. And clearly, that does not depend on which point of the ellipse you chose for Q. Bada boom, bada bang, slicing the cone is the same as the thumbtack construction, since the resulting curve has the constant focal sum property. Now, this proof was first found by Germinal... Germinal... Germin... who cares? Dandelin, a guy named Dandelin, in 1822. So these two spheres are sometimes called Dandelin spheres. You can also use the same trick to show why slicing a cylinder at an angle will give you an ellipse. And if you're comfortable with the claim that projecting a shape from one plane onto another tilted plane has the effect of simply stretching out that shape, this also shows why the definition of an ellipse as a stretched circle is the same as the other two. More homework. So why do I think that this proof is such a good representative for math itself? That if you had to show just one thing to explain to a non-math enthusiast why you love the subject, why this would be a good candidate. The obvious reason is that it's substantive and beautiful without requiring too much background. But more than that, it reflects a common feature of math, that sometimes there is no single most fundamental way of defining something, that what matters more is showing equivalences.
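To condense the chain of equalities in that argument, write F1 and F2 for the two proposed foci, and C1 and C2 for the points where the line through Q along the cone meets the big and small tangency circles (these labels are my own, not from the narration):

$$QF_1 = QC_1, \qquad QF_2 = QC_2$$
$$\Rightarrow\quad QF_1 + QF_2 = QC_1 + QC_2 = \text{the slant distance between the two circles, a constant.}$$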
And even more than that, the proof itself involves one key moment of creative construction, adding the two spheres, while most of it leaves room for a nice, systematic, and principled approach. And this kind of creative construction is, I think, one of the most thought-provoking aspects of mathematical discovery. And you might understandably ask where such an idea comes from. In fact, talking about this particular proof, here's what Paul Lockhart says in Measurement: "How do people come up with such ingenious arguments? It's the same way people come up with Madame Bovary or Mona Lisa. I have no idea how it happens. I only know that when it happens to me, I feel very fortunate." I agree, but I do think we can say at least a little something more about this. While it is ingenious, we can perhaps decompose how someone who has immersed themselves in a number of other geometry problems might be particularly primed to think of adding these specific spheres. First, a common tactic in geometry is to relate one length to another. And in this problem, you know from the outset that being able to relate these two lengths to the foci to some other two lengths, especially ones that line up, would be a useful thing, even though at the start, you don't even know where the focus points are. And even if it's not clear exactly how you'd do that, throwing spheres into the picture isn't all that crazy. Again, if you've built up a relationship with geometry through practice, you would be well acquainted with how relating one length to another happens all the time when circles and spheres are in the picture, because it cuts straight to the defining feature of what it even means to be a circle or a sphere. And this is obviously a very specific example, but the point I want to make is that you can often view glimpses of ingeniousness not as inexplicable miracles, but as the residue of experience. And when you do, the idea of genius goes from being mesmerizing to instead being actively inspirational.
Binary, Hanoi and Sierpinski, part 1
Today, I want to share with you a neat way to solve the Towers of Hanoi puzzle just by counting in a different number system. And surprisingly, this stuff relates to finding a curve that fills Sierpinski's triangle. I learned about this from a former CS lecturer of mine, his name's Keith Schwarz, and I've got to say, this man is one of the best educators that I've ever met. I actually recorded a bit of the conversation where he showed me this stuff, so you guys can hear some of what he described directly. "It's weird, I'm not normally the sort of person who likes little puzzles and games, but I just love looking at the analysis of puzzles and games, and I love just looking at mathematical patterns and where they come from." In case you're unfamiliar, let's just lay down what the Towers of Hanoi puzzle actually is. So, you have a collection of three pegs, and you have these disks of descending size. You think of these disks as having a hole in the middle so that you can fit them onto a peg. The setup pictured here has five disks, which I'll label 0, 1, 2, 3, 4, but in principle, you could have as many disks as you want. They all start stacked up from biggest to smallest on one spindle, and the goal is to move the entire tower from one spindle to another. The rules are, you can only move one disk at a time, and you can't move a bigger disk on top of a smaller disk. For example, your first move must involve moving disk 0, since any other disk has stuff on top of it that needs to get out of the way before it can move. After that, you can move disk 1, but it has to go on whatever peg doesn't currently have disk 0, since otherwise, you'd be putting a bigger disk on a smaller one, which isn't allowed. If you've never seen this before, I highly encourage you to pause and pull out some books of varying sizes, and try it out for yourself, just to kind of get a feel for what the puzzle is: if it's hard, why it's hard, if it's not, why it's not, that kind of stuff. Now, Keith showed me something truly surprising about this puzzle, which is that you can solve it just by counting up in binary and associating the rhythm of that counting with a certain rhythm of disk movements. For anyone unfamiliar with binary, I'm going to take a moment to do a quick overview here first. Actually, even if you are familiar with binary, I want to explain it with a focus on the rhythm of counting, which you may or may not have thought about before. Any description of binary typically starts off with an introspection about our usual way to represent numbers, what we call base 10, since we use 10 separate digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. The rhythm of counting begins by walking through all 10 of these digits. Then, having run out of new digits, you express the next number, 10, with two digits: 1, 0. You say that 1 is in the 10's place, since it's meant to encapsulate the group of 10 that you've counted up to so far, while freeing the 1's place to reset to 0. The rhythm of counting repeats like this: counting up 9, rolling over to the 10's place, counting up 9 more, rolling over to the 10's place, etc. Until, after repeating that process 9 times, you roll over to a 100's place, a digit that keeps track of how many groups of 100 you've hit, freeing up the other two digits to reset to 0. In this way, the rhythm of counting is kind of self-similar. Even if you zoom out to a larger scale, the process looks like doing something, rolling over, doing that same thing, rolling over, and repeating 9 times before an even larger rollover.
In binary, also known as base 2, you limit yourself to two digits, 0 and 1, commonly called bits, which is short for binary digits. The result is that when you're counting, you have to roll over all the time. After counting 0, 1, you've already run out of bits, so you need to roll over to a 2's place, writing 1, 0, and resisting every urge in your base-10-trained brain to read this as ten, and instead understanding it to mean 1 group of 2 plus 0. Then increment up to 1, 1, which represents 3, and already you have to roll over again. And since there's a 1 in that 2's place, that has to roll over as well, giving you 1, 0, 0, which represents 1 group of 4 plus 0 groups of 2 plus 0. In the same way that digits in base 10 represent powers of 10, bits in base 2 represent different powers of 2. So instead of talking about a 10's place, a 100's place, a 1000's place, things like that, you talk about a 2's place, a 4's place, and an 8's place. The rhythm of counting is now a lot faster, but that almost makes it more noticeable. Flip the last, roll over once. Flip the last, roll over twice. Flip the last, roll over once. Flip the last, roll over three times. Again, there's a certain self-similarity to this pattern. At every scale, the process is to do something, roll over, then do that same thing again. At the small scale, say counting up to 3, which is 1, 1 in binary, this means flip the last bit, roll over to the 2's, then flip the last bit. At a larger scale, like counting up to 15, which is 1, 1, 1, 1 in binary, the process is to let the last 3 bits count up to 7, roll over to the 8's place, then let the last 3 bits count up again. Counting up to 255, which is 8 successive ones, looks like letting the last 7 bits count up till they're full, rolling over to the 128's place, then letting the last 7 bits count up again. Alright, so with that mini-introduction, the surprising fact that Keith showed me is that we can use this rhythm to solve the Towers of Hanoi. You start by counting from 0. Whenever you're only flipping that last bit, from a 0 to a 1, move disk 0 one peg to the right. If it was already on the rightmost peg, you just loop it back to the first peg. If, in your binary counting, you roll over once to the 2's place, meaning you flip the last two bits, you move disk number 1. Where do you move it, you might ask? Well, you have no choice. You can't put it on top of disk 0, and there's only one other peg, so you move it where you're forced to move it. After this, counting up to 1, 1 involves just flipping the last bit, so you move disk 0 again. Then, when your binary counting rolls over twice to the 4's place, move disk number 2, and the pattern continues like this: flip the last, move disk 0; flip the last two, move disk 1; flip the last, move disk 0. And here, we're going to have to roll over three times to the 8's place, and that corresponds to moving disk number 3. "There's something magical about it. Like, when I first saw this, I was like, this can't work. I don't know how this works, I don't know why this works. Now I know, but it's just magical when you see it. I remember putting together an animation for this when I was teaching this, and just, like, I know how this works, I know all the things in it. It's still fun to just sit and, you know, watch it play out." Oh yeah. I mean, it's not even clear at first that this is always going to give legal moves.
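Before digging into why that works, here is the counting rule in code, a minimal sketch in Python; the disk numbering matches the narration, but the function name and output format are my own. On step k of the count, the disk to move is the position of the lowest bit that flips to 1, which is the lowest set bit of k (where it goes is then forced, as described above, with disk 0 always cycling one peg to the right):

```python
def hanoi_by_counting(num_disks):
    # Count from 1 up to 2^n - 1; on step k, move the disk whose number
    # is the position of the lowest set bit of k.
    for step in range(1, 2 ** num_disks):
        disk = (step & -step).bit_length() - 1
        print(f"step {step:2} ({step:0{num_disks}b} in binary): move disk {disk}")

hanoi_by_counting(3)  # 7 steps, matching the rhythm described above
```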
For example, how do you know that every time you're rolling over to the 8's place, disk 3 is necessarily going to be freed up to move? At the same time, the solution just immediately raises these questions, like, where does this come from? Why does this work? And is there a better way of doing this than having to do 2 to the n minus 1 steps? It turns out, not only does this solve Towers of Hanoi, but it does it in the most efficient way possible. Understanding why this works, and how it works, and what the heck is going on comes down to a certain perspective on the puzzle, what CS folk might call a recursive perspective. Disk 3 is thinking, okay, 2, 1, and 0, you have to get off of me. I can't really function under this much weight and pressure. And so, just from disk 3's perspective, if you want to figure out how disk 3 is going to get over here, somehow, I don't care how, disks 2, 1, and 0 have to get to spindle B. That's the only way it can move. If any of those disks are on top of 3, it can't move; if any of them are on spindle B, it can't move there. So somehow we have to get 2, 1, and 0 off. Having done that, then we can move disk 3 over there. And then disk 3 says, I'm set. You never need to move me again. Everyone else just has to figure out how to get here. And in a sense, you now have a smaller version of the same problem. Now you've got disks 0, 1, and 2 sitting on spindle B, and you've got to get them to C. So the idea is that if I just focus on one disk, and I think about what I'm going to have to do to get this disk to work, I can turn my bigger problem into something slightly smaller. And then how do I solve that? Well, it's exactly the same thing. Disk 2 is going to say, disk 1 and disk 0, you need to... you know, it's not you, it's me, I just need some space. Get off. They need to move somewhere. Then disk 2 can move to where it needs to go. Then disks 1 and 0 can do this. But the interesting point is that every single disk pretty much has the exact same strategy. They all say, everybody above me, get off. Then I'm going to move. Okay, everyone pile back on. When you know that insight, you can code up something that will solve Towers of Hanoi in like five or six lines of code, which probably has the highest ratio of intellectual investment to lines of code ever. And if you think about it for a bit, it becomes clear that this has to be the most efficient solution. At every step, you're only doing what's forced upon you. You have to get disks 0 through 2 off before you can move disk 3, and you have to move disk 3, and then you have to move disks 0 through 2 back onto it. There's just not any room for inefficiency from this perspective. So why does counting in binary capture this algorithm? Well, what's going on here is that this pattern of solving a subproblem, moving a big disk, then solving a subproblem again, is perfectly paralleled by the pattern of binary counting: count up some amount, roll over, count up to that same amount again. And this Towers of Hanoi algorithm and binary counting are both self-similar processes, in the sense that if you zoom out and count up to a larger power of 2, or solve Towers of Hanoi with more disks, they both still have that same structure: subproblem, do a thing, subproblem. For example, at a pretty small scale, solving Towers of Hanoi for two disks, move disk 0, move disk 1, move disk 0, is reflected by counting up to 3 in binary: flip the last bit, roll over once, flip the last bit.
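That five-or-six-line claim is easy to make concrete. Here is one possible Python version of the recursive strategy just described (all names are my own; this is a sketch of the idea, not code from the video):

```python
def solve_hanoi(n, source, target, spare):
    # Move disks 0 through n from source to target, using spare.
    if n < 0:
        return
    solve_hanoi(n - 1, source, spare, target)      # everybody above me, get off
    print(f"move disk {n}: {source} -> {target}")  # now the big disk moves
    solve_hanoi(n - 1, spare, target, source)      # okay, everyone pile back on

solve_hanoi(2, "A", "C", "B")  # disks 0, 1, 2: prints the 7 forced moves
```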
At a slightly larger scale, solving Towers of Hanoi for three disks looks like doing whatever it takes to solve two disks, move disk 2, then do whatever it takes to solve two disks again. Analogously, counting up to 111 in binary involves counting up to 3, rolling over all three bits, and counting up 3 more. At all scales, both processes have this same breakdown. So in a sense, the reason that this binary solution works, or at least an explanation, I feel like there's no one explanation, but I think the most natural one, is that the pattern you would use to generate these binary numbers has exactly the same structure as the pattern you would use for Towers of Hanoi, which is why, if you look at the bits flipping, you're effectively reversing this process. You're asking, what process generated these? Like, if I were trying to understand how these bits were flipped to give me this thing, you're effectively reverse-engineering the recursive algorithm for Towers of Hanoi, which is why it works out. That's pretty cool, right? But it actually gets cooler. I haven't even gotten to how this relates to Sierpinski's triangle. And that's exactly what I'm going to do in the follow-on video, part 2. Many thanks to everybody who's supporting these videos on Patreon. I just finished the first chapter of Essence of Calculus, and I'm working on the second one right now, and Patreon supporters are getting early access to these videos before I publish the full series in a few months. This video and the next one are also supported in part by Desmos, and before the next video, I just want to take a moment and share with you guys a little about who they are and the fact that they're hiring. So, Desmos is actually really cool. They make a lot of these interactive math activities for classrooms and tools for teachers. The real meat of their offering is in their classroom activities. For my part, I'm super impressed by just how well thought out these activities are from a pedagogical standpoint. The team clearly knows their stuff, and they know where they stand to make a difference in students' and teachers' lives. And like I said, they're hiring. They're always looking to bring in more good talent, whether that's engineering talent, designers, teachers, or whatever other skill sets line up with what they want to do. If any of you out there are interested in joining them, helping them make some of these great tools for teachers and students, you can check out the careers page that I've linked in the description. Personally, I think they're doing some really meaningful stuff. I think their activities are building genuinely good math intuitions for students, and the world could use a few more talented people pointing their efforts towards education, the way that they do. Alright, so with that, I'll see you guys next video, and I think you're really going to like where this is going.
Cramer's rule, explained geometrically | Chapter 12, Essence of linear algebra
In a previous video, I've talked about linear systems of equations, and I sort of brushed aside the discussion of actually computing solutions to these systems. And while it's true that the number-crunching is typically something we leave to the computers, digging into some of these computational methods is a good litmus test for whether or not you actually understand what's going on, since that's really where the rubber meets the road. Here, I want to describe the geometry behind a certain method for computing solutions to these systems, known as Cramer's rule. The relevant background here is understanding determinants, a little bit of dot products, and of course, linear systems of equations, so be sure to watch the relevant videos on those topics if you're unfamiliar or rusty. But first, I should say up front that Cramer's rule is not actually the best way for computing solutions to linear systems of equations. Gaussian elimination, for example, will always be faster. So why learn it? Well, think of it as a sort of cultural excursion. It's a helpful exercise in deepening your knowledge of the theory behind these systems. Wrapping your mind around this concept is going to help consolidate ideas from linear algebra, like the determinant and linear systems, by seeing how they relate to each other. Also, from a purely artistic standpoint, the ultimate result here is just really pretty to think about, way more so than Gaussian elimination. Alright, so the setup here will be some linear system of equations, say with two unknowns, x and y, and two equations. In principle, everything we're talking about will also work for systems with a larger number of unknowns and the same number of equations. But for simplicity, a smaller example is just nicer to hold in our heads. So, as I talked about in a previous video, you can think of this setup geometrically as a certain known matrix transforming an unknown vector, x, y, where you know what the output is going to be, in this case, negative 4, negative 2. Remember, the columns of this matrix are telling you how that matrix acts as a transform, each one telling you where the basis vectors of the input space land. So what we have is a sort of puzzle: which input vector, x, y, is going to land on this output, negative 4, negative 2? One way to think about our little puzzle here is that we know the given output vector is some linear combination of the columns of the matrix, x times the vector where i-hat lands plus y times the vector where j-hat lands, but what we want is to figure out what exactly that linear combination should be. Remember, the type of answer you get here can depend on whether or not the transformation squishes all of space into a lower dimension, that is, if it has a zero determinant. In that case, either none of the inputs land on our given output, or there's a whole bunch of inputs landing on that output. But for this video, we'll limit our view to the case of a non-zero determinant, meaning the outputs of this transformation still span the full n-dimensional space that it started in. Every input lands on one and only one output, and every output has one and only one input. As a first pass, let me show you an idea that's wrong, but in the right direction. The x-coordinate of this mystery input vector is what you get by taking its dot product with the first basis vector, 1, 0. Likewise, the y-coordinate is what you get by dotting it with the second basis vector, 0, 1.
So maybe you hope that, after the transformation, the dot products of the transformed version of the mystery vector with the transformed versions of the basis vectors will also be these coordinates, x and y. That would be fantastic, because we know what the transformed versions of each of those vectors are. There's just one problem with it: it's not at all true. For most linear transformations, the dot product before and after the transformation will look very different. For example, you could have two vectors generally pointing in the same direction, with a positive dot product, which get pulled apart from each other during the transformation in such a way that they end up having a negative dot product. Likewise, things that start off perpendicular, with dot product zero, like the two basis vectors, quite often don't stay perpendicular to each other after the transformation. That is, they don't preserve that zero dot product. And looking at the example I just showed, dot products certainly aren't preserved. They tend to get bigger, since most vectors are getting stretched out. In fact, a worthwhile side note here: transformations which do preserve dot products are special enough to have their own name, orthonormal transformations. These are the ones that leave all of the basis vectors perpendicular to each other, and still with unit lengths. You often think of these as the rotation matrices. They correspond to rigid motion, with no stretching or squishing or morphing. Solving a linear system with an orthonormal matrix is actually super easy. Because dot products are preserved, taking the dot product between the output vector and all the columns of your matrix will be the same as taking the dot product between the mystery input vector and all of the basis vectors, which is the same as just finding the coordinates of that mystery input. So, in that very special case, x would be the dot product of the first column with the output vector, and y would be the dot product of the second column with the output vector. Now, why am I bringing this up, when this idea breaks down for almost all linear systems? Well, it points us in the direction of something to look for: is there an alternate geometric understanding for the coordinates of our input vector that remains unchanged after the transformation? If your mind has been mulling over determinants, you might think of the following clever idea. Take the parallelogram defined by the first basis vector, i-hat, and the mystery input vector, x, y. The area of this parallelogram is the base, 1, times the height perpendicular to that base, which is the y-coordinate of that input vector. So the area of that parallelogram is a sort of screwy, roundabout way to describe the vector's y-coordinate. It's a wacky way to talk about coordinates, but run with me. And actually, to be a little more accurate, you should think of this as the signed area of that parallelogram, in the sense described in the determinant video. That way, a vector with a negative y-coordinate would correspond to a negative area for this parallelogram, at least if you think of i-hat as, in some sense, being the first out of these two vectors defining the parallelogram. And symmetrically, if you look at the parallelogram spanned by our mystery input vector and the second basis vector, j-hat, its area is going to be the x-coordinate of that mystery vector. Again, it's a strange way to represent the x-coordinate, but you'll see what it buys us in a moment.
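In symbols, the signed-area characterization just described reads as follows, for an input vector with coordinates (x, y):

$$\det\!\begin{bmatrix} 1 & x \\ 0 & y \end{bmatrix} = y, \qquad \det\!\begin{bmatrix} x & 0 \\ y & 1 \end{bmatrix} = x$$

These are the signed areas of the parallelogram spanned by i-hat and the vector, and of the parallelogram spanned by the vector and j-hat, respectively.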
And just to make sure it's clear how this might generalize, let's look in three dimensions. Ordinarily, the way you might think about one of a vector's coordinates, say its z-coordinate, would be to take its dot product with the third standard basis vector, often called k-hat. But an alternate geometric interpretation would be to consider the parallelepiped that it creates with the other two basis vectors, i-hat and j-hat. If you think of the square with area 1, spanned by i-hat and j-hat, as the base of this whole shape, then its volume is the same as its height, which is the third coordinate of our vector. And likewise, the wacky way to think about the other coordinates of the vector would be to form a parallelepiped using the vector and all of the basis vectors other than the one corresponding to the direction you're looking for. Then the volume of this gives you the coordinate. Or rather, we should be talking about the signed volume of parallelepipeds, in the sense described in the determinant video, using the right-hand rule. So the order in which you list these three vectors matters. That way, negative coordinates still make sense. Okay, so why think of coordinates as areas and volumes like this? Well, as you apply some sort of matrix transformation, the areas of these parallelograms don't stay the same; they might get scaled up or down. But, and this is the key idea of determinants, all of the areas get scaled by the same amount, namely, the determinant of our transformation matrix. For example, if you look at the parallelogram spanned by the vector where your first basis vector lands, which is the first column of the matrix, and the transformed version of x, y, what is its area? Well, this is the transformed version of the parallelogram that we were looking at earlier, the one whose area was the y-coordinate of the mystery input vector. So its area is just going to be the determinant of the transformation multiplied by that y-coordinate. That means we can solve for y by taking the area of this new parallelogram in the output space, divided by the determinant of the full transformation. And how do you get that area? Well, we know the coordinates for where the mystery input vector lands; that's the whole point of a linear system of equations. So what you might do is create a new matrix whose first column is the same as that of our matrix, but whose second column is the output vector, and then you take its determinant. So look at that: just using data from the output of the transformation, namely the columns of the matrix and the coordinates of our output vector, we can recover the y-coordinate of the mystery input vector, which is halfway to solving the system. Likewise, the same idea can give us the x-coordinate. Look at the parallelogram we defined earlier, which encodes the x-coordinate of the mystery input vector, spanned by that vector and j-hat. The transformed version of this guy is spanned by the output vector and the second column of the matrix, and its area will have been multiplied by the determinant of that matrix. So to solve for x, you can take this new area divided by the determinant of the full transformation. And similar to what we did before, you can compute the area of that output parallelogram by creating a new matrix whose first column is the output vector, and whose second column is the same as the original matrix. So again, just using data from the output space, the numbers we see in our original linear system, we can solve for what x must be.
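Putting those two halves together, here is a minimal Python sketch of 2D Cramer's rule as just derived (NumPy assumed; this covers only the nonzero-determinant case, matching the restriction stated earlier, and the names are my own):

```python
import numpy as np

def cramer_2d(matrix, output):
    # Each coordinate is the determinant of the matrix with one column
    # swapped out for the output vector, divided by the full determinant.
    det = np.linalg.det(matrix)
    for_x = matrix.astype(float)
    for_x[:, 0] = output  # output vector replaces the first column
    for_y = matrix.astype(float)
    for_y[:, 1] = output  # output vector replaces the second column
    return np.linalg.det(for_x) / det, np.linalg.det(for_y) / det
```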
This formula for finding the solutions to a linear system of equations is known as Cramer's rule. Here, just to sanity check ourselves, let's plug in some numbers. The determinant of that top, altered matrix is 4 plus 2, which is 6, and the bottom determinant is 2, so the x-coordinate should be 3. And indeed, looking back at the input vector we started with, the x-coordinate is 3. Likewise, Cramer's rule suggests that the y-coordinate should be 4 divided by 2, or 2, and that is, in fact, the y-coordinate of the input vector that we were starting with. The case with three dimensions or more is similar, and I highly recommend that you take a moment to pause and think through it yourself. Here, I'll give you a little bit of momentum. What we have is a known transformation, given by some 3x3 matrix, and a known output vector, given by the right side of our linear system, and we want to know what input lands on that output. If you think of, say, the z-coordinate of that input vector as the volume of that special parallelepiped we were looking at earlier, spanned by i-hat, j-hat, and the mystery input vector, what happens to that volume after the transformation? And what are the various ways that you can compute that volume? Really, pause and take a moment to think through the details of generalizing this to higher dimensions, finding an expression for each coordinate of the solution to a larger linear system. Looking through more general cases like this and convincing yourself that it works, and why it works, is where all of the learning really happens, much more so than listening to some dude on YouTube walk you through the same reasoning again.
But what is the Fourier Transform? A visual introduction
This right here is what we're going to build to in this video: a certain animated approach to thinking about a super important idea from math, the Fourier transform. For anyone unfamiliar with what that is, my number one goal here is just for the video to be an introduction to that topic. But even for those of you who are already familiar with it, I still think that there's something fun and enriching about seeing what all of its components actually look like. The central example to start is going to be the classic one, decomposing frequencies from sound. But after that, I also really want to show a glimpse of how this idea extends well beyond sound and frequency, into many seemingly disparate areas of math and even physics. Really, it is crazy just how ubiquitous this idea is. Let's dive in. This sound right here is a pure A, 440 beats per second, meaning if you were to measure the air pressure right next to your headphones or your speaker as a function of time, it would oscillate up and down around its usual equilibrium in this wave, making 440 oscillations each second. A lower-pitched note, like a D, has the same structure, just fewer beats per second. And when both of them are played at once, what do you think the resulting pressure versus time graph looks like? Well, at any point in time, this pressure difference is going to be the sum of what it would be for each of those notes individually, which, let's face it, is kind of a complicated thing to think about. At some points, the peaks match up with each other, resulting in a really high pressure. At other points, they tend to cancel out. And all in all, what you get is a wave-ish pressure versus time graph that is not a pure sine wave; it's something more complicated. And as you add in other notes, the wave gets more and more complicated. But right now, all it is is a combination of four pure frequencies, so it seems needlessly complicated given the low amount of information put into it. A microphone recording any sound just picks up on the air pressure at many different points in time; it only sees the final sum. So our central question is going to be how you can take a signal like this and decompose it into the pure frequencies that make it up. Pretty interesting, right? Adding up those signals really mixes them all together. So pulling them back apart feels akin to unmixing multiple paint colors that have all been stirred up together. The general strategy is going to be to build for ourselves a mathematical machine that treats signals with a given frequency differently from how it treats other signals. To start, consider simply taking a pure signal, say with a lowly three beats per second, so that we can plot it easily. And let's limit ourselves to looking at a finite portion of this graph, in this case, the portion between zero seconds and 4.5 seconds. The key idea is going to be to take this graph and sort of wrap it up around a circle. Concretely, here's what I mean by that. Imagine a little rotating vector where, at each point in time, its length is equal to the height of our graph for that time. So high points of the graph correspond to a greater distance from the origin, and low points end up closer to the origin. And right now, I'm drawing it in such a way that moving forward two seconds in time corresponds to a single rotation around the circle. Our little vector drawing this wound-up graph is rotating at half a cycle per second. So, this is important: there are two different frequencies at play here.
There's the frequency of our signal, which goes up and down three times per second, and then, separately, there's the frequency with which we're wrapping the graph around the circle, which, at the moment, is half of a rotation per second. But we can adjust that second frequency however we want. Maybe we want to wrap it around faster, or maybe we wrap it around slower. And that choice of winding frequency determines what the wound-up graph looks like. Some of the diagrams that come out of this can be pretty complicated, although they are very pretty, but it's important to keep in mind that all that's happening here is that we're wrapping the signal around a circle. The vertical lines that I'm drawing up top, by the way, are just a way to keep track of the distance on the original graph that corresponds to a full rotation around the circle. So lines spaced out by 1.5 seconds would mean it takes 1.5 seconds to make one full revolution. And at this point, we might have some sort of vague sense that something special will happen when the winding frequency matches the frequency of our signal, 3 beats per second. All of the high points on the graph happen on the right side of the circle, and all of the low points happen on the left. But how precisely can we take advantage of that in our attempt to build a frequency-unmixing machine? Well, imagine this graph as having some kind of mass to it, like a metal wire. This little dot is going to represent the center of mass of that wire. As we change the frequency, and the graph winds up differently, that center of mass kind of wobbles around a bit. And for most of the winding frequencies, the peaks and the valleys are all spaced out around the circle in such a way that the center of mass stays pretty close to the origin. But when the winding frequency is the same as the frequency of our signal, in this case, 3 cycles per second, all of the peaks are on the right, and all of the valleys are on the left, so the center of mass is unusually far to the right. Here, to capture this, let's draw some kind of plot that keeps track of where that center of mass is for each winding frequency. Of course, the center of mass is a two-dimensional thing; it requires two coordinates to fully keep track of. But for the moment, let's only keep track of the x-coordinate. So, for a frequency of zero, when everything is bunched up on the right, this x-coordinate is relatively high. And then, as you increase that winding frequency, and the graph balances out around the circle, the x-coordinate of that center of mass goes closer to zero, and it just kind of wobbles around a bit. But then, at 3 beats per second, there's a spike, as everything lines up to the right. This, right here, is the central construct, so let's sum up what we have so far. We have that original intensity versus time graph, then we have the wound-up version of that in some two-dimensional plane, and then, as a third thing, we have a plot for how the winding frequency influences the center of mass of that graph. And by the way, let's look back at those really low frequencies near zero. This big spike around zero in our new frequency plot just corresponds to the fact that the whole cosine wave is shifted up. If I had chosen a signal that oscillates around zero, dipping into negative values, then, as we played around with various winding frequencies, this plot of the winding frequency versus center of mass would only have a spike at the value of 3.
But negative values are a little bit weird and messy to think about, especially for a first example, so let's just continue thinking in terms of the shifted-up graph. I just want you to understand that that spike around zero only corresponds to the shift. Our main focus, as far as frequency decomposition is concerned, is that bump at 3. This whole plot is what I'll call the almost Fourier transform of the original signal. There's a couple of small distinctions between this and the actual Fourier transform, which I'll get to in a couple minutes, but already, you might be able to see how this machine lets us pick out the frequency of a signal. Just to play around with it a little bit more, take a different pure signal, let's say with a lower frequency of 2 beats per second, and do the same thing. Wind it around a circle, imagine different potential winding frequencies, and as you do that, keep track of where the center of mass of that graph is, and then plot the x-coordinate of that center of mass as you adjust the winding frequency. Just like before, we get a spike when the winding frequency is the same as the signal frequency, which, in this case, is when it equals 2 cycles per second. But the real key point, the thing that makes this machine so delightful, is how it enables us to take a signal consisting of multiple frequencies and pick out what they are. Imagine taking the two signals we just looked at, the wave with 3 beats per second and the wave with 2 beats per second, and adding them up. Like I said earlier, what you get is no longer a nice, pure cosine wave; it's something a little more complicated. But imagine throwing this into our winding-frequency machine. It is certainly the case that as you wrap this thing around, it looks a lot more complicated. You have this chaos, and chaos, and chaos, and chaos, and then whoop! Things seem to line up really nicely at 2 cycles per second. And as you continue on, it's more chaos, and more chaos, more chaos, chaos, chaos, chaos, and things nicely align again at 3 cycles per second. And like I said before, the wound-up graph can look kind of busy and complicated, but all it is is the relatively simple idea of wrapping the graph around a circle. It's just a more complicated graph and a pretty quick winding frequency. Now, what's going on here with the two different spikes is that if you were to take two signals, and then apply this almost Fourier transform to each of them individually, and then add up the results, what you get is the same as if you first added up the signals and then applied this almost Fourier transform. And the attentive viewers among you might want to pause and ponder, and convince yourself that what I just said is actually true. It's a pretty good test to verify for yourself that it's clear what exactly is being measured inside this winding machine. This property makes things really useful to us, because the transform of a pure frequency is close to zero everywhere except for a spike around that frequency. So when you add together two pure frequencies, the transform graph just has these little peaks above the frequencies that went into it. So this little mathematical machine does exactly what we wanted. It pulls out the original frequencies from their jumbled-up sums, unmixing the mixed bucket of paint. And before continuing into the full math that describes this operation, let's just get a quick glimpse of one context where this thing is useful: sound editing.
Let's say that you have some recording, and it's got an annoying high pitch that you would like to filter out. Well, at first, your signal is coming in as a function of various intensities over time, different voltages given to your speaker from one millisecond to the next. But we want to think of this in terms of frequencies. So, when you take the Fourier transform of that signal, the annoying high pitch is going to show up just as a spike at some high frequency. Filtering that out, by just smushing the spike down, what you'd be looking at is the Fourier transform of a sound that's just like your recording, only without that high frequency. Luckily, there's a notion of an inverse Fourier transform that tells you which signal would have produced this as its Fourier transform. I'll be talking about that inverse much more fully in the next video, but long story short, applying the Fourier transform to the Fourier transform gives you back something close to the original function. Kind of. This is a little bit of a lie, but it's in the direction of truth. And most of the reason that it's a lie is that I still have yet to tell you what the actual Fourier transform is, since it's a little more complex than this x-coordinate-of-the-center-of-mass idea. First off, bringing back this wound-up graph and looking at its center of mass, the x-coordinate is really only half the story, right? I mean, this thing is in two dimensions, it's got a y-coordinate as well. And as is typical in math, whenever you're dealing with something two-dimensional, it's elegant to think of it as the complex plane, where this center of mass is going to be a complex number that has both a real and an imaginary part. And the reason for talking in terms of complex numbers, rather than just saying it has two coordinates, is that complex numbers lend themselves to really nice descriptions of things that have to do with winding and rotation. For example, Euler's formula famously tells us that if you take e to some number times i, you're going to land on the point that you get if you were to walk that number of units around a circle with radius 1, counterclockwise, starting on the right. So, imagine you wanted to describe rotating at a rate of one cycle per second. Something that you could do is take the expression e to the 2 pi times i times t, where t is the amount of time that has passed, since, for a circle with radius 1, 2 pi describes the full length of its circumference. And this is a little bit dizzying to look at, so maybe you want to describe a different frequency, something lower and more reasonable. And for that, you would just multiply that time t in the exponent by the frequency, f. For example, if f was one tenth, then this vector makes one full turn every 10 seconds, since the time t has to increase all the way to 10 before the full exponent looks like 2 pi i. I have another video giving some intuition on why this is the behavior of e to the x for imaginary inputs, if you're curious, but for right now, we're just going to take it as a given. Now, why am I telling you this, you might ask? Well, it gives us a really nice way to write down the idea of winding up the graph into a single, tight little formula. First off, the convention in the context of Fourier transforms is to think about rotating in the clockwise direction, so let's go ahead and throw a negative sign up into that exponent. Now, take some function describing a signal intensity versus time, like this pure cosine wave we had before, and call it g of t.
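Collecting the expressions narrated in this passage into symbols: $e^{2\pi i t}$ rotates counterclockwise at one cycle per second, $e^{2\pi i f t}$ rotates at $f$ cycles per second, and with the clockwise Fourier convention, the winding construction described next multiplies this by the signal:

$$g(t)\,e^{-2\pi i f t}$$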
If you multiply this exponential expression times g of t, it means that the rotating complex number is getting scaled up and down according to the value of this function. So you can think of this little rotating vector, with its changing length, as drawing the wound-up graph. So think about it, this is awesome. This really small expression is a super elegant way to encapsulate the whole idea of winding a graph around a circle with a variable frequency, f. And remember, the thing we want to do with this wound-up graph is to track its center of mass. So think about what formula is going to capture that. Well, to approximate it, at least, you might sample a whole bunch of times from the original signal, see where those points end up on the wound-up graph, and then just take an average. That is, add them all together, as complex numbers, and then divide by the number of points that you've sampled. This will become more accurate if you sample more points, which are closer together. And in the limit, rather than looking at the sum of a whole bunch of points divided by the number of points, you take an integral of this function, divided by the size of the time interval that we're looking at. Now, the idea of integrating a complex-valued function might seem weird to anyone who's shaky with calculus, maybe even intimidating, but the underlying meaning here really doesn't require any calculus knowledge. The whole expression is just the center of mass of the wound-up graph. So, great! Step by step, we have built up this kind of complicated, but, let's face it, surprisingly small expression for the whole winding-machine idea that I talked about. And now, there is only one final distinction to point out between this and the actual, honest-to-goodness Fourier transform, namely, just don't divide out by the time interval. The Fourier transform is just the integral part of this. What that means is that instead of looking at the center of mass, you would scale it up by some amount. If the portion of the original graph you were using spanned three seconds, you would multiply the center of mass by 3. If it was spanning six seconds, you would multiply the center of mass by 6. Physically, this has the effect that when a certain frequency persists for a long time, then the magnitude of the Fourier transform at that frequency is scaled up more and more. For example, what we're looking at right here is how, when you have a pure frequency of 2 beats per second, and you wind it around at 2 cycles per second, the center of mass stays in the same spot, right? It's just tracing out the same shape. But the longer that signal persists, the larger the value of the Fourier transform at that frequency. For other frequencies, though, even if you increase the duration just a bit, this is canceled out by the fact that, for longer time intervals, you're giving the wound-up graph more of a chance to balance itself around the circle. That is a lot of different moving parts, so let's step back and summarize what we have so far. The Fourier transform of an intensity versus time function, like g of t, is a new function, which doesn't have time as an input, but instead takes in a frequency, what I've been calling the winding frequency. In terms of notation, by the way, the common convention is to call this new function g-hat, with a little circumflex on top of it. Now, the output of this function is a complex number, some point in the 2D plane, that corresponds to the strength of a given frequency in the original signal.
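As a numerical sketch of that center-of-mass recipe, here is a small Python version of the sampling approach from the passage (the sample count, the particular signal, and all names are my own choices). The mean below is the "almost Fourier transform"; the actual transform, as described above, keeps just the integral, $\hat{g}(f) = \int g(t)\,e^{-2\pi i f t}\,dt$, without dividing by the length of the time interval:

```python
import numpy as np

def almost_fourier(g, frequencies, t_max=4.5, num_samples=2000):
    # Sample the signal, wind each sample around the circle at the given
    # winding frequency, and average: the center of mass of the wound-up graph.
    t = np.linspace(0, t_max, num_samples)
    return np.array([np.mean(g(t) * np.exp(-2j * np.pi * f * t))
                     for f in frequencies])

# A shifted-up pure cosine at 3 beats per second, like the running example.
g = lambda t: 1 + np.cos(2 * np.pi * 3 * t)
freqs = np.linspace(0, 5, 501)
transform = almost_fourier(g, freqs)
# The real part of `transform` spikes near f = 3 (and near f = 0, from the shift).
```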
The plot that I've been graphing for the Fourier transform is just the real component of that output, the x-coordinate. But you could also graph the imaginary component separately, if you wanted a fuller description. And all of this is encapsulated inside that formula that we built up. Out of context, you can imagine how seeing this formula would seem sort of daunting. But if you understand how exponentials correspond to rotation, how multiplying that by the function g of t means drawing a wound-up version of the graph, and how an integral of a complex-valued function can be interpreted in terms of a center-of-mass idea, you can see how this whole thing carries with it a very rich, intuitive meaning. And, by the way, one quick, small note before we can call this wrapped up: even though, in practice, with things like sound editing, you'll be integrating over a finite time interval, the theory of Fourier transforms is often phrased where the bounds of this integral are negative infinity and infinity. Concretely, what that means is that you consider this expression for all possible finite time intervals, and you just ask, what is its limit as that time interval grows to infinity? And, man, oh man, there is so much more to say. So much, I don't want to call it done here. This transform extends to corners of math well beyond the idea of extracting frequencies from a signal. So, the next video I put out is going to go through a couple of these, and that's really where things start getting interesting. So stay subscribed for when that comes out, or an alternate option is to just binge on a couple 3blue1brown videos, so that the YouTube recommender is more inclined to show you new things that come out. Really, the choice is yours. And to close things off, I have something pretty fun: a mathematical puzzler from this video's sponsor, Jane Street, who's looking to recruit more technical talent. So, let's say that you have a closed, bounded, convex set, C, sitting in 3D space, and then let B be the boundary of that space, the surface of your convex blob. Now, imagine taking every possible pair of points on that surface and adding them up, doing a vector sum. Let's name this set of all possible sums D. Your task is to prove that D is also a convex set. So, Jane Street is a quantitative trading firm, and if you're the kind of person who enjoys math and solving puzzles like this, the team there really values intellectual curiosity, so they might be interested in hiring you. And they're looking both for full-time employees and interns. For my part, I can say the couple of people I've interacted with there just seem to love math and sharing math. And when they're hiring, they look less at a background in finance than they do at how you think, how you learn, and how you solve problems, hence the sponsorship of a 3blue1brown video. If you want the answer to that puzzler, or to learn more about what they do, or to apply for open positions, go to janestreet.com/3b1b.
Fractal charm: Space filling curves
I'm going to add a little bit of the color.
Quaternions and 3d rotation, explained interactively
In a moment, I'll point you to a separate website hosting a short sequence of what we're calling Explorable Videos. It was done in collaboration with Ben Eater, who some of you may know as the guy who runs the excellent computer engineering channel. And if you don't know who he is, viewers of this channel would certainly enjoy the content of his, so do check it out. This collaboration was something a little different, though, for both of us. And all of the web development that made these Explorable Videos possible is completely thanks to Ben. I don't want to say too much about it here, it's really something you have to experience for yourself. Certainly, one of the coolest projects I've had the pleasure of working on. Before that, though, if you can contain your excitement, I want to use this video as a chance to tee things up with a little bit of surrounding context. So to set the stage, last video I described quaternions, a certain four-dimensional number system that the 19th century versions of Wolverine and the old man from Home Alone called evil for how convoluted it seemed at the time. And perhaps you too are wondering why on earth anyone would bother with such an alien-seeming number system. Well, one of the big reasons, especially for programmers, is that they give a really nice way for describing 3D orientation, which is not susceptible to the bugs and edge cases of other methods. I mean, they're interesting mathematically for a lot of reasons, but this application for computer graphics and robotics and virtual reality and anything involving 3D orientation is probably the biggest use case for quaternions. To take one example, a friend of mine who used to work at Apple, Andy Matuschak, delighted in telling me about shipping code to hundreds of millions of devices that uses quaternions to track the phone's model for how it's oriented in space. That's right, your phone almost certainly has software running somewhere inside of it that relies on quaternions. The thing is, there are other ways to think about computing rotations, many of which are way simpler to think about than quaternions. For example, any of you familiar with linear algebra will know that 3 by 3 matrices can really nicely describe 3D transformations. And a common way that many programmers think about constructing a rotation matrix for a desired orientation is to imagine rotating an object around three easy-to-think-about axes, where the relevant angles for these rotations are commonly called Euler angles. And this mostly works, but one big problem is that it's vulnerable to something called gimbal lock, where when two of your axes of rotation get lined up, you lose a degree of freedom. And it can also cause difficulties and ambiguities when trying to interpolate between two separate orientations. If you're curious for more of the details, there are many great sources online for learning about Euler angles and gimbal lock, and I've left links in the description to a few of them. Not only do quaternions avoid issues like gimbal lock, they give a really seamless way to interpolate between two three-dimensional orientations, one which lacks the ambiguities of Euler angles, and which avoids the issues of numerical precision and normalization that arise in trying to interpolate between two rotation matrices.
To warm up to the idea of how multiplication in some higher dimensional number system might be used to compute rotations, take a moment to remember how it is that complex numbers give a slick method for computing 2D rotations. Specifically, let's say you have some point in two dimensional space, like (4, 1), and you want to know the new coordinates you'd get if you rotate this point 30 degrees around the origin. Complex numbers give sort of a snazzy way to do this. You take the complex number that's 30 degrees off the horizontal with magnitude 1, cosine of 30 degrees plus sine of 30 degrees times i, and then you just multiply this by your point represented as a complex number. The only rule you need to know to carry out this computation is that i squared equals negative 1. And then, in what might feel like a bit of black magic to those first learning it, carrying out this product from that one simple rule gives the coordinates of a new point, the point rotated 30 degrees from the original. Using quaternions to describe 3D rotations is similar, though the look and feel is slightly different. Let's say you want to rotate some angle about some axis. You first define that axis with a unit vector, which we'll write as having i, j, and k components, normalized so that the sum of the squares of those components is 1. Similar to the case of complex numbers, you use the angle to construct a quaternion by taking cosine of that angle as the real part plus sine of that angle times an imaginary part. Except this time the imaginary part has 3 components, the coordinates of our axis of rotation. Well, actually you take half of the angle, which might feel totally arbitrary, but hopefully that's going to make some sense by the end of this whole experience. Now let's say you have some 3D point, which we'll write with i, j, k components, and you want to know the coordinates that you'd get when you rotate this point by your specified angle around your specified axis. What you do is not just a single quaternion product, but a sort of quaternion sandwich, where you multiply by q from the left and the inverse of q from the right. If you know the rules for how i, j, and k multiply amongst themselves, you can carry out these two products by expanding everything out, or more realistically by having a computer do it for you. And in what might feel like a bit of black magic, this big computation will return for you the rotated version of the point. Our goal is to break this down and visualize what's happening with each of these two products. I'll review the method for thinking about quaternion multiplication, described last video, and explain why half the angle is used and why you would multiply from the right by the inverse. On the screen now and at the top of the description, you'll find a link to eater.net slash quaternions, which is where Ben set up the Explorable Video Tutorial, where I explain what's going on with this rotation computation. It's just really cool. Eater did something awesome here. So at the very least, just take a couple minutes to go look at it. But I'd love it if you went through the full experience.
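As a quick sanity check on both recipes, here's a small sketch with made-up points, axes, and angles. The (w, x, y, z) layout and the Hamilton product below are the standard conventions; none of this code comes from the video or from eater.net.

```python
import numpy as np

# 2D: rotate the point (4, 1) by 30 degrees via complex multiplication.
theta = np.radians(30)
z = complex(4, 1)
rotated = (np.cos(theta) + 1j * np.sin(theta)) * z
print(rotated.real, rotated.imag)

# 3D: the quaternion sandwich q p q^{-1}, with quaternions as (w, x, y, z) tuples.
def qmul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(point, axis, angle):
    """Rotate a 3D point about a unit axis by the given angle (radians)."""
    half = angle / 2                              # note the half angle
    q = (np.cos(half), *(np.sin(half) * np.asarray(axis)))
    q_inv = (q[0], -q[1], -q[2], -q[3])           # inverse of a unit quaternion
    p = (0.0, *point)                             # the point as a pure quaternion
    return qmul(qmul(q, p), q_inv)[1:]

print(rotate((1, 0, 0), (0, 0, 1), np.pi / 2))    # ~ (0, 1, 0)
```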
A Curious Pattern Indeed
Take two points on a circle and draw a line straight through. The space which was encircled is divided into two. To these points add a third one, which gives us two more chords. The space through which these lines run has been fissured into four. Continue with a fourth point and three more lines drawn straight, now the count of disjoint regions sums in all to eight. A fifth point and its four lines support this pattern gleaned, counting sections one divines that there are now sixteen. This pattern here of doubling does seem a sturdy one, but one more step is troubling as the sixth gives thirty one. Wait, what? One two four eight sixteen thirty one? What's going on here? Why does the pattern start off as powers of two only to fall short by one at the sixth iteration? That seems arbitrary. Why not one two four eight sixteen thirty two sixty three? Or one two four eight fifteen? If you keep going, the number of sections deviates even further from powers of two, except when it hits two fifty six. But this just begs the question of what the pattern really is and why it flirts with powers of two. In my next few videos, I'll explain what's happening, which will include one of my all-time favorite proofs. But interesting problems deserve to be shared, pondered, and discussed before their secrets are hastily revealed. So while I work on animating my explanation, I encourage you to think of your own. To be clear, the question is this. If you take some set of points on a circle and connect every pair of them with a line, how many pieces do these lines cut the circle into? Does it matter where these points are? And why does the answer coincide with powers of two for fewer than six points?
Why do prime numbers make these spirals? | Dirichlet’s theorem, pi approximations, and more
The full title of this video might be something like, how pretty but pointless patterns in polar plots of primes prompt pretty important pondering on properties of those primes. I first saw this pattern that I'm about to show you in a question on the math stack exchange. It was asked by a user under the name Dwimeark, and answered by Greg Martin, and it relates to the distribution of prime numbers, together with rational approximations for pi. You see what the user had been doing was playing around with data in polar coordinates. As a quick reminder, so we're all on the same page, this means labeling points in 2D space, not with the usual xy coordinates, but instead with a distance from the origin, commonly called r for radius, together with the angle that that radial line makes with the horizontal, commonly called theta. And for our purposes, this angle will be measured in radians, which basically means that an angle of pi is halfway around, and then 2 pi is a full circle. And notice, polar coordinates are not unique in the sense that adding 2 pi to that second coordinate doesn't change the location that this pair of numbers is referring to. The pattern that we'll look at centers around plotting points where both of these coordinates are a given prime number. There is no practical reason to do this. It's purely fun, we're just frolicking around in the playground of data visualization. And to get a sense for what it means, look at all the whole numbers rather than just the primes. The point (1, 1) sits a distance one away from the origin with an angle of 1 radian, which actually means this arc is the same length as that radial line, and then (2, 2) has twice that angle and twice the distance. And to get to (3, 3), you rotate one more radian with a total angle that's now slightly less than a half turn, since 3 is slightly less than pi, and you step one unit farther away from the origin. I really want you to make sure that it's clear what's being plotted, because everything that follows depends on understanding it. Each step forward is like the tip of a clock hand, which rotates one radian with each tick, a little less than a sixth of a turn, and it grows by one unit at each step. As you continue, these points spiral outwards, forming what's known in the business as an Archimedean spiral. Now if you make the admittedly arbitrary move to knock out everything except the prime numbers, it initially looks quite random. After all, primes are famous for their chaotic and difficult to predict behavior, but when you zoom out, what you start to see are these very clear galactic-seeming spirals, and what's weird is some of the arms seem to be missing. Then zooming out even further, those spirals give way to a different pattern. These many different outward-pointing rays, and those rays seem to mostly come in clumps of four, but there's the occasional gap, like a comb missing its teeth. The question for you and me, naturally, is what on earth is going on here? Where do these spirals come from? And why do we instead get straight lines at this larger scale? If you wanted, you could ask a more quantitative question, and count that there are 20 total spirals, and then up at that larger scale, if you patiently went through each ray, you'd count up a total of 280. And so this adds a further mystery of where these numbers are coming from, and why they would arise from primes. Now this is shocking, and beautiful, and you might think that it suggests some divine hidden symmetry within the primes.
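If you want to reproduce the picture yourself, a sketch along these lines should do it; using sympy for the primes and matplotlib for the plotting is just one convenient choice, not anything from the original question.

```python
import numpy as np
import matplotlib.pyplot as plt
from sympy import primerange

primes = np.array(list(primerange(2, 30_000)))
# Polar point (r, theta) = (p, p), converted to xy coordinates for plotting.
x, y = primes * np.cos(primes), primes * np.sin(primes)
plt.figure(figsize=(6, 6))
plt.scatter(x, y, s=0.5)
plt.gca().set_aspect("equal")
plt.show()  # different zoom levels reveal the 20 spirals, then the 280 rays
```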
But to temper your expectations, I should say that the fact that the person asking this question on the math stack exchange jumped right into prime numbers makes the puzzle a little misleading. If you look at all of the whole numbers, not just the primes, as you zoom out, you see very similar spirals. They're much cleaner, and now there's 44 of them instead of 20, but it means that the question of where the spirals come from is, perhaps disappointingly, completely separate from the question of what happens when we limit our view to primes. But don't be too disappointed, because both these questions are still phenomenal puzzles. There's a very satisfying answer for the spirals, and even if the primes don't cause the spirals, asking what goes on when you filter for those primes does lead you to one of the most important theorems about the distribution of prime numbers, known in number theory as Dirichlet's theorem. To kick things off, let's zoom back in a little bit smaller. Did you notice that as we were zooming out, there were these six little spirals? This offers a good starting point to explain what's happening in the two larger patterns. Notice how all the multiples of six form one arm of this spiral. Then the next one is every integer that's one above a multiple of six. Then after that, it includes all the numbers two above a multiple of six, and so on. Why is that? Well remember that each step forward in the sequence involves a turn of one radian. So when you count up by six, you've turned a total of six radians, which is a little bit less than two pi, a full turn. So every time you count up by six, you've almost made a full turn. It's just a little less. Another six steps, a slightly smaller angle. Six more steps, smaller still, and so on. With this angle changing gently enough that it gives the illusion of a single curving line. When you limit the view to prime numbers, all but two of these spiral arms will go away. And think about it, a prime number can't be a multiple of six, and it also can't be two above a multiple of six unless it's two, or four above a multiple of six, since all of those are even numbers. It also can't be three above a multiple of six, unless it's the number three itself, since all of those are divisible by three. So at least at this smaller scale, nothing magical is going on. And while we're in this simpler context, let me introduce some terminology that mathematicians use. Each one of these sequences, where you're counting up by six, is fancifully called a residue class mod six. The word residue here is sort of an overdramatic way of saying remainder, and mod means something like where the thing you divide by is. So for example, six goes into twenty a total of three times, and it leaves a remainder of two. So twenty has a residue of two mod six. Together with all the other numbers leaving a remainder of two when the thing you divide by is six, you have a full residue class mod six. I know that that sounds like the world's most pretentious way of saying, everything two above a multiple of six. But this is the standard jargon, and it is actually handy to have some words for the idea. So looking at our diagram in the lingo, each of these spiral arms corresponds to a residue class mod six, and the reason we see them is that six is close to two pi, turning six radians is almost a full turn. And the reason we see only two of them when filtering for primes is that all prime numbers are either one or five above a multiple of six, with the exceptions of two and three.
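You can check the mod-6 claim directly with a few lines (a throwaway sketch, not from the video; the bound is arbitrary):

```python
from sympy import isprime

counts = {r: 0 for r in range(6)}
for p in range(5, 100_000):      # skip the exceptional primes 2 and 3
    if isprime(p):
        counts[p % 6] += 1
print(counts)  # every prime above 3 lands in residue class 1 or 5 mod 6
```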
With all that as a warm up, let's think about the larger scale. In the same way that six steps is close to a full turn, taking 44 steps is very close to a whole number of turns. Here, let's compute it. There are two pi radians per rotation, right? So taking 44 steps, turning 44 radians, gives a total of 44 divided by two pi rotations, which comes out to be just barely above seven full turns. You could also write this by saying that 44 sevenths is a close approximation for two pi, which some of you may better recognize as the famous 22 sevenths approximation for pi. What this means is when you count up by multiples of 44 in the diagram, each point has almost the same angle as the last one, just a little bit bigger. So as you continue on with more and more, we get this very gentle spiral as the angle increases very slowly. Similarly, all the numbers one above a multiple of 44 make another spiral, but rotated one radian counterclockwise, same for everything two above a multiple of 44 and so on, eventually filling out the full diagram. To phrase it with our fancier language, each of these spiral arms shows a residue class mod 44. And maybe now you can tell me what happens when we limit our view to prime numbers. Primes cannot be a multiple of 44, so that arm won't be visible. Nor can a prime be two above a multiple of 44 or four above and so on, since all those residue classes have nothing but even numbers. Likewise, any multiples of 11 can't be prime, except for 11 itself, so the spiral of numbers 11 above a multiple of 44, they won't be visible, and neither will the spiral of numbers 33 above a multiple of 44. This is what gives the picture those Milky Way-seeming gaps. Each spiral we're left with is a residue class that doesn't share any prime factors with 44, and within each one of those arms that we can't reject out of hand, the prime numbers seem to be sort of randomly distributed, and that's a fact that I'd like you to tuck away, we'll return to it later. This is another good chance to inject some of the jargon that mathematicians use. What we care about right here are all the numbers between 0 and 43 that don't share a prime factor with 44, right? The ones that aren't even, and also aren't divisible by 11. When two numbers don't share any factors like this, we call them relatively prime, or also co-prime. In this example, you could count that there are 20 different numbers between 1 and 44 that are co-prime to 44, and this is a fact that a number theorist would compactly write by saying phi of 44 equals 20, where the Greek letter phi here refers to Euler's totient function, yet another needlessly fancy word, which is defined to be the number of integers from 1 up to n which are co-prime to n. It comes up enough that it's handy to have compact notation. More obscurely, and I had never heard this before, but I find it too delightful not to tell, these numbers are sometimes called the totatives of n. Back to the main thread, in short, what the user on the math stack exchange was seeing are two unrelated pieces of number theory, but illustrated in one drawing. The first is that 44 sevenths is a very close rational approximation for two pi, which results in the residue classes mod 44 being cleanly separated out. The second is that many of these residue classes contain zero prime numbers, or sometimes just one, so they won't show up, but on the other hand, primes do show up plentifully enough in all 20 of the other residue classes that it makes these spiral arms visible.
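Euler's totient is easy to compute by brute force for numbers this small; a sketch (sympy also ships a totient function if you'd rather not roll your own):

```python
from math import gcd

def phi(n):
    """Count the totatives: integers in 1..n that are co-prime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(phi(44))  # 20, one for each visible spiral arm
```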
And at this point, maybe you can predict what's going on at the larger scale. Just as six radians is vaguely close to a full turn, and 44 radians is quite close to 7 full turns, it just so happens that 710 radians is extremely close to a whole number of full turns. Visually you can see this by the fact that the point ends up almost exactly on the x-axis, but it's more compelling analytically. 710 radians is 710 divided by 2 pi rotations, which works out to be 113.0000095. Some of you may have seen this in another form. It's saying that 710 over 113 is a close approximation for two pi, which is more commonly seen in saying that 355 over 113 is a very good approximation for pi. Now if you want to understand where these rational approximations are coming from, and what it means for one like this to be unusually good, like way better than you would get for phi or e, or square root of 2, or other famous irrationals, I highly recommend taking a look at this great Mathologer video. For our storyline though, it means that when you move forward by steps of 710, the angle of each new point is almost exactly the same as the last one, just microscopically bigger. Even very far out, one of these sequences looks like a straight line. And of course the other residue classes mod 710 also form these nearly straight lines. 710 is a big number though, so when all of them are on screen, and there's only so many pixels on the screen, it's a little hard to make them out. So in this case, it's actually easier to see when we limit the view to primes, where you don't see many of those residue classes. In reality, with a little further zooming, you can see that there actually is a very gentle spiral to these. But the fact that it takes so long to become prominent is a wonderful illustration, maybe the best illustration I've ever seen, for just how good an approximation this is for two pi. Tying up the remaining loose thread here, if you want to understand what happens when you filter for primes, it's entirely analogous to what we did before. The factors of 710 are 71, 5 and 2. So if the remainder, or residue, is divisible by any of those, then so is the number. When you pull up all of the residue classes with even numbers, it looks like every other ray in the otherwise quite crowded picture. And then of those that remain, these are the ones that are divisible by 5, which are nice and evenly spaced at every 5th line. Notice, the fact that prime numbers never show up in any of these is what explains the pattern of the lines we saw at the beginning coming in clumps of 4. And moreover, of those remaining, these 4 residue classes are the ones that are divisible by 71, so the primes aren't going to show up there. And that's what explains why the clumps of 4 that we saw occasionally have like a missing tooth in your comb. And if you were wondering where that number 280 came from, it comes from counting how many of the numbers from 1 up to 710 don't share any prime factors with 710. These are the ones that we can't rule out for including primes based on some obvious divisibility consideration. This of course doesn't guarantee that any particular one will contain prime numbers, but at least empirically when you look at this picture, it actually seems like the primes are pretty evenly distributed among the remaining classes. Wouldn't you agree? This last point is actually the most interesting observation of the whole deal. It relates to a pretty deep fact in number theory, known as Dirichlet's theorem.
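Both numerical claims here are quick to verify, reusing the phi sketch from above:

```python
import math

print(710 / (2 * math.pi))  # 113.0000095..., almost exactly 113 full turns
print(phi(710))             # 280, the number of rays that can hold primes
```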
To take a simpler example than residue classes mod 710, think of those mod 10. Because we write numbers in base 10, this is the same thing as grouping numbers together by what their last digit is. Everything whose last digit is 0 is a residue class, everything whose last digit is 1 is another residue class, and so on. Other than 2, prime numbers can't have an even number as their last digit, since that means they're even. And likewise, any prime bigger than 5 can't end in a 5. There's nothing surprising there, that's one of the first facts you observe when you learn about prime numbers. Anything bigger than 5 has to end in either a 1, a 3, a 7, or a 9. A much more nuanced question, though, is how exactly these primes are divvied up among those remaining 4 groups. Here, let's make a quick histogram, counting through each prime number, where the bars are going to show what proportion of the primes that we've seen so far have a given last digit. So, in particular, the 2 and the 5 slots should go down to 0 over time. What would you predict is going to happen as we move through more and more primes? Well, as we get a lot of them, it seems like a pretty even spread between these 4 classes, about 25% each. And probably that's what you would expect, after all, why would prime numbers show some sort of preference for one last digit over another? But primes aren't random, they are a definite sequence, and they show patterns in other ways, and it's highly non-obvious how you would prove something like this. Or, for that matter, how do you rigorously phrase what it is that you want to prove? A mathematician might go about it something like this. If you look at all the prime numbers less than some big number x, and you consider what fraction of them are, say, one above a multiple of 10, that fraction should approach one fourth as x approaches infinity. And likewise for all of the other allowable residue classes, like 3 and 7 and 9. Of course, there's nothing special about 10. A similar fact should hold for any other number. Considering our old friends, the residue classes mod 44, for example, let's make a similar histogram, showing what proportion of the primes show up in each one of these. Again, as time goes on, we see an even spread between the 20 different allowable residue classes, which you can think of in terms of each spiral arm from our diagram having about the same number of primes as each of the others. Maybe that's what you'd expect, but this is a shockingly hard fact to prove. The first man who cracked this puzzle was Dirichlet in 1837, and it forms one of the crowning jewels at the foundation of modern analytic number theory. Histograms like these ones give a pretty good illustration of what the theorem is actually saying. Nevertheless, you might find it enlightening to see how it might be written in a math text, with all the fancy jargon and everything. It's essentially what we just saw for 10, but more general. Again, you look at all of the primes up to some bound x, but instead of asking for what proportion of them have a residue of, say, 1 mod 10, you ask what proportion have a residue of r mod n, where n is any number, and r is anything that's co-prime to n. Remember, that means it doesn't share any factors with n bigger than 1. Instead of approaching one fourth as x goes to infinity, that proportion goes to 1 divided by phi of n, where phi is that special function I mentioned earlier that gives the number of possible residues co-prime to n.
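Written out symbolically (one standard phrasing of the statement, not a quote from any particular text), this reads:

$$\lim_{x \to \infty} \frac{\#\{p \le x : p \text{ prime},\ p \equiv r \ (\mathrm{mod}\ n)\}}{\#\{p \le x : p \text{ prime}\}} = \frac{1}{\varphi(n)} \qquad \text{whenever } \gcd(r, n) = 1.$$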
And in case this is too clear for the reader, you might see it buried in more notation, where this denominator and the numerator are both written with a special prime counting function. The convention, rather confusingly, is to use the symbol pi for this function, even though it's totally unrelated to the number pi. In some contexts, when people refer to Dirichlet's theorem, they refer to a much more modest statement, which is simply that each of these residue classes that might have infinitely many primes does have infinitely many. In order to prove this, what Dirichlet did was show that the primes are just as dense in any one of these residue classes as in any other. For example, imagine someone asked you to prove that there are infinitely many primes ending in the number 1, and the way you do it is by showing that a quarter of all the primes end in a 1. Together with the fact that there are infinitely many primes, which we've known since Euclid, this gives a stronger statement, and a much more interesting one. Now the proof? Well, it's way more involved than would be reasonable to show here. One interesting fact worth mentioning is that it relies heavily on complex analysis, which is the study of doing calculus with functions whose inputs and outputs are complex numbers. Now that might seem weird, right? I mean, prime numbers seem wholly unrelated to the continuous world of calculus, much less when complex numbers end up in the mix. But since the early 19th century, this is absolutely par for the course when it comes to understanding how primes are distributed. And this isn't just antiquated technology either. Understanding the distribution of primes in residue classes like this continues to be relevant in modern research too. Some of the recent breakthroughs on small gaps between primes, edging towards that ever-elusive twin prime conjecture, have their basis in understanding how primes split up among these kinds of residue classes. Okay, looking back over the puzzle, I want to emphasize something. The original bit of data visualization whimsy that led to these patterns? Well, like, it doesn't matter. No one cares. There's nothing special about plotting (p, p) in polar coordinates. And most of the initial mystery in these spirals resulted from the artifacts that come from dealing with an integer number of radians, which is kind of weird. But on the other hand, this kind of play is clearly worth it if the end result is a line of questions that leads you to something like Dirichlet's theorem, which is important, especially if it inspires you to learn enough to understand the tactics of the underlying proof. No small task, by the way. And it isn't a coincidence that a fairly random question like this can lead you to an important and deep fact for math. What it means for a piece of math to be important and deep is that it connects to many other topics. So even an arbitrary exploration of numbers, as long as it's not too arbitrary, has a good chance of stumbling into something meaningful. Sure, you'll get a much more concentrated dosage of important facts by going through a textbook or a course, and there will be many fewer uninteresting dead ends. But there is something special about rediscovering these topics on your own.
If you effectively reinvent Euler's totient function before you've ever seen it defined, or if you start wondering about rational approximations before learning about continued fractions, or if you seriously explore how primes are divvied up between residue classes before you've even heard the name Dirichlet, then when you do learn those topics, you'll see them as familiar friends, not as arbitrary definitions, and that will almost certainly mean that you learn it more effectively.
Divergence and curl: The language of Maxwell's equations, fluid flow, and more
Today, you and I are going to get into divergence and curl. To make sure we're all on the same page, let's begin by talking about vector fields. Essentially, a vector field is what you get if you associate each point in space with a vector, some magnitude and direction. Maybe those vectors represent the velocities of particles of fluid at each point in space, or maybe they represent the force of gravity at many different points in space, or maybe a magnetic field strength. Quick note on drawing these. Often if you were to draw the vectors to scale, the longer ones end up just cluttering up the whole thing. So it's common to basically lie a little and artificially shorten ones that are too long, maybe using color to give some vague sense of length. Now in principle, vector fields in physics might change over time. In almost all real world fluid flow, the velocities of particles in a given region of space will change over time in response to the surrounding context. Wind is not a constant, it comes in gusts. An electric field changes as the charged particles characterizing it move around. But here, we'll just be looking at static vector fields, which maybe you think of as describing a steady state system. Also, while such vectors could in principle be three-dimensional, or even higher, we're just going to be looking at two dimensions. An important idea, which regularly goes unsaid, is that you can often understand a vector field which represents one physical phenomenon better by imagining what if it represented a different physical phenomenon. What if these vectors describing gravitational force instead defined a fluid flow? What would that flow look like? And what can the properties of that flow tell us about the original gravitational force? And what if the vectors defining a fluid flow were thought of as describing the downhill direction of a certain hill? Does such a hill even exist? And if so, what does it tell us about the original flow? These sorts of questions can be surprisingly helpful. For example, the ideas of divergence and curl are particularly viscerally understood when the vector field is thought of as representing fluid flow, even if the field you're looking at is really meant to describe something else, like an electric field. Here, take a look at this vector field, and think of each vector as describing the velocity of a fluid at that point. Notice that when you do this, that fluid behaves in a very strange, non-physical way. Around some points, like these ones, the fluid seems to just spring into existence from nothingness, as if there's some kind of source there. Some other points act more like sinks, where the fluid seems to disappear into nothingness. The divergence of a vector field, at a particular point of the plane, tells you how much this imagined fluid tends to flow out of or into small regions near it. For example, the divergence of a vector field evaluated at all of those points that act like sources will give a positive number. And it doesn't just have to be that all of the fluid is flowing away from that point. The divergence would also be positive if it was just that the fluid coming into it from one direction was slower than the flow coming out of it in another direction, since that would still insinuate a certain spontaneous generation. Now, on the flip side, if in a small region around a point, there seems to be more fluid flowing into it than out of it, the divergence at that point would be a negative number.
Remember, this vector field is really a function that takes in two-dimensional inputs and spits out two-dimensional outputs. The divergence of that vector field gives you a new function, one that takes in a single 2D point as its input, but its output depends on the behavior of the field in a small neighborhood around that point. In this way, it's analogous to a derivative, and that output is just a single number, measuring how much that point acts as a source or a sink. I'm purposefully delaying discussion of computations here; the understanding of what it represents is more important. Notice, this means that for an actual physical fluid, like water, rather than some imagined one used to illustrate an arbitrary vector field, then if that fluid is incompressible, the velocity vector field must have a divergence of zero everywhere. That's an important constraint on what kinds of vector fields could solve real-world fluid flow problems. For the curl at a given point, you also think about the fluid flow around it, but this time, you ask how much that fluid tends to rotate around the point. As in, if you were to drop a twig in the fluid at that point, somehow fixing its center in place, would it tend to spin around? Regions where that rotation is counterclockwise are said to have positive curl, and regions where it's clockwise have negative curl. And it doesn't have to be that all of the vectors around the input are pointing counterclockwise, or all of them are pointing clockwise. A point inside a region like this one, for example, would also have non-zero curl, since the flow is slow at the bottom, but quick up top, resulting in a net clockwise influence. And really, true proper curl is a three-dimensional idea. One where you associate each point in 3D space with a new vector, characterizing the rotation around that point, according to a certain right-hand rule. And I have plenty of content from my time at Khan Academy describing this in more detail, if you want. But for our main purpose, I'll just be referring to the two-dimensional variant of curl, which associates each point in 2D space with a single number, rather than a new vector. As I said, even though these intuitions are given in the context of fluid flow, both of these ideas are significant for other sorts of vector fields. One very important example is how electricity and magnetism are described by four special equations. These are known as Maxwell's equations, and they're written in the language of divergence and curl. This top one, for example, is Gauss's law, stating that the divergence of an electric field at a given point is proportional to the charge density at that point. Unpacking the intuition for this, you might imagine positively charged regions as acting like sources of some imagined fluid and negatively charged regions as being the sinks of that fluid. And throughout parts of space where there is no charge, the fluid would be flowing incompressibly, just like water. Of course there's not some literal electric fluid, but it's a very useful and a very pretty way to read an equation like this. Similarly, another important equation is that the divergence of the magnetic field is zero everywhere. And you could understand that by saying that if the field represents a fluid flow, that fluid would be incompressible with no sources and no sinks. It acts just like water. This also has the interpretation that magnetic monopoles, something that acts just like a north or a south end of a magnet in isolation, don't exist.
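For reference, the standard 2D component formulas are div F = dP/dx + dQ/dy and (2D) curl F = dQ/dx - dP/dy for a field F = (P, Q); the video defers computation details, so take this finite-difference sketch as one illustrative way to estimate both on a sampled field, with the grid and test fields being arbitrary choices.

```python
import numpy as np

def div_and_curl(P, Q, xs, ys):
    """P, Q: the field's x- and y-components sampled on a meshgrid.
    Returns div = dP/dx + dQ/dy and 2D curl = dQ/dx - dP/dy."""
    dPdy, dPdx = np.gradient(P, ys, xs)   # np.gradient: axis 0 is y, axis 1 is x
    dQdy, dQdx = np.gradient(Q, ys, xs)
    return dPdx + dQdy, dQdx - dPdy

xs = ys = np.linspace(-2, 2, 101)
X, Y = np.meshgrid(xs, ys)

div, curl = div_and_curl(X, Y, xs, ys)    # radial "source" field (x, y)
print(div.mean(), curl.mean())            # ~2 and ~0: pure outward flow

div2, curl2 = div_and_curl(-Y, X, xs, ys) # rotation field (-y, x)
print(div2.mean(), curl2.mean())          # ~0 and ~2: incompressible, counterclockwise
```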
There's nothing analogous to the positive and negative charges of an electric field. Likewise, the last two equations tell us that the way that one of these fields changes depends on the curl of the other field. And really this is a purely three-dimensional idea, and a little outside of our main focus here. But the point is that divergence and curl arise in contexts that are unrelated to flow. And side note, the back and forth from these last two equations is what gives rise to light waves. And quite often, these ideas are useful in contexts which don't even seem spatial in nature at first. To take a classic example that students of differential equations often study, let's say that you wanted to track the population sizes of two different species, where maybe one of them is a predator of another. The state of this system at a given time, meaning the two population sizes, could be thought of as a point in two-dimensional space, what you would call the phase space of this system. For a given pair of population sizes, these populations may be inclined to change based on things like how reproductive are the two species, or just how much does one of them enjoy eating the other one. These rates of change would typically be written analytically as a set of differential equations. It's okay if you don't understand these particular equations. I'm just throwing them up for those of you who are curious, and because replacing variables with pictures makes me laugh a little bit. But the relevance here is that a nice way to visualize what such a set of equations is really saying is to associate each point on the plane, each pair of population sizes, with a vector indicating the rates of change for both variables. For example, when there are lots of foxes but relatively few rabbits, the number of foxes might tend to go down because of the constrained food supply, and the number of rabbits might also tend to go down because they're getting eaten by all of the foxes, potentially at a rate that's faster than they can reproduce. So a given vector here is telling you how, and how quickly, a given pair of population sizes tends to change. Notice, this is a case where the vector field is not about physical space, but instead it's a representation of a certain dynamic system that has two variables, and how that system evolves over time. This can maybe also give a sense for why mathematicians care about studying the geometry of higher dimensions. What if our system was tracking more than just two or three numbers? Now the flow associated with this field is called the phase flow for our differential equation, and it's a way to conceptualize at a glance how many possible starting states would evolve over time. Operations like divergence and curl can help to inform you about the system. Do the population sizes tend to converge towards a particular pair of numbers, or are there some values that they diverge away from? Are there cyclic patterns, and are those cycles stable or unstable? To be perfectly honest with you, for something like this, you'd often want to bring in related tools beyond just divergence and curl; those would give you the full story. But the frame of mind that practice with these two ideas brings you carries over well to studying setups like this with similar pieces of mathematical machinery.
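The transcript deliberately doesn't dwell on the particular equations, so purely for concreteness, here's a sketch that assumes the classic Lotka-Volterra predator-prey form with made-up rate constants, and draws the phase-space vector field being described:

```python
import numpy as np
import matplotlib.pyplot as plt

a, b, c, d = 1.1, 0.4, 0.4, 0.1     # illustrative reproduction/predation rates

def rates(rabbits, foxes):
    dR = a * rabbits - b * rabbits * foxes   # rabbits: breed, get eaten
    dF = -c * foxes + d * rabbits * foxes    # foxes: starve, eat rabbits
    return dR, dF

R, F = np.meshgrid(np.linspace(0.5, 12, 20), np.linspace(0.5, 6, 20))
dR, dF = rates(R, F)
plt.quiver(R, F, dR, dF)            # one vector per pair of population sizes
plt.xlabel("rabbits"); plt.ylabel("foxes")
plt.show()                          # the arrows trace out the phase flow's cycles
```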
Now if you really want to get a handle on these ideas, you'd want to learn how to compute them and to practice those computations, and I'll leave some links to where you can learn about this and practice if you want. Again, I did some videos and articles and worked examples for Khan Academy on this topic during my time there, so too much detail here will start to feel redundant for me. But there is one thing worth bringing up regarding the notation associated with these computations. Commonly, the divergence is written as a dot product between this upside down triangle thing and your vector field function, and the curl is written as a similar cross product. Sometimes students are told that this is just a notational trick, each computation involves a certain sum of certain derivatives and treating this upside down triangle as if it was a vector of derivative operators can be a helpful way to keep everything straight. But it is actually more than just a mnemonic device, there is a real connection between divergence and the dot product and between curl and the cross product. Even though we won't be doing practice computations here, I would like to give you at least some vague sense for how these four ideas are connected. Imagine taking some small step from one point of your vector field to another. The vector at this new point will likely be a little bit different from the one at the first point. There will be some change to the function after that step, which you might see by subtracting off your original vector from that new one. And this kind of difference to your function over small steps is what differential calculus is all about. Now the dot product gives you kind of a measure of how aligned two vectors are, right? Now the dot product of your step vector with that difference vector that it causes tends to be positive in regions where the divergence is positive and vice versa. In fact, in some sense, the divergence is a sort of average value for this dot product of a step with a change to the output that it causes over all possible step directions, assuming that things are rescaled appropriately. I mean, think about it. If a step in some direction causes a change to that vector in that same direction, this corresponds to a tendency for outward flow for positive divergence. And on the flip side, if those dot products tend to be negative, meaning the difference vector is pointing in the opposite direction from the step vector, that corresponds with the tendency for inward flow, negative divergence. Similarly, remember that the cross product is a sort of measure for how perpendicular two vectors are. So the cross product of your step vector with the difference vector that it causes tends to be positive in regions where the curl is positive and vice versa. You might think of the curl as a sort of average of this step vector difference vector cross product. If a step in some direction corresponds to a change perpendicular to that step, that corresponds to a tendency for flow rotation. So typically, this is the part where there might be some kind of sponsor message. But one thing I want to do with the channel moving ahead is to stop doing sponsored content, and instead make things just about the direct relationship with the audience. 
I mean that not only in the sense of the funding model with direct support through Patreon, but also in the sense that I think these videos can better accomplish their goal if each one of them feels like it's just about you and me, sharing in a love of math, with no other motive. Especially in the cases where the viewers are students. There are some other reasons, and I wrote up some of my full thoughts on this over on Patreon, which you certainly don't have to be a supporter to read; that's just where it lives. I think advertising on the internet occupies a super wide spectrum, from truly degenerate clickbait, up to genuinely well aligned win-win-win partnerships. Now I've always taken care only to do promotions for companies that I would genuinely recommend. To take one example, you may have noticed that I did a number of promos for Brilliant, and it's really hard to imagine better alignment than that, right? I mean, I try to inspire people to be interested in math, but I'm also a firm believer that videos aren't enough, that you need to actively solve problems, and here's a platform that provides practice. And likewise for any others I've promoted too, I always made sure to feel good about the alignment. But even still, even if you seek out the best possible partnerships, whenever advertising is in the equation, the incentives will always be to try reaching as many people as possible. But when the model is more exclusively about a direct relationship with the audience, the incentives are pointed towards maximizing how valuable people find the experiences that they're given. I think those two goals are correlated, but not always perfectly. I like to think that I'll always try to maximize the value of the experience, no matter what, but for that matter, I also like to think that I can consistently wake up early and resist eating too much sugar. What matters more than wanting something is to actually align incentives. Anyway, if you want to hear more of my thoughts, I'll link to the Patreon post, and thank you again to existing supporters for making this possible, and I'll see you all next video.
Cross products in the light of linear transformations | Chapter 11, Essence of linear algebra
Hey folks, where we left off, I was talking about how to compute a three-dimensional cross product between two vectors, v cross w. It's this funny thing where you write a matrix whose second column has the coordinates of v, and whose third column has the coordinates of w. But the entries of that first column, weirdly, are the symbols i-hat, j-hat, and k-hat, where you just pretend like those guys are numbers for the sake of computations. Then, with that funky matrix in hand, you compute its determinant. If you just chug along with those computations, ignoring the weirdness, you get some constant times i-hat plus some constant times j-hat plus some constant times k-hat. How specifically you think about computing that determinant is kind of beside the point. All that really matters here is that you'll end up with three different numbers that are interpreted as the coordinates of some resulting vector. From here, students are typically told to just believe that the resulting vector has the following geometric properties. Its length equals the area of the parallelogram defined by v and w. It points in a direction perpendicular to both v and w, and this direction obeys the right hand rule, in the sense that if you point your forefinger along v and your middle finger along w, then when you stick up your thumb, it'll point in the direction of the new vector. There are some brute force computations that you could do to confirm these facts, but I want to share with you a really elegant line of reasoning. It leverages a bit of background though, so for this video I'm assuming that everybody has watched chapter five on the determinant and chapter seven where I introduced the idea of duality. As a quick reminder, the idea of duality is that anytime you have a linear transformation from some space to the number line, it's associated with a unique vector in that space, in the sense that performing the linear transformation is the same as taking a dot product with that vector. Numerically, this is because one of those transformations is described by a matrix with just one row, where each column tells you the number that each basis vector lands on. And multiplying this matrix by some vector v is computationally identical to taking the dot product between v and the vector you get by turning that matrix on its side. The takeaway is that whenever you're out in the mathematical wild and you find a linear transformation to the number line, you will be able to match it to some vector, which is called the dual vector of that transformation, so that performing the linear transformation is the same as taking a dot product with that vector. The cross product gives us a really slick example of this process in action. It takes some effort, but it's definitely worth it. What I'm going to do is define a certain linear transformation from three dimensions to the number line, and it'll be defined in terms of the two vectors v and w. Then, when we associate that transformation with its dual vector in 3D space, that dual vector is going to be the cross product of v and w. The reason for doing this will be that understanding that transformation is going to make clear the connection between the computation and the geometry of the cross product. So to back up a bit, remember in two dimensions what it meant to compute the 2D version of the cross product? When you have two vectors v and w, you put the coordinates of v as the first column of a matrix, and the coordinates of w as the second column of a matrix.
Then you just compute the determinant. There's no nonsense with basis vectors stuck in a matrix or anything like that, just an ordinary determinant returning a number. Geometrically, this gives us the area of a parallelogram spanned out by those two vectors, with the possibility of being negative depending on the orientation of the vectors. Now if you didn't already know the 3D cross product and you're trying to extrapolate, you might imagine that it involves taking three separate 3D vectors, u, v and w, and making their coordinates the columns of a 3 by 3 matrix, then computing the determinant of that matrix. And as you know from chapter 5, geometrically, this would give you the volume of a parallelepiped spanned out by those three vectors, with a plus or minus sign depending on the right hand rule orientation of those three vectors. Of course, you all know that this is not the 3D cross product. The actual 3D cross product takes in two vectors and spits out a vector. It doesn't take in three vectors and spit out a number. But this idea actually gets us really close to what the real cross product is. Consider that first vector u to be a variable, say with variable entries x, y and z, while v and w remain fixed. What we have then is a function from three dimensions to the number line. You input some vector x, y, z and you get out a number by taking the determinant of a matrix whose first column is x, y, z and whose other two columns are the coordinates of the constant vectors v and w. Geometrically, the meaning of this function is that for any input vector x, y, z, you consider the parallelepiped defined by this vector, v, and w, then you return its volume, with a plus or minus sign depending on orientation. Now this might feel like kind of a random thing to do. I mean, where does this function come from? Why are we defining it this way? And I'll admit, at this stage, it might kind of feel like it's coming out of the blue, but if you're willing to go along with it and play around with the properties that this guy has, it's the key to understanding the cross product. One really important fact about this function is that it's linear. I'll actually leave it to you to work through the details of why this is true based on properties of the determinant. But once you know that it's linear, we can start bringing in the idea of duality. Once you know that it's linear, you know that there's some way to describe this function as matrix multiplication. Specifically, since it's a function that goes from three dimensions to one dimension, there will be a one by three matrix that encodes this transformation. And the whole idea of duality is that the special thing about transformations from several dimensions to one dimension is that you can turn that matrix on its side and instead interpret the entire transformation as the dot product with a certain vector. What we're looking for is the special 3D vector that I'll call p, such that taking the dot product between p and any other vector x, y, z gives the same result as plugging in x, y, z as the first column of a three by three matrix whose other two columns have the coordinates of v and w, then computing the determinant. I'll get to the geometry of this in just a moment, but right now let's dig in and think about what this means computationally. Taking the dot product between p and x, y, z will give us something times x plus something times y plus something times z, where those somethings are the coordinates of p.
But on the right side here, when you compute the determinant, you can organize it to look like some constant times x plus some constant times y plus some constant times z, where those constants involve certain combinations of the components of v and w. So those constants, those particular combinations of the coordinates of v and w, are going to be the coordinates of the vector p that we're looking for. But what's going on on the right here should feel very familiar to anyone who's actually worked through a cross product computation. Collecting the constant terms that are multiplied by x, by y, and by z like this is no different from plugging in the symbols i-hat, j-hat, and k-hat to that first column, and seeing which coefficients aggregate on each one of those terms. It's just that plugging in i-hat, j-hat, and k-hat is a way of signaling that we should interpret those coefficients as the coordinates of a vector. So what all of this is saying is that this funky computation can be thought of as a way to answer the following question. What vector p has the special property that when you take a dot product between p and some vector x, y, z, it gives the same result as plugging in x, y, z to the first column of a matrix whose other two columns have the coordinates of v and w, then computing the determinant? That's a bit of a mouthful, but it's an important question to digest for this video. Now for the cool part, which ties all this together with the geometric understanding of the cross product that I introduced last video. I'm going to ask the same question again, but this time we're going to try to answer it geometrically instead of computationally. What 3D vector p has the special property that when you take a dot product between p and some other vector x, y, z, it gives the same result as if you took the signed volume of a parallelepiped defined by this vector x, y, z along with v and w? Remember, the geometric interpretation of a dot product between a vector p and some other vector is to project that other vector onto p, then to multiply the length of that projection by the length of p. With that in mind, let me show a certain way to think about the volume of the parallelepiped that we care about. Start by taking the area of the parallelogram defined by v and w, then multiply it not by the length of x, y, z, but by the component of x, y, z that's perpendicular to that parallelogram. In other words, the way our linear function works on a given vector is to project that vector onto a line that's perpendicular to both v and w, then to multiply the length of that projection by the area of the parallelogram spanned by v and w. But this is the same thing as taking a dot product between x, y, z and a vector that's perpendicular to v and w with a length equal to the area of that parallelogram. What's more, if you choose the appropriate direction for that vector, the cases where the dot product is negative will line up with the cases where the right hand rule for the orientation of x, y, z, v and w is negative. This means that we just found a vector p so that taking a dot product between p and some vector x, y, z is the same thing as computing that determinant of a 3 by 3 matrix whose columns are x, y, z, the coordinates of v and w. So the answer that we found earlier, computationally, using that special notational trick, must correspond geometrically to this vector. This is the fundamental reason why the computation and the geometric interpretation of the cross product are related.
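The duality claim is easy to spot-check numerically; here's a sketch with arbitrary made-up vectors, using numpy's built-in cross product for p:

```python
import numpy as np

v, w = np.array([2.0, 0.0, 1.0]), np.array([-1.0, 3.0, 0.5])
p = np.cross(v, w)                                   # the candidate dual vector

rng = np.random.default_rng(0)
for _ in range(3):
    x = rng.normal(size=3)
    det = np.linalg.det(np.column_stack([x, v, w]))  # x, y, z as the first column
    print(np.isclose(det, np.dot(p, x)))             # True: dot with p = determinant
```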
Just to sum up what happened here, I started by defining a linear transformation from 3D space to the number line and it was defined in terms of the vectors v and w. Then I went through two separate ways to think about the dual vector of this transformation. The vector such that applying the transformation is the same thing as taking a dot product with that vector. On the one hand, a computational approach will lead you to the trick of plugging in the symbols i hat, j hat and k hat to the first column of a matrix and computing the determinant. But thinking geometrically, we can deduce that this dual vector must be perpendicular to v and w with a length equal to the area of the parallelogram spanned out by those two vectors. Since both of these approaches give us a dual vector to the same transformation, they must be the same vector. So that wraps up dot products and cross products and the next video will be a really important concept for linear algebra, change of basis.
e to the pi i, a nontraditional take (old version)
E to the pi i equals negative 1 is one of the most famous equations in math, but it's also one of the most confusing. Those watching this video likely fall into one of three categories. 1. You know what each term means, but the statement as a whole seems nonsensical. 2. You were lucky enough to see what this means in some long formulas explaining why it works in a calculus class, but it still feels like black magic. Or 3. It's not entirely clear what the terms themselves are. Those in this last category might be in the best position to understand the explanation I'm about to give, since it doesn't require any calculus or advanced math, but will instead require an open-mindedness to reframing how we think about numbers. Once we do this, it will become clear what the equation means, why it's true, and most importantly, why it makes intuitive sense. First, let's get one thing straight: what we write as e to the x is not repeated multiplication. That would only make sense when x is a number that we can count, 1, 2, 3, and so on, and even then you'd have to define the number e first. To understand what this function actually does, we first need to learn how to think about numbers as actions. We are first taught to think about numbers as counting things, and addition and multiplication are thought of with respect to counting. However, this mode of thinking becomes tricky when we talk about fractional amounts, very tricky when we talk about irrational amounts, and downright nonsensical when we introduce things like the square root of negative 1. Instead, we should think of each number as simultaneously being three things: a point on an infinitely extending line, an action which slides that line along itself, in which case we call it an adder, and an action which stretches the line, in which case we call it a multiplier. When you think about a number as an adder, you could imagine adding it with all numbers as points on the line all at once. But instead, forget that you already know anything about addition, so that we can reframe how you think about it. Think of adders purely as sliding the line, with the following rule: you slide until the point corresponding to 0 ends up where the point corresponding with the adder itself started. When you successively apply two adders, the effect will be the same as just applying some other adder. This is how we define their sum. Likewise, forget that you already know anything about multiplication, and think of a multiplier purely as a way to stretch the line. Now the rule is to fix 0 in place, and bring the point corresponding with 1 to where the point corresponding with the multiplier itself started off, keeping everything evenly spaced as you do so. Just as with adders, we can now redefine multiplication as the successive application of two different actions. The life's ambition of e to the x is to transform adders into multipliers, and to do so as naturally as possible. For instance, if you take two adders, successively apply them, then pump the resulting sum through the function, it's the same as first putting each adder through the function separately, then successively applying the two multipliers you get. More succinctly, e to the x plus y equals e to the x times e to the y. If e to the x was thought of as repeated multiplication, this property would be a consequence, but really it goes the other way around.
You should think of this property as defining e to the x, and the fact that the special case of counting numbers has anything to do with repeated multiplication is a consequence of the property. Multiple functions satisfy this property, but when you try to define one explicitly, one stands out as being the most natural, and we express it with this infinite sum. By the way, the number e is just defined to be the value of this function at 1. The number isn't nearly as special as the function as a whole, and the convention to write this function as e to the x is a vestige of its relationship with repeated multiplication. The other, less natural functions satisfying this property are the exponentials with different bases. Now the expression e to the pi i at least seems to have some meaning, but you shouldn't think about this infinite sum when trying to make sense of it. You only need to think about turning adders into multipliers. You see, we can also play this game of sliding and stretching in the 2D plane, and this is what complex numbers are. Each number is simultaneously a point on the plane, an adder which slides the plane so that the point for 0 lands on the point for the number, and a multiplier which fixes 0 in place and brings the point for 1 to the point for the number while keeping everything evenly spaced. This can now include rotating, along with some stretching and shrinking. All of the actions of the real numbers still apply, sliding side to side and stretching, but now we have a whole host of new actions. For instance, take this point here, which we call i. As an adder, it slides the plane up, and as a multiplier, it turns it a quarter of the way around. Since multiplying it by itself gives negative 1, which is to say applying this action twice is the same as the action of negative 1 as a multiplier, it is the square root of negative 1. All adding is some combination of sliding sideways and sliding up or down, and all multiplication is some combination of stretching and rotating. Since we already know that e to the x turns slides side to side into stretches, the most natural thing you might expect is for it to turn this new dimension of adders, slides up and down, directly into the new dimension of multipliers, rotations. In terms of points on the plane, this would mean e to the x takes points on this vertical line, which correspond to adders that slide the plane up and down, and puts them on the circle with radius 1, which corresponds with the multipliers that rotate the plane. The most natural way you could imagine doing this is to wrap the line around the circle without stretching or squishing it, which would mean it takes a length of 2 pi to go completely around the circle, since by definition this is the ratio of the circumference of a circle to its radius. This means going up pi translates to going exactly halfway around the circle. When in doubt, if there's a natural way to do things, this is exactly what e to the x will do, and this case is no exception. If you want to see a full justification for why e to the x behaves this way, see this additional video here. So there you have it. This function e to the x takes the adder pi i to the multiplier negative 1.
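As a quick numerical companion, here is a short Python sketch, under the assumption that truncating the infinite sum at 50 terms is accurate enough for these inputs; it checks the adder-to-multiplier property and evaluates the function at pi times i.

```python
# exp defined by its infinite sum (truncated), checked on the
# adder-to-multiplier property and on the input pi * i.
import math

def exp_series(x, terms=50):
    """1 + x + x^2/2! + x^3/3! + ... truncated at `terms` terms."""
    return sum(x**n / math.factorial(n) for n in range(terms))

a, b = 0.7, 1.3
# Adding inputs corresponds to multiplying outputs:
print(exp_series(a + b))                # ~7.389
print(exp_series(a) * exp_series(b))    # ~7.389

# The adder pi*i becomes the multiplier -1, a half-turn of the plane:
print(exp_series(math.pi * 1j))         # ~(-1 + 0j)
```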
The essence of calculus
Hey everyone, Grant here. This is the first video in a series on the essence of calculus, and I'll be publishing the following videos once per day for the next 10 days. The goal here, as the name suggests, is to really get the heart of the subject out in one binge-watchable set. But with a topic that's as broad as calculus, there are a lot of things that can mean. So here's what I have in mind specifically. Calculus has a lot of rules and formulas, which are often presented as things to be memorized. Lots of derivative formulas, the product rule, the chain rule, implicit differentiation, the fact that integrals and derivatives are opposite, Taylor series, just a lot of things like that. And my goal is for you to come away feeling like you could have invented calculus yourself. That is, to cover all those core ideas, but in a way that makes clear where they actually come from and what they really mean, using an all-around visual approach. Inventing math is no joke, and there is a difference between being told why something's true and actually generating it from scratch. But at all points, I want you to think to yourself, if you were an early mathematician pondering these ideas and drawing out the right diagrams, does it feel reasonable that you could have stumbled across these truths yourself? In this initial video, I want to show how you might stumble into the core ideas of calculus by thinking very deeply about one specific bit of geometry, the area of a circle. Maybe you know that this is pi times its radius squared, but why? Is there a nice way to think about where this formula comes from? Well, contemplating this problem and leaving yourself open to exploring the interesting thoughts that come about can actually lead you to a glimpse of three big ideas in calculus: integrals, derivatives, and the fact that they're opposites. But the story starts more simply, just you and a circle, let's say with radius three. You're trying to figure out its area, and after going through a lot of paper trying different ways to chop up and rearrange the pieces of that area, many of which might lead to their own interesting observations, maybe you try out the idea of slicing up the circle into many concentric rings. This should seem promising, because it respects the symmetry of the circle, and math has a tendency to reward you when you respect its symmetries. Let's take one of those rings, which has some inner radius r that's between 0 and 3. If we can find a nice expression for the area of each ring like this one, and if we have a nice way to add them all up, it might lead us to an understanding of the full circle's area. Maybe you start by imagining straightening out this ring. And you could try thinking through exactly what this new shape is and what its area should be, but for simplicity, let's just approximate it as a rectangle. The width of that rectangle is the circumference of the original ring, which is 2 pi times r, right? That's essentially the definition of pi. And its thickness? Well, that depends on how finely you chopped up the circle in the first place, which was kind of arbitrary. In the spirit of using what will come to be standard calculus notation, let's call that thickness dr, for a tiny difference in the radius from one ring to the next. Maybe you think of it as something like 0.1. So approximating this unwrapped ring as a thin rectangle, its area is 2 pi times r, the radius, times dr, the little thickness.
And even though that's not perfect, for smaller and smaller choices of dr, this is actually going to be a better and better approximation for that area, since the top and the bottom sides of this shape are going to get closer and closer to being exactly the same length. So let's just move forward with this approximation, keeping in the back of our minds that it's slightly wrong, but it's going to become more accurate for smaller and smaller choices of dr. That is, if we slice up the circle into thinner and thinner rings. So just to sum up where we are, you've broken up the area of the circle into all of these rings, and you're approximating the area of each one of those as 2 pi times its radius times dr, where the specific value for that inner radius ranges from 0 for the smallest ring up to just under 3 for the biggest ring, spaced out by whatever the thickness is that you choose for dr, something like 0.1. And notice that the spacing between the values here corresponds to the thickness dr of each ring, the difference in radius from one ring to the next. In fact, a nice way to think about the rectangles approximating each ring's area is to fit them all upright, side by side, along this axis. Each one has a thickness dr, which is why they fit so snugly right there together, and the height of any one of these rectangles sitting above some specific value of r, like 0.6, is exactly 2 pi times that value. That's the circumference of the corresponding ring that this rectangle approximates. Heights like this 2 pi r can actually get kind of tall for the screen. I mean, 2 times pi times 3 is around 19, so let's just throw up a y-axis that's scaled a little differently, so that we can actually fit all of these rectangles on the screen. A nice way to think about this setup is to draw the graph of 2 pi r, which is a straight line that has a slope of 2 pi. Each of these rectangles extends up to the point where it just barely touches that graph. Again, we're being approximate here. Each of these rectangles only approximates the area of the corresponding ring from the circle. But remember, that approximation, 2 pi r times dr, gets less and less wrong as the size of dr gets smaller and smaller. And this has a very beautiful meaning when we're looking at the sum of the areas of all those rectangles. For smaller and smaller choices of dr, you might at first think that that turns the problem into a monstrously large sum. I mean, there are many, many rectangles to consider, and the decimal precision of each one of their areas is going to be an absolute nightmare. But notice, all of their areas in aggregate just look like the area under a graph. And that portion under the graph is just a triangle, a triangle with a base of 3 and a height that's 2 pi times 3. So its area, 1 half base times height, works out to be exactly pi times 3 squared. Or, if the radius of our original circle was some other value, capital R, that area comes out to be pi times R squared. And that's the formula for the area of a circle. It doesn't matter who you are or what you typically think of math, that right there is a beautiful argument. But if you want to think like a mathematician here, you don't just care about finding the answer. You care about developing general problem-solving tools and techniques. So take a moment to meditate on what exactly just happened and why it worked, because the way that we transitioned from something approximate to something precise is actually pretty subtle, and it cuts deep to what calculus is all about.
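For anyone who wants to see the ring argument numerically, here is a minimal sketch in Python (my own illustration, not code from the video): it adds up 2 pi r dr over the rings and compares against pi R squared as dr shrinks.

```python
# Sum the approximate ring areas 2*pi*r*dr and watch the total
# approach the exact circle area pi * R**2 as dr gets smaller.
import math

R = 3.0
for dr in (0.1, 0.01, 0.001):
    n = round(R / dr)                          # number of rings
    inner_radii = [i * dr for i in range(n)]   # 0, dr, 2*dr, ...
    ring_sum = sum(2 * math.pi * r * dr for r in inner_radii)
    print(dr, ring_sum)

print(math.pi * R**2)   # exact area, ~28.2743
```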
You had this problem that could be approximated with the sum of many small numbers, each of which looked like 2 pi r times dr, for values of r ranging between 0 and 3. Remember, the small number dr here represents our choice for the thickness of each ring, for example, 0.1. And there are two important things to note here. First of all, not only is dr a factor in the quantities we're adding up, 2 pi r times dr, it also gives the spacing between the different values of r. And secondly, the smaller our choice for dr, the better the approximation. Adding all of those numbers could be seen in a different, pretty clever way as adding the areas of many thin rectangles sitting underneath a graph, the graph of the function 2 pi r in this case. And this is key: by considering smaller and smaller choices for dr, corresponding to better and better approximations of the original problem, the sum, thought of as the aggregate area of those rectangles, approaches the area under the graph. And because of that, you can conclude that the answer to the original question, in full unapproximated precision, is exactly the same as the area underneath this graph. A lot of other hard problems in math and science can be broken down and approximated as the sum of many small quantities. Things like figuring out how far a car has traveled based on its velocity at each point in time. In a case like that, you might range through many different points in time, and at each one multiply the velocity at that time times a tiny change in time, dt, which would give the corresponding little bit of distance traveled during that little time. I'll talk through the details of examples like this later in the series, but at a high level, many of these types of problems turn out to be equivalent to finding the area under some graph, in much the same way that our circle problem did. This happens whenever the quantities that you're adding up, the ones whose sum approximates the original problem, can be thought of as the areas of many thin rectangles sitting side by side like this. If finer and finer approximations of the original problem correspond to thinner and thinner rectangles, then the original problem is going to be equivalent to finding the area under some graph. Again, this is an idea we'll see in more detail later in the series, so don't worry if it's not 100% clear right now. The point now is that you, as the mathematician having just solved a problem by reframing it as the area under a graph, might start thinking about how to find the areas under other graphs. I mean, we were lucky in the circle problem that the relevant area turned out to be a triangle, but imagine instead something like a parabola, the graph of x squared. What's the area underneath that curve, say between the values of x equals 0 and x equals 3? Well, it's hard to think about, right? So let me reframe that question in a slightly different way. We'll fix that left endpoint in place at 0 and let the right endpoint vary. Are you able to find a function A of x that gives you the area under this parabola between 0 and x? A function A of x like this is called an integral of x squared. Calculus holds within it the tools to figure out what an integral like this is, but right now it's just a mystery function to us. We know it gives the area under the graph of x squared between some fixed left point and some variable right point, but we don't know what it is. And again, the reason we care about this kind of question is not just for the sake of asking hard geometry questions.
It's because many practical problems that can be approximated by adding up a large number of small things can be reframed as a question about an area under a certain graph. And I'll tell you right now that finding this area, this integral function, is genuinely hard. And whenever you come across a genuinely hard question in math, a good policy is to not try too hard to get at the answer directly, since usually you just end up banging your head against a wall. Instead, play around with the idea with no particular goal in mind. Spend some time building up familiarity with the interplay between the function defining the graph, in this case x squared, and the function giving the area. In that playful spirit, if you're lucky, here's something that you might notice. When you slightly increase x by some tiny nudge dx, look at the resulting change in area, represented with this sliver that I'm going to call dA, for a tiny difference in area. That sliver can be pretty well approximated with a rectangle, one whose height is x squared and whose width is dx. And the smaller the size of that nudge dx, the more that sliver actually looks like a rectangle. Now this gives us an interesting way to think about how A of x is related to x squared. A change to the output of A, this little dA, is about equal to x squared, where x is whatever input you started at, times dx, the little nudge to the input that caused A to change. Or, rearranged, dA divided by dx, the ratio of a tiny change in A to the tiny change in x that caused it, is approximately whatever x squared is at that point. And that's an approximation that should get better and better for smaller and smaller choices of dx. In other words, we don't know what A of x is, that remains a mystery, but we do know a property that this mystery function must have. When you look at two nearby points, for example 3 and 3.001, consider the change to the output of A between those two points, the difference between the mystery function evaluated at 3.001 and evaluated at 3. That change, divided by the difference in the input values, which in this case is 0.001, would be about equal to the value of x squared for the starting input, in this case 3 squared. And this relationship between tiny changes to the mystery function and the values of x squared itself is true at all inputs, not just 3. That doesn't immediately tell us how to find A of x, but it provides a very strong clue that we can work with. And there's nothing special about the graph x squared here. Any function defined as the area under some graph has this property, that dA divided by dx, a slight nudge to the output of A divided by a slight nudge to the input that caused it, is about equal to the height of the graph at that point. Again, that's an approximation that gets better and better for smaller choices of dx. And here, we're stumbling into another big idea from calculus: derivatives. This ratio, dA divided by dx, is called the derivative of A. Or, more technically, the derivative is whatever this ratio approaches as dx gets smaller and smaller. I'll dive much more deeply into the idea of a derivative in the next video. But loosely speaking, it's a measure of how sensitive a function is to small changes in its input. You'll see as the series goes on that there are many, many ways that you can visualize a derivative, depending on what function you're looking at and how you think about tiny nudges to its output. And we care about derivatives because they help us solve problems.
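Here is a small sketch of that observation in code, assuming a simple Riemann-sum stand-in for the mystery area function; the names are illustrative only.

```python
# Approximate A(x), the area under t^2 on [0, x], with thin rectangles,
# then check that dA/dx lands near x^2.
def area_under_square(x, dx=1e-4):
    """Riemann-sum approximation of the area under t^2 from 0 to x."""
    n = round(x / dx)
    return sum((i * dx) ** 2 * dx for i in range(n))

x, nudge = 3.0, 0.001
dA = area_under_square(x + nudge) - area_under_square(x)
print(dA / nudge)   # ~9.0
print(x ** 2)       # 9.0, the height of the graph at x = 3
```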
And in our little exploration here, we already have a slight glimpse of one way that they're used. They are the key to solving integral questions, problems that require finding the area under a curve. Once you've gained enough familiarity with computing derivatives, you'll be able to look at a situation like this one, where you don't know what a function is, but you do know that its derivative should be x squared, and from that reverse engineer what the function must be. And this back and forth between integrals and derivatives, where the derivative of a function for the area under a graph gives you back the function defining the graph itself, is called the fundamental theorem of calculus. It ties together the two big ideas of integrals and derivatives, and it shows how, in some sense, each one is an inverse of the other. All of this is only a high-level view, just a peek at some of the core ideas that emerge in calculus. And what follows in the series are the details, for derivatives and integrals and more. At all points, I want you to feel that you could have invented calculus yourself. That if you drew the right pictures and played with each idea in just the right way, these formulas and rules and constructs that are presented could have just as easily popped out naturally from your own explorations. And before you go, it would feel wrong not to give the people who supported this series on Patreon a well-deserved thanks, both for their financial backing, as well as for the suggestions they gave while the series was being developed. You see, supporters got early access to the videos as I made them, and they'll continue to get early access for future "essence of" type series. And as a thanks to the community, I keep ads off of new videos for their first month. I'm still astounded that I can spend time working on videos like these, and in a very direct way, you are the one to thank for that.
Summer of Math Exposition 2 Invitation
I've got that wanderlust, gonna work this scene. We will get back to this buddy at the end of the video. But before that, I want to tell you about the Summer of Math Exposition number two. So last year we did the Summer of Math Exposition number one, which was essentially an open invitation for people to make explanatory math content that they might have thought about making before, using this as an excuse for us to all do it at the same time and together. And the way we did it is we framed it as a contest. So we had different participants, a lot of them were videos, though not all of them were videos. We had a process for selecting winners, and aside from the competition aspect of it, which is really just an excuse to have a deadline and have that little extra push to try to make something as good as you can, the most exciting part was just how many really solid pieces of content came out of that. In fact, I was curious the other day, and I went to look at the analytics, just looking at the video content, mainly because that's where it's easiest to count the total number of hits. Of all of the video submissions to the first Summer of Math Exposition, if we count up all the views that they accumulated, it's over 15 million total views. So we want to do it again. If you have some kind of math explainer that you've been thinking about making and posting online in some capacity, whatever that is, and if you want to use the summer as an excuse to make it happen, there are really just three main bits of logistics. Number one, subscribe to the official mailing list that we're going to use for it. All of the official communication about things like timelines, rules for submissions, and answers to any questions that people might have had, all of it's going to live there. Secondly, the submission deadline is August 15th. So try to finish whatever you're making maybe five days before that, because we have to be immovable with some kind of line, and August 15th is going to be that line. And then, if you want to engage with other people who are also participating in the event, maybe share some of your work or get some feedback or just chat about math in general, there's a Discord for it. And last year the Discord was really the lifeblood of the whole event. I mean, we had the announcement, we had the choosing of the winners, but all of the real community components of it happened on that Discord in between. But with this year, I also wanted to add one tiny little twist to the whole event. So I've always thought it might be nice to somehow play matchmaker between domain experts, people like mathematicians or professors of other kinds or teachers or professionals, the people who have the know-how and the experience with respect to some technical topic, who know the best stories to tell but maybe don't have the time to actually put together a video, or don't have the video-making and content creation skills, and then pair them together with content producers, whether that's people who are really interested in making videos, video editing, animation, or maybe web developers interested in interactive blog posts, game developers, artists, whatever the form of content creation might be.
Because if you limit your scope to the world of online educational content creation by individuals, you have to live in this little intersection here, between the people who have good lessons to teach and the people who have the time, the interest, and the technical know-how to put together that content. But with well-organized collaboration, you can get a lot of good content out of the union of those two sets, rather than just the intersection. And I think a shining example of this would be the entire channel Numberphile, where you've got Brady, who's a very talented journalist and interviewer and film creator, videographer, and he goes and interviews the various experts out there. And a lot of the experts wouldn't be making their own videos, but by pairing together with Brady, they get a really polished and well-thought-out product from it all. Another interesting example of a collaboration like this one is, let me see what it was called, how to construct the Leech lattice. It's on the channel for Richard Borcherds, who's a Fields Medalist mathematician who makes a lot of his own videos, just kind of overhead-shot lectures. But this one was a partnership with someone who went in and took one of those lectures, but added some animations and visuals on top to really help clarify what was being said. Because the whole thing is about lattices in 24 dimensions, which is hard to picture. It's a very pleasing construction of them, which is both described and illustrated very beautifully. And to incentivize more collaborations like that one, we've put together a simple space to help people find collaborators, more on that in a moment. And then at least one of the winners that we choose this year will be from the set of people who found a collaborator in this way. And also, I will throw my hat into this bunch and try to partner with some of the people in that space to maybe make a 3blue1brown video. So the way this will work might look a little bit weird at first, but bear with me. We've set up a GitHub repo, and we're going to use GitHub issues on that repository as a way of organizing requests for collaboration. So let's say you're a mathematician, you have some idea for a good piece of expository content. Maybe it's a video you think should get made, or an interactive blog post, but you want to find someone to help you actually carry it out. You would go and create a new issue. You would use the template for a topic seeking a producer. And then you just fill it out with all the details of your idea, and the more details, the better. You know, if I was going to partner with some mathematician to produce something, I would be interested in seeing a pretty good outline of the idea and why it's important, maybe even a storyboard in there. And then on the flip side, if you want to produce some kind of content, you know, you're an artist or an animator or a video creator, but you're more interested in getting content ideas from the experts than from yourself, you would also create an issue. Use the producer seeking topic template, and then fill that out. So then, if you're looking for a collaborator, let's say I'm going there, I'm looking for mathematicians with interesting ideas or artists and animators that I might want to work with, you can take a look at all of the open issues.
And the reason for using GitHub for this, even though that might feel a little bit weird, is that the tools seem right for what we want, in terms of having a dedicated conversation for each thread, having nice math embedding in each one of them, and making it easy to link between them. And once you've found your collaborator and you're no longer looking for one, you can close the issue. If that's a totally wild way to do it, feel free to complain at me. But for my part, at least over the next week or two, I'll be in there. I'll take a look through what the requests for collaboration are. I think it would be fun for me, at least, to find some domain expert with an interesting topic that I wouldn't have known to produce. And then I was thinking of also finding someone who's interested in content creation, someone who has at least a little bit of a portfolio, such that I can tell if their style would mesh well with whatever topic from the domain expert I end up finding. And then I would be there as a kind of producer and extra collaborator, and we could post the final product on 3blue1brown. And so that's the added twist, but other than that, it's the same deal as last year. Again, this time we'll choose at least five different winners. And again, Brilliant has generously offered to contribute $5,000 for prizes to the winners, so we'll give $1,000 to each one. Also, this year, as a slightly different twist, each winner will be sent a golden pi creature, of which only five exist in the whole world. So if that's not enough to get you to contribute, I don't know what will. Other than that, again, all of the official details are in the email list. Take a look at that first post to get all the details that I probably forgot to mention in this video. And honestly, I'm really excited to see what people make.
Hilbert's Curve: Is infinite math useful?
Let's talk about space-filling curves. They are incredibly fun to animate, and they also give a chance to address a certain philosophical question. Math often deals with infinite quantities, sometimes so intimately that the very substance of a result only actually makes sense in an infinite world. So the question is, how can these results ever be useful in a finite context? As with all philosophizing, this is best left to discuss until after we look at the concrete case and the real math. So I'll begin by laying down an application of something called a Hilbert curve, followed by a description of some of its origins in infinite math. Let's say that you wanted to write some software that would enable people to see with their ears. It would take in data from a camera, and then somehow translate that into a sound in a meaningful way. The thought here is that brains are plastic enough to build an intuition from sight, even when the raw data is scrambled into a different format. I've left a few links in the description to studies to this effect. To make initial experiments easier, you might start by treating incoming images with a low resolution, maybe 256x256 pixels. And to make my own animation efforts easier, let's represent one of these images with a square grid, each cell corresponding with a pixel. One approach to this sound-to-sight software would be to find a nice way to associate each one of those pixels with a unique frequency value. Then, when that pixel is brighter, the frequency associated with it would be played louder, and if the pixel were darker, the frequency would be quiet. Listening to all of the pixels all at once would then sound like a bunch of frequencies overlaid on top of one another, with dominant frequencies corresponding to the brighter regions of the image, sounding like some cacophonous mess until your brain learns to make sense out of the information that it contains. Let's temporarily set aside worries about whether or not this would actually work, and instead think about what function from pixel space down to frequency space gives this software the best chance of working. The tricky part is that pixel space is two-dimensional, but frequency space is one-dimensional. You could of course try doing this with a random mapping. After all, we're hoping that people's brains make sense out of pretty wonky data anyway. However, it might be nice to leverage some of the intuitions that a given human brain already has about sound. For example, if we think in terms of the reverse mapping from frequency space to pixel space, frequencies that are close together should stay close together in the pixel space. That way, even if an ear has a hard time distinguishing between two nearby frequencies, they will at least refer to the same basic point in space. To ensure that this happens, you could first describe a way to weave a line through each one of these pixels. Then, if you fix each pixel to a spot on that line and unravel the whole thread to make it straight, you could interpret this line as frequency space, and you have an association from pixels to frequencies, which is what we want. Now, one weaving method would be to just go one row at a time, alternating between left and right as it moves up that pixel space. This is like a well-played game of snake, so let's call this a snake curve. When you tell your mathematician friend about this idea, she says, why not use a Hilbert curve? When you ask her what that is, she stumbles for a moment.
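Before the Hilbert curve enters, the snake curve itself is easy to pin down in code. Here is a minimal sketch (the function name and the 256-pixel width are illustrative assumptions):

```python
# Map a pixel (row, col) to its position along the unraveled snake
# curve, which alternates direction from one row to the next.
def snake_index(row, col, width=256):
    if row % 2 == 0:
        return row * width + col             # even rows: left to right
    return row * width + (width - 1 - col)   # odd rows: right to left

print(snake_index(0, 0), snake_index(0, 255))   # 0, 255
print(snake_index(1, 255), snake_index(1, 0))   # 256, 511: it doubles back
```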
So, it's not a curve, but an infinite family of curves, she starts. Well, no, it actually is just one thing, but I need to tell you about a certain infinite family first. She pulls out a piece of paper and starts explaining what she decides to call pseudo-Hilbert curves, for lack of a better term. For an order one pseudo-Hilbert curve, you divide a square into a 2x2 grid and connect the center of the lower left quadrant to the center of the upper left, over to the upper right, and then down in the lower right. For an order two pseudo-Hilbert curve, rather than just going straight from one quadrant to another, we let our curve do a little work to fill out each quadrant while it does so. Specifically, subdivide the square further into a 4x4 grid, and we have our curve trace out a miniature order one pseudo-Hilbert curve inside each quadrant before it moves on to the next. If we left those mini curves oriented as they are, going from the end of the mini curve in the lower left to the start of the mini curve in the upper left would require this kind of awkward jump, same deal with going from the upper right down to the lower right, so we flip the curves in the lower left and the lower right to make that connection shorter. Going from an order two to an order three pseudo-Hilbert curve is completely similar. You divide the square into an 8x8 grid, then you put an order two pseudo-Hilbert curve in each quadrant, flip the lower left and the lower right ones appropriately, and then connect them all, tip to tail. And the pattern continues like that for higher orders. For the 256x256 pixel array, your mathematician friend explains, you would use an order eight pseudo-Hilbert curve. And remember, defining a curve which weaves through each pixel is basically the same as defining a function from pixel space to frequency space, since you're associating each pixel with a point on the line. Now, this is nice as a piece of art, but why would these pseudo-Hilbert curves be any better than just the snake curve? Well, here's one very important reason. Imagine that you go through with this project, you integrate the software with real cameras and headphones, and it works. People around the world are using the device, building intuitions for vision via sound. What happens when you issue an upgrade that increases the resolution of the camera's image from 256x256 to 512x512? If you're using the snake curve, as you transition to a higher resolution, many points on this frequency line would have to go to completely different parts of pixel space. For example, let's follow a point about halfway along the frequency line. It'll end up about halfway up the pixel space no matter the resolution, but where it is left to right can differ wildly as you go from 256 to 512. This means everyone using your software would have to re-learn how to see with their ears, since the original intuitions of which points in space correspond to which frequencies no longer apply. However, with the Hilbert curve technique, as you increase the order of a pseudo-Hilbert curve, a given point on the line moves around less and less. It just approaches a more specific point in space. That way, you've given your users the opportunity to fine-tune their intuitions, rather than re-learning everything. So, for this sound-to-sight application, the Hilbert curve approach turns out to be exactly what you want. In fact, given how specific the goal is, it seems almost weirdly perfect.
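For the curious, here is a sketch of an order-n pseudo-Hilbert curve as a function from a 1D index to grid coordinates, adapted from the well-known iterative rotate-and-flip formulation; the orientation convention here is one choice among several, so the coordinates may come out reflected relative to the animations.

```python
# Map index d in [0, 4**order) to an (x, y) cell on a 2**order grid,
# following the standard Hilbert curve recurrence.
def hilbert_point(order, d):
    x = y = 0
    s = 1
    while s < 2 ** order:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:               # rotate/flip this quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx               # step into the correct quadrant
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# The order-1 curve visits its four cells in the U shape described above:
print([hilbert_point(1, d) for d in range(4)])   # (0,0),(0,1),(1,1),(1,0)
```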
So you go back to your mathematician friend and you ask her, hey, what was the original motivation for defining one of these curves? She explains that near the end of the 19th century, in the aftershock of Cantor's research on infinity, mathematicians were interested in finding a mapping from a one-dimensional line into two-dimensional space, in such a way that the line runs through every single point in space. To be clear, we're not talking about a finite, bounded grid of pixels like we had in the sound-to-sight application. This is continuous space, which is very infinite. And the goal is to have a line, which is as thin as thin can be and has zero area, somehow pass through every single one of those infinitely many points that make up the infinite area of space. Before 1890, a lot of people thought that this was obviously impossible. But then Peano discovered the first of what would come to be known as space-filling curves. In 1891, Hilbert followed with his own slightly simpler space-filling curve. Technically, each one fills a square, not all of space, but I'll show you later on how, once you fill a square with a line, filling all of space is not an issue. By the way, mathematicians use this word curve to talk about a line running through space, even if it has jagged corners. This is especially counterintuitive terminology in the context of a space-filling curve, which in a sense consists of nothing but sharp corners. A better name might be something like space-filling fractal, which some people do use, but hey, it's math, so we live with bad terminology. None of the pseudo-Hilbert curves that you'd use to fill pixelated space would count as a space-filling curve, no matter how high the order. Just zoom in on one of the pixels. When this pixel is considered part of infinite, continuous space, the curve only passes through the tiniest zero-area slice of it, and it certainly doesn't hit every single point. Your mathematician friend explains that an actual, bona fide Hilbert curve is not any one of these pseudo-Hilbert curves. Instead, it's the limit of all of them. Now, defining this limit rigorously is delicate. You first have to formalize what these curves are as functions. Specifically, functions which take in a single number somewhere between zero and one as their input, and output a pair of numbers. This input can be thought of as a point on the line, and the output can be thought of as coordinates in 2D space. But in principle, it's just an association between a single number and pairs of numbers. For example, an order 2 pseudo-Hilbert curve, as a function, maps the input 0.3 to the output pair (0.125, 0.75). An order 3 pseudo-Hilbert curve maps that same input 0.3 to the output pair (0.0758, 0.6875). Now, the core property that makes a function like this a curve, and not just any old association between single numbers and pairs of numbers, is continuity. The intuition behind continuity is that you don't want the output of your function to suddenly jump at any point when the input is only changing smoothly. And the way that this is made rigorous in math is, well, it's actually pretty clever, and fully appreciating space-filling curves really does require digesting the formal idea of continuity. So it's definitely worth taking a brief side step to go over it now. Consider a particular input point, a, and the corresponding output of the function, b. Draw a circle centered around a, and look at all of the other input points inside that circle.
And then consider where the function takes all of those points in the output space. Now draw the smallest circle that you can, centered at b, that contains those outputs. Different choices for the size of the input circle might result in larger or smaller circles in the output space. But notice what happens when we go through this process at a point where the function jumps. Drawing a circle around a, looking at the input points within the circle, seeing where they map, and drawing the smallest possible circle centered at b containing those points. No matter how small the circle around a, the corresponding circle around b just cannot be smaller than that jump. For this reason, we say that the function is discontinuous at a if there's any lower bound on the size of this circle that surrounds b. If, on the other hand, the circle around b can be made as small as you want, with sufficiently small choices for circles around a, you say that the function is continuous at a. A function as a whole is called continuous if it's continuous at every possible input point. Now, with that as a formal definition of curves, you're ready to define what an actual Hilbert curve is. Doing this relies on a wonderful property of the sequence of pseudo-Hilbert curves, which should feel familiar. Take a given input point, like 0.3, and apply each successive pseudo-Hilbert curve function to this point. The corresponding outputs, as we increase the order of the curve, approach some particular point in space. It doesn't matter what input you start with, this sequence of outputs you get by applying each successive pseudo-Hilbert curve to this point always stabilizes and approaches some particular point in 2D space. This is absolutely not true, by the way, for snake curves, or for that matter most sequences of curves filling pixelated space of higher and higher resolutions. The outputs associated with a given input become wildly erratic as the resolution increases, always jumping from left to right, and never actually approaching anything. Now, because of this property, we can define a Hilbert curve function like this. For a given input value between 0 and 1, consider the sequence of points in 2D space you get by applying each successive pseudo-Hilbert curve function at that point. The output of the Hilbert curve function, evaluated on this input, is just defined to be the limit of those points. Because the sequence of pseudo-Hilbert curve outputs always converges, no matter what input you start with, this is actually a well-defined function, in a way that it never could have been had we used snake curves. Now, I'm not going to go through the proof for why this gives a space-filling curve, but let's at least see what needs to be proved. First, verify that this is a well-defined function by proving that the outputs of the pseudo-Hilbert curve functions really do converge the way that I'm telling you they do. Second, show that this function gives a curve, meaning it's continuous. Third, and most important, show that it fills space, in the sense that every single point in the unit square is an output of this function. I really do encourage anyone watching this to take a stab at each one of these. Spoiler alert, all three of these facts turn out to be true.
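That stabilization property is easy to watch numerically. Here is a sketch that reuses the hilbert_point function from the earlier sketch, treating each order-n curve as a function from [0, 1) to the unit square; the exact decimals depend on orientation conventions, so they may differ slightly from the pairs quoted above.

```python
# Feed the same input to pseudo-Hilbert curves of increasing order and
# watch the outputs settle toward one point in the unit square.
def pseudo_hilbert(t, order):
    d = int(t * 4 ** order)          # which of the 4**order cells t falls in
    x, y = hilbert_point(order, d)   # from the earlier sketch
    side = 2 ** order
    return ((x + 0.5) / side, (y + 0.5) / side)   # center of that cell

for order in range(1, 9):
    print(order, pseudo_hilbert(0.3, order))   # the outputs converge
```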
You can extend this to a curve that fills all of space just by tiling space with squares, and then chaining a bunch of Hilbert curves together in a spiraling pattern of tiles, connecting the end of one tile to the start of a new tile with an added little stretch of line if you need to. You can think of the first tile as coming from the interval from 0 to 1, the second tile as coming from the interval from 1 to 2, and so on. So the entire positive real number line is getting mapped into all of 2D space. Take a moment to let that fact sink in. A line, the platonic form of thinness itself, can wander through an infinitely extending and richly dense space and hit every single point. Notice, the core property that made pseudo-Hilbert curves useful in both the sound-to-sight application and in their infinite origins is that points on the curve move around less and less as you increase the order of those curves. While translating images to sound, this was useful because it means upgrading to higher resolutions doesn't require retraining your senses all over again. For mathematicians interested in filling continuous space, this property is what ensured that talking about the limit of a sequence of curves was actually a meaningful thing to do. And this connection here between the infinite and the finite worlds seems to be more of a rule in math than an exception. Another example, which several astute commenters on the inventing math video pointed out, is the connection between the divergent sum of all powers of 2 and the way that the number negative 1 is represented in computers with bits. It's not so much that the infinite result is directly useful, but instead the same patterns and constructs that are used to define and prove infinite facts have finite analogs, and these finite analogs are directly useful. But the connection is often deeper than a mere analogy. Many theorems about an infinite object are often equivalent to some theorem regarding a family of finite objects. For example, if, during your sound-to-sight project, you were to sit down and really formalize what it means for your curve to stay stable as you increase camera resolution, you would end up effectively writing the definition of what it means for a sequence of curves to have a limit. In fact, a statement about some infinite object, whether that's a sequence or a fractal, can usually be viewed as a particularly clean way to encapsulate a truth about a family of finite objects. The lesson to take away here is that even when a statement seems very far removed from reality, you should always be willing to look under the hood and at the nuts and bolts of what's really being said. Who knows, you might find insights for representing numbers from divergent sums, or for seeing with your ears from filling space.
Visualizing the Riemann zeta function and analytic continuation
The Riemann zeta function. This is one of those objects in modern math that a lot of you might have heard of, but which can be really difficult to understand. Don't worry, I'll explain that animation you just saw in a few minutes. A lot of people know about this function because there's a $1 million prize out for anyone who can figure out when it equals zero, an open problem known as the Riemann hypothesis. Some of you may have heard of it in the context of the divergent sum 1 plus 2 plus 3 plus 4, on and on up to infinity. You see, there's a sense in which this sum equals negative 1/12, which seems nonsensical, if not obviously wrong. But a common way to define what this equation is actually saying uses the Riemann zeta function. Yet, as any casual math enthusiast who has started to read into this knows, its definition references this one idea called analytic continuation, which has to do with complex-valued functions. And this idea can be frustratingly opaque and unintuitive. So what I'd like to do here is just show you all what this zeta function actually looks like, and to explain what this idea of analytic continuation is, in a visual and more intuitive way. I'm assuming that you know about complex numbers and that you're comfortable working with them, and I'm tempted to say that you should know calculus, since analytic continuation is all about derivatives, but for the way I'm planning to present things, I think you might actually be fine without that. So, to jump right into it, let's just define what this zeta function is. For a given input, where we commonly use the variable s, the function is 1 over 1 to the s, which is always 1, plus 1 over 2 to the s, plus 1 over 3 to the s, plus 1 over 4 to the s, on and on and on, summing up over all natural numbers. So, for example, let's say you plug in a value like s equals 2. You'd get 1, plus 1 over 4, plus 1 over 9, plus 1 over 16, and as you keep adding more and more reciprocals of squares, this just so happens to approach pi squared over 6, which is around 1.645. There's a very beautiful reason for why pi shows up here, and I might do a video on it at a later date, but that's just the tip of the iceberg for why this function is beautiful. You could do the same thing for other inputs s, like 3 or 4, and sometimes you get other interesting values. And so far, everything feels pretty reasonable. You're adding up smaller and smaller amounts, and these sums approach some number. Great, no craziness here. Yet, if you were to read about it, you might see some people say that zeta of negative 1 equals negative 1/12. But looking at this infinite sum, that doesn't make any sense. When you raise each term to the negative 1, flipping each fraction, you get 1 plus 2 plus 3 plus 4, on and on, over all natural numbers. And obviously that doesn't approach anything. Certainly not negative 1/12, right? And, as any mercenary looking into the Riemann hypothesis knows, this function is said to have trivial zeros at negative even numbers. So, for example, that would mean that zeta of negative 2 equals 0. But when you plug in negative 2, it gives you 1 plus 4 plus 9 plus 16, on and on, which again obviously doesn't approach anything, much less 0, right? Well, we'll get to negative values in a few minutes. But for right now, let's just say the only thing that seems reasonable. This function only makes sense when s is greater than 1, which is when this sum converges. So far, it's simply not defined for other values.
Now, with that said, Bernhard Riemann was somewhat of a father to complex analysis, which is the study of functions that have complex numbers as inputs and outputs. So rather than just thinking about how this sum takes a number s on the real number line to another number on the real number line, his main focus was on understanding what happens when you plug in a complex value for s. So, for example, maybe instead of plugging in 2, you would plug in 2 plus i. Now, if you've never seen the idea of raising a number to the power of a complex value, it can feel kind of strange at first, because it no longer has anything to do with repeated multiplication. But mathematicians found that there is a very nice and very natural way to extend the definition of exponents beyond their familiar territory of real numbers and into the realm of complex values. It's not super crucial to understand complex exponents for where I'm going with this video, but I think it'll still be nice if we just summarize the gist of it here. The basic idea is that when you write something like 1 half to the power of a complex number, you split it up as 1 half to the real part, times 1 half to the pure imaginary part. We're good on 1 half to the real part. There are no issues there. But what about raising something to a pure imaginary number? Well, the result is going to be some complex number on the unit circle in the complex plane. As you let that pure imaginary input walk up and down the imaginary line, the resulting output walks around that unit circle. For a base like 1 half, the output walks around the unit circle somewhat slowly. But for a base that's farther away from 1, like 1 ninth, then as you let this input walk up and down the imaginary axis, the corresponding output is going to walk around the unit circle more quickly. If you've never seen this and you're wondering why on earth this happens, I've left a few links to good resources in the description. For here, I'm just going to move forward with the what without the why. The main takeaway is that when you raise something like 1 half to the power of 2 plus i, which is 1 half squared times 1 half to the i, that 1 half to the i part is going to be on the unit circle, meaning it has an absolute value of 1. So when you multiply it, it doesn't change the size of the number. It just takes that 1 fourth and rotates it somewhat. So, if you were to plug in 2 plus i to the zeta function, one way to think about what it does is to start off with all of the terms raised to the power of 2, which you can think of as piecing together lines whose lengths are the reciprocals of squares of numbers, which, like I said before, converges to pi squared over 6. Then, when you change that input from 2 up to 2 plus i, each of these lines gets rotated by some amount. But importantly, the lengths of those lines won't change, so the sum still converges. It just does so in a spiral to some specific point on the complex plane. Here, let me show what it looks like when I vary the input s, represented with this yellow dot on the complex plane, where this spiral sum is always going to be showing the converging value for zeta of s. What this means is that zeta of s, defined as this infinite sum, is a perfectly reasonable complex function as long as the real part of the input is greater than 1, meaning the input s sits somewhere on this right half of the complex plane. Again, this is because it's the real part of s that determines the size of each number, while the imaginary part just dictates some rotation.
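A short sketch in Python makes the spiral sum tangible; complex powers are built into the language, and the partial-sum cutoff of 10,000 terms is an arbitrary choice.

```python
# Partial sums of the zeta series for inputs with real part > 1.
import math

def zeta_partial(s, terms=10000):
    return sum(1 / n ** s for n in range(1, terms + 1))

print(zeta_partial(2))        # ~1.6448, approaching pi^2/6
print(math.pi ** 2 / 6)       # 1.6449...
print(zeta_partial(2 + 1j))   # a specific point in the complex plane

# Raising to 2 + i keeps each term's length, only rotating it:
print(abs(1 / 5 ** (2 + 1j)), 1 / 5 ** 2)   # both 0.04
```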
So now what I want to do is visualize this function. It takes in inputs on the right half of the complex plane and spits out outputs somewhere else in the complex plane. A super nice way to understand complex functions is to visualize them as transformations, meaning you look at every possible input to the function and just let it move over to the corresponding output. For example, let's take a moment and try to visualize something a little bit easier than the zeta function, say f of s is equal to s squared. When you plug in s equals 2, you get 4, so we'll end up moving that point at 2 over to the point at 4. When you plug in negative 1, you get 1, so the point over here at negative 1 is going to end up moving over to the point at 1. When you plug in i, by definition its square is negative 1, so it's going to move over here to negative 1. Now I'm going to add on a more colorful grid, and this is just because things are about to start moving, and it's kind of nice to have something to distinguish grid lines during that movement. From here, I'll tell the computer to move every single point on this grid over to its corresponding output under the function f of s equals s squared. Here's what it looks like. That can be a lot to take in, so I'll go ahead and play it again. And this time, focus on one of the marked points, and notice how it moves over to the point corresponding to its square. It can be a little complicated to see all of the points moving all at once, but the reward is that this gives us a very rich picture for what the complex function is actually doing, and it all happens in just two dimensions. So, back to the zeta function. We have this infinite sum, which is a function of some complex number s, and we feel good and happy about plugging in values of s whose real part is greater than 1, and getting some meaningful output via the converging spiral sum. So, to visualize this function, I'm going to take the portion of the grid sitting on the right side of the complex plane here, where the real part of numbers is greater than 1, and I'm going to tell the computer to move each point of this grid to the appropriate output. It actually helps if I add a few more grid lines around the number 1, since that region gets stretched out by quite a bit. Alright, so first of all, let's just appreciate how beautiful that is. I mean, damn, if that doesn't make you want to learn more about complex functions, you have no heart. But also, this transformed grid is just begging to be extended a little bit. For example, let's highlight these lines here, which represent all of the complex numbers with imaginary part i or negative i. After the transformation, these lines make such lovely arcs before they just abruptly stop. Don't you want to just, you know, continue those arcs? In fact, you can imagine how some altered version of the function, with a definition that extends into this left half of the plane, might be able to complete this picture with something that's quite pretty. Well, this is exactly what mathematicians working with complex functions do. They continue the function beyond the original domain where it was defined. Now, as soon as we branch over into inputs where the real part is less than 1, this infinite sum that we originally used to define the function doesn't make sense anymore. You'll get nonsense like adding 1 plus 2 plus 3 plus 4, on and on, up to infinity.
But just looking at this transformed version of the right half of the plane, where the sum does make sense, it's just begging us to extend the set of points that we're considering as inputs, even if that means defining the extended function in some way that doesn't necessarily use that sum. Of course, that leaves us with the question, how would you define that function on the rest of the plane? You might think that you could extend it any number of ways. Maybe you define an extension that makes it so the point at, say, s equals negative 1, moves over to negative 1 twelfth. But maybe you squiggle on some extension that makes it land on any other value. I mean, as soon as you open yourself up to the idea of defining the function differently for values outside that domain of convergence, in a way that is not based on this infinite sum, the world is your oyster and you can have any number of extensions, right? Well, not exactly. I mean, yes, you can give any child a marker and have them extend these lines any which way. But if you add on the restriction that this new extended function has to have a derivative everywhere, it locks us into one and only one possible extension. I know, I know, I said that you wouldn't need to know about derivatives for this video. And even if you do know calculus, maybe you have yet to learn how to interpret derivatives for complex functions. But luckily for us, there is a very nice geometric intuition that you can keep in mind for when I say a phrase like has a derivative everywhere. Here, to show you what I mean, let's look back at that f of s equals s squared example. Again, we think of this function as a transformation, moving every point s of the complex plane over to the point s squared. For those of you who know calculus, you know that you can take the derivative of this function at any given input. But there's an interesting property of that transformation that turns out to be related and almost equivalent to that fact. If you look at any two lines in the input space that intersect at some angle and consider what they turn into after the transformation, they will still intersect each other at that same angle. The lines might get curved and that's okay, but the important part is that the angle at which they intersect remains unchanged. And this is true for any pair of lines that you choose. So when I say a function has a derivative everywhere, I want you to think about this angle preserving property, that any time two lines intersect, the angle between them remains unchanged after the transformation. At a glance, this is easiest to appreciate by noticing how all of the curves that the grid lines turn into still intersect each other at right angles. Complex functions that have a derivative everywhere are called analytic. So you can think of this term analytic as meaning angle preserving. Admittedly, I'm lying to you a little here, but only a little bit. A slight caveat for those of you who want the full details is that at inputs where the derivative of a function is zero, instead of angles being preserved, they get multiplied by some integer. But those points are by far the minority, and for almost all inputs to an analytic function, angles are preserved. So if when I say analytic, you think angle preserving, I think that's a fine intuition to have. Now, if you think about it for a moment, and this is a point that I really want you to appreciate, this is a very restrictive property. The angle between any pair of intersecting lines has to remain unchanged.
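If you want to see that angle-preserving property in the numbers, here is a small sketch in plain Python; the function, the base point z0, the two directions, and the step size are all arbitrary illustrative choices, with z0 kept away from 0, where the derivative of s squared vanishes.

```python
import cmath

def image_direction(f, z0, direction, h=1e-6):
    """Approximate the direction in which f sends points leaving z0 along `direction`."""
    return cmath.phase(f(z0 + h * direction) - f(z0))

f = lambda z: z * z
z0 = 1.5 + 0.5j
d1 = cmath.exp(0.3j)  # input direction at angle 0.3
d2 = cmath.exp(0.9j)  # input direction at angle 0.9

# The angle between the two image curves matches the 0.6 between the input lines.
print(image_direction(f, z0, d2) - image_direction(f, z0, d1))
```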
And yet, pretty much any function out there that has a name turns out to be analytic. The field of complex analysis, which Riemann helped to establish in its modern form, is almost entirely about leveraging the properties of analytic functions to understand results and patterns in other fields of math and science. The zeta function, defined by this infinite sum on the right half of the plane, is an analytic function. Notice how all of these curves that the grid lines turn into still intersect each other at right angles. So the surprising fact about complex functions is that if you want to extend an analytic function beyond the domain where it was originally defined, for example, extending this zeta function into the left half of the plane, then if you require that the new extended function still be analytic, that is, that it still preserves angles everywhere, it forces you into only one possible extension, if one exists at all. It's kind of like an infinite continuous jigsaw puzzle, where this requirement of preserving angles walks you into one and only one choice for how to extend it. This process of extending an analytic function in the only way possible that is still analytic is called, as you may have guessed, analytic continuation. So that's how the full Riemann zeta function is defined. For values of s on the right half of the plane, where the real part is greater than one, just plug them into this sum and see where it converges. And that convergence might look like some kind of spiral, since raising each of these terms to a complex power has the effect of rotating each one. Then for the rest of the plane, we know that there exists one and only one way to extend this definition so that the function will still be analytic, that is, so that it still preserves angles at every single point. So we just say that by definition, the zeta function on the left half of the plane is whatever that extension happens to be. And that's a valid definition, because there's only one possible analytic continuation. Notice that's a very implicit definition. It just says, use the solution of this jigsaw puzzle, which through more abstract derivation we know must exist. But it doesn't specify exactly how to solve it. Mathematicians have a pretty good grasp on what this extension looks like, but some important parts of it remain a mystery, a million dollar mystery in fact. Let's actually take a moment and talk about the Riemann hypothesis, a million dollar problem. The places where this function equals zero turn out to be quite important, that is, which points get mapped onto the origin after the transformation. One thing we know about this extension is that the negative even numbers get mapped to zero. These are commonly called the trivial zeros. The naming here stems from a long standing tradition of mathematicians to call things trivial when they understand them quite well, even when it's a fact that is not at all obvious from the outset. We also know that the rest of the points that get mapped to zero sit somewhere in this vertical strip, called the critical strip. And the specific placement of those non trivial zeros encodes surprising information about prime numbers. It's actually pretty interesting why this function carries so much information about primes, and I definitely think I'll make a video about that later on, but right now things are long enough, so I'll leave it unexplained.
Riemann hypothesized that all of these non trivial zeros sit right in the middle of the strip, on the line of numbers s whose real part is one half. This is called the critical line. If that's true, it gives us a remarkably tight grasp on the pattern of prime numbers, as well as many other patterns in math that stem from this. Now, so far, when I've shown what the zeta function looks like, I've only shown what it does to the portion of the grid on the screen, and that kind of undersells its complexity. So if I were to highlight this critical line and apply the transformation, it might not seem to cross the origin at all. However, here's what the transformed version of more and more of that line looks like. Notice how it's passing through the number zero many, many times. If you can prove that all of the non trivial zeros sit somewhere on this line, the Clay Math Institute gives you one million dollars. And you'd also be proving hundreds, if not thousands, of modern math results that have already been shown to be contingent on this hypothesis being true. Another thing we know about this extended function is that it maps the point negative one over to negative one twelfth. And if you plug this into the original sum, it looks like we're saying one plus two plus three plus four, on and on up to infinity, equals negative one twelfth. Now, it might seem disingenuous to still call this a sum, since the definition of the zeta function on the left half of the plane is not defined directly from this sum. Instead, it comes from analytically continuing the sum beyond the domain where it converges. That is, solving the jigsaw puzzle that began on the right half of the plane. That said, you have to admit that the uniqueness of this analytic continuation, the fact that the jigsaw puzzle has only one solution, is very suggestive of some intrinsic connection between these extended values and the original sum.
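For anyone curious to evaluate the continued function, libraries like mpmath implement it directly; this is a minimal sketch assuming mpmath is installed, and the printed values are standard known facts (zeta of 2 is pi squared over 6, zeta of negative 1 is negative one twelfth, and the first nontrivial zero sits near 0.5 + 14.1347i).

```python
from mpmath import mp, zeta, zetazero

mp.dps = 20            # 20 digits of working precision
print(zeta(2))         # pi**2 / 6, where the original series converges
print(zeta(-1))        # -1/12, a value of the continuation, not of the series
print(zetazero(1))     # first nontrivial zero, on the critical line
```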
3Blue1Brown channel trailer
Hey there, welcome to 3Blue1Brown. So I make videos that animate math. For a lot of them, I just try to find something kind of interesting or thought-provoking, not necessarily in the typical progression of subjects that a student would see in school. To see what I mean there, check out some of the recommended playlists. Some of the channel, though, is more directly targeted at people actually trying to learn a commonly taught subject. For an example of what I mean there, check out the Essence of Linear Algebra series that I did. I'll always be working on some other Essence of series like this, so stay posted for more. Now when I say that these videos animate math, on the one hand, I mean that very directly. Things are explained with moving visuals and anthropomorphized, kind of cartoonish pi creatures. But my hope here is that they animate math in another more literal sense of giving life to the subject. Math kind of gets a bad rap for being abstruse and inhuman. And let's face it, a lot of it is, and it can be introduced in a pretty unmotivated way. But the fact is, good math is nothing less than art. I'm guessing a lot of you out there know that feeling when you wrap your mind around a really clever argument, and it's got that very unique flavor of satisfaction. And the same goes for re-understanding something familiar but in a novel or unexpected light. Here, I'm just trying to do my part to share feelings like those with more people. So take a look at some of the videos here, and if you want to stay posted about new stuff, or to just see new videos rather than this trailer whenever you come back to the page, consider subscribing.
Alice, Bob, and the average area of a cube's shadow
In a moment, I'm going to tell you about a certain really nice puzzle involving the shadow of a cube. But before we get to that, I should say that the point of this video is not exactly the puzzle per se. It's about two distinct problem solving styles that are reflected in two different ways that we can tackle this problem. In fact, let's anthropomorphize those two different styles by imagining two students, Alice and Bob, that embody each one of the approaches. So Bob will be the kind of student who really loves calculation. As soon as there's a moment when he can dig into the details and get a very concrete view of the situation in front of him, that's where he's the most pleased. Alice, on the other hand, is more inclined to procrastinate the computations, not because she doesn't know how to do them or doesn't want to per se, but she prefers to get a nice high-level general overview of the kind of problem she's dealing with, the general shape that it has, before she digs into the computations themselves. She's most pleased if she understands not just the specific question sitting in front of her, but also the broadest possible way that you could generalize it, and especially if the more general view can lend itself to more swift and elegant computations once she does actually sit down to carry them out. Now, the puzzle that both of them are going to be faced with is to find the average area for the shadow of a cube. So if I have a cube sitting here hovering in space, there are a few things that influence the area of its shadow. One obvious one would be the size of the cube, smaller cube, smaller shadow, but also if it's sitting at different orientations, those orientations correspond to different particular shadows with different areas. And when I say find the average here, what I mean is average over all possible orientations for a particular size of the cube. The astute among you might point out that it also matters a lot where the light source is. If the light source were very low, close to the cube itself, then the shadow ends up larger, and if the light source were positioned laterally off to the side, this can distort the shadow and give it a very different shape. Accounting for that light position stands to be highly interesting in its own right, but the puzzle is hard enough as it is, so at least initially let's do the easiest thing we can and say that the light is directly above the cube and really far away, effectively infinitely far, so that all we're considering is a flat projection, in the sense that if you look at any coordinates x, y, z in space, the flat projection would be x, y, zero. So just to get our bearings, the easiest situation to think about would be if the cube is straight up with two of its faces parallel to the ground. In that case this flat projection shadow is simply a square, and if we say the side lengths of the cube are s, then the area of that shadow is s squared. And by the way, any time that I have a label up on these animations like the one down here, I'll be assuming that the relevant cube has a side length of one. Now another special case among all the orientations that's fun to think about is if the long diagonal is parallel to the direction of the light. In that case the shadow actually looks like a regular hexagon, and if you use some of the methods that we will develop in a few minutes, you can compute that the area of that shadow is exactly the square root of three times the area of one of the square faces.
But of course more often the actual shadow will be not so regular as a square or a hexagon. It's some harder to think about shape based on some harder to think about orientation for this cube. Earlier I casually threw out this phrase of averaging over all possible orientations, but you could rightly ask what exactly is that supposed to mean. I think a lot of us have an intuitive feel for what we want it to mean, at least in the sense of what experiment you would do to verify it. You might imagine tossing this cube in the air like a die, freezing it at some arbitrary point, recording the area of the shadow from that position, and then repeating. If you do this many many times over and over, you can take the mean of your sample. The number that we want to get at, the true average here, should be whatever that experimental mean approaches as you do more and more tosses, approaching infinitely many. Even still, the sticklers among you could complain that doesn't really answer the question, because it leaves open the issue of how we're defining a random toss. The proper way to answer this, if we wanted to be more formal, would be to first describe the space of all possible orientations, which mathematicians have actually given a fancy name. They call it SO(3), typically defined in terms of a certain family of 3 by 3 matrices. And the question we want to answer is, what probability distribution are we putting on this entire space? It's only when such a probability distribution is well defined that we can answer a question involving an average. If you are a stickler for that kind of thing, I want you to hold off on that question until the end of the video. You'll be surprised at how far we can get with the more heuristic experimental idea of just repeating a bunch of random tosses without really defining the distribution. Once we see Alice and Bob's solutions, it's actually very interesting to ask how exactly each one of them defined this distribution along their way. And remember, this is not meant to be a lesson about cube shadows per se, but a lesson about problem solving, told through the lens of two different mindsets that we might bring to the puzzle. And as with any lesson on problem solving, the goal here is not to get to the answer as quickly as we can, but hopefully for you to feel like you found the answer yourself. So if there's a point when you feel like you might have an idea, give yourself the freedom to pause and try to think it through. As a first step, and this is really independent of any particular problem solving styles, any time you find a hard question, a good thing that you can do is ask, what's the simplest possible non-trivial variant of the problem that you can try to solve? So in our case, what you might say is, okay, let's forget about averaging over all the orientations. That's a tricky thing to think about. And let's even forget about all the different faces of the cube, because they overlap, and that's also tricky to think about. Just for one particular face and one particular orientation, can we compute the area of this shadow? Once more, if you want to get your bearings with some special cases, the easiest is when that face is parallel to the ground, in which case the area of the shadow is the same as the area of the face. And on the other hand, if we were to tilt that face 90 degrees, then its shadow will be a straight line and it has an area of zero. So Bob looks at this and he wants an actual formula for that shadow.
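Incidentally, the toss-and-average experiment described above is easy to run numerically. Here is a minimal sketch, assuming numpy and scipy are available; the sample size and the use of a convex hull to measure the projected area are implementation choices, not anything from the video.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
# The 8 vertices of a unit cube.
vertices = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                    dtype=float)

def shadow_area(rotation):
    projected = rotation.apply(vertices)[:, :2]  # flat projection: drop z
    return ConvexHull(projected).volume          # a 2D hull's "volume" is its area

samples = [shadow_area(Rotation.random(random_state=rng)) for _ in range(20000)]
print(np.mean(samples))  # hovers around 1.5 for a unit cube
```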
And the way he might think about it is to consider the normal vector perpendicular off of that face. And what seems relevant is the angle that that normal vector makes with the vertical, with the direction where the light is coming from, which we might call theta. Now from the two special cases we just looked at, we know that when theta is equal to zero, the area of that shadow is the same as the area of the shape itself, which is s squared if the square has side length s. And if theta is equal to 90 degrees, then the area of that shadow is zero. And it's probably not too hard to guess that trigonometry will be somehow relevant, so anyone comfortable with their trig functions could probably hazard a guess as to what the right formula is. But Bob is more detail-oriented than that. He wants to properly prove what that area should be, rather than just making a guess based on the endpoints. And the way you might think about it could be something like this. If we consider the plane that passes through the vertical as well as our normal vector, and then we consider all the different slices of our shape that are in that plane or parallel to that plane, then we can focus our attention on a two-dimensional variant of the problem. If we just look at one of those slices, whose normal vector is an angle theta away from the vertical, its shadow might look something like this. And if we draw a vertical line up to the left here, we have ourselves a right triangle. And from here we can do a little bit of angle chasing, where we follow around what that angle theta implies about the rest of the diagram. And this means the lower right angle in this triangle is precisely theta. So when we want to understand the size of this shadow in comparison to the original size of the piece, we can think about the cosine of that angle theta, which, remember, is the adjacent over the hypotenuse. It's literally the ratio between the size of the shadow and the size of the slice. So the factor by which the slice gets squished down in this direction is exactly cosine of theta. And if we broaden our view to the entire square, all the slices in that direction get scaled by the same factor. But in the other direction, in the one perpendicular to that slice, there is no stretching or squishing, because the face is not at all tilted in that direction. So overall, the two-dimensional shadow of our two-dimensional face should also be scaled down by this factor of cosine of theta. It lines up with what you might intuitively guess, given the case where the angle is zero degrees and the case where it's 90 degrees, but it's reassuring to see why it's true. And actually, as stated so far, this is not quite correct. There is a small problem with the formula that we've written. In the case where theta is bigger than 90 degrees, the cosine would actually come out to be negative. But of course, we don't want to consider the shadow to have negative area, at least not in a problem like this. So there's two different ways you could solve this. You could say we only ever want to consider the normal vector that is pointing up, that has a positive z component. Or more simply, we could say just take the absolute value of that cosine, and that gives us a valid formula. So Bob's happy, because he has a precise formula describing the area of the shadow. But Alice starts to think about it a little bit differently.
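As a quick numeric sanity check on Bob's formula (a sketch assuming numpy and scipy are available), note that for a rotated unit square, the projected area and the absolute-cosine prediction come out to literally the same determinant:

```python
import numpy as np
from scipy.spatial.transform import Rotation

R = Rotation.random(random_state=np.random.default_rng(1)).as_matrix()

e1, e2 = R[:, 0], R[:, 1]  # images of the unit square's two edge vectors
normal = R[:, 2]           # image of the square's normal vector

# Area of the projected parallelogram spanned by the two flattened edges.
shadow = abs(np.linalg.det(np.column_stack([e1[:2], e2[:2]])))
predicted = abs(normal[2])  # |cos(theta)|, since both vectors have length 1
print(shadow, predicted)    # the two values agree exactly
```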
She says, okay, we've got some shape, and then we apply a rotation that sort of situates it into 3D space in some way, and then we apply a flat projection that shoves that back into two-dimensional space. And what stands out to her is that both of these are linear transformations. That means that in principle, you could describe each one of them with a matrix, and that the overall transformation would look like the product of those two matrices. What Alice knows from one of her favorite subjects, linear algebra, is that if you take some shape and you consider its area, then you apply some linear transformation, the area of that output looks like some constant times the original area of the shape. More specifically, we have a name for that constant, it's called the determinant of the transformation. If you're not so comfortable with linear algebra, we could give a much more intuitive description and say, if you uniformly stretch the original shape in some direction, the output will also uniformly get stretched in some direction, so the area of each of them should scale in proportion to each other. Now, in principle, Alice could compute this determinant, but it's not really her style to do that, at least not to do so immediately. Instead, the thing that she writes down is how this proportionality constant between our original shape and its shadow does not depend on the original shape. We could be talking about the shadow of this cat outline, or anything else, and the size of it doesn't really matter. The only thing affecting that proportionality constant is what transformation we're applying, which in this context means we could write it down as some factor that depends on the rotation being applied to the shape. In the back of our mind, because of Bob's calculation, we know what that factor looks like, you know, it's the absolute value of the cosine of the angle between the normal vector and the vertical. But Alice right now is just saying, yeah, yeah, yeah, I can think about that eventually when I want to. She knows we're about to average over all the different orientations anyway, so she holds out some hope that any specific formula about a specific orientation might get washed away in that average. Now it's easy to look at this and say, okay, well, Alice isn't really doing anything then. Of course the area of the shadow is proportional to the area of the original shape. They're both two-dimensional quantities. They should both scale like two-dimensional things. But keep in mind, this would not at all be true if we were dealing with the harder case that has a closer light source. In that case, the projection is not linear. If, for example, I rotate this cat so that its tail ends up quite close to the light source, then if I stretch the original shape uniformly in the x direction, say by a factor of 1.5, it might have a very disproportionate effect on the ultimate shadow, because the tail gets very disproportionately blown up as it gets really close to the light. Again, Alice is keeping an eye out for what properties of the problem are actually relevant, because that helps her know how much she can generalize things. Does the fact that we're thinking about a square face and not some other shape matter? No, not really. Does the fact that the transformation is linear matter? Yes, absolutely. Alice can also apply a similar way of thinking to the average shadow for any shape like this.
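The determinant fact Alice is leaning on is easy to check numerically; here is a minimal sketch in numpy, where the particular matrix and the triangle are arbitrary examples:

```python
import numpy as np

def polygon_area(points):
    """Shoelace formula for the area of a simple polygon."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

M = np.array([[2.0, 1.0],
              [0.5, 1.5]])  # some linear transformation
triangle = np.array([[0, 0], [3, 1], [1, 2]], dtype=float)

# The area scales by |det(M)| no matter what shape we feed in.
ratio = polygon_area(triangle @ M.T) / polygon_area(triangle)
print(ratio, abs(np.linalg.det(M)))  # both print 2.5
```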
Say we have some sequence of rotations that we apply to our square face, and let's call them R1, R2, R3, and so on. Then the area of the shadow in each one of those cases looks like some factor times the area of the square, and that factor depends on the rotation. So if we take an empirical average for that shadow across the sample of rotations we're looking at right now, the way it looks is to add up all of those shadow areas and then divide by the total number that we have. Now, because of the linearity, this area of the original square can cleanly factor out of all of that, and it ends up on the left. This isn't the exact average that we're looking for, it's just an empirical mean of a sample of rotations, but in principle what we're looking for is what this approaches as the size of our sample approaches infinity, and all the parts that depend on the size of the sample sit cleanly away from the area itself. So whatever this approaches in the limit, it's just going to be some number. It might be a royal pain to compute, we're not sure about that yet, but the thing that Alice notes is that it's independent of the size and the shape of the particular 2D thing that we're looking at. It's a universal proportionality constant, and her hope is that that universality somehow lends itself to a more elegant way to deduce what it must be. Now Bob would be eager to compute this constant here and now, and in a few minutes I'll show you how he does it, but before that I do want to stay in Alice's world for a little bit more, because this is where things start to really get fun. In her desire to understand the overall structure of the question before diving into the details, she's curious now about how the area of the shadow of the cube relates to the areas of the shadows of its individual faces. If we can say something about the average shadow area of a particular face, does that tell us anything about the average shadow area of the cube as a whole? For example, a simple thing we could say is that that area is definitely less than the sum of the shadow areas across all the faces, because there's a meaningful amount of overlap between those shadows. But it's not entirely clear how to think about that overlap, because if we focus our attention just on two particular faces, in some orientations they don't overlap at all, but in other orientations they do have some overlap, and the specific shape and area of that overlap seems a little bit tricky to think about, much less how on Earth we would average that across all of the different orientations. But Alice has about three clever insights through this whole problem, and this is the first one of them. She says, actually, if we think about the whole cube, not just a pair of faces, we can conclude that the area of the shadow, for a given orientation, is exactly one half the sum of the areas of the shadows of all of the faces. Intuitively you can maybe guess that half of them are bathed in the light and half of them are not, but here's the way that she justifies it. She says to consider a particular ray of light that would go from the sky and eventually hit a point in the shadow. That ray passes through the cube at exactly two points. There's one moment when it enters and one moment when it exits. So every point in that shadow corresponds to exactly two faces above it. Well, okay, that's not exactly true if that beam of light happened to go through the edge of one of the squares. There's a little bit of ambiguity on how many faces it's passing through, but those rays account for zero area inside the shadow.
So we're safe to ignore them if the thing we're trying to do is compute the area. If Alice is pressed and she needs to justify why exactly this is true, which is important for understanding how the problem might generalize, she can appeal to the idea of convexity. Convexity is one of those properties where a lot of us have an intuitive sense for what it should mean, you know, shapes that just bulge out and never dent inward, but mathematicians have a pretty clever way of formalizing it that's helpful for actual proofs. They say that a set is convex if the line that connects any two points inside that set is entirely contained within the set itself. So a square is convex, because no matter where you put two points inside that square, the line connecting them is entirely contained inside the square. But something like the symbol pi is not convex. I can easily find two different points so that the line connecting them has to peek outside of the set itself. None of the letters in the word convex are themselves convex. You can find two points so that the line connecting them has to pass outside of the set. It's a really clever way to formalize this idea of a shape that only bulges out, because anytime that it dents inward, you can find these counter-example lines. For our cube, because it's convex, a ray of light, between its first point of entry and its last point of exit, has to stay entirely inside the cube, by definition of convexity. But if we were dealing with some other non-convex shape, like a donut, you could find a ray of light that enters, then exits, then enters, then exits again, so you wouldn't have a clean two-to-one cover from the shadows. The sum of the shadows of all of its different parts, if you were to cover it in a bunch of faces, would not be precisely two times the area of the shadow itself. So that's the first key insight, the face shadows double cover the cube shadow. And the next one is a little bit more symbolic, so let's start things off by abbreviating our notation a little to make room on the screen. Instead of writing the area of the shadow of the cube, I'm just going to write S of the cube. And similarly, instead of the area of the shadow of a particular face, I'm just going to write S of F sub j, where that subscript j indicates which face I'm talking about. But of course we should really be talking about the shadow of a particular rotation applied to the cube, so I might write this as S of some rotation applied to the cube. And likewise on the right, it's the area of the shadow of that same rotation applied to a given one of the faces. With the more compact notation at hand, let's think about the average of this shadow area across many different rotations, some sample of R1, R2, R3, and so on. Again, that average just involves adding up all of those shadow areas and then dividing them by N. And in principle, if we were to look at this for larger and larger samples, letting N approach infinity, that would give us the average area of the shadow of the cube. Some of you might be thinking, yes, we know this, you've said this already, but it's beneficial to write it out so that we can understand what it does for us to express the shadow area for a particular rotation of the cube as a sum across all of its faces, or one half times that sum at least. Why is that beneficial? What is it going to do for us? Well, let's just write it out, where for each one of these rotations of the cube, we can break down that shadow as a sum over that same rotation applied to all of the faces.
And when it's written as a grid like this, we can get to Alice's second insight, which is to shift the way that we're thinking about the sum from going row by row to instead going column by column. For example, if we focused our attention just on the first column, what it's telling us is to add up the area of the shadow of the first face across many different orientations. So if we were to take that sum and divide it by the size of our sample, that gives us an empirical average for the area of the shadow of this face. So if we take larger and larger samples, letting that size go to infinity, this will approach the average shadow area for a square. Likewise, the second column can be thought of as telling us the average area for the second face of the cube, which should of course be the same number, and same deal for any other column. It's telling us the average area for a particular face. So that gives us a very different way of thinking about our whole expression. Instead of saying, add up the shadow areas of the cube at all the different orientations, we could say, just add up the average shadows for the six different faces and multiply the total by one half. The term on the left here is thinking about adding up rows first, and the term on the right is thinking about adding up columns first. In short, the average of the sum of the face shadows is the same as the sum of the averages of the face shadows. Maybe that seems simple, maybe it doesn't, but I can tell you that there is actually a little bit more than meets the eye to the step that we just took, but we'll get to that later. And remember, we know that the average area for a particular face looks like some universal proportionality constant times the area of that face. Therefore, adding this up across all the faces of the cube, we could think of this as equaling some constant times the surface area of the cube. And that's pretty interesting. The average area for the shadow of this cube is going to be proportional to its surface area. But at the same time, you might complain, well, Alice is just pushing around a bunch of symbols here, because none of this matters if we don't know what that proportionality constant is. I mean, it almost seems obvious. Like, of course, the average shadow area should be proportional to the surface area. They're both two-dimensional quantities, so they should scale in lockstep with each other. But it's not obvious; after all, for a closer light source, it simply wouldn't be true. And also, this business where we added up the grid column by column versus row by row is a little more nuanced than it might look at first. There's a subtle hidden assumption underlying all of this, which carries a special significance when we choose to revisit the question of what probability distribution is being taken across the space of all orientations. But more than anything, the reason that it's not obvious is that the significance of this result right here is not merely that these two values are proportional. It's that an analogous fact will hold true for any convex solid, and, crucially, the actual content of what Alice has built up so far is that it'll be the same proportionality constant across all of them. Now, if you really mull over that, some of you may be able to predict the way that Alice is able to finish things off from here. It's really delightful.
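A quick numeric spot-check of the two facts just established (a sketch assuming numpy and scipy are available): for any one rotation, the cube's shadow equals half the sum of its six face shadows, each face shadow being one of Bob's absolute-cosine terms.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.transform import Rotation

vertices = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                    dtype=float)
R = Rotation.random(random_state=np.random.default_rng(2))

cube_shadow = ConvexHull(R.apply(vertices)[:, :2]).volume

normals = np.vstack([np.eye(3), -np.eye(3)])   # outward normals of the 6 unit faces
face_shadows = np.abs(R.apply(normals)[:, 2])  # each face's shadow area, |cos(theta)|

print(cube_shadow, 0.5 * face_shadows.sum())   # the two numbers agree
```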
It's honestly my main reason for covering this topic, but before we get into it, I think it's easy to under-appreciate her result unless we dig into the details of what it is that she manages to avoid. So let's take a moment to turn our attention back to Bob's world, because while Alice has been doing all of this, he's been busy doing some computations. In fact, what he's been working on is finding exactly what Alice has yet to figure out, which is how to take the formula that he found for the area of a square's shadow and take the natural next step of trying to find the average of that square's shadow over all possible orientations. So the way Bob starts, if he's thinking about all the different possible orientations for this square, is to ask, what are all the different normal vectors that that square can have in all these orientations? Because everything about its shadow comes down to that normal vector. It's not too hard to see that all those possible normal vectors trace out the surface of a sphere. If we assume it's a unit-length vector, it's a sphere with radius one. And furthermore, Bob figures that each point of this sphere should be just as likely to occur as any other. Our probabilities should be uniform in that way. There's no reason to prefer one direction over another. But in the context of continuous probabilities, it's not very helpful to talk about the likelihood of a particular individual point, because in the uncountable infinity of points on the sphere, that would be zero and unhelpful. So instead, the more precise way to phrase this uniformity would be to say the probability that our normal vector lands in any given patch of area on the sphere should be proportional to that area itself. More specifically, it should equal the area of that little patch divided by the total surface area of the sphere. If that's true, no matter what patch of area we're considering, that's what we mean by a uniform distribution on the sphere. Now to be clear, points on the sphere are not the same thing as orientations in 3D space, because even if you know what normal vector the square is going to have, that leaves us with another degree of freedom, the square could be rotated about that normal vector. But Bob doesn't actually have to care about that extra degree of freedom, because in all of those cases, the area of the shadow is the same. It's only dependent on the cosine of the angle between that normal vector and the vertical. Which is kind of neat, all those shadows are genuinely different shapes, they're not the same, but the area of each of them will be the same. What this means is that when Bob wants this average shadow area over all possible orientations, all he really needs to know is the average value of this absolute value of cosine of theta over all the different possible normal vectors, or all the different possible points on the sphere. So how do you compute an average like this? Well, if we lived in some kind of discrete pixelated world where there's only a finite number of possible angles theta that that normal vector could have, the average would be pretty straightforward. What you do is find the probability of landing on any particular value of theta, which tells us something like how much of the sphere the normal vectors with that angle make up, and then you multiply it by the thing we want to take the average of, this formula for the area of the shadow.
And then you would add that up over all of the different possible values of theta, ranging from zero up to 180 degrees, or pi radians. But of course, in reality, there is a continuum of possible values of theta, this uncountable infinity, and the probability of landing on any specific particular value of theta will actually be zero. And so a sum like this unfortunately doesn't really make any sense, or if it does make sense, adding up infinitely many zeros should just give us a zero. The short answer for what we do instead is that we compute an integral. And I'll be honest with you, the hard part here is that I'm not entirely sure what background I should be assuming from those of you watching right now. Maybe it's the case that you're quite comfortable with calculus, and you don't need me to belabor the point here. Maybe it's the case that you're not familiar with calculus, and I shouldn't just be throwing down integrals like that. Or maybe you took a calculus class a while ago, but you need a little bit of a refresher. I'm going to go with the option of setting this up as if it's a calculus lesson, because to be honest, even when you are quite comfortable with integrals, setting them up can be kind of an error-prone process, and calling back to the underlying definition is a good way to sort of check yourself in the process. If we lived in a time before calculus existed, and integrals weren't a thing, and we wanted to approximate an answer to this question, one way we could go about it is to take a sample of values for theta that ranges from zero up to 180 degrees. We might think of them as evenly spaced, with some sort of difference between each one, some delta theta. It's still the case that it would be unhelpful to ask about the probability of a particular value of theta occurring; even if it's one in our sample, that probability would still be zero. But what is helpful to ask is the probability of falling between two different values from our sample, in this little band of latitude with a width of delta theta. Based on our assumption that the distribution along this sphere should be uniform, that probability comes down to knowing the area of this band. More specifically, the chances that a randomly chosen vector lands in that band should be that area divided by the total surface area of the sphere. To figure out that area, let's first think of the radius of that band, which, if the radius of our sphere is one, is definitely going to be smaller than one. And in fact, if we draw the appropriate little right triangle here, you can see that that little radius, let's just say at the top of the band, should be the sine of our angle, the sine of theta. This means that the circumference of the band should be 2 pi times the sine of that angle, and then the area of the band should be that circumference times its thickness, that little delta theta. Or rather, the area of our band is approximately this quantity. What's important is that for a finer sample of many more values of theta, the accuracy of that approximation would get better and better. Now remember, the reason we wanted this area is to know the probability of falling into that band, which is this area divided by the surface area of the sphere, which we know to be 4 pi times its radius squared. That's a value that you could also compute with an integral similar to the one that we're setting up now, but for now we can take it as a given, as a standard well-known formula.
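The finite-band approximation just described is easy to carry out in plain Python; this sketch evaluates each band at its midpoint, and the particular sample sizes are arbitrary.

```python
import math

def average_shadow_factor(n_bands):
    """Sum of (probability of each latitude band) x (shadow area |cos theta|)."""
    d_theta = math.pi / n_bands
    total = 0.0
    for k in range(n_bands):
        theta = (k + 0.5) * d_theta                  # midpoint of this band
        band_area = 2 * math.pi * math.sin(theta) * d_theta
        probability = band_area / (4 * math.pi)      # uniform on the unit sphere
        total += probability * abs(math.cos(theta))  # times shadow area, with s = 1
    return total

for n in (10, 100, 1000):
    print(n, average_shadow_factor(n))  # settles toward a clean value as n grows
```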
And this probability itself is just a stepping stone in the direction of what we actually want, which is the average area for the shadow of a square. To get that, we'll multiply this probability times the corresponding shadow area, which is this absolute value of cosine theta expression we've seen many times up to this point. And our estimate for this average would now come down to adding up this expression across all of the different bands, all of the different samples of theta that we've taken. This right here, by the way, is when Bob is just totally in his element. We've got a lot of exact formulas describing something very concrete, actually digging in on our way to a real answer. And again, if it feels like a lot of detail, I want you to appreciate that fact, so that you can appreciate just how magical it is when Alice manages to somehow avoid all of this. Anyway, looking back at our expression, let's clean things up a little bit, like factoring out all of the terms that don't depend on theta itself. And we can simplify that 2 pi divided by 4 pi to simply be 1 half. And to make it look a little more like calculus with integrals, let me just swap the main terms inside the sum here. What we now have, this sum that's going to approximate the answer to our question, is almost what an integral is. Instead of writing the sigma for sum, we write the integral symbol, this kind of elongated, Leibnizian S, showing us that we're going from 0 to pi. And instead of describing the step size as delta theta, a concrete finite amount, we instead describe it as d theta, which I like to think of as signaling the fact that some kind of limit is being taken. What that integral means, by definition, is whatever the sum on the bottom approaches for finer and finer subdivisions, more dense samples that we might take for theta itself. And at this point, for those of you who do know calculus, I'll just write down the details of how you would actually carry this out, as you might see it written down in Bob's notebook. It's the usual anti-derivative stuff, but the one key step is to bring in a certain trig identity. In the end, what Bob finds after doing this is the surprisingly clean fact that the average area for a square's shadow is precisely 1 half the area of that square. This is the mystery constant, which Alice doesn't yet know. If Bob were to look over her shoulder and see the work that she's done, he could finish out the problem right now. He plugs in the constant that he just found, and he knows the final answer. And now, finally, with all of this as backdrop, what is it that Alice does to carry out the final solution? I introduced her as someone who really likes to generalize the results she finds, and usually those generalizations end up as interesting footnotes that aren't really material for solving particular problems. But this is a case where the generalization itself draws her to a quantitative result. Remember, the substance of what she's found so far is that if you look at any convex solid, then the average area for its shadow is going to be proportional to its surface area, and critically, it'll be the same proportionality constant across all of these solids. So all Alice needs to do is find just a single convex solid out there where she already knows the average area of its shadow. And some of you may see where this is going, the most symmetric solid available to us is a sphere.
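For reference, here is a compact version of the computation in Bob's notebook, for a face of area s squared under the uniform-on-the-sphere assumption (using a substitution in place of the trig identity mentioned above):

```latex
\mathbb{E}[\text{shadow area}]
  = \int_0^{\pi} |\cos\theta|\, s^2 \cdot \frac{2\pi \sin\theta}{4\pi}\, d\theta
  = \frac{s^2}{2}\int_0^{\pi} |\cos\theta|\sin\theta \, d\theta
  = s^2 \int_0^{\pi/2} \cos\theta\sin\theta \, d\theta
  = s^2 \left[ \tfrac{1}{2}\sin^2\theta \right]_0^{\pi/2}
  = \frac{s^2}{2}
```

Feeding this back into Alice's bookkeeping would give the cube's average shadow as one half of six times s squared over 2, which is three halves s squared.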
No matter what the orientation of that sphere, its shadow, the flat projection shadow, is always a circle with an area of pi r squared. And so in particular, that's its average shadow area. And the surface area of a sphere, like I mentioned before, is exactly 4 pi r squared. By the way, I did make a video talking all about that surface area formula and how Archimedes proved it thousands of years before calculus existed, so you don't need integrals to find it. The magic of what Alice has done is that she can take this seemingly specific fact, that the shadow of a sphere has an area exactly one fourth its surface area, and use it to conclude a much more general fact, that for any convex solid out there, its shadow and surface area are related in the same way, in a certain sense. So with that, she can go and fill in the details of the particular question about a cube, and say that its average shadow area will be one fourth times its surface area, 6 s squared. But the much more memorable fact that she'll go to sleep thinking about is how it didn't really matter that we were talking about a cube at all. Now, that's all very pretty, but some of you might complain that this isn't really a valid argument, because spheres don't have flat faces. When I said Alice's argument generalizes to any convex solid, if we actually look at the argument itself, it definitely depends on the use of a finite number of flat faces. For example, if we were applying it to a dodecahedron, she would start by saying that the area of a particular shadow of that dodecahedron looks like exactly one half times the sum of the areas of the shadows of all its faces. Once again, you could use a certain ray-of-light-mixed-with-convexity argument to draw that conclusion. And remember, the benefit of expressing that shadow area as a sum is that when we want to average over a bunch of different rotations, we can describe that sum as a big grid, where we can then go column by column and consider the average area for the shadow of each face. And also, a critical fact was the conclusion from much earlier that the average shadow for any 2D object, a flat 2D object, which is important, will equal some universal proportionality constant times its area. And the significance was that that constant didn't depend on the shape itself. It could have been a square, or a cat, or the pentagonal faces of our dodecahedron, whatever. Though, after hastily carrying this over to a sphere that doesn't have a finite number of flat faces, you would be right to complain. But luckily, it's a pretty easy detail to fill in. What you can do is imagine a sequence of different polyhedra that successively approximate a sphere, in the sense that their faces hug tighter and tighter around the genuine surface of this sphere. For each one of those approximations, we can draw the same conclusion, that its average shadow is going to be proportional to its surface area, with this universal proportionality constant. So then, if we take the limit of the ratio between the average shadow area at each step and the surface area at each step, well, since that ratio is never changing, it's always equal to this constant, so in the limit, it's also going to equal that constant. But on the other hand, by definition, in the limit, the average shadow area should be that of a circle, which is pi r squared, and the limit of the surface areas would be the surface area of the sphere, four pi r squared.
So we do genuinely get the conclusion that intuition would suggest, but, as is so common with Alice's argument here, we do have to be a little delicate in how we justify that intuition. It's easy for this contrast of Alice and Bob to come across like a value judgment, as if I'm saying, look how clever Alice has managed to be. She insightfully avoided all those computations that Bob had to do. But that would be a very... um... misguided conclusion. I think there's an important way that popularizations of math differ from the feeling of actually doing math. There's this bias towards showing the slick proofs, the arguments with some clever keen insight that lets you avoid doing calculations. I could just be projecting, since I'm very guilty of this, but what I can tell you sitting on the other side of the screen here is that it feels a lot more attractive to make a video about Alice's approach than Bob's. For one thing, in Alice's approach, the line of reasoning is fun. It has these nice aha moments. But also, crucially, the way that you explain it is more or less the same for a very wide range of mathematical backgrounds. It's much less enticing to do a video about Bob's approach, not because the computations are all that bad, I mean, they're honestly not, but the pragmatic reality is that the appropriate pace to explain it looks very different depending on the different mathematical backgrounds in the audience. So you, watching this right now, clearly consume math videos online, and I think in doing so, it's worth being aware of this bias. If the aim is to have a genuine lesson on problem solving, too much focus on the slick proofs runs the risk of being disingenuous. For example, let's say we were to step up to challenge mode here and ask about the case with a closer light source. To my knowledge, there is not a similarly slick solution to Alice's here, where you can just relate it to a single shape like a sphere. The much more productive warm-up to have done would have been the calculus of Bob's approach. And if you look at the history of this problem, it was proved by Cauchy in 1832, and if we paw through his handwritten notes, they look a lot more similar to Bob's work than Alice's work. Right here at the top of page 11, you can see what is essentially the same integral that you and I set up. On the other hand, the whole framing of the paper is to find a general fact, not something specific like the case of a cube. Though if we were asking the question of which of these two mindsets correlates with the act of discovering new math, the right answer would almost certainly have to be a blend of both. But I would suggest that many people don't assign enough weight to the part of that blend where you're eager to dive into calculations. I think there's some risk that the videos I make might contribute to that. In the podcast that I did with the mathematician Alex Kontorovich, he talked about the often underappreciated importance of just drilling on computations to build intuition, whether you're a student engaging with a new class or a practicing research mathematician engaging with a new field of study. A listener actually wrote in to highlight what an impression that particular section made. They're a PhD student, and they described themselves as being worried that their mathematical abilities were starting to fade, which they attributed to becoming older and less sharp.
But, hearing a practicing mathematician talk about the importance of doing hundreds of concrete examples in order to learn something new, evidently that changed their perspective; in their own words, recognizing this completely reshaped their outlook and their results. And if you look at the famous mathematicians through history, you know, Newton, Euler, Gauss, all of them, they all have this seemingly infinite patience for doing tedious calculations. The irony of being biased to show insights that let us avoid calculations is that the way people often train up the intuitions to find those insights in the first place is by doing piles and piles of calculations. All that said, something would definitely be missing without the Alice mindset here. I mean, think about it, how sad would it be if we solved this problem for a cube, and we never stepped outside of the trees to see the forest and understand that this is a super general fact, that it applies to a huge family of shapes. And if you consider that math is not just about answering the questions that are posed to you, but about introducing new ideas and constructs, one fun side note about Alice's approach here is that it suggests a fun way to quantify the idea of convexity. Rather than just having a yes-no answer, is it convex, is it not, we could put a number to it by saying, consider the average area of the shadow of some solid, multiply that by four, and divide by the surface area. If that number is one, you've got a convex solid, but if it's less than one, it's non-convex, and how close it is to one tells you how close it is to being convex. Also, one of the nice things about the Alice solution here is that it helps explain why it is that mathematicians have what can sometimes look like a bizarre infatuation with generality and with abstraction. The more examples that you see where generalizing and abstracting actually helps you to solve a specific case, the more you start to adopt the same infatuation. And as a final thought, for the stalwart viewers among you who've stuck through it this far, there is still one unanswered question about the very premise of our puzzle. What exactly does it mean to choose a random orientation? Now if that feels like a silly question, like, of course we know what it should mean, I would encourage you to watch a video that I just did with Numberphile on a conundrum from probability known as Bertrand's Paradox. After you watch it, and if you appreciate some of the nuance at play here, homework for you is to reflect on where exactly Alice and Bob implicitly answered this question. The case with Bob is relatively straightforward, but the point at which Alice locks down some specific distribution on the space of all orientations is not at all obvious, it's actually very subtle.
How secure is 256 bit security?
In the main video on cryptocurrencies, I made two references to situations where, in order to break a given piece of security, you would have to guess a specific string of 256 bits. One of these was in the context of digital signatures, and the other in the context of a cryptographic hash function. For example, if you want to find a message whose SHA-256 hash is some specific string of 256 bits, you have no better method than to just guess and check random messages. And this would require, on average, 2 to the 256 guesses. Now this is a number so far removed from anything that we ever deal with that it can be hard to appreciate its size. But let's give it a try. 2 to the 256 is the same as 2 to the 32 multiplied by itself 8 times. Now what's nice about that split is that 2 to the 32 is about 4 billion, which is at least a number we can think about, right? So all we need to do is appreciate what multiplying 4 billion times itself 8 successive times really feels like. As many of you know, the GPU on your computer can let you run a whole bunch of computations in parallel, incredibly quickly. So if you were to specially program a GPU to run a cryptographic hash function over and over, a really good one might be able to do a little less than a billion hashes per second. And let's say that you just take a bunch of those and cram your computer full of extra GPUs, so that your computer can run 4 billion hashes per second. So the first 4 billion here is going to represent the number of hashes per second per computer. Now picture 4 billion of these GPU-packed computers. For comparison, even though Google does not at all make their number of servers public, estimates have it somewhere in the single digit millions. In reality, most of those servers are going to be much less powerful than our imagined GPU-packed machine. But let's say that Google replaced all of its millions of servers with a machine like this. Then 4 billion machines would mean about 1000 copies of this souped up Google. Let's call that one kilogoogle worth of computing power. There's about 7.3 billion people on earth. So next, imagine giving a little over half of every individual on earth their own personal kilogoogle. Now imagine 4 billion copies of this earth. For comparison, the Milky Way has somewhere between 100 and 400 billion stars. We don't really know, but the estimates tend to be in that range. So this would be akin to a full 1 percent of every star in the galaxy having a copy of earth, where half the people on that earth have their own personal kilogoogle. And try to imagine 4 billion copies of the Milky Way. And we're going to call this your gigagalactic supercomputer, running about 2 to the 160 guesses every second. Now 4 billion seconds, that's about 126.8 years. 4 billion of those, well that's 507 billion years, which is about 37 times the age of the universe. So even if you were to have your GPU-packed, kilogoogle-per-person, multi-planetary, gigagalactic computer guessing numbers for 37 times the age of the universe, it would still only have a 1 in 4 billion chance of finding the correct guess. By the way, the state of Bitcoin hashing these days is that all of the miners put together guess and check at a rate of about 5 billion billion hashes per second. That corresponds to one third of what I just described as a kilogoogle. This is not because there are actually billions of GPU-packed machines out there, but because miners actually use something that's about a thousand times better than a GPU.
Application-specific integrated circuits. These are pieces of hardware specifically designed for Bitcoin mining, for running a bunch of SHA-256 hashes, and nothing else. It turns out there are a lot of efficiency gains to be had when you throw out the need for general computation and design your integrated circuits for one and only one task. Also, on the topic of large powers of two that I personally find it hard to get my mind around, this channel recently surpassed 2 to the 18 subscribers. And to engage a little more with some sub-portion of those 2 to the 18 people, I'm going to do a Q&A session. I've left a link in the description to a Reddit thread where you can post questions and upvote the ones that you want to hear answers to. And probably in the next video, or on Twitter, or something like that, I'll announce the format in which I'd like to give answers. See you then.
How to count to 1000 on two hands
The most unexpected answer to a counting puzzle
Sometimes math and physics conspire in ways that just feel too good to be true. Let's play a strange sort of mathematical croquet. We're going to have two sliding blocks and a wall. The first block starts by coming in at some velocity from the right, while the second one starts out stationary. Being overly idealistic physicists, let's assume that there's no friction and all of the collisions are perfectly elastic, which means no energy is lost. The astute among you might complain that such collisions would make no sound, but your goal here is going to be to count how many collisions take place. So, in slight conflict with that assumption, I want to leave in a little clack sound to better draw your attention to that count. The simplest case is when both blocks have the same mass. The first block hits the second, transferring all of its momentum, then the second one bounces off the wall, and then transfers all of its momentum back to the first, which then sails off towards infinity. Three total clacks. What about if the first block was 100 times the mass of the second one? I promise I will explain to you all the relevant physics in due course. It's not entirely obvious how you would predict the dynamics here, but in the spirit of getting to the punchline, let's watch what happens. The second one will keep bouncing back and forth between the wall and the first block with 100 times its mass, like a satisfying game of Breakout, slowly and discretely redirecting that first block's momentum to point in the opposite direction. In total, there will be 31 collisions before each block is sliding off towards infinity, never to be touched again. What if the first block was 10,000 times the mass of the second one? In that case, there would be quite a few more clacks, all happening very rapidly at one point, adding up in all to 313 total collisions. Well, actually, hang on. Wait for it. Okay, 314 clacks. If the first block was 1 million times the mass of the other, then again, with all of our crazy idealistic conditions, almost all of the clacks happen in one big burst, this time resulting in a total of 3141 collisions. Perhaps you see the pattern here, though it's forgivable if you don't, since it defies all expectation. When the mass of that first block is some power of 100 times the mass of the second, the total number of collisions has the same starting digits as pi. This absolutely blew my mind when it was first shared with me. Credit to the viewer Henry Cavill for introducing me to this fact, which was originally discovered by the mathematician Gregory Galperin in 1995 and published in 2003. Part of what I love about this is that if ever there were Olympic games for algorithms that compute pi, this one would have to win medals both for being the most elegant and for being the most comically inefficient. I mean, think about the actual algorithm here. Step 1, implement a physics engine. Step 2, choose the number of digits d of pi that you'd like to compute. Step 3, set the mass of one of the blocks to be 100 to the power d minus 1, then send it traveling on a frictionless surface towards a block of mass 1. And step 4, count all of the collisions. So, for example, to calculate only 20 digits of pi, which fits so cleanly on the screen, one block would have to have 100 billion billion billion billion times the mass of the other, which, if that small block was 1 kilogram, means the big one has a mass about 10 times that of the supermassive black hole at the center of the Milky Way.
That means you would need to count about 31 billion billion collisions, and at one point in this virtual process, the frequency of clacks would be around 100 billion billion billion clacks per second. So, let's just say that you would need very good numerical precision to get this working accurately, and it would take a very long time for the algorithm to complete. I'll emphasize again that this process is way over-idealized, quickly departing from anything that could possibly happen in real physics. But of course, you all know this is not interesting because of its potential as an actual pi-computing algorithm, or as a pragmatic physics demonstration. It's mind-boggling because why on Earth would pi show up here? And it's in such a weird way, too. Its decimal digits are counting something, but usually pi shows up when its precise value is describing something continuous. I will show you why this is true: where there is pi, there is a hidden circle, and in this case, that hidden circle comes from the conservation of energy. In fact, you're going to see two separate methods, which are each as stunning and surprising as the fact itself. In the spirit of delayed gratification, though, I will make you wait until the next video to see what's going on. In the meantime, I highly encourage you to take a stab at it yourself, and to be social about it. It never hurts to recruit some other smart minds to the task.
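For anyone who wants to play along before the follow-up video, here is a minimal sketch of the counting algorithm described above, written with exact rational arithmetic to dodge the numerical-precision problem. This is my own illustration, not code from the video; the velocity update is the standard one-dimensional elastic-collision formula.

```python
from fractions import Fraction

def count_collisions(mass_ratio):
    m1, m2 = Fraction(mass_ratio), Fraction(1)
    v1, v2 = Fraction(-1), Fraction(0)   # big block slides in leftward; small block at rest
    count = 0
    while True:
        if v1 < v2:                      # the gap is closing: block-block elastic collision
            v1, v2 = (
                ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2),
            )
        elif v2 < 0:                     # small block is heading into the wall: bounce
            v2 = -v2
        else:                            # both drifting rightward, never to touch again
            return count
        count += 1

print(count_collisions(1))        # 3
print(count_collisions(100))      # 31
print(count_collisions(10_000))   # 314
```

With a mass ratio of 100 to the power d minus 1, the count comes out to the first d digits of pi, exactly as the video describes, though the runtime and the size of the exact fractions grow quickly with d.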
All possible pythagorean triples, visualized
When you first learned about the Pythagorean theorem, that the sum of the squares of the two shorter sides on a right triangle always equals the square of its hypotenuse, I'm guessing that you came to be pretty familiar with a few examples, like the 3, 4, 5 triangle, or the 5, 12, 13 triangle. And I think it's easy to take for granted that these even exist, examples where the sum of two perfect squares happens to be a perfect square. But keep in mind, for comparison, if you were to change that exponent to any whole number bigger than two, you go from having many integer solutions to no solutions whatsoever. This is Fermat's famous last theorem. Now, there's a special name for any triple of whole numbers, A, B, C, where A squared plus B squared equals C squared: it's called a Pythagorean triple. And what we're going to do here is find every single possible example. And moreover, we'll do so in a way where you can visualize how all of these triples fit together. This is an old question, pretty much as old as they come in math. There are some Babylonian clay tablets from 1800 BC, more than a millennium before Pythagoras himself, that just list these triples. And by the way, while we're talking about the Pythagorean theorem, it would be a shame not to share my favorite proof, for anyone who hasn't already seen this. You start off by drawing a square on each side of the triangle. If you take that C squared and add four copies of the original triangle around it, you can get a big square whose side lengths are A plus B. But you can also arrange the A squared and the B squared together with four copies of the original triangle to get a big square whose side lengths are A plus B. What this means is that the negative space in each of these diagrams, the area of that big square minus four times the area of the triangle, is, from one perspective, clearly A squared plus B squared, but from another perspective, it's C squared. Anyway, back to the question of finding whole number solutions. Start by reframing the question slightly: among all of the points on the plane with integer coordinates, that is, all of these lattice points where grid lines cross, which ones are a whole number distance away from the origin? For example, the point (3, 4) is a distance 5 away from the origin, and the point (12, 5) is a distance 13 away from the origin. The question of finding Pythagorean triples is completely equivalent to finding lattice points which are a whole number distance away from the origin. Of course, for most points, like (2, 1), the distance from the origin is not a whole number, but it is at least the square root of a whole number. In this case, 2 squared plus 1 squared is 5, so that distance, that hypotenuse there, is the square root of 5. Now, taking what might seem like a strange step, but one which will justify itself in just a moment, think of this as the complex plane, so that every one of these points, like (2, 1) here, is actually an individual complex number, in this case 2 plus i. What this gives is a surprisingly simple way to modify such a point to get a new point whose distance away from the origin is guaranteed to be a whole number: just square it. Algebraically, when you square a complex number, expanding out this product and matching up all of the like terms, because everything here just involves multiplying and adding integers, each component of the result is guaranteed to be an integer, in this case 3 plus 4i.
But you can also think of complex multiplication more geometrically. You take the line drawn from the origin to the number, and consider the angle it makes with the horizontal axis, as well as its length, which in this case is the square root of 5. The effect of multiplying anything by this complex number is to rotate it by that angle, and to stretch it out by a factor equal to that length. So when you multiply the number by itself, the effect is to double that angle and, importantly, to square its length. Since the length started off as the square root of some whole number, this resulting length is guaranteed to be a whole number, in this case 5. Here, let's try it with another example. Start off with some complex number that has integer coordinates, like 3 plus 2i. In this case, the distance between this number and the origin is the square root of 3 squared plus 2 squared, which is the square root of 13. Now multiply this complex number by itself. The real part comes out to 3 squared plus (2i) squared, which is 9 minus 4, and the imaginary part is 3 times 2 plus 2 times 3. So the result is 5 plus 12i, and the magnitude of this new number is 13, the square of the magnitude of our starting number, 3 plus 2i. So simply squaring our randomly chosen lattice point gives us the 5, 12, 13 triangle. There's something kind of magical about actually watching this work. It almost feels like cheating. You can start with any randomly chosen lattice point, like 4 plus i, and just by taking its square, you generate a Pythagorean triple. In this case, 4 plus i squared is 15 plus 8i, which has a distance of 17 from the origin. If you play around with this, which I encourage you to do, you'll find that some of the results are kind of boring. If both the coordinates of your starting point are the same, or if one of them is zero, then the triple at the end is going to include a zero. For example, 2 plus 2i squared gives 8i, and even though technically this is indeed a lattice point a whole number distance away from the origin, the triple that it corresponds to is 0 squared plus 8 squared equals 8 squared, which isn't exactly something to write home about. But for the most part, this method of squaring complex numbers is a surprisingly simple way to generate non-trivial Pythagorean triples. And you can even generalize it to get a nice formula. If you write the coordinates of your initial point as u and v, then when you work out (u plus vi) squared, the real part is u squared minus v squared, and the imaginary part is 2 times u times v. The resulting distance from the origin is going to be u squared plus v squared. It's kind of fun to work out this expression algebraically and see that it does indeed check out, and it's also fun to plug in some random integers for u and v and get out a Pythagorean triple. Essentially, we've created a machine where you give it any pair of integers, and it gives you back some Pythagorean triple. A really nice way to visualize this, which will be familiar to any of you who watched the zeta function video, is to watch every point z on the plane move over to the point z squared. So, for example, the point 3 plus 2i is going to move over to 5 plus 12i, the point i is going to rotate 90 degrees over to its square, negative 1, and the point negative 1 is going to move over to 1, and so on.
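As a tiny illustration of that machine, here it is in Python. This is my own sketch of the formula just described, not the video's code.

```python
def triple_from(u, v):
    # (u + vi)^2 = (u^2 - v^2) + (2uv)i, with magnitude u^2 + v^2
    a, b, c = u * u - v * v, 2 * u * v, u * u + v * v
    assert a * a + b * b == c * c   # it really is a Pythagorean triple
    return a, b, c

print(triple_from(2, 1))  # (3, 4, 5)
print(triple_from(3, 2))  # (5, 12, 13)
print(triple_from(4, 1))  # (15, 8, 17)
```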
Now, when you do this to every single point on the plane, including the grid lines, which I'll make more colorful so they're easier to follow, here's what it looks like. The grid lines all get turned into these parabolic arcs, and every point where these arcs intersect is a place where a lattice point landed, so it corresponds to some Pythagorean triple. That is, if you draw a triangle whose hypotenuse is the line between any one of these points and the origin, and whose legs are parallel to the axes, all three side lengths of that triangle will be whole numbers. What I love about this is that usually, when you view Pythagorean triples just on their own, they seem completely random and unconnected, and you'd be tempted to say there's no pattern. But here we have a lot of them sitting together really organized, just sitting on the intersections of these nicely spaced curves. Now you might ask if this accounts for every possible Pythagorean triple. Sadly, it does not. For example, you will never get the point 6 plus 8i using this method, even though 6, 8, 10 is a perfectly valid Pythagorean triple. There are simply no integers u and v where (u plus vi) squared is 6 plus 8i. Likewise, you will never hit 9 plus 12i. But these don't really feel like anything new, do they? Since you can get each one of them by scaling up the familiar triple 3, 4, 5, which is accounted for in our method. In fact, for reasons that I'll explain shortly, every possible Pythagorean triple that we miss is just some multiple of a different triple that we hit. To give another example, we miss the point 4 plus 3i. There are no integers u and v so that (u plus vi) squared is 4 plus 3i. In fact, you'll never hit any points whose imaginary component is odd. However, we do hit 8 plus 6i; that's (3 plus i) squared. So even though we miss 4 plus 3i, it's just one half times the point that we do hit. And by the way, you'll never have to scale down by anything smaller than one half. A nice way to think about these multiples that we miss is to take each point that we get using this squaring method and draw a line from the origin through that point out to infinity. Marking all of the lattice points that this line hits will account for any multiples of these points that we might have missed. Doing this for all possible points, you'll account for every possible Pythagorean triple. Every right triangle that you ever have seen, or ever will see, that has whole number side lengths is accounted for somewhere in this diagram. To see why, we'll now shift to a different view of the Pythagorean triple problem, one that involves finding points on a unit circle that have rational coordinates. If you take the expression A squared plus B squared equals C squared and divide out by that C squared, what you get is (A over C) squared plus (B over C) squared equals 1. This gives us some point on the unit circle, x squared plus y squared equals 1, whose coordinates are each rational numbers. This is what we call a rational point of the unit circle. And going the other way around, if you find some rational point on the unit circle, when you multiply out by a common denominator for each of those coordinates, what you'll land on is a point that has integer coordinates, and whose distance from the origin is also an integer. With that in mind, consider our diagram, where we squared every possible lattice point and then drew these radial lines through each one to account for any multiples that we might have missed.
If you project all of these points onto the unit circle, each one moving along its corresponding radial line, what you'll end up with is a whole bunch of rational points on that circle. And keep in mind, by the way, I'm drawing only finitely many of these dots and lines, but if I drew all infinitely many lines corresponding to every possible squared lattice point, it would actually fill every single pixel of the screen. Now, if our method was incomplete, if we were missing a Pythagorean triple out there somewhere, it would mean that there's some rational point on this circle that we never hit once we project everything onto the circle. And let me show you why that cannot happen. Take any one of those rational points and draw a line between it and the point at negative 1. When you compute the rise-over-run slope of this line, the rise between the two points is rational, and the run is also rational, so the slope itself is just going to be some rational number. So if we can show that our method of squaring complex numbers accounts for every possible rational slope here, it's going to guarantee that we hit every possible rational point of the unit circle, right? Well, let's think through our method. We start off with some point u plus vi that has integer coordinates, and this number makes some angle off of the horizontal, which I'm going to call theta. Squaring this number, the resulting angle off the horizontal is 2 times theta. And of course, when you project that onto the unit circle, it's along the same radial line, so the corresponding rational point of the unit circle also has that same angle, 2 times theta. And here I'll bring in a nice little bit of circle geometry, which is that anytime you have an angle between two points on the circumference of a circle and its center, it turns out to be exactly 2 times the angle made by those same points and any other point on the circle's circumference, provided that that other point isn't between the original two points. What this means for our situation is that the line between negative 1 and the rational point on the circle must make an angle theta with the horizontal. In other words, that line has the same slope as the line between the origin and our initial complex number, u plus vi. But look at the rise-over-run slope of the line defined by a choice of integers u and v: the slope is v divided by u. And of course, we can choose v and u to be whatever integers we want, and therefore we do indeed account for every possible rational slope. So there you go. The radial lines from our method, determined by all possible choices of u and v, must pass through every rational point on this circle, and that means our method must hit every possible Pythagorean triple. If you haven't already watched the video about pi hiding in prime regularities, the topics there are highly related to the ones here.
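To make that slope argument concrete, here is a hedged Python sketch of my own: for a rational slope t equal to v over u, the second intersection of the line through the point at negative 1 with the unit circle is the rational point below. The closed form follows from the doubled-angle picture described above; the function name is my own invention.

```python
from fractions import Fraction

def rational_point(t):
    # the point at angle 2*theta on the unit circle, where tan(theta) = t
    x = (1 - t * t) / (1 + t * t)
    y = 2 * t / (1 + t * t)
    assert x * x + y * y == 1        # it lies on the unit circle, exactly
    return x, y

print(rational_point(Fraction(1, 2)))  # (3/5, 4/5)   -> the 3, 4, 5 triple
print(rational_point(Fraction(2, 3)))  # (5/13, 12/13) -> the 5, 12, 13 triple
```

Note how plugging in t = v/u reproduces (u squared minus v squared) over (u squared plus v squared) and 2uv over (u squared plus v squared), the projected version of (u plus vi) squared.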
Simulating an epidemic
I want to share with you a few simulations that model how an epidemic spreads. There have recently been a few wonderful interactive articles in this vein, including one in the Washington Post by Harry Stevens, and another by Kevin Simler over at Melting Asphalt. They are great, you can play with them, they're very informative, and I'll of course leave links in the description. After seeing those, I had a few more questions. Like, if people stay away from each other, I get that that's going to slow down the spread. But what if, despite mostly staying away from each other, people still occasionally go to a central location, like a grocery store or a school? Also, what if you're able to identify and isolate the cases? And if you can, what if a few slip through, say because they don't show symptoms, so they aren't tested? How does travel between separate communities affect things? And what if people avoid contact with each other for a while, but then they kind of get tired and stop? We'll explore these questions and more, but first, let's walk through how exactly these models will work. Each simulation represents what's called an SIR model, meaning the population is broken up into three categories: those who are susceptible to getting the disease, those who are infectious, and then those who have recovered from the infection, so people who are already immune don't play into it. The way I've written these, for every unit of time that a susceptible person spends within a certain infection radius of someone with the disease, they'll have some probability of contracting it themselves. So we're using physical proximity as a stand-in for things like shaking hands, touching the same surface, kissing, sneezing on each other, all that good stuff. Then, for each infectious person, after some amount of time, they'll recover and no longer be able to spread the disease. Or, if they die, they also won't be able to spread it anymore, so as a more generic term, sometimes people consider the R in SIR to stand for removed. This should go without saying, but let me emphasize it at the start anyway: these are toy models, with a tiny population, inevitably falling far short of the complexities in real people and larger populations. I am not an epidemiologist, so I would be very hesitant to generalize any of the lessons here without deeper consideration. That said, I think it can be healthy to engage the little scientist inside all of us and take the chance to be experimental and quantitative, even if it's in a necessarily limited fashion, especially if the alternative is to dwell on headlines and uncertainty. We'll start things simple and layer on more complexity gradually. In these first few runs that you're looking at, everybody is simply meandering around the city in a kind of random walk, and the infection follows the rules that we've laid out. It doesn't look great: after not too long, almost everybody gets infected. As a sanity check, what would you expect to happen if I double this radius of infection? You might think of this as representing more total interactions between people, or a more socially engaged society. It'll spread more quickly, of course, but how much? It's actually very drastic. Within a short time span, the majority of our little population is infected simultaneously. As another sanity check, what would you expect if we go back to the original infection radius and then cut the probability of infection in half?
Remember, the way this is running, for each day that a susceptible person is within that radius of an infectious person, they have some probability of becoming infected. By default, I've set it to 20%, but this is the number that we're now going to cut in half. You might think of this as better hand-washing, better cough protection, and less face-touching. As you might expect, it spreads out the curve. In fact, it does this by quite a lot, which really illustrates how changes to hygiene have very large effects on the rate of spreading. The first of several key takeaways here, that I'd like you to tuck away in your mind, is just how sensitive this growth can be to each parameter in our control. It's not that hard to imagine changing your daily habits in a way that multiplies the number of people you interact with, or that cuts your probability of catching an infection in half, but the implications for the pace of the spread are huge. The goal is probably to reduce the total number of people who die, which is going to be some proportion of this removed category in the end. That proportion is not a constant, though. If you get to a point where the peak of the infection curve is too high, meaning there's a time when many people are all sick at once, that's the point when available healthcare resources are overwhelmed, which for a bad disease will increase the mortality rate. Now, I don't know where you're from, but in most towns, people don't actually spend their days drunkenly wandering around the city like this. Often there's a common destination, like a central market or a school. In our model, if we introduce some central spot like this that people regularly visit and then return from, it's... well, just look. One of the main things I was curious about is how to mitigate this effect, and that's something we'll examine in just a bit. Another feature that we could include is to have a few separate communities, with transit between them. Every day, each person will have some probability of traveling to the center of another community, and then going about their usual routine from there. All of that is our basic setup, so now the question is, what actions help to stop this spread? What is by far most effective is to identify and isolate whoever is infectious, for example with good testing and quick responsiveness. In our simulations, once we hit some critical threshold of cases, we're going to start sending people to a separate location one day after they have the infection. This is, of course, a stand-in for whatever isolation would look like in the real world. It doesn't have to literally be transporting all the sick people into one sad box. Unsurprisingly, this totally halts the epidemic in its tracks. But what if, when you're infected, you have a 20% chance of not getting quarantined, say because you show no symptoms, so you don't get tested, and you go about your day as usual? We're going to illustrate these people that have no symptoms using yellow circles instead of red. Obviously, this will have a result somewhere between a total quarantine and doing nothing, but where on the spectrum would you predict it'll be? The peak number of simultaneous cases is only a little bit higher, but there is a very long tail, as it takes a much longer time to stamp out, resulting in about twice as many total cases. This gets more interesting when we do it in a setting with many communities and transit between them. Again, totally effective tracking and isolation stops the epidemic very quickly.
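For readers who want to experiment, here is a bare-bones agent-based version of the rules described so far. This is my own toy reconstruction in the spirit of the videos, not the actual simulation code; the specific radius, rate, and recovery time are assumed values.

```python
import random

N, DAYS = 200, 120
RADIUS, INFECTION_RATE, RECOVERY_TIME = 0.06, 0.2, 14   # assumed parameters

agents = [{"x": random.random(), "y": random.random(), "state": "S", "days": 0}
          for _ in range(N)]
agents[0]["state"] = "I"   # patient zero

for day in range(DAYS):
    for a in agents:       # everyone meanders around in a random walk
        a["x"] = min(1.0, max(0.0, a["x"] + random.uniform(-0.03, 0.03)))
        a["y"] = min(1.0, max(0.0, a["y"] + random.uniform(-0.03, 0.03)))
    infectious = [a for a in agents if a["state"] == "I"]
    for a in agents:       # susceptible agents near an infectious one may catch it
        near = any((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2 < RADIUS ** 2
                   for b in infectious)
        if a["state"] == "S" and near and random.random() < INFECTION_RATE:
            a["state"] = "I"
    for b in infectious:   # infectious agents eventually recover ("removed")
        b["days"] += 1
        if b["days"] >= RECOVERY_TIME:
            b["state"] = "R"
    counts = {s: sum(a["state"] == s for a in agents) for s in "SIR"}
    print(day, counts)
```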
But what would you predict is going to happen if now 20% of the infectious cases slip through that process? Again, I've set things to wait until a certain critical threshold of cases is hit before our little dot society kicks into gear and takes action. As a side note, it's a little interesting that even when all the parameters are the same, some runs take three times longer to reach this point than others. Before the law of large numbers kicks in, that pace of growth can have as much to do with a roll of the dice as it does with anything intrinsic to the disease itself. This leaky quarantining effort definitely flattens out the curve, but there is a much thicker tail, and it takes a much longer time to track down all the cases, with over half the population getting infected this time. Now, what would you predict if it was only 50% of the infectious cases that were isolated like this? If half the infectious people are getting quarantined, it doesn't mean that half the total population will end up with the disease. Because there are so many agents still out there spreading it, we end up with a situation that's only barely better than if nothing had been done at all. A second key takeaway here is that changes in how many people slip through the tests can cause disproportionately large changes to the total number of people infected. If we look back at the fact that quickly containing cases early on is so effective, it actually has an interesting corollary, which is that the most lethal diseases are, in some sense, less dangerous globally. Remember that the rule of this quarantine simulation is to send infectious people to a separate spot one day after they've been infected. But if the disease kills its host after one day, the model looks identical; it just has a much darker interpretation. It is terrible for those who get it, but it doesn't spread. It also means that the most dangerous viruses are the ones that kill some part of the population while lying unnoticed and spreadable among others, or, worse yet, if they remain unnoticed and spreadable in everyone before eventually becoming lethal. One of the reasons that the SARS outbreak in 2002 was so well contained is that just about everybody in the infectious population was showing symptoms, so they were much easier to identify and isolate. A more optimistic corollary of these quarantine simulations is how useful early treatment can be. If there exists an antiviral therapeutic that can move people out of this infectious category quickly, it has the same effect on the model as isolating those cases. But let's say you don't have widespread testing or antiviral therapeutics. Well, let's introduce a new parameter here, which is how much people try to avoid each other. Let's call it the social distance factor. During these animations, I'll apply it as a repulsive force between people, and have them glow yellow when they feel just a little too close to their neighbor. This has the sad but kind of cute effect that, when our little agents are social distancing, they often end up trembling near the edge of their box. No isolation is perfect, though, so every now and then, even those repulsed by each other will kind of jiggle close enough to get infected. The point is that those interactions are much rarer. Let's take a look at four separate runs here. In each one of them, after we hit 50 cases, I'll turn on social distancing. But in the top left, we'll turn it on for everybody. In the top right, we'll turn it on for 90% of the population.
In the bottom left, for 70% of the population, and in the bottom right, for only half the population. What would you predict is going to happen? As you might expect, when 100% of people avoid each other, the disease quickly goes away entirely. And in all four cases, the presence of some kind of widespread social distancing definitely flattens out the curve. However, in terms of the ultimate number of cases, the runs with 70% and even 90% end up with a little less than half the population ultimately getting infected, which is only a tiny bit better than the one with 50%. That case with 90% of people repelling each other certainly takes longer to get there, but evidently, a mere 10% of the population cheating adds enough instability to our system to keep the fire slowly burning for a long time. Again, I'll emphasize that these are toy models, and I leave it to the intelligence of the viewer to judge if the behavior of these little dots accurately reflects what social distancing would mean for you and your life. Someone fully sequestered in their home is not necessarily affected by the random jigglings of their neighbor, but then again, few of us genuinely live independently from everyone else. Insofar as these models aren't outlandish, the third key takeaway is that social distancing absolutely works to flatten the curve, but even small imperfections prolong the spread for a while. Now let's look at that setup with 12 communities and travel between them. By default, I have it set so that every day, each agent has a 2% chance of traveling to the center of a different community. Now let's try a run where, once we hit 100 cases, we cut down that travel rate by a factor of 4, to only half a percent. What would you predict is going to happen? The honest answer is that it depends. In some runs, it doesn't make any difference, and the majority of every community gets it. Other times, there are a couple of communities that end up unscathed. In general, the earlier you turn on this effect, the more effective it is. But the takeaway here is that reducing contact between communities really has only a limited effect once those communities already have it, and as solutions go, it's certainly not robust on its own. As a side note, when we run these simulations with larger cities, which has the effect that city centers act like concentrated urban hubs, you can see how, as soon as the infection hits one of these urban centers, it very quickly hits all of them, and after that, it slowly spreads to the edges of each community. Let's take a moment to talk about how to quantify this spread. Consider one person with a disease, and then count how many people they infect while they have it. The average for this count, across everybody who's been sick, is known as the effective reproductive number, or R. A more commonly discussed number is R-naught, which is the value of R in a fully susceptible population, like at the very beginning. This is known as the basic reproductive number. You may have noticed I have this little label on our simulations, and the way that it's being calculated is to look at each individual who's currently infectious, count how many people they've infected so far, estimate how many they're going to infect in total based on the duration of the illness, and then average those numbers. For example, in our first default simulation with no added spice, R is around 2.2 at the highest part of the growth phase, before falling down as the population becomes saturated.
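Here is a sketch of how that on-screen R label could be computed under the scheme just described. This is my own reconstruction; it assumes each agent record also tracks an `infected_count` field and a `days` field, names I've invented for illustration.

```python
def effective_R(agents, recovery_time):
    # only agents who have been sick at least one day give a meaningful estimate
    infectious = [a for a in agents if a["state"] == "I" and a["days"] > 0]
    if not infectious:
        return 0.0
    # extrapolate each agent's infections-so-far over the full illness duration
    projections = [a["infected_count"] * recovery_time / a["days"]
                   for a in infectious]
    return sum(projections) / len(projections)
```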
By contrast, when we doubled the infection radius, R was as high as 8. This factor has a huge effect on the growth rate. It should kind of make sense that it jumped up as high as 8, though: when you double that radius, there are about 4 times as many people inside it to infect. When we chopped the infection rate in half, it hovered around the 1.3 to 1.7 range. While R is greater than 1, the infection is growing exponentially, and it's at that point that it's known as an epidemic. When it holds steady around 1, that's when a disease is called endemic. And less than 1 means that it's on the decline. For comparison, R-naught for COVID-19 is estimated to be a little above 2, which is also around what the mean estimate for R-naught was during the 1918 Spanish flu pandemic. The seasonal flu, by comparison, is much lower, around 1.3. In the travel case, as soon as we turn on social distancing and shut down travel, you can see R quickly drop down from 2. There's a little bit of a lag between the change we make to the model and the value of this number, since it's calculated based on current infectious cases, which might have existed prior to the measure being put in place. Like I said at the start, one of the things I was most curious about is the effect of some kind of shared central location, like a market or a school. When I introduced such a destination for our little dots, R-naught jumped as high as 5.8. This might be a little unfair, since it puts everyone right in the same spot, and given that we're using physical proximity as a stand-in for things like shaking hands or kissing, we should maybe acknowledge that people going to the same school or grocery store are not as likely to spread an infection as, say, close friends or people living in the same house. To account for this, let's conservatively cut the probability of infection per day in half. This does indeed cut R-naught in half, but the effect of a central market remains dramatic. Now let's try a run where, after some threshold is hit, we turn on social distancing, but people still go to that central location the way they did before. You may notice that some of our little dots seem to have escaped their little cage, which was not supposed to happen, but I am going to make the conscious choice not to fix that. It's like they all looked at the chaos inside and just went, nope, I'm out, I don't want any part of that. Living in the Bay Area during a shelter-in-place order, I can confirm that this is how some people react. Wandering dots aside, though, let me show you how this graph compares to the control case where we did nothing, and how it compares to what would have happened if, in addition to repelling from each other, all of the dots also stopped going to that central location. The peak of the infection curve is a little bit lower than the control, but in terms of the total number of cases, keeping that central location active really undercuts the effects of the social distancing. Now let me ask you to make a prediction. What do you think will be more effective: if, on top of that social distancing effect, we decrease the frequency with which people go to that central spot, maybe by a factor of 5, or if we chop the probability of infection down by another factor of 2, for example, meaning people are super extra cautious about washing their hands and not touching their face?
The simulation on the left requires our dots to very heavily alter their daily routines, whereas on the right, our dots can mostly continue their usual habits, but they're just much more conscious of hygiene. They're actually nearly identical, which surprised me, given that one of them is a 5-fold decrease and the other is 2-fold. I guess it goes to show that a focus on hygiene, which is maybe easier said than done, really goes a long way. Of course, it doesn't have to be an either-or. Our goal with these experiments is to look at the effects of one change at a time. If you're curious, here's what it looks like when we apply social distancing, we restrict the rate that people go to the central location, and we also lower the infection rate, all at once. This combination is indeed very effective. But I want to emphasize again how the most desirable case is when you can consistently identify and isolate cases. Even in this central market simulation, which left unchecked gives a huge conflagration, being able to do this effectively still halts the epidemic. And our little dots don't even have to be repelled by each other, or stop their trips to the central spot. In real epidemiology, by the way, it gets way more sophisticated than this, with tactics like contact tracing, where you not only identify and isolate known cases, but you do the same for everyone who's been in contact with those cases. At the time that I'm posting this, I imagine there's some expectation for it to be a PSA on social distancing. But to be honest, that's not really my own main takeaway. To be clear, when it's needed, like it is now, social distancing absolutely saves lives. And, as we saw earlier, when people cheat, or when they continue to regularly meet at a central spot, it has a disproportionate effect on the long-term number of cases. The uncomfortable truth, though, is that while the disease still exists, as soon as people let up and go back to their normal lives, if nothing is in place to contain the cases, few though they might be, you'll just get a second wave. After making all of these, what I came away with more than anything else was a deeper appreciation for disease control done right: for the inordinate value of early widespread testing and the ability to isolate cases, for therapeutics that treat these cases, and, most importantly, for how easy it is to underestimate all that value when times are good. I'm writing this during a pandemic, when some viewers may be able to identify a little too well with the trembling dots retreating to the edge of their box. But in the future, many people will be watching this during a pandemic that never was, a time when a novel pathogen that could have spread widely, if left unchecked, was instead swiftly found and contained. Those would-be pandemics never make it into the history books, which is maybe why we don't value the heroes behind them the way that we should. Living in a world with widespread travel and vibrant urban centers does make fighting the spread of a disease an uphill battle, that's absolutely true. But that same level of connectedness means that ideas spread more quickly than ever, ideas that can lead to systems and technologies that nip these outbreaks in the bud. It won't happen on its own, and it's clear that we sometimes make mistakes, but I'm fundamentally optimistic about our ability to learn from those mistakes. As you might imagine, these videos require a lot of hours of effort.
I don't do any ad reads at the end, and YouTube content related to the current pandemic seems to be systematically demonetized. So I just want to take this chance to say a particularly warm thank you to those who support them directly on Patreon. It really does make a difference.
What does it feel like to invent math?
Take 1 plus 2 plus 4 plus 8, and continue on and on, adding the next power of 2, up to infinity. This one is crazy, but there's a sense in which this infinite sum equals negative 1. If you're like me, this feels strange, or obviously false, when you first see it. But I promise you, by the end of this video, you and I will make it make sense. To do this, we need to back up, and you and I will walk through what it might feel like to discover convergent infinite sums, the ones that at least seem to make sense, to define what they really mean, then to discover this crazy equation, and to stumble upon new forms of math where it makes sense. Imagine that you were an early mathematician in the process of discovering that one half plus one fourth plus one eighth plus one sixteenth, on and on up to infinity, whatever that means, equals 1, and imagine that you needed to define what it means to add infinitely many things for your friends to take you seriously. What would that feel like? Frankly, I have no idea, and I imagine that more than anything, it feels like being wrong or stuck most of the time. But I'll give my best guess at one way that the successful parts of it might go. One day, you're pondering the nature of distances between objects, and how, no matter how close two things are, it seems that they can always be brought a little bit closer together without touching. Fond of math as you are, you want to capture this paradoxical feeling with numbers. So you imagine placing the two objects on the number line, the first at zero, the second at one. Then you march the first object towards the second, such that with each step, the distance between them is cut in half. You keep track of the numbers this object touches during its march, writing down one half, one half plus a fourth, one half plus a fourth plus an eighth, and so on. That is, each number is naturally written as a slightly longer sum with one more power of 2 in it. As such, you're tempted to say that if these numbers approach anything, we should be able to write this thing down as a sum that contains the reciprocal of every power of 2. On the other hand, we can see geometrically that these numbers approach 1. So what you want to say is that 1 and some kind of infinite sum are the same thing. If your education was too formal, you'd write the statement off as ridiculous. Clearly, you can't add infinitely many things. No human, computer, or physical thing ever could perform such a task. If, however, you approach math with a healthy irreverence, you'll stand brave in the face of ridiculousness and try to make sense out of this nonsense that you wrote down, since it kind of feels like nature gave it to you. So how exactly do you, dear mathematician, go about defining infinite sums? Well-practiced in math as you are, you know that finding the right definitions is less about generating new thoughts than it is about dissecting old thoughts. So you go back to how you came across this fuzzy discovery. At no point did you actually perform infinitely many operations. You had a list of numbers, a list that could keep going forever if you had the time, and each number came from a perfectly reasonable finite sum. You noticed that the numbers in this list approach 1, but what do you mean by approach? It's not just that the distance between each number and 1 gets smaller, because, for that matter, the distance between each number and 2 also gets smaller.
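To make that notion of approach concrete, here is a quick numeric check (my own illustration): the cut-off sums get within any desired distance of 1 if you go far enough down the list.

```python
# The cut-off sums 1/2, 3/4, 7/8, ... marching toward 1.
partial = 0.0
for n in range(1, 11):
    partial += 1 / 2**n
    print(n, partial, 1 - partial)   # the gap to 1 halves with every term
```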
After thinking about it, you realize what makes 1 special is that your numbers can get arbitrarily close to 1. Which is to say, no matter how small your desired distance, one one-hundredth, one one-millionth, or one over the largest number you could write down, if you go down your list long enough, the numbers will eventually fall within that tiny, tiny distance of 1. Retrospectively, this might seem like the clear way to solidify what you mean by approach, but as a first-time endeavor, it's actually incredibly clever. Now you pull out your pen and scribble down the definition for what it means for an infinite sum to equal some number, say X. It means that when you generate a list of numbers by cutting off your sum at finite points, the numbers in this list approach X, in the sense that no matter how small the distance you choose, at some point down the list, all the numbers start falling within that distance of X. In doing this, you've just invented some math, but it never felt like you were pulling things out of thin air. You were just trying to justify what it was that the universe gave you in the first place. You might wonder if you can find other, more general truths about these infinite sums that you just invented. To do so, you look for where you made any arbitrary decisions. For instance, when you were shrinking the distance between your objects, cutting the interval into pieces of size one half, one fourth, one eighth, and so on, you could have chosen a proportion other than one half. You could have instead cut your interval into pieces of size nine tenths and one tenth, and then cut that rightmost piece into the same proportions, giving you smaller pieces of size nine hundredths and one hundredth, then cut that tiny piece of size one hundredth similarly. Continuing on and on, you'd see that nine tenths plus nine hundredths plus nine thousandths, on and on up to infinity, equals 1, a fact more popularly written as 0.9 repeating equals 1. To all of your friends who insist that this doesn't equal 1 and just approaches it, you now just smile, because you know that with infinite sums, to approach and to equal mean the same thing. To be general about it, let's say that you cut your interval into pieces of size p and 1 minus p, where p represents any number between 0 and 1. Cutting the piece of size p in similar proportions, we now get pieces of size p times 1 minus p, and p squared. Continuing in this fashion, always cutting up the rightmost piece into those same proportions, you'll find that 1 minus p, plus p times 1 minus p, plus p squared times 1 minus p, on and on, always adding p to the next power times 1 minus p, equals 1. Dividing both sides by 1 minus p, we get this nice formula. In this formula, the universe has offered us a weird form of nonsense. Even though the way that you discovered it only makes sense for values of p between 0 and 1, the right-hand side still makes sense when you replace p with any other number, except maybe for 1. For instance, plugging in negative 1, the equation reads 1 minus 1 plus 1 minus 1, on and on forever, alternating between the two, equals one half, which feels both pretty silly and kind of like the only thing it could be. Plugging in 2, the equation reads 1 plus 2 plus 4 plus 8, on and on to infinity, equals negative 1, something which doesn't even seem reasonable. On the one hand, rigor would dictate that you ignore these, since the definition of infinite sums doesn't apply in these cases. The list of numbers that you generate by cutting off the sum at finite points doesn't approach anything.
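Written out symbolically, that "nice formula" and the derivation behind it are (this is just a restatement of the argument above, nothing new):

```latex
(1 - p) + p(1 - p) + p^2(1 - p) + p^3(1 - p) + \cdots = 1
\quad\Longrightarrow\quad
1 + p + p^2 + p^3 + \cdots = \frac{1}{1 - p}
```

Plugging p equals 2 into that right-hand side is exactly what produces the monstrous-looking claim that 1 plus 2 plus 4 plus 8, on and on, equals 1 over 1 minus 2, which is negative 1.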
But you're a mathematician, not a robot, so you don't let the fact that something is nonsensical stop you. I will leave the first sum for another day, so that we can jump directly into this monster. First, to clean things up, notice what you get when you cut off the sum at finite points: 1, 3, 7, 15, 31. They're all 1 less than a power of 2. In general, when you add up the first n powers of 2, you get 2 to the n plus 1, minus 1, which this animation hopefully makes clear. You decide to humor the universe and pretend that these numbers, all 1 less than a power of 2, actually do approach negative 1. It will prove to be cleaner if we add 1 to everything and say that the powers of 2 approach 0. Is there any way that this can make sense? In effect, what you're trying to do is make this formula more general by saying that it applies to all numbers, not just those between 0 and 1. Again, to make things more general, you look for any place where you made an arbitrary choice. Here, that place turns out to be very sneaky, so sneaky, in fact, that it took mathematicians until the 20th century to find it. It's the way that we define distance between two rational numbers. That is to say, organizing them on a line might not be the only reasonable way to organize them. The notion of distance is essentially a function that takes in two numbers and outputs a number indicating how far apart they are. You could come up with a completely random notion of distance, where 2 is 7 away from 3, and one half is four fifths away from 100, and all sorts of things. But if you want to actually use a new distance function the way that you use the familiar distance function, it should share some of the same properties. For example, the distance between two numbers shouldn't change if you shift them both by the same amount. So 0 and 4 should be the same distance apart as 1 and 5, or 2 and 6, even if that same distance is something other than 4, as we're used to. Keeping things general, the distance between two numbers shouldn't change if you add the same amount to both of them. Let's call this property shift invariance. There are other properties that you want your notion of distance to have as well, like the triangle inequality, but before we start worrying about those, let's start imagining what notion of distance could possibly make the powers of 2 approach 0, and which is shift invariant. At first, you might toil for a while to find a frame of mind where this doesn't feel like utter nonsense, but with enough time and a bit of luck, you might think to organize your numbers into rooms, sub-rooms, sub-sub-rooms, and so on. You think of 0 as being in the same room as all of the powers of 2 greater than 1, as being in the same sub-room as all powers of 2 greater than 2, as being in the same sub-sub-room as powers of 2 greater than 4, and so on, with infinitely many smaller and smaller rooms. It's pretty hard to draw infinitely many things, so I'm only going to draw four room sizes, but keep in the back of your mind that this process should be able to go on forever. If we think of every number as lying in a hierarchy of rooms, not just 0, shift invariance will tell us where all of the numbers must fall. For instance, 1 should be as far away from 3 as 2 is from 0. Likewise, the distance between 0 and 4 should be the same as that between 1 and 5, 2 and 6, and 3 and 7. Continuing like this, you'll see which rooms, sub-rooms, sub-sub-rooms, and so on, successive numbers must fall into. You can also deduce where negative numbers must fall.
For example, negative 1 has to be in the same room as 1, in the same sub-room as 3, the same sub-sub-room as 7, and so on, always in smaller and smaller rooms with numbers 1 less than a power of 2, because 0 is in smaller and smaller rooms with the powers of 2. So how do you turn this general idea of closeness, based on rooms and sub-rooms, into an actual distance function? You can't take this drawing too literally, since it makes 1 look very close to 14, and 0 very far from 13, even though shift invariance should imply that they're the same distance away. Again, in the actual process of discovery, you might toil away, scribbling through many sheets of paper, but if you have the idea that the only thing that should matter in determining the distance between two objects is the size of the smallest room they share, you might come up with the following. Any numbers lying in different large yellow rooms are distance 1 from each other. Those which are in the same large room, but not in the same orange sub-room, are distance one half from each other. Those that are in the same orange sub-room, but not in the same sub-sub-room, are distance one fourth from each other. You continue like this, using the reciprocals of larger and larger powers of 2 to indicate closeness. We won't do it in this video, but see if you can reason about which rooms other rational numbers, like one third and one half, should fall into, and see if you can prove why this notion of distance satisfies many of the nice properties we expect of a distance function, like the triangle inequality. Here, I'll just say that this notion of distance is a perfectly legitimate one. We call it the 2-adic metric, and it falls into a general family of distance functions called the p-adic metrics, where p stands for any prime number. These metrics give rise to a completely new type of number, neither real nor complex, and they have become a central notion in modern number theory. Using the 2-adic metric, the fact that the sum of all the powers of 2 equals negative 1 actually makes sense, because the numbers 1, 3, 7, 15, 31, and so on genuinely approach negative 1. This parable does not actually portray the historical trajectory of discoveries, but nevertheless, I still think it's a good illustration of a recurring pattern in the discovery of math. First, nature hands you something that's ill-defined, or even nonsensical. Then you define new concepts that make this fuzzy discovery make sense, and these new concepts tend to yield genuinely useful math and broaden your mind about traditional notions. So, in answer to the age-old question of whether math is invention or discovery, my personal belief is that the discovery of non-rigorous truths is what leads us to the construction of rigorous terms that are useful, opening the door for more fuzzy discoveries, continuing the cycle.
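For the curious, here is one way to compute that distance in Python. This is a sketch of my own; it counts factors of 2 in the difference of the two numbers, which is the standard description of the 2-adic absolute value.

```python
from fractions import Fraction

def two_adic_distance(a, b):
    # distance is 2^(-v), where v is the net number of factors of 2 in a - b;
    # it depends only on a - b, so shift invariance holds automatically
    diff = Fraction(a) - Fraction(b)
    if diff == 0:
        return Fraction(0)
    v = 0
    num, den = diff.numerator, diff.denominator
    while num % 2 == 0:   # factors of 2 upstairs make things closer
        num //= 2
        v += 1
    while den % 2 == 0:   # factors of 2 downstairs make things farther
        den //= 2
        v -= 1
    return Fraction(2) ** -v

# The powers of 2 approach 0, so 1, 3, 7, 15, ... genuinely approach -1:
for n in range(1, 6):
    print(two_adic_distance(2**n - 1, -1))   # 1/2, 1/4, 1/8, 1/16, 1/32
```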
Implicit differentiation, what's going on here? | Chapter 6, Essence of calculus
Let me share with you something that I found particularly weird when I was a student first learning calculus. Let's say that you have a circle with radius 5, centered at the origin of the xy-plane. This is something defined with the equation x squared plus y squared equals 5 squared. That is, all of the points on the circle are a distance 5 from the origin, as encapsulated by the Pythagorean theorem, where the sum of the squares of the two legs on this triangle equals the square of the hypotenuse, 5 squared. And suppose that you want to find the slope of a tangent line to the circle, maybe at the point (x, y) equals (3, 4). Now, if you're savvy with geometry, you might already know that this tangent line is perpendicular to the radius touching it at that point. But let's say you don't already know that, or maybe you want a technique that generalizes to curves other than just circles. As with other problems about the slopes of tangent lines to curves, the key thought here is to zoom in close enough that the curve basically looks just like its own tangent line, and then ask about a tiny step along that curve. The y-component of that little step is what you might call dy, and the x-component is a little dx. So the slope that we want is the rise over run, dy divided by dx. But unlike other tangent slope problems in calculus, this curve is not the graph of a function, so we can't just take a simple derivative, asking about the size of some tiny nudge to the output of a function caused by some tiny nudge to the input. x is not an input, and y is not an output; they're both just interdependent values related by some equation. This is what's called an implicit curve. It's just the set of all points (x, y) that satisfy some property written in terms of the two variables x and y. The procedure for how you actually find dy/dx for curves like this is the thing that I found very weird as a calculus student. You take the derivative of both sides, like this: for x squared, you write 2x times dx, and similarly, y squared becomes 2y times dy, and then the derivative of that constant 5 squared on the right is just 0. Now, you can see where this feels a little strange, right? What does it mean to take the derivative of an expression that has multiple variables in it? And why is it that we're tacking on the little dy and the little dx in this way? But if you just blindly move forward with what you get, you can rearrange this equation and find an expression for dy divided by dx, which in this case comes out to be negative x divided by y. So at the point with coordinates (x, y) equals (3, 4), that slope would be negative 3 divided by 4, evidently. This strange process is called implicit differentiation. And don't worry, I have an explanation for how you can interpret taking a derivative of an expression with two variables like this. But first, I want to set aside this particular problem and show how it's connected to a different type of calculus problem, something called a related rates problem. Imagine a 5-meter-long ladder held up against a wall, where the top of the ladder starts 4 meters above the ground, which, by the Pythagorean theorem, means that the bottom is 3 meters away from the wall. And let's say it's slipping down in such a way that the top of the ladder is dropping at a rate of 1 meter per second. The question is, in that initial moment, what's the rate at which the bottom of the ladder is moving away from the wall? It's interesting, right?
That distance from the bottom of the ladder to the wall is 100% determined by the distance from the top of the ladder to the floor. So we should have enough information to figure out how the rates of change for each of those values actually depend on each other. It might not be entirely clear how exactly you relate those two. First things first, it's always nice to give names to the quantities that we care about. So let's label that distance from the top of the ladder to the ground y of t, written as a function of time because it's changing. Likewise, label the distance between the bottom of the ladder and the wall x of t. The key equation that relates these terms is the Pythagorean theorem, x of t squared plus y of t squared equals 5 squared. What makes that a powerful equation to use is that it's true at all points of time. Now one way that you could solve this would be to isolate x of t, and then you figure out what y of t has to be based on that 1 meter per second drop rate, and you could take the derivative of the resulting function, dx dt, the rate at which x is changing with respect to time. And that's fine, it involves a couple layers of using the chain rule, and it'll definitely work for you, but I want to show a different way that you can think about the same problem. This left-hand side of the equation is a function of time, right? It just so happens to equal a constant, meaning the value evidently doesn't change while time passes, but it's still written as an expression dependent on time, which means we can manipulate it like any other function that has t as an input. In particular, we can take a derivative of this left-hand side, which is a way of saying if I let a little bit of time pass, some small dt, which causes y to slightly decrease, and x to slightly increase, how much does this expression change? On the one hand, we know that that derivative should be zero, since the expression is a constant, and constants don't care about your tiny nudges in time, they just remain unchanged. But on the other hand, what do you get when you compute this derivative? Well the derivative of x of t squared is 2 times x of t times the derivative of x. That's the chain rule that I talked about last video. 2x dx represents the size of a change to x squared caused by some change to x, and then we're dividing out by dt. Likewise, the rate at which y of t squared is changing is 2 times y of t times the derivative of y. Now evidently, this whole expression must be zero, and that's an equivalent way of saying that x squared plus y squared must not change while the ladder moves. At the very start, time t equals zero, the height y of t is 4 meters, and that distance x of t is 3 meters, and since the top of the ladder is dropping at a rate of 1 meter per second, that derivative dy dt is negative 1 meters per second. Now this gives us enough information to isolate the derivative dx dt, and when you work it out, it comes out to be 4 thirds meters per second. The reason I bring up this ladder problem is that I want you to compare it to the problem of finding the slope of a tangent line to the circle. In both cases, we had the equation x squared plus y squared equals 5 squared, and in both cases, we ended up taking the derivative of each side of this expression. But for the ladder question, these expressions were functions of time, so taking the derivative has a clear meaning. It's the rate at which the expression changes as time changes. 
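Written out in symbols, the ladder computation is the same chain as before, with everything now depending on time:

```latex
x(t)^2 + y(t)^2 = 5^2
\;\Longrightarrow\;
2\,x(t)\,\frac{dx}{dt} + 2\,y(t)\,\frac{dy}{dt} = 0
\;\Longrightarrow\;
\frac{dx}{dt} = -\frac{y(t)}{x(t)}\,\frac{dy}{dt}
= -\frac{4}{3}\,(-1) = \frac{4}{3}\ \text{m/s}.
```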
But what makes the circle situation strange is that rather than saying that a small amount of time dt has passed, which causes x and y to change, the derivative just has these tiny nudges dx and dy just floating free, not tied to some other common variable, like time. Let me show you a nice way to think about this. Let's give this expression x squared plus y squared a name, maybe s. s is essentially a function of two variables. It takes every point xy on the plane and associates it with a number. For points on this circle, that number happens to be 25. If you stepped off the circle away from the center, that value would be bigger. For other points xy closer to the origin, that value would be smaller. Now what it means to take a derivative of this expression, a derivative of s, is to consider a tiny change to both of these variables, some tiny change dx to x, and some tiny change dy to y. And not necessarily one that keeps you on the circle, by the way, it's just any tiny step in any direction of the xy plane. And from there you ask how much does the value of s change? And that difference, the difference in the value of s before the nudge and after the nudge, is what I'm writing as ds. For example, in this picture, we're starting off at a point where x equals 3 and where y equals 4, and let's just say that that step I drew has dx at negative 0.02 and dy at negative 0.01. Then the decrease in s, the amount that x squared plus y squared changes over that step, would be about 2 times 3 times negative 0.02 plus 2 times 4 times negative 0.01. That's what this derivative expression, 2x dx plus 2y dy, actually means. It's a recipe for telling you how much the value x squared plus y squared changes, as determined by the point xy where you start, and the tiny step dx dy that you take. And as with all things derivative, this is only an approximation, but it's one that gets truer and truer for smaller and smaller choices of dx and dy. The key point here is that when you restrict yourself to steps along the circle, you're essentially saying you want to ensure that this value of s doesn't change. It starts at a value of 25 and you want to keep it at a value of 25. That is, ds should be 0. So setting the expression 2x dx plus 2y dy equal to 0 is the condition under which one of these tiny steps actually stays on the circle. Again, this is only an approximation. Speaking more precisely, that condition is what keeps you on the tangent line of the circle, not the circle itself. But for tiny enough steps, those are essentially the same thing. Of course, there's nothing special about the expression x squared plus y squared equals 5 squared. It's always nice to think through more examples, so let's consider this expression sine of x times y squared equals x. This corresponds to a whole bunch of u-shaped curves on the plane. And those curves, remember, represent all of the points xy where the value of sine of x times y squared happens to equal the value of x. Now imagine taking some tiny step with components dx dy, and not necessarily one that keeps you on the curve. Taking the derivative of each side of this equation is going to tell us how much the value of that side changes during this step. On the left side, the product rule that we talked through last video tells us that this should be left d-right plus right d-left. That is, sine of x times the change to y squared, which is 2y times dy, plus y squared times the change to sine of x, which is cosine of x times dx.
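If you'd like to check that recipe numerically, here is a minimal Python sketch (my own addition, not something from the video) using the same point (3, 4) and the same nudges dx = -0.02, dy = -0.01:

```python
# Compare the actual change in S(x, y) = x^2 + y^2 against the
# linear-approximation recipe dS ≈ 2x·dx + 2y·dy described above.

def S(x, y):
    return x**2 + y**2

x, y = 3.0, 4.0
dx, dy = -0.02, -0.01

actual = S(x + dx, y + dy) - S(x, y)   # exact change in S over the step
approx = 2*x*dx + 2*y*dy               # what the derivative recipe predicts

print(actual)   # ≈ -0.1995
print(approx)   # -0.2
```

The two numbers agree to within the size of the step, and the agreement improves for smaller dx and dy, just as described.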
The right side is simply x, so the size of a change to that value is exactly dx. Now setting these two sides equal to each other is a way of saying whatever your tiny step with coordinates dx and dy is, if it's going to keep us on the curve, the values of both the left hand side and the right hand side must change by the same amount. That's the only way that this top equation can remain true. From there, depending on what problem you're trying to solve, you have something to work with algebraically. And maybe the most common goal is to try to figure out what dy divided by dx is. As a final example here, I want to show you how you can actually use this technique of implicit differentiation to figure out new derivative formulas. I've mentioned that the derivative of e to the x is itself. But what about the derivative of its inverse function, the natural log of x? Well the graph of the natural log of x can be thought of as an implicit curve. It's all of the points xy on the plane, where y happens to equal ln of x. It just happens to be the case that the x's and the y's of this equation aren't as intermingled as they were in our other examples. The slope of this graph, dy divided by dx, should be the derivative of ln of x, right? Well, to find that, first rearrange this equation, y equals ln of x, to be e to the y equals x. This is exactly what the natural log of x means. It's saying e to the what equals x. Since we know the derivative of e to the y, we can take the derivative of both sides here, effectively asking how a tiny step with components dx dy changes the value of each one of these sides. To ensure that a step stays on the curve, the change to this left side of the equation, which is e to the y times dy, must equal the change to the right side, which in this case is just dx. Rearranging, that means that dy divided by dx, the slope of our graph, equals 1 divided by e to the y. And when we're on the curve, e to the y is, by definition, the same thing as x. So evidently, this slope is 1 divided by x. And of course, an expression for the slope of a graph of a function written in terms of x like this is the derivative of that function. So evidently, the derivative of ln of x is 1 divided by x. By the way, all of this is a little sneak peek into multivariable calculus, where you consider functions that have multiple inputs and how they change as you tweak those multiple inputs. The key, as always, is to have a clear image in your head of what tiny nudges are at play and how exactly they depend on each other. Next up, I'm going to be talking about limits and how they're used to formalize the idea of a derivative.
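That final derivation is compact enough to write as one chain:

```latex
y = \ln(x)
\;\Longleftrightarrow\;
e^{y} = x
\;\Longrightarrow\;
e^{y}\,dy = dx
\;\Longrightarrow\;
\frac{dy}{dx} = \frac{1}{e^{y}} = \frac{1}{x}.
```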
Integration and the fundamental theorem of calculus | Chapter 8, Essence of calculus
This guy, Grothendieck, is somewhat of a mathematical idol to me. And I just love this quote, don't you? Too often in math, we dive into showing that a certain fact is true with a long series of formulas before stepping back and making sure that it feels reasonable and preferably obvious, at least at an intuitive level. In this video, I want to talk about integrals, and the thing that I want to become almost obvious is that they are an inverse of derivatives. Here we're just going to focus on one example, which is a kind of dual to the example of a moving car that I talked about in chapter 2 of the series, introducing derivatives. Then in the next video, we're going to see how this same idea generalizes, but to a couple other contexts. Imagine that you're sitting in a car, and you can't see out the window. All you see is the speedometer. At some point, the car starts moving, speeds up, and then slows back down to a stop, all over the course of 8 seconds. The question is, is there a nice way to figure out how far you've traveled during that time based only on your view of the speedometer? Or better yet, can you find a distance function, s of t, that tells you how far you've traveled after a given amount of time, t, somewhere between 0 and 8 seconds? Let's say that you take note of the velocity at every second, and you make a plot over time that looks something like this. And maybe you find that a nice function to model that velocity over time in meters per second is v of t equals t times 8 minus t. You might remember, in chapter 2 of the series we were looking at the opposite situation, where you knew what a distance function was, s of t, and you wanted to figure out the velocity function from that. There, I showed how the derivative of a distance versus time function gives you a velocity versus time function. So in our current situation, where all we know is velocity, it should make sense that finding a distance-versus-time function is going to come down to asking what function has a derivative of t times 8 minus t. This is often described as finding the antiderivative of a function, and indeed, that's what we'll end up doing, and you could even pause right now and try that. But first, I want to spend the bulk of this video showing how this question is related to finding the area bounded by the velocity graph, because that helps to build an intuition for a whole class of problems, things called integral problems in math and science. To start off, notice that this question would be a lot easier if the car was just moving at a constant velocity, right? In that case, you could just multiply the velocity in meters per second times the amount of time that has passed in seconds, and that would give you the number of meters traveled. And notice, you can visualize that product, that distance, as an area. And if visualizing distance as area seems kind of weird, I'm right there with you. It's just that on this plot, where the horizontal direction has units of seconds, and the vertical direction has units of meters per second, units of area very naturally correspond to meters. But what makes our situation hard is that velocity is not constant. It's incessantly changing at every single instant. It would even be a lot easier if it only ever changed at a handful of points, maybe staying static for the first second, and then suddenly discontinuously jumping to a constant seven meters per second for the next second, and so on, with discontinuous jumps to portions of constant velocity.
That would make it uncomfortable for the driver, in fact, it's actually physically impossible, but it would make your calculations a lot more straightforward. You could just compute the distance traveled on each interval by multiplying the constant velocity on that interval by the change in time, and then just add all of those up. So what we're going to do is approximate the velocity function as if it was constant on a bunch of intervals. And then, as is common in calculus, we'll see how refining that approximation leads us to something more precise. Here, let's make this a little more concrete by throwing in some numbers. Chop up the time axis between 0 and 8 seconds into many small intervals, each with some little width dt, something like 0.25 seconds. Now consider one of those intervals, like the one between t equals 1 and 1.25. In reality, the car speeds up from 7 meters per second to about 8.4 meters per second during that time, and you could find those numbers just by plugging in t equals 1 and t equals 1.25 to the equation for velocity. What we want to do is approximate the car's motion as if its velocity was constant on that interval. Again, the reason for doing that is we just don't really know how to handle situations other than constant velocity ones. You could choose this constant to be anything between 7 and 8.4. It actually doesn't matter. All that matters is that our sequence of approximations, whatever they are, gets better and better as dt gets smaller and smaller. Treating this car's journey as a bunch of discontinuous jumps in speed between portions of constant velocity becomes a less and less wrong reflection of reality as we decrease the time between those jumps. So for convenience, on an interval like this, let's just approximate the speed with whatever the true car's velocity is at the start of that interval, the height of the graph above the left side, which in this case is 7. So in this example, according to our approximation, the car moves 7 meters per second times 0.25 seconds. That's 1.75 meters, and it's nicely visualized as the area of this thin rectangle. In truth, that's a little under the real distance traveled, but not by much. And the same goes for every other interval. The approximated distance is v of t times dt. It's just that you'd be plugging in a different value for t at each one of these, giving a different height for each rectangle. I'm going to write out an expression for the sum of the areas of all those rectangles in kind of a funny way. Take this symbol here, which looks like a stretched S for sum, and then put a 0 at its bottom and an 8 at its top, to indicate that we'll be ranging over time steps between 0 and 8 seconds. And as I said, the amount we're adding up at each time step is v of t times dt. Two things are implicit in this notation. First of all, that value dt plays two separate roles. Not only is it a factor in each quantity that we're adding up, it also indicates the spacing between each sampled time step. So when you make dt smaller and smaller, even though it decreases the area of each rectangle, it increases the total number of rectangles whose areas we're adding up. Because if they're thinner, it takes more of them to fill that space. And second, the reason we don't use the usual sigma notation to indicate a sum is that this expression is technically not any particular sum for any particular choice of dt. It's meant to express whatever that sum approaches as dt approaches 0.
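To see that limiting process in action, here is a small Python sketch (an illustration of mine, not part of the video) computing the left-endpoint sum described above for a few choices of dt:

```python
# Left-endpoint rectangle sum for v(t) = t(8 - t) on [0, 8].
# As dt shrinks, the sum approaches the exact area, 256/3 ≈ 85.33 meters.

def v(t):
    return t * (8 - t)

for dt in (0.25, 0.01, 0.0001):
    steps = int(8 / dt)
    distance = sum(v(i * dt) * dt for i in range(steps))
    print(dt, round(distance, 4))

# dt = 0.25   ->  85.25
# dt = 0.01   ->  85.3332
# dt = 0.0001 ->  85.3333
```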
And as you can see, what that approaches is the area bounded by this curve and the horizontal axis. Remember, smaller choices of dt indicate closer approximations for the original question, how far does the car actually go? So this limiting value for the sum, the area under this curve, gives us the precise answer to the question in full, unapproximated precision. Now tell me that's not surprising. We had this pretty complicated idea of approximations that can involve adding up a huge number of very tiny things. And yet, the value that those approximations approach can be described so simply. It's just the area underneath this curve. This expression is called an integral of v of t, since it brings all of its values together. It integrates them. Now at this point, you could say, how does this help? You've just reframed one hard question, finding how far the car has traveled, into an equally hard problem, finding the area between this graph and the horizontal axis. And you'd be right. If the velocity-distance duo was the only thing that we cared about, most of this video, with all of the area under a curve nonsense, would be a waste of time. We could just skip straight ahead to finding an antiderivative. But finding the area between a function's graph and the horizontal axis is somewhat of a common language for many disparate problems that can be broken down and approximated as the sum of a large number of small things. You'll see more in the next video, but for now, I'll just say in the abstract that understanding how to interpret and how to compute the area under a graph is a very general problem-solving tool. In fact, the first video of this series already covered the basics of how this works. But now that we have more of a background with derivatives, we can actually take this idea to its completion. For our velocity example, think of this right endpoint as a variable, capital T. So we're thinking of this integral of the velocity function between 0 and T, the area under this curve between those inputs, as a function where the upper bound is the variable. That area represents the distance the car has traveled after T seconds, right? So in reality, this is a distance versus time function, s of T. Now ask yourself, what is the derivative of that function? On the one hand, a tiny change in distance over a tiny change in time, that's velocity, that is what velocity means. But there's another way to see this, purely in terms of this graph and this area, which generalizes a lot better to other integral problems. A slight nudge of dT to the input causes that area to increase, some little ds represented by the area of this sliver. The height of that sliver is the height of the graph at that point, v of T, and its width is dT. And for small enough dT, we can basically consider that sliver to be a rectangle, so this little bit of added area, ds, is approximately equal to v of T times dT. And because that's an approximation that gets better and better for smaller dT, the derivative of that area function, ds dT, at this point, equals v of T, the value of the velocity function at whatever time we started on. And that right there, that's a super general argument. The derivative of any function giving the area under a graph like this is equal to the function for the graph itself. So if our velocity function is t times 8 minus t, what should s be? What function of t has a derivative of t times 8 minus t?
It's easier to see if we expand this out, writing it as 8t minus t squared, and then we can just take each part one at a time. What function has a derivative of 8 times t? Well, we know that the derivative of t squared is 2t, so if we just scale that up by a factor of 4, we can see that the derivative of 4t squared is 8t. And for that second part, what kind of function do you think might have negative t squared as a derivative? Well, using the power rule again, we know that the derivative of a cubic term, t cubed, gives us a square term, 3t squared. So if we just scale that down by a third, the derivative of 1 third t cubed is exactly t squared. And then making that negative, you'd see that negative 1 third t cubed has a derivative of negative t squared. Therefore, the antiderivative of our function, 8t minus t squared, is 4t squared minus 1 third t cubed. But there's a slight issue here. We could add any constant we want to this function, and its derivative is still going to be 8t minus t squared. The derivative of a constant just always goes to 0. And if you were to graph s of t, you could think of this in the sense that moving a graph of a distance function up and down does nothing to affect its slope at every input. So in reality, there's actually infinitely many different possible antiderivative functions, and every one of them looks like 4t squared minus 1 third t cubed plus c, for some constant c. But there is one piece of information that we haven't used yet that's going to let us zero in on which antiderivative to use: the lower bound of the integral. This integral has to be 0 when we drag that right endpoint all the way to the left endpoint, right? Since the distance traveled by the car between 0 seconds and 0 seconds is, well, 0. So as we found, the area as a function of capital T is an antiderivative for the stuff inside. And to choose what constant to add to this expression, what you do is subtract off the value of that antiderivative function at the lower bound. If you think about it for a moment, that ensures that the integral from the lower bound to itself will indeed be 0. As it so happens, when you evaluate the function we have right here at t equals 0, you get 0. So in this specific case, you actually don't need to subtract anything off. For example, the total distance traveled during the full eight seconds is this expression evaluated at t equals 8, which is about 85.33, minus 0. So the answer as a whole is just 85.33. But a more typical example would be something like the integral between 1 and 7. That's the area pictured right here, and it represents the distance traveled between 1 second and 7 seconds. What you do is evaluate the antiderivative we found at the top bound, 7, and then subtract off its value at that bottom bound, 1. And notice, by the way, it doesn't matter which antiderivative we chose here. If for some reason it had a constant added to it, like 5, that constant would just cancel out. More generally, any time you want to integrate some function, and remember, you think of that as adding up values f of x times dx for inputs in a certain range, and then asking what that sum approaches as dx approaches 0, the first step to evaluating that integral is to find an antiderivative, some other function, capital F, whose derivative is the thing inside the integral. Then the integral equals this antiderivative evaluated at the top bound, minus its value at the bottom bound. And this fact, right here, that you're staring at, is the fundamental theorem of calculus.
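That evaluation between 1 and 7, written out with the antiderivative found above:

```latex
\int_{1}^{7} t\,(8 - t)\,dt
= \Big[\,4t^{2} - \tfrac{1}{3}t^{3}\,\Big]_{1}^{7}
= \Big(4\cdot 49 - \tfrac{343}{3}\Big) - \Big(4 - \tfrac{1}{3}\Big)
= \tfrac{245}{3} - \tfrac{11}{3}
= 78.
```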
And I want you to appreciate something kind of crazy about this fact. The integral, the limiting value for the sum of all of these thin rectangles, takes into account every single input on the continuum from the lower bound to the upper bound. That's why we use the word integrate. It brings them all together. And yet, to actually compute it using an antiderivative, you only look at two inputs, the top bound and the bottom bound. It almost feels like cheating. Using the antiderivative implicitly accounts for all of the information needed to add up the values between those two bounds. That's just crazy to me. This idea is deep, and there's a lot packed into this whole concept. So let's just recap everything that just happened, shall we? We wanted to figure out how far a car goes just by looking at the speedometer. And what makes that hard is that velocity is always changing. If you approximate velocity to be constant on multiple different intervals, you could figure out how far the car goes on each interval just with multiplication, and then add all of those up. Better and better approximations for the original problem correspond to collections of rectangles whose aggregate area is closer and closer to being the area under this curve between the start time and the end time. So that area under the curve is actually the precise distance traveled for the true, nowhere-constant velocity function. If you think of that area as a function itself, with a variable right endpoint, you can deduce that the derivative of that area function must equal the height of the graph at every point. And that's really the key right there. It means that to find a function giving this area, you ask, what function has v of t as a derivative? There are actually infinitely many antiderivatives of a given function, since you can always just add some constant without affecting the derivative. So you account for that by subtracting off the value of whatever antiderivative function you choose at the bottom bound. By the way, one important thing to bring up before we leave is the idea of negative area. What if the velocity function was negative at some point, meaning the car goes backwards? It's still true that a tiny distance traveled, ds, on a little time interval is about equal to the velocity at that time multiplied by the tiny change in time. It's just that the number you'd plug in for velocity would be negative, so the tiny change in distance is negative. In terms of our thin rectangles, if a rectangle goes below the horizontal axis like this, its area represents a bit of distance traveled backwards. So if what you want in the end is to find a distance between the car's start point and its end point, this is something you're going to want to subtract. And that's generally true of integrals. Whenever a graph dips below the horizontal axis, the area between that portion of the graph and the horizontal axis is counted as negative. And what you'll commonly hear is that integrals don't measure area per se, they measure the signed area between the graph and the horizontal axis. Next up, I'm going to bring up more context where this idea of an integral and area under curves comes up, along with some other intuitions for this fundamental theorem of calculus. Maybe you remember, chapter 2 of this series, introducing the derivative, was sponsored by the Art of Problem Solving.
So I think there's something elegant to the fact that this video, which is kind of a dual to that one, was also supported in part by the Art of Problem Solving. I really can't imagine a better sponsor for this channel, because it's a company whose books and whose courses I recommend to people anyway. They were highly influential to me when I was a student developing a love for creative math. So if you're a parent looking to foster your own child's love for the subject, or if you're a student who wants to see what math has to offer beyond rote schoolwork, I cannot recommend the Art of Problem Solving enough. Whether that's their newest development to build the right intuitions in elementary school kids, called Beast Academy, or their courses in higher level topics and contest preparation, going to aops.com slash 3b1b, or clicking on the link in the description, lets them know that you came from this channel, which may encourage them to support future projects like this one. I consider these videos a success not when they teach people a particular bit of math, which can only ever be a drop in the ocean, but when they encourage people to go and explore that expanse for themselves. And the Art of Problem Solving is among the few great places to actually do that exploration.
But what is a partial differential equation? | DE2
After seeing how we think about ordinary differential equations in chapter 1, we turn now to an example of a partial differential equation, the heat equation. To set things up, imagine you have some object like a piece of metal, and you know how the heat is distributed across it at any one moment, that is, what's the temperature of every individual point along this plate. The question is, how will this distribution change over time, as the heat flows from warmer spots to cooler ones? The image on the left shows the temperature of an example plate using color, with the graph of that temperature being shown on the right. To take a concrete one-dimensional example, let's say you have two different rods at different temperatures, where that temperature is uniform along each one. You know that when you bring them into contact, the heat will flow from the hot one to the cool one, tending to make the whole thing equal over time. But how exactly? What will the temperature distribution be at each point in time? As is typical with differential equations, the idea is that it's easier to describe how this setup changes from moment to moment than it is to jump straight to a description of the full evolution. We write this rule of change in the language of derivatives, though as you'll see, we'll need to expand our vocabulary a bit beyond ordinary derivatives. And don't worry, we'll learn how to read the equations you're seeing now in just a minute. Variations of the heat equation show up in many other parts of math and physics, like Brownian motion, the Black-Scholes equation from finance, and all sorts of diffusion. So there are many dividends to be had from a deep understanding of this one setup. In the last video, we looked at ways of building understanding while acknowledging the truth that most differential equations are simply too difficult to actually solve. And indeed, PDEs tend to be even harder than ODEs, largely because they involve modeling infinitely many values changing in concert. But our main character for today is an equation that we actually can solve. In fact, if you've ever heard of Fourier series, you may be interested to know that this is the physical problem which baby-faced Fourier over here was trying to solve when he stumbled across the corner of math that is now so replete with his name. We'll dig into Fourier series much more deeply in the next chapter, but I would like to give you at least a little hint of the beautiful connection which is to come. This animation you're seeing right now shows how lots of little rotating vectors, each rotating at some constant integer frequency, can trace out an arbitrary shape. To be clear, what's happening is that these vectors are being added together, tip to tail, at each moment. And you might imagine that the last one has some sort of pencil at its tip, tracing a path as it goes. For finitely many vectors, this tracing usually won't be a perfect replica of the target shape, which in this animation is a lowercase f. But the more circles you include, the closer it gets. What you're seeing now uses only 100 circles, and I think you'd agree that the deviations from the real shape are negligible. What's mind-blowing is that just by tweaking the initial size and angle of each vector, that gives you enough control to approximate any curve that you want. At first, this might seem like an idle curiosity, a neat art project but little more.
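For readers who want to poke at the rotating-vector idea in code, here is a minimal numpy sketch of the same construction. Everything in it is an illustrative stand-in: the target curve below is a crude placeholder list of sample points, not the lowercase f from the animation, and the choice of 101 vectors is arbitrary.

```python
import numpy as np

# Sample a curve as complex numbers, compute the coefficient of each
# rotating vector, then re-trace the curve by summing the vectors
# tip to tail at each moment in time.

N = 1000
t = np.linspace(0, 1, N, endpoint=False)
target = np.where(t < 0.5, 1 + 1j * t, -1 + 1j * (t - 0.5))  # placeholder shape

freqs = range(-50, 51)  # 101 rotating vectors, frequencies -50..50
coeffs = {n: np.mean(target * np.exp(-2j * np.pi * n * t)) for n in freqs}

# Each vector keeps a constant length |c_n| and rotates at integer
# frequency n; their tip-to-tail sum approximately re-traces the curve.
approx = sum(c * np.exp(2j * np.pi * n * t) for n, c in coeffs.items())
print(np.abs(approx - target).mean())  # small, and shrinks with more vectors
```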
In fact, the math that makes this possible is the same as the math describing the physics of heat flow. But we're getting ahead of ourselves. Step one is simply to build up the heat equation. And for that, let's start by being clear about what the function we're analyzing is exactly. We have a rod in one dimension, and we're thinking of it as sitting on an x-axis, so each point of that rod is labeled with a unique number, x. The temperature is some function of that position, T of x, shown here as a graph above it. But really, since the value changes over time, we should think of this function as having one more input, t, for time. You could, if you wanted, think of this input space as really being two-dimensional, representing space and time together, with the temperature being graphed as a surface above it, each slice across time showing you what that distribution looks like at any given moment. Or you could simply think of this graph of temperature changing with time. Both are equivalent. This surface is not to be confused with what I was showing earlier, the temperature graph of a two-dimensional body. Be mindful when you're studying equations like these of whether time is being represented with its own axis, or if it's being represented with literal changes over time, say in an animation. Last chapter, we looked at some systems where just a handful of numbers changed over time, like the angle and angular velocity of a pendulum, describing that change in the language of derivatives. But when we have an entire function changing with time, the mathematical tools become slightly more intricate. Because we're thinking of this temperature function with multiple dimensions to its input space, in this case one for space and one for time, there are multiple different rates of change at play. There's the derivative with respect to x, how rapidly the temperature changes as you move along the rod. You might think of this as the slope of our surface when you slice it parallel to the x-axis; or, given a tiny step in the x direction and the tiny change to temperature caused by it, this gives a ratio between the two. But there's also the rate at which a single point on the rod changes with time, what you might think of as the slope of the surface when you slice it in the other direction, parallel to the time axis. Each one of these derivatives tells only part of the story for how this temperature function changes, so we call them partial derivatives. To emphasize this point, the notation changes a little, replacing the letter d with a special curly d, sometimes called del. Personally, I think it's a little silly to change the notation for this, since it's essentially the same operation. I would rather see notation that emphasizes that the del-T terms up in the numerators refer to different changes. One is a small change to temperature after a small change in time; the other is a small change to temperature after a small step in space. To reiterate a point I made in the calculus series, I do think it's healthy to initially read derivatives like this as a literal ratio between a small change to the function's output and the small change to the input that caused it. Just keep in mind that what this notation is meant to encode is the limit of that ratio for smaller and smaller nudges to the input, rather than a specific value of the ratio for a finitely small nudge. This goes for partial derivatives just as much as it does for ordinary derivatives.
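In symbols, the two rates of change just described are limits of those ratios:

```latex
\frac{\partial T}{\partial x}(x, t)
= \lim_{\Delta x \to 0} \frac{T(x + \Delta x,\; t) - T(x,\; t)}{\Delta x},
\qquad
\frac{\partial T}{\partial t}(x, t)
= \lim_{\Delta t \to 0} \frac{T(x,\; t + \Delta t) - T(x,\; t)}{\Delta t}.
```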
The heat equation is written in terms of these partial derivatives. It tells us that the way this function changes with respect to time depends on how it changes with respect to space. More specifically, it's proportional to the second partial derivative with respect to x. At a high level, the intuition is that at points where the temperature distribution curves, it tends to change more quickly in the direction of that curvature. Since a rule like this is written using partial derivatives, we call it a partial differential equation. This has the funny result that, to an outsider, the name sounds like a tamer version of ordinary differential equations, when, quite to the contrary, partial differential equations tend to tell a much richer story than ODEs and are much harder to solve in general. The general heat equation applies to bodies in any number of dimensions, which would mean more inputs to our temperature function, but it'll be easiest for us to stay focused on the one-dimensional case of a rod. As it is, graphing this in a way which gives time its own axis already pushes our visuals into the third dimension. So, I threw out this equation, but where does this come from? How could you think up something like this yourself? Well, for that, let's simplify things by describing a discrete version of the setup, where you have only finitely many points x in a row. This is sort of like working in a pixelated universe, where instead of having a continuum of temperatures, we have a finite set of separate values. The intuition here is simple. For a particular point, if its two neighbors on either side are on average hotter than it is, it will heat up. If they're cooler on average, it'll cool down. Here, specifically focus on these three neighboring points, x1, x2, and x3, with corresponding temperatures T1, T2, and T3. What we want to compare is the average of T1 and T3 with the value of T2. When this difference is greater than zero, T2 will tend to heat up. And the bigger the difference, the faster it heats up. Likewise, if it's negative, T2 will tend to cool down, at a rate proportional to that difference. More formally, we write that the derivative of T2 with respect to time is proportional to the difference between this average value of its neighbors and its own value. Alpha here is simply a proportionality constant. To write this in a way which will ultimately explain the second derivative in the heat equation, let me rearrange this right-hand side a bit, in terms of the difference between T1 and T2, and the difference between T2 and T3. You can quickly check that these two are the same. The top has half of T1, and in the bottom there are two minus signs in front of the T1, so it's positive, and the half has been factored out. Likewise, both have half of T3. Then, on the bottom, we have a negative T2 that's effectively written twice, so when you take half of that, it's the same as the single negative T2 written up top. Like I said, the reason to rewrite it is that it takes us a step closer to the language of derivatives. In fact, let's go ahead and write these guys as delta T1 and delta T2. It's the same value on the right-hand side, but we're adding a new perspective to how to think about it. Instead of comparing the average of the neighbors to T2, we're thinking about the difference of the differences. Here, take a moment to gut-check that this makes sense. If those two differences are the same, then the average of T1 and T3 is the same as T2, so T2 will not tend to change.
If delta T2 is bigger than delta T1, meaning the difference of the differences is positive, notice how the average of T1 and T3 is bigger than T2, so T2 tends to increase. And on the flip side, if the difference of the differences is negative, which means delta T2 is smaller than delta T1, it corresponds to an average of these neighbors being less than T2. We could be especially compact with our notation and write this whole term, the difference between the differences, as delta delta T1. This is known in the lingo as a second difference. If it feels a little weird to think about, keep in mind, it's essentially a compact way of writing the idea of how much T2 differs from the average of its neighbors. It just has this extra factor of 1 half, is all. And that factor doesn't really matter, because either way, we're writing this equation in terms of some proportionality constant. The upshot is that the rate of change for the temperature of a point is proportional to the second difference around it. As we go from this finite context to the infinite continuous case, the analog of a second difference is the second derivative. Instead of looking at the difference between the temperature values at points some fixed distance apart, you instead consider what happens as you shrink the size of that step toward zero. And in calculus, instead of talking about absolute differences, which would also approach zero, you think in terms of the rate of change. In this case, what's the rate of change in temperature per unit distance? And remember, there are two separate rates of change at play. How does that temperature change as time progresses? And how does the temperature change as you move along the rod? The core intuition remains the same as what we had in the discrete case. To know how a point differs from its neighbors, look not just at how the function changes from one point to the next, but at how the rate of change itself changes. Now, in calculus land, we write this as del squared T over del x squared, the second partial derivative of our function with respect to x. Notice how the slope increases at points where the graph curves upwards, meaning the rate of change of the rate of change is positive. Similarly, that slope decreases at points where the graph curves downwards, where the rate of change of this rate of change is negative. Tuck that away as a meaningful intuition for problems well beyond the heat equation. Second derivatives give a measure of how a value compares to the average of its neighbors. Hopefully, that gives some satisfying added color to the equation. It's already pretty intuitive when you read it as saying that curved points tend to flatten out, but I think there's something even more satisfying about seeing a partial differential equation like this arise almost mechanistically from thinking about each point as simply tending towards the average of its neighbors. Take a moment to compare what this feels like to the case of ordinary differential equations. For example, if we have multiple bodies in space tugging at each other with gravity, what we're analyzing is a handful of changing numbers, in this case the coordinates of each object. The rate of change for any one of these values depends on the values of the other numbers, and we often write this down as a system of equations. On the left, we have the derivative of each value with respect to time, and on the right, there's some combination of all the other values.
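The discrete rule described above is simple enough to simulate directly. Here is a bare-bones Python sketch (mine, with illustrative step sizes and alpha rather than values from the video): each interior point moves toward the average of its neighbors, at a rate set by the second difference.

```python
import numpy as np

alpha, dx, dt = 1.0, 0.1, 0.004           # dt < dx²/(2·alpha) keeps this stable
T = np.concatenate([np.full(50, 90.0),    # hot rod...
                    np.full(50, 10.0)])   # ...in contact with a cold one

for _ in range(2000):
    second_diff = T[:-2] - 2 * T[1:-1] + T[2:]   # T1 - 2·T2 + T3, vectorized
    T[1:-1] += alpha * dt / dx**2 * second_diff  # move toward neighbors' average
    T[0], T[-1] = T[1], T[-2]                    # insulated ends: no heat escapes

print(T[::10].round(1))  # the jump at the junction smooths out toward 50 degrees
```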
In our partial differential equation, the difference is that we have infinitely many values changing across a continuum. And again, the way that any one of these values changes depends on the other values. But quite helpfully, each one only depends on its immediate neighbors, in some limiting sense of the word neighbor. So here, the relation on the right-hand side is not a sum or product of the other numbers. It's instead a kind of derivative, just a derivative with respect to space instead of with respect to time. In a sense, when you think about it, this one partial differential equation is like a system of infinitely many equations, one for each point on the rod. You might wonder about objects which are spread out in more than one dimension, like a plate, or something three-dimensional. In that case, the equation looks quite similar, but you include the second derivative with respect to the other spatial directions as well. And adding up all of the second spatial derivatives like this is common enough as an operation that it has its own special name, the Laplacian, often written as this upside-down triangle squared. It's essentially a multivariable version of the second derivative, and the intuition for this equation is no different from the one-dimensional case. This Laplacian can still be thought of as measuring how different is a point from the average of its neighbors. But now, these neighbors aren't just left and right of it, they are all around. For the curious among you, I did a couple of videos during my time at Khan Academy on this operator, if you want to go check them out. For those of you with some multivariable calculus under your belt, it's nice to think about as the divergence of the gradient. But you don't have to worry about that. For our purposes, let's stay focused on the one-dimensional case. If you feel like you understand all of this, pat yourself on the back. Being able to read a PDE is no joke, and it's a powerful addition to have to your vocabulary for describing the world around you. But, after all of this time spent interpreting the equations, I say it's high time that we start solving them, don't you? And trust me, there are few pieces of math quite as satisfying as what poodle-haired Fourier over here developed to solve this problem. All this and more in the next chapter. I was originally motivated to cover this particular topic when I got an early view of Steve Strogatz's new book, Infinite Powers. This isn't a sponsored message or anything like that, but all cards on the table, I do have two selfish ulterior motives for mentioning it. The first is that Steve has been a really strong, maybe even pivotal advocate for the channel since the very beginnings, and I've had an itch to repay that kindness for quite a while. And the second is to make more people love math, and calculus in particular. That might not sound selfish, but think about it. When more people love math, the potential audience base for these videos gets bigger. And frankly, there are few better ways to get people loving the subject than to expose them to Strogatz's writing. So if you have friends who you think would enjoy the ideas of calculus, but maybe have been a bit intimidated by math in the past, this book does a really outstanding job communicating the heart of the subject, both substantively and accessibly.
Its main theme is the idea of constructing solutions to complex real-world problems from simple idealized building blocks, which, as you'll see, is exactly what Fourier did. And for those of you who already know and love the subject, you will find no shortage of fresh insights and enlightening stories. I certainly enjoyed it. Again, I kind of know that sounds like an ad, but it's not. I just actually think you'll enjoy the book.
Nonsquare matrices as transformations between dimensions | Chapter 8, Essence of linear algebra
Hey everyone, I've got another quick footnote for you between chapters today. When I've talked about linear transformations so far, I've only really talked about transformations from 2D vectors to other 2D vectors, represented with 2-by-2 matrices, or from 3D vectors to other 3D vectors, represented with 3-by-3 matrices. But several commenters have asked about non-square matrices, so I thought I'd take a moment to just show what those mean geometrically. By now in the series, you actually have most of the background you need to start pondering a question like this on your own, but I'll start talking through it, just to give a little mental momentum. It's perfectly reasonable to talk about transformations between dimensions, such as one that takes 2D vectors to 3D vectors. Again, what makes one of these linear is that grid lines remain parallel and evenly spaced, and that the origin maps to the origin. What I have pictured here is the input space on the left, which is just 2D space, and the output of the transformation shown on the right. The reason I'm not showing the inputs move over to the outputs, like I usually do, is not just animation laziness. It's worth emphasizing that 2D vector inputs are very different animals from these 3D vector outputs, living in a completely separate, unconnected space. Encoding one of these transformations with a matrix is really just the same thing as what we've done before. You look at where each basis vector lands, and write the coordinates of the landing spots as the columns of a matrix. For example, what you're looking at here is an output of a transformation that takes i-hat to the coordinates 2-1-2, and j-hat to the coordinates 0-1-1. Notice, this means the matrix encoding our transformation has 3 rows and 2 columns, which, to use standard terminology, makes it a 3x2 matrix. In the language of last video, the column space of this matrix, the place where all the vectors land, is a 2D plane slicing through the origin of 3D space. But the matrix is still full rank, since the number of dimensions in this column space is the same as the number of dimensions of the input space. So if you see a 3x2 matrix out in the wild, you can know that it has the geometric interpretation of mapping 2 dimensions to 3 dimensions, since the 2 columns indicate that the input space has 2 basis vectors, and the 3 rows indicate that the landing spots for each of those basis vectors is described with 3 separate coordinates. Likewise, if you see a 2x3 matrix, with 2 rows and 3 columns, what do you think that means? Well, the 3 columns indicate that you're starting in a space that has 3 basis vectors, so we're starting in 3 dimensions. And the 2 rows indicate that the landing spot for each of those 3 basis vectors is described with only 2 coordinates, so they must be landing in 2 dimensions. So it's a transformation from 3D space onto the 2D plane. A transformation that should feel very uncomfortable if you imagine going through it. You could also have a transformation from 2 dimensions to 1 dimension. One-dimensional space is really just the number line, so a transformation like this takes in 2D vectors and spits out numbers. Thinking about grid lines remaining parallel and evenly spaced is a little bit messy, due to all of the squishification happening here. So in this case, the visual understanding for what linearity means is that if you have a line of evenly spaced dots, it would remain evenly spaced once they're mapped onto the number line.
One of these transformations is encoded with a 1x2 matrix, each of whose two columns has just a single entry. The two columns represent where the basis vectors land, and each one of those columns requires just one number, the number that that basis vector landed on. This is actually a surprisingly meaningful type of transformation, with close ties to the dot product, and I'll be talking about that in the next video. Until then, I encourage you to play around with this idea on your own, contemplating the meanings of things like matrix multiplication and linear systems of equations in the context of transformations between different dimensions. Have fun!
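If you like checking these interpretations in code, the array shapes alone tell the story. A minimal numpy sketch, with made-up entries chosen purely for illustration:

```python
import numpy as np

A = np.array([[ 2, 0],
              [-1, 1],
              [-2, 1]])     # 3x2: columns say where i-hat and j-hat land in 3D
v = np.array([3, 2])        # a 2D input vector

print(A @ v)                        # a 3D output: [ 6 -1 -4]
print(3 * A[:, 0] + 2 * A[:, 1])    # same thing: scale-and-add the columns

L = np.array([[1, 2]])      # 1x2: takes 2D vectors to the number line
print(L @ v)                # a single number (in a length-1 array): [7]
```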
Three-dimensional linear transformations | Chapter 5, Essence of linear algebra
Hey folks, I've got a relatively quick video for you today, just sort of a footnote between chapters. In the last two videos, I talked about linear transformations and matrices, but I only showed the specific case of transformations that take two-dimensional vectors to other two-dimensional vectors. In general throughout the series, we'll work mainly in two dimensions, mostly because it's easier to actually see on the screen and wrap your mind around. But more importantly than that, once you get all the core ideas in two dimensions, they carry over pretty seamlessly to higher dimensions. Nevertheless, it's good to peek our heads outside of flatland now and then to, you know, see what it means to apply these ideas in more than just those two dimensions. For example, consider a linear transformation with three-dimensional vectors as inputs and three-dimensional vectors as outputs. We can visualize this by smushing around all the points in three-dimensional space, as represented by a grid, in such a way that keeps the grid lines parallel and evenly spaced, and which fixes the origin in place. And just as with two dimensions, every point of space that we see moving around is really just a proxy for a vector that has its tip at that point, and what we're really doing is thinking about input vectors moving over to their corresponding outputs. And just as with two dimensions, one of these transformations is completely described by where the basis vectors go. But now, there are three standard basis vectors that we typically use: the unit vector in the x direction, i-hat, the unit vector in the y direction, j-hat, and a new guy, the unit vector in the z direction, called k-hat. In fact, I think it's easier to think about these transformations by only following those basis vectors, since the full 3D grid representing all points can get kind of messy. By leaving a copy of the original axes in the background, we can think about the coordinates of where each of these three basis vectors lands. Record the coordinates of these three vectors as the columns of a three-by-three matrix. This gives a matrix that completely describes the transformation using only nine numbers. As a simple example, consider the transformation that rotates space 90 degrees around the y-axis. So that would mean that it takes i-hat to the coordinates 0, 0, negative 1 on the z-axis. It doesn't move j-hat, so it stays at the coordinates 0, 1, 0. And then k-hat moves over to the x-axis at 1, 0, 0. Those three sets of coordinates become the columns of a matrix that describes that rotation transformation. To see where a vector with coordinates x, y, z lands, the reasoning is almost identical to what it was for two dimensions. Each of those coordinates can be thought of as instructions for how to scale each basis vector so that they add together to get your vector. And the important part, just like the 2D case, is that this scaling and adding process works both before and after the transformation. So to see where your vector lands, you multiply those coordinates by the corresponding columns of the matrix, and then you add together the three results. Multiplying two matrices is also similar. Whenever you see two three-by-three matrices getting multiplied together, you should imagine first applying the transformation encoded by the right one, then applying the transformation encoded by the left one. It turns out that 3D matrix multiplication is actually pretty important for fields like computer graphics and robotics.
That's because things like rotations in three dimensions can be pretty hard to describe, but they're easier to wrap your mind around if you can break them down as the composition of separate, easier-to-think-about rotations. Doing this matrix multiplication numerically is, once again, pretty similar to the 2D case. In fact, a good way to test your understanding of the last video would be to try to reason through what specifically this matrix multiplication should look like, thinking closely about how it relates to the idea of applying two successive transformations in space. In the next video, I'll start getting into the determinant.
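As a quick numerical companion, here is the 90-degree rotation around the y-axis from above, written as a numpy matrix, plus a second rotation of my own choosing to illustrate composition:

```python
import numpy as np

R_y = np.array([[ 0, 0, 1],
                [ 0, 1, 0],
                [-1, 0, 0]])   # columns: where i-hat, j-hat, k-hat land

print(R_y @ np.array([1, 0, 0]))   # i-hat -> [ 0  0 -1], onto the z-axis
print(R_y @ np.array([0, 0, 1]))   # k-hat -> [1 0 0], onto the x-axis

# Composition reads right to left: apply R_y first, then R_x.
R_x = np.array([[1, 0,  0],
                [0, 0, -1],
                [0, 1,  0]])       # a 90-degree rotation around the x-axis
composed = R_x @ R_y
print(composed @ np.array([1, 0, 0]))  # i-hat -> [0 1 0]
```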
Pure Fourier series animation montage
Differential equations, a tourist's guide | DE1
Taking a quote from Steven Strogatz: since Newton, mankind has come to realize that the laws of physics are always expressed in the language of differential equations. Of course, this language is spoken well beyond the boundaries of physics as well, and being able to speak it and read it adds a new color to how you view the world around you. In the next few videos, I want to give a sort of tour of this topic. The aim is to give a big-picture view of what this piece of math is all about, while at the same time being happy to dig into the details of specific examples as they come along. I'll be assuming you know the basics of calculus, like what derivatives and integrals are, and in later videos we'll need some basic linear algebra, but not too much beyond that. Differential equations arise whenever it's easier to describe change than absolute amounts. It's easier to say why population sizes, for example, grow or shrink than it is to describe why they have the particular values they do at some point in time. It may be easier to describe why your love for someone is changing than why it happens to be where it is now. In physics, more specifically Newtonian mechanics, motion is often described in terms of force, and force determines acceleration, which is a statement about change. These equations come in two different flavors: ordinary differential equations, or ODEs, involving functions with a single input, often thought of as time, and partial differential equations, or PDEs, dealing with functions that have multiple inputs. Partial differential equations are something we'll be looking at more closely in the next video. You often think of them as involving a whole continuum of values changing with time, like the temperature at every point of a solid body, or the velocity of a fluid at every point in space. Ordinary differential equations, our focus for now, involve only a finite collection of values changing with time. And it doesn't have to be time, per se; your one independent variable could be something else, but things changing with time are the prototypical and most common example of differential equations. Physics offers a nice playground for us here, with simple examples to start with, and no shortage of intricacy and nuance as we delve deeper. As a nice warm-up, consider the trajectory of something you throw in the air. The force of gravity near the surface of Earth causes things to accelerate downward at 9.8 meters per second per second. Now unpack what that's really saying. It means if you look at that object free from other forces and record its velocity at every second, these velocity vectors will accrue an additional downward component of 9.8 meters per second every second. We call this constant 9.8 g, for gravity. This is enough to give us an example of a differential equation, albeit a relatively simple one. Focus on the y coordinate as a function of time. Its derivative gives the vertical component of velocity, whose derivative in turn gives the vertical component of acceleration. For compactness, let's write that first derivative as y-dot, and that second derivative as y-double-dot. Our equation says that y-double-dot is equal to negative g, a simple constant. This is one we can solve by integrating, which is essentially working the question backwards. First, to find velocity, you ask, what function has negative g as a derivative? Well, it's negative g times t, or more specifically, negative g t plus the initial velocity.
Things get more interesting when the forces acting on a body depend on where that body is. For example, studying the motion of planets, stars, and moons, gravity can no longer be considered a constant. Given two bodies, the pull on one of them is in the direction of the other, with a strength inversely proportional to the square of the distance between them. As always, the rate of change of position is velocity. But now, the rate of change of velocity, acceleration, is some function of position. So you have this dance between two mutually interacting variables, reminiscent of the dance between the two moving bodies which they describe. This is reflective of the fact that, often in differential equations, the puzzles you face involve finding a function whose derivative and/or higher order derivatives are defined in terms of the function itself. In physics, it's most common to work with second-order differential equations, which means the highest derivative you find in this expression is a second derivative. Higher order differential equations would be ones involving third derivatives, fourth derivatives, and so on. Puzzles with more intricate clues. The sensation you get when really meditating on one of these equations is one of solving an infinite continuous jigsaw puzzle. In a sense, you have to find infinitely many numbers, one for each point in time t. But they're constrained by a very specific way that these values intertwine with their own rate of change, and the rate of change of that rate of change. To get a feel for what studying these can look like, I want you to take some time digging into a deceptively simple example, a pendulum. How does this angle theta, that it makes with the vertical, change as a function of time? This is often given as an example in introductory physics classes of harmonic motion, meaning it oscillates like a sine wave. More specifically, one with a period of 2 pi times the square root of L over g, where L is the length of the pendulum, and g is the strength of gravity. However, these formulas are actually lies, or rather approximations which only work in the realm of small angles. If you were to go and measure an actual pendulum, what you'd find is that as you pull it out farther, the period is longer than what the high school physics formulas would suggest. And when you pull it out really far, this value of theta plotted versus time doesn't even look like a sine wave anymore. To understand what's really going on, first things first, let's set up the differential equation. We'll measure the position of the pendulum's weight as a distance x along this arc, and if the angle theta that we care about is measured in radians, we can write x as L times theta, where L is the length of the pendulum.
As usual, gravity pulls down with an acceleration of g, but because the pendulum constrains the motion of this mass, we have to look at the component of this acceleration in the direction of motion. A little geometry exercise for you is to show that this little angle here is the same as theta. So the component of gravity in the direction of motion, opposite this angle, will be negative g times sine of theta. Here we're considering theta to be positive when the pendulum is swung to the right, and negative when it's swung to the left. And this minus sign in the acceleration indicates that it's always pointed in the opposite direction from displacement. So what we have is that the second derivative of x, the acceleration, is negative g times sine of theta. As always, it's nice to do a quick gut check that our formula makes physical sense. When theta is zero, sine of zero is zero, so there's no acceleration in the direction of movement. When theta is 90 degrees, sine of theta is 1, so the acceleration is the same as it would be for freefall. Alright, that checks out. And because x is L times theta, that means the second derivative of theta is negative g over L times sine of theta. To be a little more realistic, let's add in a term to account for the air resistance, which maybe we model as being proportional to the velocity. We'll write this as negative mu times theta dot, where mu is some constant that encapsulates all of the air resistance and friction and such, which determines how quickly the pendulum loses energy.
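For reference, here is that equation written out symbolically, just restating the prose above (theta the angle, g gravity, L the pendulum's length, mu the air resistance constant):

$$\ddot{\theta}(t) = -\mu\,\dot{\theta}(t) - \frac{g}{L}\,\sin\big(\theta(t)\big)$$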
Now, this, my friends, is a particularly juicy differential equation. It's not easy to solve, but it's not so hard that we can't reasonably get some meaningful understanding out of it. At first glance, you might think that the sine function you see here relates to the sine wave pattern for the pendulum. Ironically, though, what you'll eventually find is that the opposite is true. The presence of the sine in this equation is precisely why real pendulums don't oscillate with a sine wave pattern. If that sounds odd, consider the fact that here, the sine function is taking theta as an input, but in the approximate solution you might see in a physics class, theta itself is oscillating as the output of a sine function. Clearly, something fishy is afoot. Something that I like about this example is that even though it's comparatively simple, it exposes an important truth about differential equations that you need to grapple with. They're really freaking hard to solve. In this case, if we remove that dampening term, we can just barely write down an analytic solution, but it's hilariously complicated. It involves all these functions you've probably never heard of, written in terms of integrals and weird inverse integral problems. And when you step back, presumably the reason for finding a solution is to then be able to make computations, and to build an understanding for whatever dynamics you're studying. In this case, those questions have just been punted off to figuring out how to compute and, more importantly, understand these new functions. And more often, like if we add back in that dampening term, there is not a known way to write down an exact analytic solution. Well, I mean, for any hard problem you could just define a new function to be the answer of that problem, heck, even name it after yourself if you want. But again, that's pointless unless it leads you to being able to make computations and to build understanding. So instead, in the study of differential equations, we often do a sort of short circuit and skip the actual solution part, since it's unattainable, and go straight to building understanding and making computations from the equations alone. Let me walk through what that might look like with the pendulum. What do you hold in your head, or what visualization can you get some software to pull up for you, to understand the many possible ways that a pendulum governed by these laws might evolve depending on its starting conditions? You might be tempted to try imagining the graph of theta versus t, and somehow interpreting how its slope, position, and curvature all interrelate with each other. However, what will turn out to be both easier and more general is to start by visualizing all possible states in a two-dimensional plane. What I mean by the state of the pendulum is that you can describe it with two numbers, the angle and the angular velocity. You can freely change either one of those two values without necessarily changing the other, but the acceleration is purely a function of those two values. So each point of this two-dimensional plane fully describes the pendulum at any given moment. You might think of these as all possible initial conditions of that pendulum. If you know the initial angle and the initial angular velocity, that's enough to predict how the system will evolve as time moves forward. If you haven't worked with them before, these sorts of diagrams can take a little getting used to. What you're looking at now, this inward spiral, is a fairly typical trajectory for our pendulum. Take a moment to think carefully about what is being represented. Notice how at the start, as theta decreases, theta dot, the y-coordinate, gets more negative, which makes sense, because the pendulum moves faster in the leftward direction as it approaches the bottom. Keep in mind, even though the velocity vector on this pendulum is pointed to the left, the value of that velocity is always being represented by the vertical component of our space. It's important to remind yourself that this state space is an abstract thing, and it's distinct from the physical space where the pendulum itself lives and moves. Since we're modeling this as losing some of its energy to air resistance, this trajectory spirals inward, meaning the peak velocity and peak displacement each go down a bit with each swing. Our point is, in a sense, attracted to the origin, where theta and theta dot both equal zero. With this space, we can visualize a differential equation as a vector field. Here, let me show you what I mean. The pendulum state is a vector, theta, theta dot. Maybe you think of that as an arrow from the origin, or maybe you think of it as a point. What matters is that it has two coordinates, each a function of time. Taking the derivative of that vector gives you its rate of change, the direction and speed that it will tend to move in this diagram. That derivative is a new vector, theta dot, theta double dot, which we visualize as being attached to the relevant point in space. Now take a moment to interpret what this is saying. The first component for this rate of change vector is theta dot, which is also a coordinate in our space. So the higher up we are in the diagram, the more the point tends to move to the right, and the lower we are, the more it tends to move to the left. The vertical component is theta double dot, which our differential equation lets us rewrite entirely in terms of theta and theta dot itself.
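As a rough illustration of what drawing such a vector field involves, here is a minimal matplotlib sketch (my own reconstruction, not the animation software used here; the constants are arbitrary choices for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

g, L, mu = 9.8, 2.0, 0.3  # illustrative constants

def state_derivative(theta, theta_dot):
    # First component: the rate of change of theta is theta_dot itself.
    # Second component: theta double dot, from the pendulum equation.
    theta_ddot = -mu * theta_dot - (g / L) * np.sin(theta)
    return theta_dot, theta_ddot

# Sample a grid of states (theta, theta_dot) and attach the
# rate-of-change vector to each point.
theta, theta_dot = np.meshgrid(np.linspace(-2 * np.pi, 2 * np.pi, 25),
                               np.linspace(-4.0, 4.0, 25))
d_theta, d_theta_dot = state_derivative(theta, theta_dot)

# Color by magnitude; quiver's automatic scaling plays the role of
# artificially shrinking the vectors to prevent clutter.
magnitude = np.hypot(d_theta, d_theta_dot)
plt.quiver(theta, theta_dot, d_theta, d_theta_dot, magnitude)
plt.xlabel("theta")
plt.ylabel("theta dot")
plt.show()
```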
In other words, the first derivative of our state vector is some function of that vector itself, with most of the intricacy tied up in that second coordinate. Doing the same at all points of this space will show how that state tends to change from any position. As is typical with vector fields, we artificially scale down the vectors when we draw them to prevent clutter, but use color to loosely indicate magnitude. Notice, we've effectively broken up a single second order equation into a system of two first order equations. You might even give theta dot a different name, to emphasize that we're really thinking of two separate values, intertwined via this mutual effect they have on one another's rate of change. This is a common trick in the study of differential equations. Instead of thinking about higher order changes of a single value, we often prefer to think of the first derivative of vector values. In this form, we have a wonderful visual way to think about what solving the equation means. As our system evolves from some initial state, our point in this space will move along some trajectory in such a way that at every moment, the velocity of that point matches the vector from this field. Again, keep in mind, this velocity is not the same thing as the physical velocity of the pendulum. It's a more abstract rate of change, encoding the rates of change for both theta and theta dot. You might find it fun to pause for a moment and think through what exactly some of these trajectory lines say about the possible ways that the pendulum evolves from different starting conditions. For example, in regions where theta dot is quite high, the vectors guide the point to travel to the right quite a ways before settling down into an inward spiral. This corresponds to a pendulum with a high enough initial velocity that it fully rotates around several times before settling into a decaying back and forth. To have a little more fun, when I tweak this air resistance term, mu, say increasing it, you can immediately see how this will result in trajectories that spiral inward faster, which is to say the pendulum slows down faster. That's obvious when I call it the air resistance term, but imagine that you saw these equations out of context, not knowing that they describe the pendulum. It's not obvious just looking at them that increasing this value of mu means that the system as a whole tends towards some attracting state faster. So getting some software to draw these vector fields for you can be a great way to build an intuition for how they behave. What's wonderful is that any system of ordinary differential equations can be described by a vector field like this, so it's a very general way to get a feel for them. Usually, though, they have many more dimensions. For example, consider the famous three-body problem, which is to predict how three masses in three-dimensional space evolve if they act on each other with gravity, and if you know their initial positions and velocities. Each mass has three coordinates describing its position, and three more describing its momentum. So the system has 18 degrees of freedom in total, and hence an 18-dimensional space of possible states. It's a bizarre thought, isn't it? A single point meandering through an 18-dimensional space that we cannot visualize, obediently taking steps through time based on whatever vector it happens to be sitting on from moment to moment, completely encoding the positions and the momenta of the three masses we see in ordinary, physical, 3D space.
In practice, by the way, you can reduce the number of dimensions here by taking advantage of the symmetries of your setup, but the point that more degrees of freedom result in higher dimensional state spaces remains the same. In math, we often call a space like this a phase space. You'll hear me use that term broadly for spaces encoding all kinds of states of changing systems, but you should know that in the context of physics, especially Hamiltonian mechanics, the term is often reserved for a more special case, namely a space whose axes represent position and momentum. So a physicist would agree that the 18-dimensional space describing the three-body problem is a phase space, but they might ask that we make a couple of modifications to our pendulum setup for it to properly deserve the term. For those of you who just watched the block collision video, the planes we worked with there would happily be called phase spaces by math folk, though a physicist might prefer other terminology. Just know that the specific meaning may depend on your context. It may seem like a simple idea, depending on how well indoctrinated you are into modern ways of thinking about math, but it's worth keeping in mind that it took humanity quite a while to really embrace thinking of dynamics spatially like this, especially when the dimensions get very large. In his book Chaos, the author James Gleick describes phase space as one of the most powerful inventions of modern science. One reason it's powerful is that you can ask questions not just about a single initial condition, but about a whole spectrum of initial states. The collection of all possible trajectories is reminiscent of a moving fluid, so we call it phase flow. To take one example of why phase flow is a fruitful idea, consider the question of stability. The origin of our space corresponds to the pendulum standing still, and so does this point over here, representing when the pendulum is perfectly balanced upright. These are the so-called fixed points of our system, and one natural question to ask is whether or not they're stable. That is, will tiny nudges to the system result in a state that tends back towards that fixed point, or away from it? Physical intuition for the pendulum makes the answer here kind of obvious, but how would you think about stability just looking at the equations, say, if they arose in some completely different, less intuitive context? We'll go over how to compute the answers to questions like this in following videos, and the intuition for the relevant computations is guided heavily by the thought of looking at small regions in space around a fixed point, and asking whether the flow tends to contract or expand. And speaking of attraction and stability, let's take a brief side step to talk about love. The Strogatz quote that I mentioned earlier comes from a whimsical column in the New York Times on the mathematics of modeling affection, an example well worth pilfering to illustrate that we're not just talking about physics here. Imagine you've been flirting with someone, but there's been some frustrating inconsistency to how mutual your affection seems. And perhaps, during a moment when you turn your attention towards physics to keep your mind off the romantic turmoil, mulling over the pendulum equations, you suddenly understand the on-again-off-again dynamics of your flirtation. You've noticed that your own affection tends to increase when your companion seems interested in you, but decrease when they seem colder.
That is, the rate of change for your love is proportional to their feelings for you. But this sweetheart of yours is precisely the opposite, strangely attracted to you when you seem uninterested, but turned off once you seem too keen. The phase space for these equations looks very similar to the center part of your pendulum diagram. The two of you will go back and forth between affection and repulsion in an endless cycle. A metaphor of pendulum swings in your feelings would not just be apt, but mathematically verified. In fact, if your partner's feelings were further slowed when they feel themselves getting too in love, let's say out of a fear of being made vulnerable, we'd have a term matching the friction in the pendulum, and you two would be destined to an inward spiral towards mutual ambivalence. I hear wedding bells already. The point is that two very different seeming laws of dynamics, one from physics involving a single variable, and another from chemistry with two variables, actually have a very similar structure, easier to recognize when you're looking at the phase diagram. Most notably, even though the equations are different, for example, there's no sine function in the romance equations, the phase space exposes an underlying similarity nevertheless. In other words, you're not just studying a pendulum right now. The tactics you develop to study one case have a tendency to transfer to many others. Okay, so phase diagrams are a nice way to build understanding. But what about actually computing the answer to our equation? Well, one way to do this is to essentially simulate what the universe would do, but using finite time steps instead of the infinitesimals and limits defining calculus. The basic idea is that if you're at some point in this phase diagram, take a step based on the vector that you're sitting on for a small time step, delta t. Specifically, take a step equal to delta t times that vector. As a reminder, in drawing these vector fields, the magnitude for each vector has been artificially scaled down to prevent clutter. When you do this repeatedly, your final location will be an approximation of theta of t, where t is the sum of all of those time steps. If you think about what's being shown right now, though, and what that would imply for the pendulum's movement, you'd probably agree that this is grossly inaccurate. But that's only because the time step delta t of 0.5 is way too big. If we turned it down, say to 0.01, you can get a much more accurate approximation. It just takes more repeated steps is all. In this case, computing theta of 10 requires 1,000 little steps. Luckily, we live in a world with computers, so repeating a simple task a thousand times is as simple as articulating that task with a programming language. In fact, let's finish things off by writing a little Python program that computes theta of t for us. What it has to do is make use of the differential equation, which returns the second derivative of theta as a function of theta and theta dot. You start off by defining two variables, theta and theta dot, each in terms of some initial conditions. In this case, I'll have theta start at pi thirds, which is 60 degrees, and theta dot start at 0. Next, write a loop that corresponds to taking many little time steps between 0 and your time t, each of size delta t, which I'm setting here to be 0.01.
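Here's one way that little program might look, a minimal sketch of the loop being described (the variable names are my own, and the next paragraph walks through the loop body):

```python
import math

# Constants: standard gravity, a pendulum length, and some air
# resistance constant mu (the specific values here are illustrative).
g = 9.8
L = 2
mu = 0.1

# Initial conditions: theta starts at pi/3 (60 degrees), theta dot at 0.
THETA_0 = math.pi / 3
THETA_DOT_0 = 0

def get_theta_double_dot(theta, theta_dot):
    # The differential equation: theta double dot as a function of
    # theta and theta dot.
    return -mu * theta_dot - (g / L) * math.sin(theta)

def theta_at(t, delta_t=0.01):
    theta, theta_dot = THETA_0, THETA_DOT_0
    # Take many little time steps of size delta_t between 0 and t.
    for _ in range(int(t / delta_t)):
        theta_double_dot = get_theta_double_dot(theta, theta_dot)
        theta += theta_dot * delta_t
        theta_dot += theta_double_dot * delta_t
    return theta

print(theta_at(10))  # computing theta(10) takes 10 / 0.01 = 1,000 steps
```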
In each step of this loop, increase theta by theta dot times delta T, and increase theta dot by theta double dot times delta T, where theta double dot can be computed based on the differential equation. After all these little time steps, simply return the value of theta. This is called solving a differential equation numerically. Numerical methods can get way more sophisticated and intricate than this to better balance the trade-off between accuracy and efficiency, but this loop gives the basic idea. So even though it sucks that we can't always find exact solutions, there are still meaningful ways to study differential equations in the face of this inability. In the following videos, we will look at several methods for finding exact solutions when it's possible. But one theme I'd like to focus on is how these exact solutions can also help us to study the more general, unsolvable cases. But it gets worse. Just as there's a limit to how far exact analytic solutions can get us, one of the great fields to have emerged in the last century, chaos theory, has exposed that there are further limits on how well we can use these systems for prediction with or without solutions. Specifically, we know that for some systems, small variations to the initial conditions, say the kind due to necessarily imperfect measurements, result in wildly different trajectories. We've even built some good understanding for why this happens. The three-body problem, for example, is known to have seeds of chaos within it. So looking back at the quote from earlier, it seems almost cruel of the universe to fill its language with riddles that we either can't solve, or where we know that any solution would be useless for long-term prediction anyway. It is cruel, but then again, it should also be reassuring. It gives some hope that the complexity we see in the world around us could be studied somewhere in this math, and that it's not hidden away in the mismatch between model and reality.
Other math channels you'd enjoy
This is almost surreal to say, but the channel recently passed 1 million subscribers. And I know what you're thinking, just 48,576 more to go before the next big milestone. And indeed, I'll hold off proper celebrations until then, but you know, seeing that seventh digit really does make you reflect on what led to this point. And if I'm being honest, a lot of the growth for this channel just had to do with some very kind people sharing and promoting the content, both in terms of certain specific shout outs from creators with big audiences, and in terms of individuals just sharing with their friends. So what I want to do now is take a moment to let you all know about a few other math creators online who I think you would like a lot. There's just so much good creation that happens under a lot of people's radars. First up, we have the channel Think Twice. I honestly have no idea why this channel isn't better known among online math communities. It contains these really beautifully done math animations, and if you like this channel, you're definitely going to like it. You absolutely get the sense, looking at any one of these, that it comes from someone who thinks very visually about math. Naturally, this often takes the form of some very pretty geometry, or some algebraic results that are explained geometrically. And the focus tends to be a little bit more on short form, bite-sized pieces, so if you ever want a quick little reminder of why math is beautiful, this is just a great channel to pop over to. Here, let me just let a couple more of these animations play out. Next up, I'd recommend you check out LeiosOS, run by James Schloss. What I like about James is that he seems to think first and foremost about community, and creating content seems to be more of a vehicle for bringing people together online. Let me turn things over to him to explain what it's all about, both on and off YouTube. LeiosOS is fundamentally a channel about algorithms, and works with the Arcane Algorithm Archive, an open source and collaborative effort to document every algorithm in existence, in every language possible. We're just getting started, and it's obviously an impossible goal, but that's half the fun. This means that we cover a range of different topics, and here are some clips of the videos. Any light that enters a lens leaves on the other side of the lens with the same angle it came in with. It's almost as if the light never entered the lens at all, and hence the name, the invisible lens. If we take any arbitrary matrix, say the identity matrix with zeros along the w dimension, the rotation doesn't look quite right. This is because most tesseract depictions use stereographic projections, which resemble the act of holding a light source behind an object and checking its shadow against the screen. Here we divide our points into smaller packages and use the Graham scan to find their convex hulls. We then take the vertices from the hulls and plug them into the Jarvis march to create a wrapped gift of wrapped gifts. This is of course just a small subset of the types of videos on our channel. Quickly, I would like to thank Grant for making such amazing videos on 3Blue1Brown. It's inspiring to see such a large and dedicated community attempting to understand the true beauty of mathematics. You guys are honestly amazing, so even if I don't see you guys again, please keep being such an awesome community. For my part, I think it would be awesome to see this algorithm archive grow more.
Both of the last two, at least on YouTube, have shorter form content. So next, let's switch to a channel with more of a long form focus, Welch Labs, run by Stephen Welch. Now, I suspect many of you already know about Stephen's work, for example the excellent series Imaginary Numbers Are Real. But there are many other great playlists from the channel that you should absolutely take a look at. In fact, one of the things I like most about this channel is that Stephen thinks in terms of series, rather than in terms of individual videos. And each one of them includes clear points where the viewer can engage with the materials more than just through a passive viewing experience, often including workbooks and associated PDFs. So the focus really is on learning, more so than just entertainment. To get a feel here, let me just play some snippets from the start and then from the end of his Imaginary Numbers Are Real series, which should give you a pretty good feel for the style and the scope. And I'll also throw in a snippet from his series on machine learning. Algebraically, this new dimension has everything to do with a problem that was considered impossible for over 2,000 years: the square root of negative 1. When we include this missing dimension in our analysis, our parabola gets way more interesting. Now that our input numbers are in their full two-dimensional form, we can see how our function x squared plus 1 really behaves. Our function does cross the x-axis. We were just looking in the wrong dimension. So why is this extra dimension that numbers possess not common knowledge? One of the reasons is that it has been given a terrible, terrible name. A name that suggests that these numbers aren't even real. Alright, ready? We'll draw the same exact paths on w, and see how they show up on our remodeled surface as they are mapped to our z-plane. So why does our green path start out in one location on our z-plane, only to end up in another? Only because our green path leads us to the other layer of our surface. From the perspective of our w plane, it appears that we've returned exactly to our starting point. But actually, we haven't. The w plane is just a projection, a shadow. In reality, our path has led us to a completely different branch of our function. This is a decision tree. Right now it's learning. When it's done, it'll have learned to do something very human. Something that everyone knows how to do, but no one can quite explain how they do it. Something that has fooled generations of brilliant scientists with its apparent simplicity, but practical complexity. Something that has only become possible in the last few decades thanks to creative solutions in the face of huge complexity. It's learning to see. Let's test it out. With the help of a camera and a computer, our decision tree sees exactly how many fingers we're holding up. Now, depending on who you are, the fact that a machine can perform this task may be completely mind-blowing, not all that out of the ordinary, or may not even be all that impressive. The good news here is, whichever camp you're in, you're in good company. Our finger counting machine is a simple example that belongs to a deceptively complex class of problems. Problems that arise from thinking about how we think. And finally, turning away from the YouTube world, I want to highlight the blog Infinity Plus One.
The tone of the writing and the associated visuals are both delightfully playful, but the author, James Dilts, manages to get in some pretty substantive math, all while keeping it accessible. A particularly good sequence of posts is the one he did on relativity, both special and general. It's a little hard to directly feature text posts in video form, but I'll leave you some links in the description to some of my favorite posts, and of course the description also has links for the other three creators mentioned here.
Solving Wordle using information theory
The game Wordle has gone pretty viral in the last month or two, and never one to overlook an opportunity for a math lesson, it occurs to me that this game makes for a very good central example in a lesson about information theory, and in particular a topic known as entropy. You see, like a lot of people I got kind of sucked into the puzzle, and like a lot of programmers, I also got sucked into trying to write an algorithm that would play the game as optimally as it could. And what I thought I'd do here is just talk through with you some of my process in that, and explain some of the math that went into it, since the whole algorithm centers on this idea of entropy. First things first, in case you haven't heard of it, what is Wordle? And to kill two birds with one stone here, while we go through the rules of the game, let me also preview where we're going with this, which is to develop a little algorithm that will basically play the game for us. I haven't done today's Wordle, this is February 4th, and we'll see how the bot does. The goal of Wordle is to guess a mystery 5 letter word, and you're given 6 different chances to guess. For example, my Wordle bot suggests that I start with the guess crane. Each time that you make a guess, you get some information about how close your guess is to the true answer. Here the gray box is telling me there's no C in the actual answer. The yellow box is telling me there is an R, but it's not in that position. The green box is telling me that the secret word does have an A, and it's in the third position, and then there's no N and there's no E. So let me just go in and tell the Wordle bot that information: we started with crane, and we got gray, yellow, green, gray, gray. Don't worry about all the data that it's showing right now, I'll explain that in due time, but its top suggestion for our second pick is shtik. And your guess does have to be an actual 5 letter word, but as you'll see, it's pretty liberal with what it will actually let you guess. In this case, we try shtik. And alright, things are looking pretty good. We hit the S and the H, so we know the first three letters, and we know that there's an R. So it's going to be like sha-something-r, or sha-r-something, and it looks like the Wordle bot knows that it's down to just two possibilities, either shard or sharp. And it's kind of a toss up between them at this point, so I guess, probably just because it's alphabetical, it goes with shard, which turns out to be the actual answer, so we got it in three. If you're wondering if that's any good, the way I heard one person phrase it is that with Wordle, 4 is par and 3 is birdie, which I think is a pretty apt analogy. You have to be consistently on your game to be getting 4, and it's certainly not crazy, but when you get it in 3, it just feels great. So if you're down for it, what I'd like to do here is just talk through my thought process from the beginning for how I approached the Wordle bot, and like I said, really it's an excuse for an information theory lesson. The main goal is to explain what is information, and what is entropy. My first thought in approaching this was to take a look at the relative frequencies of different letters in the English language. So I thought, okay, is there an opening guess or an opening pair of guesses that hits a lot of these most frequent letters? And one that I was pretty fond of was opening with other, followed by nails.
The thought is that if you hit a letter, you know, you get a green or a yellow, that always feels good, it feels like you're getting information. But in these cases, even if you don't hit and you always get grays, that's still giving you a lot of information, since it's pretty rare to find a word that doesn't have any of these letters. But even still, that doesn't feel super systematic, because for example, it does nothing to consider the order of the letters. Why type nails when I could type snail? Is it better to have that S at the end? I'm not really sure. Now, a friend of mine said that he liked to open with the word weary, which kind of surprised me, because it has some uncommon letters in there, like the W and the Y. But who knows, maybe that is a better opener. Is there some kind of quantitative score that we can give to judge the quality of a potential guess? Now, to set up for the way that we're going to rank possible guesses, let's go back and add a little clarity to how exactly the game is set up. So there's a list of words that it will allow you to enter, that are considered valid guesses, that's just about 13,000 words long. But when you look at it, there's a lot of really uncommon things, things like aahed or aalii, the kind of words that bring about family arguments in a game of Scrabble. But the vibe of the game is that the answer is always going to be a decently common word, and in fact, there's another list of around 2,300 words that are the possible answers. And this is a human curated list, I think specifically by the game creator's girlfriend, which is kind of fun. But what I would like to do, our challenge for this project, is to see if we can write a program solving Wordle that doesn't incorporate previous knowledge about this list. For one thing, there's plenty of pretty common five letter words that you won't find in that list, so it would be better to write a program that's a little more resilient and would play Wordle against anyone, not just what happens to be on the official website. And also, the reason that we know what this list of possible answers is, is that it's visible in the source code, but the way that it's visible in the source code is in the specific order in which answers come up from day to day. So you could always just look up what tomorrow's answer will be. So clearly there's some sense in which using the list is cheating. And what makes for a more interesting puzzle and a richer information theory lesson is to instead use some more universal data, like relative word frequencies in general, to capture this intuition of having a preference for more common words. So of these 13,000 possibilities, how should we choose the opening guess? For example, if my friend proposes weary, how should we analyze its quality? Well, the reason he said he likes that unlikely W is that he likes the long shot nature of just how good it feels if you do hit that W. For example, if the first pattern revealed was something like this, then it turns out there are only 58 words in this giant lexicon that match that pattern. So that's a huge reduction from 13,000. But the flip side of that, of course, is that it's very uncommon to get a pattern like this. Specifically, if each word was equally likely to be the answer, the probability of hitting this pattern would be 58 divided by around 13,000. Of course, they're not equally likely to be answers. Most of these are very obscure and even questionable words.
But at least for our first pass at all of this, let's assume that they're all equally likely, and then refine that a bit later. The point is, a pattern with a lot of information is, by its very nature, unlikely to occur. In fact, what it means to be informative is that it's unlikely. A much more probable pattern to see with this opening would be something like this, where of course there's not a W in it, maybe there's an E, and maybe there's no A, there's no R, there's no Y. In this case, there are 1,400 possible matches. But if all were equally likely, it works out to be a probability of about 11% that this is the pattern you would see. So the most likely outcomes are also the least informative. To get a more global view here, let me show you the full distribution of probabilities across all of the different patterns that you might see. So each bar that you're looking at corresponds to a possible pattern of colors that could be revealed, of which there are three to the fifth possibilities, and they're organized from left to right, most common to least common. So the most common possibility here is that you get all grays. That happens about 14% of the time. And what you're hoping for when you make a guess is that you end up somewhere out in this long tail, like over here, where there's only 18 possibilities for what matches this pattern, that evidently look like this. Or if we venture a little farther to the left, you know, maybe we go all the way over here. Okay, here's a good puzzle for you. What are the three words in the English language that start with a W, end with a Y, and have an R somewhere in them? Turns out, the answers are, let's see: wordy, wormy, and wryly. So to judge how good this word is overall, we want some kind of measure of the expected amount of information that you're going to get from this distribution. If we go through each pattern, and we multiply its probability of occurring times something that measures how informative it is, that can maybe give us an objective score. Now, your first instinct for what that something should be might be the number of matches. You know, you want a lower average number of matches. But instead, I'd like to use a more universal measurement that we often ascribe to information, and one that will be more flexible once we have a different probability assigned to each of these 13,000 words for whether or not they're actually the answer. The standard unit of information is the bit, which has a little bit of a funny formula, but it's really intuitive if we just look at examples. If you have an observation that cuts your space of possibilities in half, we say that it has one bit of information. In our example, the space of possibilities is all possible words, and it turns out about half of the five letter words have an S, a little less than that, but about half. So that observation would give you one bit of information. If instead a new fact chops down that space of possibilities by a factor of 4, we say that it has two bits of information. For example, it turns out about a quarter of these words have a T. If the observation cuts that space by a factor of 8, we say it's three bits of information, and so on and so forth: four bits cuts it into a 16th, five bits cuts it into a 32nd. So now is when you might want to take a moment and pause and ask yourself, what is the formula for information, for the number of bits, in terms of the probability of an occurrence?
Well, what we're saying here is basically that when you take one half to the number of bits, that's the same thing as the probability, which is the same thing as saying two to the power of the number of bits is one over the probability, which rearranges further to saying the information is the log, base two, of one divided by the probability. And sometimes you see this with one more rearrangement still, where the information is the negative log, base two, of the probability. Expressed like this, it can look a little bit weird to the uninitiated, but it really is just the very intuitive idea of asking how many times you've cut down your possibilities in half. Now, if you're wondering, you know, I thought we were just playing a fun word game, why are logarithms entering the picture? One reason this is a nicer unit is that it's just a lot easier to talk about very unlikely events. It's much easier to say that an observation has 20 bits of information than it is to say that the probability of such and such occurring is 0.00000095. But a more substantive reason that this logarithmic expression turned out to be a very useful addition to the theory of probability is the way that information adds together. For example, if one observation gives you two bits of information, cutting your space down by four, and then a second observation, like your second guess in Wordle, gives you another three bits of information, chopping you down further by another factor of eight, the two together give you five bits of information. In the same way that probabilities like to multiply, information likes to add. So as soon as we're in the realm of something like an expected value, where we're adding a bunch of numbers up, the logs make it a lot nicer to deal with. Let's go back to our distribution for weary, and add another little tracker on here showing us how much information there is for each pattern. The main thing I want you to notice is that the higher the probability, as we get to those more likely patterns, the lower the information, the fewer bits you gain. The way we measure the quality of this guess will be to take the expected value of this information. We go through each pattern, we ask how probable it is, and then we multiply that by how many bits of information we get. And in the example of weary, that turns out to be 4.9 bits. So on average, the information you get from this opening guess is as good as chopping your space of possibilities in half about five times. By contrast, an example of a guess with a higher expected information value would be something like slate. In this case, you'll notice the distribution looks a lot flatter. In particular, the most probable occurrence of all grays only has about a 6% chance of occurring. So at minimum, you're getting evidently 3.9 bits of information. But that's a minimum. More typically, you'd get something better than that. And it turns out, when you crunch the numbers on this one and you add up all of the relevant terms, the average information is about 5.8. So in contrast with weary, your space of possibilities will be about half as big after this first guess, on average.
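In code, that formula and its additivity are one-liners (a quick illustration of the definitions above, nothing more):

```python
from math import log2

def information(p):
    # Bits of information from observing an event of probability p:
    # log2(1/p), i.e. how many times the space of possibilities halves.
    return log2(1 / p)

print(information(1 / 2))  # 1.0 bit: possibilities cut in half
print(information(1 / 4))  # 2.0 bits: cut by a factor of 4
# Information adds the way probabilities multiply:
print(information(1 / 4) + information(1 / 8))  # 5.0 bits
print(information(1 / 4 * 1 / 8))               # also 5.0 bits
```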
There's actually a fun story about the name for this expected value of information quantity. You see, information theory was developed by Claude Shannon, who was working at Bell Labs in the 1940s, but he was talking about some of his yet to be published ideas with John von Neumann, who was this intellectual giant of the time, very prominent in math and physics and the beginnings of what was becoming computer science. And when he mentioned that he didn't really have a good name for this expected value of information quantity, von Neumann supposedly said, so the story goes: Well, you see, you should call it entropy, and for two reasons. In the first place, your uncertainty function has been used in statistical mechanics under that name, so it already has a name. And in the second place, and more important, nobody knows what entropy really is. So in a debate, you'll always have the advantage. So if the name seems a little bit mysterious, and if the story is to be believed, it's kind of by design. Also, if you're wondering about its relation to all of that second law of thermodynamics stuff from physics, there definitely is a connection, but in its origins, Shannon was just dealing with pure probability theory. And for our purposes here, when I use the word entropy, I just want you to think: the expected information value of a particular guess. You can think of entropy as measuring two things simultaneously. The first one is how flat the distribution is. The closer a distribution is to uniform, the higher that entropy will be. In our case, where there are three to the fifth total patterns, for a uniform distribution, observing any one of them would have information log base two of three to the fifth, which happens to be 7.92. So that is the absolute maximum that you could possibly have for this entropy. But entropy is also kind of a measure of how many possibilities there are in the first place. For example, if you happen to have some word where there's only 16 possible patterns, and each one is equally likely, this entropy, this expected information, would be four bits. But if you have another word where there's 64 possible patterns that could come up, and they're all equally likely, then the entropy would work out to be six bits. So if you see some distribution out in the wild that has an entropy of six bits, it's sort of like it's saying there's as much variation and uncertainty in what's about to happen as if there were 64 equally likely outcomes. For my first pass at the Wordle bot, I basically had it just do this. It goes through all of the different possible guesses that you could have, all 13,000 words. It computes the entropy for each one, or more specifically, the entropy of the distribution across all patterns that you might see for each one. And then it picks the highest, since that's the one that's likely to chop down your space of possibilities as much as possible. And even though I've only been talking about the first guess here, it does the same thing for the next few guesses. For example, after you see some pattern on that first guess, which would restrict you to a smaller number of possible words based on what matches with it, you just play the same game with respect to that smaller set of words. For a proposed second guess, you look at the distribution of all patterns that could occur from that more restricted set of words. You search through all 13,000 possibilities, and you find the one that maximizes that entropy.
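As a sketch of what that first version might look like in code (my own minimal reconstruction, not the actual bot; in particular, the pattern scoring here glosses over Wordle's subtleties around repeated letters):

```python
from collections import Counter
from math import log2

def pattern(guess, answer):
    # Color pattern for a guess: 2 = green, 1 = yellow, 0 = gray.
    # Simplified: real Wordle handles repeated letters more carefully.
    result = []
    for i, letter in enumerate(guess):
        if answer[i] == letter:
            result.append(2)
        elif letter in answer:
            result.append(1)
        else:
            result.append(0)
    return tuple(result)

def entropy_of_guess(guess, possible_answers):
    # Distribution across the 3^5 patterns, assuming (for now) that
    # every remaining word is equally likely to be the answer.
    counts = Counter(pattern(guess, answer) for answer in possible_answers)
    total = len(possible_answers)
    # Expected information: the sum of p * log2(1/p) over all patterns.
    return sum((c / total) * log2(total / c) for c in counts.values())

def best_guess(allowed_guesses, possible_answers):
    # Search every allowed guess for the one maximizing expected information.
    return max(allowed_guesses,
               key=lambda g: entropy_of_guess(g, possible_answers))
```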
To show you how this works in action, let me just pull up a little variant of Wordle that I wrote, which shows the highlights of this analysis in the margins. So after doing all its entropy calculations, on the right here it's showing us which guesses have the highest expected information. Turns out the top answer, at least at the moment, we'll refine this later, is tares, which means, um, of course, a vetch, the most common vetch. Each time we make a guess here, and maybe I kind of ignore its recommendations and go with slate, because I like slate, we can see how much expected information it had, and then on the right of the word here, it's showing us how much actual information we got, given this particular pattern. So here it looks like we were a little unlucky. We expected to get 5.8, but we happened to get something less than that. And then on the left side here, it's showing us all of the different possible words given where we are now. The blue bars are telling us how likely it thinks each word is, so at the moment it's assuming each word is equally likely to occur, but we'll refine that in a moment. And then this uncertainty measurement is telling us the entropy of this distribution across the possible words, which right now, because it's a uniform distribution, is just a needlessly complicated way to count the number of possibilities. For example, if we were to take 2 to the power of 13.66, that should be around the 13,000 possibilities. It's a little bit off here, but only because I'm not showing all the decimal places. At the moment, that might feel redundant, and like it's overly complicating things, but you'll see why it's useful to have both numbers in a minute. So here it looks like it's suggesting the highest entropy for our second guess is ramen, which again, just really doesn't feel like a word. So to take the moral high ground here, I'm going to go ahead and type in rains. And again, it looks like we were a little unlucky. We were expecting 4.3 bits, and we only got 3.39 bits of information. So that takes us down to 55 possibilities. And here, maybe I'll just actually go with what it's suggesting, which is kombu, whatever that means. And okay, this is actually a good chance for a puzzle. It's telling us this pattern gives us 4.7 bits of information. But over on the left, before we saw that pattern, there were 5.78 bits of uncertainty. So as a quiz for you, what does that mean about the number of remaining possibilities? Well, it means that we're reduced down to one bit of uncertainty, which is the same thing as saying that there's two possible answers. It's a 50-50 choice. And from here, because you and I know which words are more common, we know which of the two the answer should be. But as it's written right now, the program doesn't know that. So it just keeps going, trying to gain as much information as it can, until there's only one possibility left, and then it guesses it. So obviously we need a better endgame strategy. But let's say we call this version 1 of our Wordle solver, and then we go and run some simulations to see how it does. The way this is working is it's playing every possible Wordle game. It's going through all of those 2,315 words that are the actual Wordle answers, basically using that as a testing set, with this naive method of not considering how common a word is, and just trying to maximize the information at each step along the way, until it gets down to one and only one choice.
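And a sketch of that simulation, reusing pattern and best_guess from the sketch above (assumed to be saved as, say, wordle_sketch.py; here answers is the official answer list, used purely as a testing set):

```python
from wordle_sketch import pattern, best_guess  # the functions sketched above

def play_game(answer, allowed_guesses):
    possible = list(allowed_guesses)
    for n_guesses in range(1, 10):
        # Version 1's endgame: only commit once a single word remains.
        if len(possible) == 1:
            guess = possible[0]
        else:
            guess = best_guess(allowed_guesses, possible)
        if guess == answer:
            return n_guesses
        observed = pattern(guess, answer)
        # Keep only the words consistent with what we just saw.
        possible = [w for w in possible if pattern(guess, w) == observed]
    return n_guesses

def average_score(answers, allowed_guesses):
    scores = [play_game(a, allowed_guesses) for a in answers]
    return sum(scores) / len(scores)
```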
By the end of the simulation, the average score works out to be about 4.124, which, you know, is not bad. To be honest, I kind of expected it to do worse. But people who play Wordle will tell you that they can usually get it in 4. The real challenge is to get as many in 3 as you can. It's a pretty big jump between a score of 4 and a score of 3. The obvious low-hanging fruit here is to somehow incorporate whether or not a word is common. How exactly do we do that? The way I approached it is to get a list of the relative frequencies for all of the words in the English language. I just used Mathematica's WordFrequencyData function, which itself pulls from the Google Books English n-gram public dataset. And it's kind of fun to look at, for example, if we sort it from the most common words to the least common words. Evidently, these are the most common five letter words in the English language. Or rather, these is the eighth most common. First is which, after which there's there and their. First itself is not first, but ninth, and it makes sense that those other words would come about more often. The words after first are after, where, and those, being just a little bit less common. Now, in using this data to model how likely each of these words is to be the final answer, it shouldn't just be proportional to the frequency. Because, for example, which is given a score of 0.002 in this data set, whereas the word braid is in some sense about a thousand times less likely. But both of these are common enough words that they're almost certainly worth considering. So we want more of a binary cutoff. The way I went about it is to imagine taking this whole sorted list of words, and then arranging it on an x-axis, and then applying the sigmoid function, which is the standard way to have a function whose output is basically binary. It's either 0 or it's 1, but there's a smoothing in between for that region of uncertainty. So essentially, the probability that I'm assigning to each word for being in the final list will be the value of the sigmoid function above wherever it sits on the x-axis. Now, obviously this depends on a few parameters. For example, how wide a space on the x-axis those words fill determines how gradually or steeply we drop off from 1 to 0, and where we situate them left to right determines the cutoff. And to be honest, the way I did this was kind of just licking my finger and sticking it into the wind. I looked through the sorted list and tried to find a window where, when I looked at it, I figured about half of these words are more likely than not to be the final answer, and used that as the cutoff.
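Here's roughly what that prior might look like in code (a sketch under my own assumptions; the n_plausible and width parameters play the role of the finger-in-the-wind choices just described):

```python
import numpy as np

def word_priors(words_by_frequency, n_plausible=3000, width=10):
    # words_by_frequency: most common first. Lay the words out along an
    # x-axis and squash with a sigmoid, so the probability of being a
    # plausible answer is near 1 for common words and near 0 for obscure
    # ones, with a smooth drop-off in between. n_plausible sets where the
    # cutoff sits; width sets how gradual the drop-off is.
    n = len(words_by_frequency)
    ranks = np.arange(n)
    x = width * (n_plausible - ranks) / n  # positive for common words
    sigmoid = 1 / (1 + np.exp(-x))
    return dict(zip(words_by_frequency, sigmoid))
```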
Now, once we have a distribution like this across the words, it gives us another situation where entropy becomes this really useful measurement. For example, let's say we were playing a game, and we start with my old openers, which were other and nails, and we end up with a situation where there's four possible words that match it. And let's say we consider them all equally likely. Let me ask you, what is the entropy of this distribution? Well, the information associated with each one of these possibilities is going to be the log base 2 of 4, since each one is 1 in 4, and that's 2. So 2 bits of information, four possibilities, all very well and good. But what if I told you that actually there's more than four matches? In reality, when we look through the full word list, there are 16 words that match it. But suppose our model puts a really low probability on those other 12 words of actually being the final answer, something like 1 in 1,000, because they're really obscure. Now let me ask you, what is the entropy of this distribution? If entropy were purely measuring the number of matches here, then you might expect it to be something like the log base 2 of 16, which would be 4, two more bits of uncertainty than we had before. But of course the actual uncertainty is not really that different from what we had before, because just because there's these 12 really obscure words doesn't mean that it would be all that more surprising to learn that the final answer is charm, for example. So when you actually do the calculation here, and you add up the probability of each occurrence times the corresponding information, what you get is 2.11 bits. It's saying it's basically 2 bits, basically those 4 possibilities, but there's a little more uncertainty because of all of those highly unlikely events, though if you did learn one of them, you'd get a ton of information from it.
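To double check that 2.11-bit figure, here's the weighted entropy computation spelled out numerically (with the probabilities as described above; the 0.988 is just what's left over after the twelve obscure words take 0.001 each):

```python
from math import log2

# Four plausible words share almost all the probability; twelve obscure
# ones get about 1 in 1,000 each.
probabilities = [0.988 / 4] * 4 + [0.001] * 12

# Entropy: the sum of p * log2(1/p), probability-weighted information.
entropy = sum(p * log2(1 / p) for p in probabilities)
print(round(entropy, 2))  # 2.11 bits
```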
And I suppose I should mention another thing that's different here: over on the left, that uncertainty value, that number of bits, is no longer just redundant with the number of possible matches. If we pull it up and calculate, say, 2 to the 8.02, which should be a little above 256, I guess 259, what it's saying is that even though there are 526 total words that actually match this pattern, the amount of uncertainty it has is more akin to what it would be if there were 259 equally likely outcomes. You could think of it like this: it knows "borks" is not the answer, same with "yorts" and "zoril" and "zoris", so it's a little less uncertain than it would be if all of those matches were equally likely, and this number of bits is smaller. As I keep playing the game, I'll refine this down with a couple guesses that are apropos of what I'd like to explain here. By the fourth guess, if you look over at its top picks, you can see it's no longer just maximizing the entropy. At this point there are technically 7 possibilities, but the only ones with a meaningful chance are "dorms" and "words", and you can see it ranks choosing both of those above all of these other values that, strictly speaking, would give more information. The very first time I did this, I just added up these two numbers to measure the quality of each guess, which actually worked better than you might suspect, but it really didn't feel systematic, and I'm sure there are other approaches people could take, but here's the one I landed on. If we're considering the prospect of a next guess, like in this case "words", what we really care about is the expected score of our game if we do that. And to calculate that expected score, we ask, what's the probability that "words" is the actual answer, which at the moment it ascribes 58% to. With that 58% chance, our score in this game would be 4. And then with probability 1 minus that 58%, our score will be more than that 4. How much more, we don't know, but we can estimate it based on how much uncertainty there's likely to be once we get to that point. Specifically, at the moment there are 1.44 bits of uncertainty, and if we guess "words", it's telling us the expected information we'll get is 1.27 bits, so this difference represents how much uncertainty we're likely to be left with after that happens. What we need is some kind of function, which I'm calling f here, that associates this uncertainty with an expected score. And the way I went about this was to just plot a bunch of the data from previous games, based on version 1 of the bot, to say, hey, what was the actual score after various points with certain measurable amounts of uncertainty? For example, these data points here that are sitting above a value that's around 8.7 or so are saying, for some games, after a point at which there were 8.7 bits of uncertainty, it took 2 guesses to get the final answer. For other games it took 3 guesses; for other games, 4 guesses. If we shift over to the left here, all the points over 0 are saying, whenever there are 0 bits of uncertainty, which is to say there's only 1 possibility, then the number of guesses required is always just 1, which is reassuring. Whenever there was 1 bit of uncertainty, meaning it was essentially down to 2 possibilities, then sometimes it required 1 more guess, sometimes it required 2 more guesses. And so on and so forth. Maybe a slightly easier way to visualize this data is to bucket it together and take averages.
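Here is a sketch of that expected-score bookkeeping in Python. The shape of f, the function taking remaining uncertainty to expected future guesses, is whatever you fit to your data, so the lambda below is only a placeholder.

def expected_score(p_word, guess_number, entropy_now, expected_info, f):
    # With probability p_word this guess is the answer, scoring guess_number.
    # Otherwise we expect roughly f(remaining bits) further guesses after it.
    remaining_bits = entropy_now - expected_info
    return p_word * guess_number + (1 - p_word) * (guess_number + f(remaining_bits))

# e.g. the 58% case from above, with a made-up linear f:
print(expected_score(0.58, 4, 1.44, 1.27, f=lambda bits: 1 + 0.5 * bits))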
For example, this bar here is saying, among all the points where we had 1 bit of uncertainty, on average the number of new guesses required was about 1.5. And the bar over here is saying, among all of the different games where at some point the uncertainty was a little above 4 bits, which is like narrowing it down to 16 different possibilities, on average it required a little more than 2 guesses from that point forward. And from here I just did a regression to fit a function that seemed reasonable to this. And remember, the whole point of doing any of that is so that we can quantify this intuition that the more information we gain from a word, the lower the expected score will be. So, with this as version 2.0, if we go back and run the same set of simulations, having it play against all 2,315 possible Wordle answers, how does it do? Well, in contrast to our first version, it's definitely better, which is reassuring. All said and done, the average is around 3.6, although, unlike the first version, there are a couple times that it loses and requires more than 6 guesses in this circumstance, presumably because there are times when it's making that trade-off to actually go for the goal rather than maximizing information. So, can we do better than 3.6? We definitely can. Now, I said at the start that it's most fun to try not incorporating the true list of Wordle answers into the way that it builds its model. But if we do incorporate it, the best performance I could get was around 3.43. So, if we try to get more sophisticated than just using word-frequency data to choose this prior distribution, this 3.43 probably gives a cap on how good we could get with that, or at least how good I could get with that. That best performance essentially just uses the ideas that I've been talking about here, but it goes a little farther, like doing a search for the expected information 2 steps forward rather than just 1. Originally I was planning on talking more about that, but I realize we've actually gone quite long as it is. One other thing I'll mention, though, is that after doing this 2-step search and then running a couple sample simulations on the top candidates, so far, for me at least, it's looking like "crane" is the best opener. Who would have guessed? Also, if you use the true Wordle list to determine your space of possibilities, then the uncertainty you start with is a little over 11 bits. And it turns out, just from a brute-force search, the maximum possible expected information after the first 2 guesses is around 10 bits, which suggests that, best case scenario, after your first 2 guesses, with perfectly optimal play, you'll be left with around 1 bit of uncertainty, which is the same as being down to 2 possible guesses. But I think it's fair, and probably pretty conservative, to say that you could never possibly write an algorithm that gets this average as low as 3, because with the words available to you, there's simply not room to get enough information after only 2 steps to be able to guarantee the answer in the third slot every single time without fail.
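For the curious, the bucket-and-average step described above only takes a few lines; here, games is assumed to be a list of (bits_of_uncertainty, guesses_still_needed) pairs logged from past simulations.

from collections import defaultdict

def bucket_averages(games, bucket_size=1.0):
    buckets = defaultdict(list)
    for bits, guesses_needed in games:
        buckets[round(bits / bucket_size)].append(guesses_needed)
    # Average number of remaining guesses within each uncertainty bucket.
    return {k * bucket_size: sum(v) / len(v) for k, v in sorted(buckets.items())}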
Euler's formula with introductory group theory
2 years ago, almost to the day actually, I put up the first video on this channel about Euler's formula, e to the pi i equals negative 1. As an anniversary of sorts, I want to revisit that same idea. For one thing, I've always wanted to improve on the presentation, but I wouldn't rehash an old topic if there wasn't something new to teach. You see, the idea underlying that video was to take certain concepts from a field of math called group theory, and show how they give Euler's formula a much richer interpretation than a mere association between numbers. And two years ago, I thought it might be fun to use those ideas without referencing group theory itself, or any of the technical terms within it. But I've come to see that you all actually quite like getting into the math itself, even if it takes some time. So here, two years later, let's you and me go through an introduction to the basics of group theory, building up to how Euler's formula comes to life under this light. If all you want is a quick explanation of Euler's formula, and if you're comfortable with vector calculus, I'll go ahead and put up a particularly short explanation on the screen that you can pause and ponder on. If it doesn't make sense, don't worry about it, it's not needed for where we're going. The reason that I want to put out this group theory view, though, is not because I think it's a better explanation. Heck, it's not even a complete proof, it's just an intuition, really. It's because it has the chance to change how you think about numbers, and how you think about algebra. You see, group theory is all about studying the nature of symmetry. For example, a square is a very symmetric shape, but what do we actually mean by that? One way to answer that is to ask about what are all the actions you can take on the square that leave it looking indistinguishable from how it started? For example, you could rotate it 90 degrees counterclockwise, and it looks totally the same to how it started. You could also flip it around this vertical line, and again, it still looks identical. In fact, the thing about such perfect symmetry is that it's hard to keep track of what action has actually been taken, so to help out, I'm going to go ahead and stick on an asymmetric image here. Now we call each one of these actions a symmetry of the square, and all of the symmetries together make up a group of symmetries, or just group for short. This particular group consists of eight symmetries. There's the action of doing nothing, which is one that we count, plus three different rotations, and then there's four ways that you can flip it over. And in fact, this group of eight symmetries has a special name. It's called the dihedral group of order eight. And that's an example of a finite group, consisting of only eight actions. But a lot of other groups consist of infinitely many actions. Think of all possible rotations, for example, of any angle. Maybe you think of this as a group that acts on a circle, capturing all of the symmetries of that circle that don't involve flipping it. Here, every action from this group of rotations lies somewhere on the infinite continuum between zero and two pi radians. One nice aspect of these actions is that we can associate each one of them with a single point on the circle itself, the thing being acted on. You start by choosing some arbitrary point, maybe the one on the right here. Then every circle symmetry, every possible rotation, takes this marked point to some unique spot on the circle.
And the action itself is completely determined by where it takes that spot. Now this doesn't always happen with groups, but it's nice when it does happen, because it gives us a way to label the actions themselves, which can otherwise be pretty tricky to think about. The study of groups is not just about what a particular set of symmetries is, whether that's the eight symmetries of a square, the infinite continuum of symmetries of the circle, or anything else you dream up. The real heart and soul of the study is knowing how these symmetries play with each other. On the square, if I rotate 90 degrees and then flip around the vertical axis, the overall effect is the same as if I had just flipped over this diagonal line. So in some sense, that rotation plus the vertical flip equals that diagonal flip. On the circle, if I rotate 270 degrees, and then follow it with a rotation of 120 degrees, the overall effect is the same as if I had just rotated 30 degrees to start with. So in this circle group, a 270 degree rotation plus a 120 degree rotation equals a 30 degree rotation. And in general, with any group, any collection of these sorts of symmetric actions, there's a kind of arithmetic, where you can always take two actions and add them together to get a third one by applying one after the other. Or maybe you think of it as multiplying actions, it doesn't really matter. The point is that there is some way to combine the two actions to get out another one. That collection of underlying relations, all associations between pairs of actions and the single action that's equivalent to applying one after the other, that's really what makes a group a group. It's actually crazy how much of modern math is rooted in, well, this: in understanding how a collection of actions is organized by this relation, this relation between pairs of actions and the single action you get by composing them. Groups are extremely general. A lot of different ideas can be framed in terms of symmetries and composing symmetries. And maybe the most familiar example is numbers, just ordinary numbers. And there are actually two separate ways to think about numbers as a group. One, where composing actions is going to look like addition, and another where composing actions will look like multiplication. It's a little weird, because we don't usually think of numbers as actions. We usually think of them as counting things. But let me show you what I mean. Think of all of the ways that you can slide a number line left or right along itself. This collection of all sliding actions is a group, which you might think of as the group of symmetries of an infinite line. And in the same way that actions from the circle group could be associated with individual points on that circle, this is another one of those special groups where we can associate each action with a unique point on the thing that it's actually acting on. You just follow where the point that starts at zero ends up. For example, the number three is associated with the action of sliding right by three units. The number negative two is associated with the action of sliding two units to the left, since that's the unique action that drags the point at zero over to the point at negative two. The number zero itself? Well, that's associated with the action of just doing nothing. This group of sliding actions, each one of which is associated with a unique real number, has a special name: the additive group of real numbers.
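As an aside, if you want to poke at that circle-group arithmetic yourself, composing two rotations is just adding angles and wrapping around a full turn; a one-line Python sketch:

def compose_rotations(a_degrees, b_degrees):
    # Applying one rotation after another adds the angles, modulo 360.
    return (a_degrees + b_degrees) % 360

print(compose_rotations(270, 120))  # 30, matching the example above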
The reason the word additive is in there is because of what the group operation of applying one action followed by another looks like. If I slide right by three units, and then I slide right by two units, the overall effect is the same as if I slid right by three plus two, or five units. Simple enough, we're just adding the distances of each slide. But the point here is that this gives an alternate view for what numbers even are. They are one example in a much larger category of groups, groups of symmetries acting on some object. And the arithmetic of adding numbers is just one example of the arithmetic that any group of symmetries has within it. We could also extend this idea, instead asking about the sliding actions on the complex plane. The newly introduced numbers i, 2i, 3i, and so on on this vertical line would all be associated with vertical sliding motions, since those are the actions that drag the point at zero up to the relevant point on that vertical line. The point over here, at three plus two i, would be associated with the action of sliding the plane in such a way that drags zero up and to the right to that point. And it should make sense why we call this three plus two i. That diagonal sliding action is the same as first sliding by three to the right, and then following it with a slide that corresponds to two i, which is two units vertically. Similarly, let's get a feel for how composing any two of these actions generally breaks down. Consider this slide by three plus two i action, as well as this slide by one minus three i action, and imagine applying one of them right after the other. The overall effect, the composition of these two sliding actions, is the same as if we had slid three plus one to the right, and two minus three vertically. Notice how that involves adding together each component. So composing sliding actions is another way to think about what adding complex numbers actually means. This collection of all sliding actions on the 2D complex plane goes by the name the additive group of complex numbers. Again, the upshot here is that numbers, even complex numbers, are just one example of a group, and the idea of addition can be thought of in terms of successively applying actions. But numbers, schizophrenic as they are, also lead a completely different life as a completely different kind of group. Consider a new group of actions on the number line, all of the ways that you can stretch or squish it, keeping everything evenly spaced and keeping that number zero fixed in place. Yet again, this group of actions has that nice property where we can associate each action in the group with a specific point on the thing that it's acting on. In this case, follow where the point that starts at the number one goes. There is one and only one stretching action that brings that point at one to the point at three, for instance, namely stretching by a factor of three. Likewise, there is one and only one action that brings that point at one to the point at one half, namely squishing by a factor of one half. I like to imagine using one hand to fix the number zero in place, and using the other to drag the number one wherever I like, while the rest of the number line just does whatever it takes to stay evenly spaced. In this way, every single positive number is associated with a unique stretching or squishing action. Now notice what composing actions looks like in this group.
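Python's built-in complex numbers make that composition rule easy to check: composing two slides really is componentwise addition.

slide_1 = 3 + 2j   # slide right 3, up 2
slide_2 = 1 - 3j   # slide right 1, down 3
print(slide_1 + slide_2)  # (4-1j): 3 + 1 to the right, 2 - 3 vertically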
If I apply the stretch by three action, and then follow it with the stretch by two action, the overall effect is the same as if I had just applied the stretch by six action, the product of the two original numbers. And in general, applying one of these actions followed by another corresponds with multiplying the numbers that they're associated with. In fact, the name for this group is the multiplicative group of positive real numbers. So multiplication, ordinary familiar multiplication, is one more example of this very general and very far-reaching idea of groups, and the arithmetic within groups. And we can also extend this idea to the complex plane. Again, I like to think of fixing zero in place with one hand, and dragging around the point at one, keeping everything else evenly spaced while I do so. But this time, as we drag the number one to places that are off the real number line, we see that our group includes not only stretching and squishing actions, but actions that have some rotational component as well. The quintessential example of this is the action associated with that point at i, one unit above zero. What it takes to drag the point at one to that point at i is a 90 degree rotation. So the multiplicative action associated with i is a 90 degree rotation. And notice, if I apply that action twice in a row, the overall effect is to flip the plane 180 degrees. And that is the unique action that brings the point at one over to negative one. So in this sense, i times i equals negative one, meaning the action associated with i, followed by that same action associated with i, has the same overall effect as the action associated with negative one. As another example, here's the action associated with two plus i, dragging one up to that point. If you want, you could think of this as broken down as a rotation by about 26.6 degrees, followed by a stretch by a factor of square root of five. And in general, every one of these multiplicative actions is some combination of a stretch or a squish, an action associated with some point on the positive real number line, followed by a pure rotation, where pure rotations are associated with points on this circle, the one with radius one. This is very similar to how the sliding actions in the additive group could be broken down as some pure horizontal slide, represented with points on the real number line, plus some purely vertical slide, represented with points on that vertical line. That comparison of how actions in each group break down is going to be important, so remember it. In each one, you can break down any action as some purely real number action, followed by something that's specific to complex numbers, whether that's vertical slides for the additive group or pure rotations for the multiplicative group. So that's our quick introduction to groups. A group is a collection of symmetric actions on some mathematical object, whether that's a square, a circle, the real number line, or anything else you dream up. And every group has a certain arithmetic, where you can combine two actions by applying one after the other and asking what other action from the group gives the same overall effect. Numbers, both real and complex numbers, can be thought of in two different ways as a group. They can act by sliding, in which case the group arithmetic just looks like ordinary addition, or they can act by these stretching, squishing, rotating actions, in which case the group arithmetic looks just like multiplication. And with that, let's talk about exponentiation.
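Before moving on, these rotate-and-stretch claims are easy to poke at numerically; cmath's phase and the built-in abs pull out exactly the rotation angle and stretch factor described above.

import cmath, math

z = 2 + 1j
print(abs(z))                        # ~2.236, the stretch factor, sqrt(5)
print(math.degrees(cmath.phase(z)))  # ~26.57, the rotation in degrees
print(1j * 1j)                       # (-1+0j): two 90-degree rotations make a 180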
Our first introduction to exponents is to think of them in terms of repeated multiplication, right? I mean, the meaning of something like 2 cubed is to take 2 times 2 times 2, and the meaning of something like 2 to the fifth is 2 times 2 times 2 times 2 times 2. And a consequence of this, something you might call the exponential property, is that if I add two numbers in the exponent, say 2 to the 3 plus 5, this can be broken down as the product of 2 to the third times 2 to the 5. And when you expand things, this seems reasonable enough, right? But expressions like 2 to the 1 half, or 2 to the negative 1, much less 2 to the i, don't really make sense when you think of exponents as repeated multiplication. I mean, what does it mean to multiply 2 by itself half of a time, or negative 1 of a time? So we do something very common throughout math, and extend beyond the original definition, which only makes sense for counting numbers, to something that applies to all sorts of numbers. But we don't just do this randomly. If you think back to how fractional and negative exponents are defined, it's always motivated by trying to make sure that this property, 2 to the x plus y equals 2 to the x times 2 to the y, still holds. To see what this might mean for complex exponents, think about what this property is saying from a group theory light. It's saying that adding the inputs corresponds with multiplying the outputs. And that makes it very tempting to think of the inputs not merely as numbers, but as members of the additive group of sliding actions, and to think of the outputs not merely as numbers, but as members of this multiplicative group of stretching and squishing actions. Now it is weird and strange to think about functions that take in one kind of action and spit out another kind of action. But this is something that actually comes up all the time throughout group theory. And this exponential property is very important for this association between groups. It guarantees that if I compose two sliding actions, maybe a slide by negative 1 and then a slide by positive 2, it corresponds to composing the two output actions, in this case squishing by 2 to the negative 1 and then stretching by 2 squared. Mathematicians would describe a property like this by saying that the function preserves the group structure, in the sense that the arithmetic within a group is what gives it its structure, and a function like this exponential plays nicely with that arithmetic. Functions between groups that preserve the arithmetic like this are really important throughout group theory, enough so that they've earned themselves a nice fancy name: homomorphisms. Now think about what all of this means for associating the additive group in the complex plane with the multiplicative group in the complex plane. We already know that when you plug in a real number to 2 to the x, you get out a real number, a positive real number in fact. So this exponential function takes any purely horizontal slide and turns it into some pure stretching or squishing action. So wouldn't you agree that it would be reasonable for this new dimension of additive actions, slides up and down, to map directly into this new dimension of multiplicative actions, pure rotations? Those vertical sliding actions correspond to points on this vertical axis, and those rotating multiplicative actions correspond to points on the circle with radius 1.
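You can watch that homomorphism property in action with a two-line check, using the same slide-by-negative-1, slide-by-positive-2 example from above:

x, y = -1, 2
print(2 ** (x + y), 2 ** x * 2 ** y)  # both equal 2: adding inputs multiplies outputs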
So what it would mean for an exponential function like 2 to the x to map purely vertical slides into pure rotations would be that complex numbers on this vertical line, multiples of i, get mapped to complex numbers on this unit circle. In fact, for the function 2 to the x, the input i, a vertical slide of 1 unit, happens to map to a rotation of about 0.693 radians. That is, a walk around the unit circle that covers 0.693 units of distance. With a different exponential function, say 5 to the x, that input i, a vertical slide of 1 unit, would map to a rotation of about 1.609 radians, a walk around the unit circle covering 1.609 units of distance. What makes the number e special is that when the exponential e to the x maps vertical slides to rotations, a vertical slide of 1 unit, corresponding to i, maps to a rotation of exactly 1 radian, a walk around the unit circle covering a distance of exactly 1. And so a vertical slide of 2 units would map to a rotation of 2 radians. A 3 unit slide up corresponds to a rotation of 3 radians. And a vertical slide of exactly pi units up, corresponding to the input pi times i, maps to a rotation of exactly pi radians, halfway around the circle. And that's the multiplicative action associated with the number negative 1. Now you might ask, why e? Why not some other base? Well, the full answer resides in calculus. I mean, that's the birthplace of e, and where it's even defined. Again, I'll leave up another explanation on the screen if you're hungry for a fuller description, and if you're comfortable with the calculus. But at a high level, I'll say that it has to do with the fact that all exponential functions are proportional to their own derivative, but e to the x alone is the one that's actually equal to its own derivative. The important point that I want to make here, though, is that if you view things from the lens of group theory, thinking of the inputs to an exponential function as sliding actions, and thinking of the outputs as stretching and rotating actions, it gives a very vivid way to read what a formula like this is even saying. When you read it, you can think that exponentials in general map purely vertical slides, the additive actions that are perpendicular to the real number line, into pure rotations, which are, in some sense, perpendicular to the real number stretching actions. And moreover, e to the x does this in the very special way that ensures that a vertical slide of pi units corresponds to a rotation of exactly pi radians, the 180 degree rotation associated with the number negative 1. To finish things off here, I want to show a way that you can think about this function e to the x as a transformation of the complex plane. But before that, just two quick messages. I've mentioned before just how thankful I am to you, the community, for making these videos possible through Patreon. But in much the same way that numbers become more meaningful when you think of them as actions, gratitude is also best expressed as an action. So I've decided to turn off ads on new videos for their first month, in the hopes of giving you all a better viewing experience. This video was sponsored by Emerald Cloud Lab, and actually I was the one to reach out to them on this one, since it's a company I find particularly inspiring. Emerald is a very unusual startup, half software, half biotech. The Cloud Lab that they're building essentially enables biologists and chemists to conduct research through a software platform instead of working in a lab.
Scientists can program experiments which are then executed remotely and robotically in Emerald's industrialized research lab. I know some of the people at the company and the software challenges they're working on are really interesting. Currently, they're looking to hire software engineers and web developers for their engineering team, as well as applied mathematicians and computer scientists for their scientific computing team. If you're interested in applying, whether that's now or a few months from now, there are a couple special links in the description of this video, and if you apply through those, it lets Emerald know that you heard about them through this channel. Alright, so e to the x transforming the plane. I'd like to imagine first rolling that plane into a cylinder, wrapping all those vertical lines into circles, and then taking that cylinder and kind of smushing it onto the plane around zero, where each of those concentric circles, spaced out exponentially, correspond with what started off as vertical lines.
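Those rotation numbers from a moment ago (0.693 and 1.609 radians) are easy to verify yourself: raising a base to a purely imaginary power lands on the unit circle, at an angle equal to the natural log of the base.

import cmath

for base in (2, 5, cmath.e):
    z = base ** 1j
    print(abs(z), cmath.phase(z))
# magnitudes are all 1; the angles are ln(2) ~ 0.693, ln(5) ~ 1.609, and exactly 1
print(cmath.exp(1j * cmath.pi))  # (-1+1.2e-16j): e to the pi i, up to rounding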
Binomial distributions | Probabilities of probabilities, part 1
You're buying a product online and you see three different sellers. They're all offering that same product at essentially the same price. One of them has a 100% positive rating, but with only 10 reviews. Another has a 96% positive rating with 50 total reviews. And yet another has a 93% positive rating, but with 200 total reviews. Which one should you buy from? I think we all have this instinct that the more data we see, the more confidence it gives us in a given rating. We're a little suspicious of seeing 100% ratings, since more often than not they come from a tiny number of reviews, which makes it feel more plausible that things could have gone another way and given a lower rating. But how do you make that intuition quantitative? What's the rational way to reason about the trade-off here between the confidence gained from more data versus the lower absolute percentage? This particular example is a slight modification from one that John Cook gave in his excellent blog post, A Bayesian review of Amazon resellers. For you and me, it's a wonderful excuse to dig into a few core topics in probability and statistics. And to really cover these topics properly, it takes time. So what I'm going to do is break this down into three videos, where in this first one we'll set up a model for the situation and start by talking about the binomial distribution. The second is going to bring in ideas of Bayesian updating, and how to work with probabilities over continuous values. And in the third, we'll look at something known as a beta distribution, and pull up a little Python to analyze the data we have and come to various different answers depending on what it is you want to optimize. Now to throw you a bone before we dive into all the math, let me just show you what one of the answers turns out to be, since it's delightfully simple. When you see an online rating, something like this 10 out of 10, you pretend that there were two more reviews, one that was positive and one that was negative. In this case, that means you pretend that it's 11 out of 12, which would give 91.7%. This number is your probability of having a good experience with that seller. So in the case of 50 reviews, where you have 48 positive and 2 negative, you pretend that it's really 49 positive and 3 negative, which would give you 49 out of 52, or 94.2%. That's your probability of having a good experience with the second seller. Playing the same game with our third seller, who had 200 reviews, you get 187 out of 202, or 92.6%. So according to this rule, it would mean your best bet is to go with seller number two. This is something known as Laplace's rule of succession. It dates back to the 18th century, and to understand what assumptions are underlying it, and how changing either those assumptions or your underlying goal can change the choice you make, we really do need to go through all the math. So without further ado, let's dive in. Step one: how exactly are we modeling the situation, and what exactly is it that you want to optimize? One option is to think of each seller as producing random experiences that are either positive or negative, and that each seller has some kind of constant underlying probability of giving a good experience, what we're going to call the success rate, or s for short. The whole challenge is that we don't know s. When you see that first rating of 100% with 10 reviews, that doesn't mean the underlying success rate is 100%. It could very well be something like 95%.
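Written as code, that rule of succession is a one-liner; here it is applied to the three sellers above.

def laplace(positive, total):
    # Pretend there were two extra reviews, one positive and one negative.
    return (positive + 1) / (total + 2)

for positive, total in [(10, 10), (48, 50), (186, 200)]:
    print(round(laplace(positive, total), 3))
# 0.917, 0.942, 0.926: seller two comes out on top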
And just to illustrate, let me run a little simulation, where I choose a random number between 0 and 1, and if it's less than 0.95, I'll record it as a positive review, otherwise record it as a negative review. Now do this 10 times in a row, and then make 10 more, and 10 more, and 10 more, and so on, to get a sense of what sequences of 10 reviews for a seller with this success rate 0.95 would tend to look like. Quite a few of those, around 60% actually, give 10 out of 10. So the data we see seems perfectly plausible if the seller's true success rate was 95%. Or maybe it's really 90%, or 99%. The whole challenge is that we just don't know. As to the goal, let's say you simply want to maximize your probability of having a positive experience, despite not being sure of this success rate. So, think about this here. We've got many different possible success rates for each seller, any number from 0 up to 1, and we need to say something about how likely each one of these success rates is, a kind of probability of probabilities. Unlike the more gamified examples, like coin flips and die tosses and the sort of things you see in an intro probability class, where you go in assuming a long run frequency, like 1/2 or 1/6, what we have here is uncertainty about the long run frequency itself, which is a much more potent kind of doubt. I should also emphasize that this kind of setup is relevant to many, many situations in the real world, where you need to make a judgment about a random process from limited data. For example, let's say that you're setting up a factory producing cars, and in an initial test of 100 cars, two of them come out with some kind of problem. If you plan to spin things up to produce a million cars, what are you willing to confidently say about how many total cars will have problems that need addressing? It's not like the test guarantees that the true error rate is 2%, but what exactly does it say? As your first challenge, let me ask you this: if you did magically know the true success rate for a given seller, say it was 95%, how would you compute the probability of seeing, say, 10 positive reviews and zero negative reviews, or 48 and 2, or 186 and 14? In other words, what's the probability of seeing the data given an assumed success rate? A moment ago, I showed you something like this with a simulation, generating 10 random reviews, and with a little programming, you could just do that many times, building up a histogram to get some sense of what this distribution looks like. Likewise, you could simulate sets of 50 reviews, and get some sense for how probable it would be to see 48 positive and 2 negative. You see, this is the nice thing about probability. A little programming can almost always let you cheat a little, and see what the answer is ahead of time by simulating it. For example, after a few hundred thousand samples of 50 reviews, assuming the success rate is 95%, it looks like about 26.1% of them would give us this 48 out of 50 review. Luckily, in this case, an exact formula is not bad at all. The probability of seeing exactly 48 out of 50 looks like this. This first term is pronounced 50 choose 48, and it represents the total number of ways that you could take 50 slots and fill out 48 of them. For example, maybe you start with 48 good reviews and end with two bad reviews, or maybe you start with 47 good reviews, and then it goes bad, good, bad, and so on.
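Both of the simulations mentioned above fit in a few lines of Python; here's a rough sketch.

import random

def simulate(success_rate, n_reviews, target_positive, n_trials=100_000):
    # Estimate the chance of seeing exactly target_positive good reviews
    # out of n_reviews, for a seller with the given success rate.
    hits = 0
    for _ in range(n_trials):
        positives = sum(random.random() < success_rate for _ in range(n_reviews))
        if positives == target_positive:
            hits += 1
    return hits / n_trials

print(simulate(0.95, 10, 10))  # about 0.60, the ~60% figure above
print(simulate(0.95, 50, 48))  # about 0.26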
In principle, if you were to enumerate every single possible way of filling 48 out of 50 slots like this, the total number of these patterns is 50 choose 48, which in this case works out to be 1225. What do we multiply by this count? Well, it's the probability of any one of these patterns, which is the probability of a single positive review raised to the 48th, times the probability of a single negative review squared. Crucial is that we assume each review is independent of the last, so we can multiply all the probabilities together like this, and with the numbers we have, when you evaluate it, it works out to be 0.261, which matches what we just saw empirically with the simulation. You could also replace this 48 with some other value, and compute the probability of seeing any other number of positive reviews, again, assuming a given success rate. What you're looking at right now, by the way, is known in the business as a binomial distribution, one of the most fundamental distributions in probability. It comes up whenever you have something like a coin flip, a random event that can go one of two ways, and you repeat it some number of times, and what you want to know is the probability of getting various different totals. For our purposes, this formula gives us the probability of seeing the data given an assumed success rate, which ultimately we want to somehow use to make judgments about the opposite, the probability of a success rate given the fixed data that we see. These are related, but definitely distinct. To get more in that direction, let's play around with this value of s, and see what happens as we change it to different numbers between 0 and 1. The binomial distribution that it produces kind of looks like this pile that's centered around whatever s times 50 is. The value we care about, the probability of seeing 48 out of 50 reviews, is represented by this highlighted 48th bar. So let's draw a second plot on the bottom, representing how that value depends on s. When s is equal to 0.96, that value is as high as it's ever going to get. And this should kind of make sense, because when you look at that review of 96%, it should be most likely if the true underlying success rate was 96%. As s increases, it kind of peters out, going to 0 as s approaches 1, since someone with a perfect success rate would never have those two negative reviews. Also, as you move to the left, it approaches 0 pretty quickly. By the time you get to s equals 0.8, getting 48 out of 50 reviews by chance is exceedingly rare. It would happen 1 in a thousand times. This plot we have on the bottom is a great start to getting a more quantitative description for which values of s feel more or less plausible. Written down as a formula, what I want you to remember is that as a function of the success rate s, the curve looks like some constant times s to the number of positive reviews, times 1 minus s to the number of negative reviews. In principle, if we had more data, like 480 positive reviews and 20 negative reviews, the resulting plot would still be centered around 0.96, but it would be smaller and more concentrated. A good exercise right now would be to see if you could explain why that's the case. There is a lingering question, though, of what to actually do with these curves. I mean, our goal is to compute the probability that you have a good experience with this seller. So what do you do?
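For reference, the exact formula described above is just as short in code; math.comb gives the "n choose k" count.

import math

def binomial_prob(n, k, s):
    # Probability of exactly k positive reviews out of n, given success rate s.
    return math.comb(n, k) * s**k * (1 - s)**(n - k)

print(math.comb(50, 48))            # 1225 ways to place 48 good reviews in 50 slots
print(binomial_prob(50, 48, 0.95))  # about 0.261, matching the simulation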
Now, naively, you might think that probability is 96%, since that's where the peak of the graph is, which is, in a sense, the most likely success rate. But think of the example with 10 out of 10 positives. In that case, the whole binomial formula simplifies to be s to the power 10. The probability of seeing 10 consecutive good reviews is the probability of seeing one of them, raised to the 10th. The closer the true success rate is to 1, the higher the probability of seeing a 10 out of 10. Our plot on the bottom only ever increases as s approaches 1. But even if s equals 1 is the value that maximizes this probability, surely you wouldn't feel comfortable saying that you personally have a 100% probability of a good experience with this seller. Maybe you think that instead we should look for some kind of center of mass of this graph, and that would absolutely be on the right track. First, though, we need to explain how to take this expression for the probability of the data we're seeing given a value of s, and get the probability for a value of s, the thing we actually don't know, given the data, the thing we actually know. And that requires us to talk about Bayes' rule, and also probability density functions. For those, I'll see you in part 2.
Who cares about topology? (Inscribed rectangle problem)
I've got several fun things for you this video: an unsolved problem, a very elegant solution to a weaker version of the problem, and a little bit about what topology is and why people care. But before I jump into it, it's worth saying a few words on why I'm excited to share this solution. When I was a kid, since I loved math and sought out various mathy things, I would occasionally find myself in some talk or a seminar where people wanted to get the youth excited about things that mathematicians care about. A very common go-to topic to excite our imaginations was topology. We might be shown something like a Mobius strip, maybe building it out of construction paper by twisting a rectangle and gluing its ends. Look, we'd be told, as we were asked to draw a line along the surface, it's a surface with just one side. Or we might be told that topologists view coffee mugs and donuts as the same thing, since each has just one hole. But these kinds of demos always left a lurking question: how is this math? How does any of this actually help to solve problems? It wasn't until I saw the problem that I'm about to show you, with its elegant and surprising solution, that I started to understand why mathematicians actually care about some of these shapes and the properties they have. So there's this unsolved problem called the inscribed square problem. If you have a closed loop, meaning you squiggle some line through space in a potentially crazy way and you end up back where you started, the question is whether or not you'll always be able to find four points on this loop that make up a square. If your closed loop was a circle, for example, it's quite easy to find an inscribed square, infinitely many, in fact. If your loop was instead an ellipse, it's still pretty easy to find an inscribed square. The question is whether or not every possible closed loop, no matter how crazy, has at least one inscribed square. Pretty interesting, right? I mean, just the fact that this is unsolved is interesting, that the current tools of math can neither confirm nor deny that there's some loop with no inscribed square in it. Now if we weaken the question a bit and ask about inscribed rectangles instead of inscribed squares, it's still pretty hard, but there is a beautiful, video-worthy solution that might actually be my favorite piece of math. The idea is to shift the focus away from individual points on the loop, and instead onto pairs of points. We'll use the following fact about rectangles. Let's label the vertices of some rectangle A, B, C, D. Then the pair of points A, C has a few things in common with the pair of points B, D. The distance between A and C equals the distance between B and D, and the midpoint of A and C is the same as the midpoint of B and D. In fact, any time you have two separate pairs of points in space, A, C and B, D, if you can guarantee that they share a midpoint, and that the distance between A and C equals the distance between B and D, it's enough to guarantee that those four points make up a rectangle. So what we're going to do is try to prove that for any closed loop, it's always possible to find two distinct pairs of points on that loop that share a midpoint and which are the same distance apart. Take a moment to make sure that's clear. We're finding two distinct pairs of points that share a common midpoint, and which are the same distance apart.
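That rectangle criterion is easy to state in code. Here's a small numerical check, representing points as Python complex numbers for convenience (a made-up helper, not something from the video itself):

def is_rectangle(a, b, c, d, tol=1e-9):
    # A, C and B, D are the diagonals: a shared midpoint plus equal length
    # forces the four points to be corners of a rectangle.
    same_midpoint = abs((a + c) / 2 - (b + d) / 2) < tol
    same_length = abs(abs(a - c) - abs(b - d)) < tol
    return same_midpoint and same_length

print(is_rectangle(0, 4, 4 + 2j, 2j))  # True: corners of a 4 x 2 rectangle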
The way we'll go about this is to define a function that takes in pairs of points on the loop, and outputs a single point in 3D space, which kind of encodes the midpoint and distance information. It will be sort of like a graph. Consider the closed loop to be sitting on the xy plane in 3D space. For a given pair of points, label their midpoint M, which will be some point on the xy plane, and label the distance between them D. Plot the point which is exactly D units above that midpoint M in the z direction. As you do this for many possible pairs of points, you'll effectively be drawing through 3D space. And if you do it for all possible pairs of points on the loop, you'll draw out some kind of surface above the plane. Now look at the surface, and notice how it seems to hug the loop itself. This is actually going to be important later, so let's think about why it happens. As the pair of points on the loop gets closer and closer, the plotted point gets lower, since its height is by definition equal to the distance between the points. Also, the midpoint gets closer and closer to the loop as the points approach each other. Once the pair of points coincides, meaning the input of our function looks like x, x for some point x on the loop, the plotted point of the surface will be exactly on the loop, at the point x. Okay, so remember that. Another important fact is that this function is continuous, and all that really means is that if you slightly adjust a given pair of points, then the corresponding output in 3D space is only slightly adjusted as well. There's never a sudden discontinuous jump. Our goal, then, is to show that this function has a collision, that two distinct pairs of points each map to the same spot in 3D space. Because the only way for that to happen is if they share a common midpoint, and if their distance D apart from each other is the same. So in some sense, finding an inscribed rectangle comes down to showing that this surface has to intersect itself. To move forward from here, we need to build up a relationship with the idea of pairs of points on a loop. Think about how we represent pairs of real numbers using a two-dimensional coordinate plane. Analogous to this, we're going to seek out a certain 2D surface which naturally represents all pairs of points on the loop. Understanding the properties of this surface will help to show why the graph that we just defined has to intersect itself. Now when I say pair of points, there are two things that I could be talking about. The first is ordered pairs of points, which would mean a pair like A, B would be considered distinct from the pair B, A. That is, there's some notion of which point is the first one. The second idea is unordered pairs of points, where A, B and B, A would be considered the same thing, where all that really matters is what the points are, and there's no meaning to which one is first. Ultimately, we want to understand unordered pairs of points, but to get there, we need to take a path of thought through ordered pairs. We'll start out by straightening out the loop, cutting it at some point, and deforming it into an interval. For the sake of having some labels, let's say that this is the interval on the number line from 0 to 1. By following where each point ends up, every point on the loop corresponds with a unique number on this interval, except for the point where the cut happened, which corresponds simultaneously to both endpoints of the interval, meaning the numbers 0 and 1.
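By the way, in coordinates, the midpoint-and-distance function defined above is tiny. Here's a sketch, treating loop points as (x, y) tuples:

import math

def surface_point(p, q):
    # Map a pair of loop points to 3D: midpoint in the xy plane,
    # with the distance between the points as the height.
    (x1, y1), (x2, y2) = p, q
    return ((x1 + x2) / 2, (y1 + y2) / 2, math.hypot(x2 - x1, y2 - y1))

print(surface_point((1, 0), (1, 0)))  # (1.0, 0.0, 0.0): pairs like x, x land on the loop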
Now the benefit of straightening out this loop like this is that we can start thinking about pairs of points the same way we think about pairs of numbers. Make a y-axis using a second interval, then associate each pair of values on the interval with a single point in this one-by-one square that they span out. Every individual point of this square naturally corresponds to a pair of points on the loop, since its x and y coordinates are each numbers between 0 and 1, which are in turn associated with some unique point on the loop. Remember, we're trying to find a surface that naturally represents the set of all pairs of points on the loop, and this square is the first step to doing that. The problem is that there's some ambiguity when it comes to the edges of the square. Remember, the endpoints 0 and 1 on the interval really correspond to the same point of the loop, as if to say that those endpoints need to be glued together if we're going to faithfully map back to the loop. So all of the points on the left edge of the square, like 0, 0.1, 0, 0.2, on and on and on, really represent the same pair of points on the loop as the corresponding coordinates on the right edge of the square, 1, 0.1, 1, 0.2, on and on and on. So for this square to represent the pairs of points on the loop in a unique way, we need to glue this left edge to the right edge. I'll mark each edge with some arrows to remember how the edges need to be lined up. Likewise, the bottom edge needs to be glued to the top edge, since y coordinates of 0 and 1 really represent the same second point in a given pair of points on the loop. If you bend the square to perform the gluing, first rolling it into a cylinder to glue the left and right edges, then gluing the ends of that cylinder, which represent the top and bottom edges, we get a torus, better known as the surface of a donut. Every individual point on this torus corresponds to a unique pair of points on the loop, and likewise, every pair of points on the loop corresponds to some unique point on this torus. The torus is to pairs of points on the loop what the xy plane is to pairs of points on the real number line. The key property of this association is that it's continuous both ways, meaning if you nudge any point on the torus by just a tiny amount, it corresponds to only a very slight nudge to the pair of points on the loop, and vice versa. So if the torus is the natural shape for ordered pairs of points on the loop, what's the natural shape for unordered pairs? After all, the whole reason we're doing this is to show that two distinct pairs of points on the loop share a midpoint and are the same distance apart. But if we consider a pair A, B to be distinct from B, A, then that would trivially give us two separate pairs which have the same midpoint and distance apart. That's like saying you can always find a rectangle so long as you consider any pair of points to be a rectangle. Not helpful. So let's think about how to represent unordered pairs of points, looking back at our unit square. We need to say that the coordinates 0.2, 0.3 represent the same pair as 0.3, 0.2, or that 0.5, 0.7 really represents the same thing as 0.7, 0.5. And in general, any coordinates x, y have to represent the same thing as y, x. Once again, we capture this idea by gluing points together when they're supposed to represent the same pair, which in this case requires folding the square over diagonally. Now notice this diagonal line, the crease of the fold.
It represents all pairs of points that look like x, x, meaning the pairs which are really just a single point written twice. Right now it's marked with a red line, and you should remember it. It will become important to know where all of these pairs like x, x live. But we still have some arrows to glue together here. We need to glue that bottom edge to the right edge, and the orientation with which we do this is going to be important. Points towards the left of the bottom edge have to be glued to points towards the bottom of the right edge, and points towards the right of the bottom edge have to be glued to points towards the top of the right edge. It's weird to think about, right? Go ahead, pause and ponder this for a moment. The trick, which is kind of clever, is to make a diagonal cut, which we need to remember to glue back in just a moment. After that, we can glue the bottom and the right, like so. But notice the orientation of the arrows here. To glue back what we just cut, we don't simply connect the edges of this rectangle to get a cylinder. We have to make a twist. Doing this in 3D space, the shape we get is a Mobius strip. Isn't that awesome? Evidently, the surface which represents all pairs of unordered points on the loop is the Mobius strip. And notice, the edge of this strip, shown here in red, represents the pairs of points that look like x, x, those which are really just a single point listed twice. The Mobius strip is to unordered pairs of points on the loop what the xy plane is to pairs of real numbers. That totally blew my mind when I first saw it. Now, with this fact that there is a continuous one-to-one association between unordered pairs of points on the loop and individual points on this Mobius strip, we can solve the inscribed rectangle problem. Remember, we had defined this special kind of graph in 3D space, where the loop was sitting in the xy plane. For each pair of points, you consider their midpoint m, which lives on the xy plane, and their distance d apart, and you plot a point which is exactly d units above m. Because of the continuous one-to-one association between pairs of points on the loop and the Mobius strip, this gives us a natural map from the Mobius strip onto this surface in 3D space. For every point on the Mobius strip, consider the pair of points on the loop that it represents, then plug that pair of points into the special function. And here's the key point. When pairs of points on the loop are extremely close together, the output of the function is right above the loop itself, and in the extreme case of pairs of points like x, x, the output of the function is exactly on the loop. Since points on this red edge of the Mobius strip correspond to pairs like x, x, when the Mobius strip is mapped onto this surface, it must be done in such a way that the edge of the strip gets mapped right onto that loop in the xy plane. But if you stand back and think about it for a moment, considering the strange shape of the Mobius strip, there is no way to glue its edge to something two-dimensional without forcing the strip to intersect itself. Since points of the Mobius strip represent pairs of points on the loop, if the strip intersects itself during this mapping, it means that there are at least two distinct pairs of points that correspond to the same output on this surface, which means they share a midpoint and are the same distance apart, which in turn means that they form a rectangle. And that's the proof.
Or at least if you're willing to trust me in saying that you can't glue the edge of a Mobius strip to a plane without forcing it to intersect itself, then that's the proof. This fact is intuitively clear looking at the Mobius strip, but in order to make it rigorous, you basically need to start developing the field of topology. In fact, for any of you who have a topology class in your future, going through the exercise of trying to justify this is a good way to gain an appreciation for why topologists choose to make certain definitions. And I want you to take note of something here. The reason for talking about the Taurus and the Mobius strip was not because we were playing around with construction paper, or because we were daydreaming about deforming a coffee mug. They came up as a natural way to understand pairs of points on a loop, and that's something that we needed to solve a concrete problem.
The paradox of the derivative | Chapter 2, Essence of calculus
The goal here is simple: explain what a derivative is. The thing is, though, there's some subtlety to this topic, and a lot of potential for paradoxes if you're not careful. So kind of a secondary goal is that you have an appreciation for what those paradoxes are and how to avoid them. You see, it's common for people to say that the derivative measures an instantaneous rate of change. But when you think about it, that phrase is actually an oxymoron. Change is something that happens between separate points in time, and when you blind yourself to all but just a single instant, there's not really any room for change. You'll see what I mean more as we get into it, but when you appreciate that a phrase like instantaneous rate of change is actually nonsense, I think it makes you appreciate just how clever the fathers of calculus were in capturing the idea that that phrase is meant to evoke, but with a perfectly sensible piece of math: the derivative. As our central example, I want you to imagine a car that starts at some point A, speeds up, and then slows down to a stop at some point B, a hundred meters away. And let's say it all happens over the course of ten seconds. That's the setup to have in mind as we lay out what the derivative is. We could graph this motion, letting the vertical axis represent the distance traveled, and the horizontal axis represent time. So at each time t, represented with a point somewhere on this horizontal axis, the height of the graph tells us how far the car has traveled in total after that amount of time. It's pretty common to name a distance function like this s of t. I would use the letter d for distance, but that guy already has another full-time job in calculus. Initially, this curve is quite shallow, since the car is slow to start. During that first second, the distance that it travels doesn't really change that much. Then for the next few seconds, as the car speeds up, the distance traveled in a given second gets larger, which corresponds to a steeper slope in this graph. And then towards the end, when it slows down, that curve shallows out again. But if we were to plot the car's velocity in meters per second as a function of time, it might look like this bump. At early times, the velocity is very small. Up to the middle of the journey, the car builds up to some maximum velocity, covering a relatively large distance each second. Then it slows back down towards a speed of zero. And these two curves here are definitely related to each other, right? If you change the specific distance versus time function, you're going to have some different velocity versus time function. And what we want to understand is the specifics of that relationship. Exactly how does velocity depend on a distance versus time function? And to do that, it's worth taking a moment to think critically about what exactly velocity means here. Intuitively, we all might know what velocity at a given moment means. It's just whatever the car's speedometer shows in that moment. And intuitively, it might make sense that the car's velocity should be higher at times when this distance function is steeper, when the car traverses more distance per unit time. But the funny thing is, velocity at a single moment makes no sense. If I show you a picture of a car, just a snapshot in an instant, and ask you how fast it's going, you'd have no way of telling me. What you'd need are two separate points in time to compare.
That way you can compute whatever the change in distance across those times is, divided by the change in time. Right? I mean, that's what velocity is. It's the distance traveled per unit time. So how is it that we're looking at a function for velocity that only takes in a single value of t, a single snapshot in time? It's weird, isn't it? We want to associate individual points in time with a velocity, but actually computing velocity requires comparing two separate points in time. If that feels strange and paradoxical, good. You're grappling with the same conflicts that the fathers of calculus did. And if you want a deep understanding for rates of change, not just for a moving car, but for all sorts of things in science, you're going to need to resolve this apparent paradox. First, I think it's best to talk about the real world, and then we'll go into a purely mathematical one. Let's think about what the car's speedometer is probably doing. At some point, say, three seconds into the journey, the speedometer might measure how far the car goes in a very small amount of time. Maybe the distance traveled between 3 seconds and 3.01 seconds. Then it could compute the speed in meters per second as that tiny distance traversed in meters, divided by that tiny time, 0.01 seconds. That is, a physical car just sidesteps the paradox and doesn't actually compute speed at a single point in time. It computes speed during a very small amount of time. So let's call that difference in time dT, which you might think of in this case as 0.01 seconds. And let's call that resulting difference in distance dS. So the velocity at some point in time is dS divided by dT, the tiny change in distance over the tiny change in time. Graphically, you can imagine zooming in on some point of this distance versus time graph above t equals 3. That dT is a small step to the right, since time is on the horizontal axis. And that dS is the resulting change in the height of the graph, since the vertical axis represents the distance traveled. So dS divided by dT is something you can think of as the rise-over-run slope between two very close points on this graph. Of course, there's nothing special about the value t equals 3. We could apply this to any other point in time, so we consider this expression dS over dT to be a function of t, something where I can give you a time t and you can give me back the value of this ratio at that time, the velocity as a function of time. So for example, when I had the computer draw this bump curve here, the one representing the velocity function, here's what I had the computer actually do. First, I chose a small value for dT, I think in this case it was 0.01. Then I had the computer look at a whole bunch of times t between 0 and 10, and compute the distance function s at t plus dT, and then subtract off the value of that function at t. In other words, that's the difference in the distance traveled between the given time t and the time 0.01 seconds after that. Then you can just divide that difference by the change in time dT, and that gives you velocity in meters per second around each point in time. So with a formula like this, you could give the computer any curve representing any distance function s of t, and it could figure out the curve representing velocity. So now would be a good time to pause, reflect, and make sure that this idea of relating distance to velocity by looking at tiny changes makes sense, because what we're going to do is tackle the paradox of the derivative head on.
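To make that procedure concrete, here is a minimal Python sketch of exactly what was just described: pick a small concrete dT, then estimate the velocity at a bunch of times t as the rise-over-run between nearby points on the distance graph. The specific distance function below is made up, since no formula was pinned down here, beyond being a smooth trip covering 100 meters in 10 seconds:

```python
import math

# Hypothetical distance function: a made-up smooth trip from 0 to 100 meters
# over 10 seconds, slow at the start and the end.
def s(t):
    return 50 * (1 - math.cos(math.pi * t / 10))

dT = 0.01  # the small, concrete change in time

# Rise-over-run between two nearby points on the distance graph.
def velocity(t):
    dS = s(t + dT) - s(t)
    return dS / dT

for t in range(11):
    print(f"t = {t:2d} s   v ~ {velocity(t):6.2f} m/s")
```

Shrinking dT makes this curve a better and better stand-in for the true velocity function.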
This idea of dS over dT, a tiny change in the value of the function s divided by the tiny change in the input that caused it, that's almost what a derivative is. And even though a car's speedometer will actually look at a concrete change in time like 0.01 seconds, and even though the drawing program here is looking at an actual concrete change in time, in pure math, the derivative is not this ratio dS dT for a specific choice of dT. Instead, it's whatever that ratio approaches as your choice for dT approaches 0. Luckily, there is a really nice visual understanding for what it means to ask what this ratio approaches. Remember, for any specific choice of dT, this ratio dS dT is the slope of a line passing through two separate points on the graph, right? Well, as dT approaches 0 and as those two points approach each other, the slope of the line approaches the slope of a line that's tangent to the graph at whatever point t we're looking at. So the true, honest to goodness, pure math derivative is not the rise-over-run slope between two nearby points on the graph. It's equal to the slope of a line tangent to the graph at a single point. Now notice what I'm not saying. I'm not saying that the derivative is whatever happens when dT is infinitely small, whatever that would mean. Nor am I saying that you plug in 0 for dT. This dT is always a finitely small non-zero value. It's just that it approaches 0 is all. I think that's really clever. Even though change in an instant makes no sense, this idea of letting dT approach 0 is a really sneaky backdoor way to talk reasonably about the rate of change at a single point in time. Isn't that neat? It's kind of flirting with the paradox of change in an instant without ever needing to actually touch it. And it comes with such a nice visual intuition too, as the slope of a tangent line to a single point on the graph. And because change in an instant still makes no sense, I think it's healthiest for you to think of this slope not as some instantaneous rate of change, but instead as the best constant approximation for a rate of change around a point. By the way, it's worth saying a couple words on notation here. Throughout this video, I've been using dT to refer to a tiny change in t with some actual size, and dS to refer to the resulting change in s, which again has an actual size. And this is because that's how I want you to think about them. But the convention in calculus is that whenever you're using the letter d like this, you're kind of announcing your intention that eventually you're going to see what happens as dT approaches 0. For example, the honest to goodness pure math derivative is written as dS divided by dT, even though it's technically not a fraction per se, but whatever that fraction approaches for smaller and smaller nudges in t. I think a specific example should help here. You might think that asking about what this ratio approaches for smaller and smaller values would make it much more difficult to compute. But weirdly, it kind of makes things easier. Let's say that you have a given distance versus time function that happens to be exactly t cubed. So after one second, the car has traveled 1 cubed equals 1 meter. After two seconds, it's traveled 2 cubed, or 8 meters, and so on. Now what I'm about to do might seem somewhat complicated, but once the dust settles, it really is simpler. And more importantly, it's the kind of thing that you only ever have to do once in calculus.
Let's say you wanted to compute the velocity, dS divided by dT, at some specific time, like t equals 2. And for right now, let's think of dT as having an actual size, some concrete nudge. We'll let it go to zero in just a bit. The tiny change in distance between two seconds and two plus dT seconds, well that's s of 2 plus dT minus s of 2. And we divide that by dT. Since our function is t cubed, that numerator looks like 2 plus dT cubed minus 2 cubed. And this, this is something we can work out algebraically. Again, bear with me. There's a reason that I'm showing you the details here. When you expand that top, what you get is 2 cubed plus 3 times 2 squared dT plus 3 times 2 times dT squared plus dT cubed, and all of that is minus 2 cubed. Now there are a lot of terms, and I want you to remember that it looks like a mess, but it does simplify. Those two cubed terms, they cancel out. And then everything remaining here has a dT in it. And since there's a dT on the bottom there, many of those cancel out as well. What this means is that the ratio dS divided by dT has boiled down into 3 times 2 squared plus, well, two different terms that each have a dT in them. So if we ask what happens as dT approaches zero, representing the idea of looking at a smaller and smaller change in time, we can just completely ignore those other terms. By eliminating the need to think about a specific dT, we've actually eliminated a lot of the complication in the full expression. So what we're left with is this nice clean 3 times 2 squared. You can think of that as meaning that the slope of a line tangent to this graph at the point t equals 2 is exactly 3 times 2 squared, or 12. And of course, there's nothing special about the time t equals 2. You could more generally say that the derivative of t cubed as a function of t is 3 times t squared. Now take a step back, because that's beautiful. The derivative is this crazy complicated idea. We've got tiny changes in distance over tiny changes in time, but instead of looking at any specific one of those, we're talking about what that thing approaches. I mean, that's a lot to think about. And yet what we've come out with is such a simple expression, 3 times t squared. And in practice, you wouldn't go through all this algebra each time. Knowing that the derivative of t cubed is 3t squared is one of those things that all calculus students learn how to do immediately without having to re-derive it each time. And in the next video, I'm going to show you a nice way to think about this and a couple of the derivative formulas in really nice geometric ways. But the point I want to make by showing you all of the algebraic guts here is that when you consider the tiny change in distance caused by a tiny change in time, for some specific value of dT, you'd have kind of a mess. But when you consider what that ratio approaches, as dT approaches zero, it lets you ignore much of that mess, and it really does simplify the problem. That right there is kind of the heart of why calculus becomes useful. Another reason to show you a concrete derivative like this is that it sets the stage for an example of the kind of paradoxes that come about if you believe too much in the illusion of instantaneous rate of change. So think about the actual car traveling according to this t cubed distance function, and consider its motion at the moment t equals zero, right at the start. Now ask yourself whether or not the car is moving at that time.
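Hold that question for a moment. First, here is a quick numeric check of the algebra above, a small sketch assuming nothing beyond s(t) = t cubed:

```python
# For s(t) = t^3, the ratio dS/dT at t = 2 works out algebraically to
# 12 + 6*dT + dT^2, so the leftover terms visibly die off as dT shrinks.
def ratio(t, dT):
    return ((t + dT) ** 3 - t ** 3) / dT

for dT in [0.1, 0.01, 0.001, 0.0001]:
    print(f"dT = {dT:<8} ratio = {ratio(2, dT):.8f}")
```

Each printed value hugs 12 more closely as dT shrinks, matching the 3 times 2 squared from the algebra. Now, back to the question of whether the car is moving at t equals zero.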
On the one hand, we can compute its speed at that point using the derivative, 3t squared, which for time t equals zero works out to be zero. Visually this means that the tangent line to the graph at that point is perfectly flat, so the car's quote unquote instantaneous velocity is zero, and that suggests that obviously it's not moving. But on the other hand, if it doesn't start moving at time zero, when does it start moving? Really, pause and ponder that for a moment. Is the car moving at time t equals zero? Do you see the paradox? The issue is that the question makes no sense. It references the idea of change in a moment, but that doesn't actually exist. That's just not what the derivative measures. What it means for the derivative of a distance function to be zero is that the best constant approximation for the car's velocity around that point is zero meters per second. For example, if you look at an actual change in time, say between time zero and 0.1 seconds, the car does move. It moves 0.001 meters. That's very small, and importantly, it's very small compared to the change in time, giving an average speed of only 0.01 meters per second. And remember, what it means for the derivative of this motion to be zero is that for smaller and smaller nudges in time, this ratio of meters per second approaches zero. But that's not to say that the car is static. Approximating its movement with a constant velocity of zero is, after all, just an approximation. So whenever you hear people refer to the derivative as an instantaneous rate of change, a phrase which is intrinsically oxymoronic, I want you to think of that as a conceptual shorthand for the best constant approximation for rate of change. In the next couple of videos, I'll be talking more about the derivative, what it looks like in different contexts, how do you actually compute it, why is it useful, things like that, focusing on visual intuition as always.
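To replay the arithmetic from that example, here is a tiny sketch for s(t) = t cubed near t equals zero:

```python
# For s(t) = t^3, the distance covered between t = 0 and t = dT is dT^3 meters,
# so the average speed over that window is dT^2 m/s: tiny, but never zero.
for dT in [0.1, 0.01, 0.001]:
    distance = dT ** 3
    print(f"window of {dT} s: moves {distance:.9f} m, average speed {distance / dT:.6f} m/s")
```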
Limits, L'Hôpital's rule, and epsilon delta definitions | Chapter 7, Essence of calculus
The last several videos have been about the idea of a derivative, and before moving on to integrals, I want to take some time to talk about limits. To be honest, the idea of a limit is not really anything new. If you know what the word approach means, you pretty much already know what a limit is. You could say that it's a matter of assigning fancy notation to the intuitive idea of one value that just gets closer to another. But there actually are a few reasons to devote a full video to this topic. For one thing, it's worth showing how the way that I've been describing derivatives so far lines up with the formal definition of a derivative as it's typically presented in most courses and textbooks. I want to give you a little confidence that thinking in terms of dx and df as concrete non-zero nudges is not just some trick for building intuition. It's actually backed up by the formal definition of a derivative in all of its rigor. I also want to shed light on what exactly mathematicians mean when they say approach, in terms of something called the epsilon delta definition of limits. Then we'll finish off with a clever trick for computing limits called L'Hôpital's rule. So first things first, let's take a look at the formal definition of the derivative. As a reminder, when you have some function f of x, to think about its derivative at a particular input, maybe x equals 2, you start by imagining nudging that input, some little dx away, and looking at the resulting change to the output, df. The ratio, df divided by dx, which can be nicely thought of as the rise-over-run slope between the starting point on the graph and the nudged point, is almost what the derivative is. The actual derivative is whatever this ratio approaches as dx approaches 0. Just to spell out a little of what's meant there, that nudge to the output, df, is the difference between f at the starting input plus dx and f at the starting input, the change to the output caused by dx. To express that you want to find what this ratio approaches as dx approaches 0, you write lim, for limit, with dx arrow 0 below it. Now you'll almost never see terms with a lowercase d like dx inside a limit expression like this. Instead the standard is to use a different variable, something like delta x, or commonly h for whatever reason. The way I like to think of it is that terms with this lowercase d, in the typical derivative expression, have built into them this idea of a limit, the idea that dx is supposed to eventually go to 0. So in a sense, this left-hand side here, df over dx, the ratio we've been thinking about for the past few videos, is just shorthand for what the right-hand side here spells out in more detail, writing out exactly what we mean by df and writing out this limit process explicitly. And this right-hand side here is the formal definition of a derivative, as you would commonly see it in any calculus textbook. And if you'll pardon me for a small rant here, I want to emphasize that nothing about this right-hand side references the paradoxical idea of an infinitely small change. The point of limits is to avoid that. This value h is the exact same thing as the dx I've been referencing throughout the series. It's a nudge to the input of f with some non-zero, finitely small size, like 0.001. It's just that we're analyzing what happens for arbitrarily small choices of h.
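Written out symbolically, that formal definition reads:

$$\frac{df}{dx}(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$

with h playing exactly the role that dx has played so far: a concrete, non-zero nudge whose behavior we study as it approaches 0.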
In fact, the only reason that people introduce a new variable name into this formal definition, rather than just using dx, is to be super extra clear that these changes to the input are just ordinary numbers that have nothing to do with infinitesimals. Because the thing is, there are others who like to interpret this dx as an infinitely small change, whatever that would mean. Or to just say that dx and df are nothing more than symbols that we shouldn't take too seriously. But by now in the series, you know I'm not really a fan of either of those views. I think you can, and should, interpret dx as a concrete, finitely small nudge, just so long as you remember to ask what happens when that thing approaches 0. For one thing, and I hope the past few videos have helped convince you of this, that helps to build stronger intuition for where the rules of calculus actually come from. But it's not just some trick for building intuitions. Everything I've been saying about derivatives with this concrete, finitely small nudge philosophy is just a translation of this formal definition we're staring at right now. So long story short, the big fuss about limits is that they let us avoid talking about infinitely small changes, by instead asking what happens as the size of some small change to our variable approaches 0. And this brings us to goal number 2, understanding exactly what it means for one value to approach another. For example, consider the function 2 plus h cubed minus 2 cubed, all divided by h. This happens to be the expression that pops out when you unravel the definition of a derivative of x cubed, evaluated at x equals 2. But let's just think of it as any old function with an input h. Its graph is this nice continuous-looking parabola, which makes sense, because it's a cubic term divided by a linear term. But actually, if you think about what's going on at h equals 0, plugging that in you would get 0 divided by 0, which is not defined. So really, this graph has a hole at that point, and you have to kind of exaggerate to draw that hole, often with a little empty circle like this. But keep in mind, the function is perfectly well defined for inputs as close to 0 as you want. And wouldn't you agree that as h approaches 0, the corresponding output, the height of this graph, approaches 12? And it doesn't matter which side you come at it from. The limit of this ratio, as h approaches 0, is equal to 12. But imagine that you are a mathematician inventing calculus, and someone skeptically asks you, well, what exactly do you mean by approach? That would be kind of an annoying question. I mean, come on. We all know what it means for one value to get closer to another. So let's start thinking about ways that you might be able to answer that person, completely unambiguously. For a given range of inputs within some distance of 0, excluding the forbidden point 0 itself, look at all of the corresponding outputs, all possible heights of the graph above that range. As the range of input values closes in more and more tightly around 0, that range of output values closes in more and more closely around 12. And importantly, the size of that range of output values can be made as small as you want. As a counterexample, consider a function that looks like this, which is also not defined at 0, but it kind of jumps up at that point. When you approach h equals 0 from the right, the function approaches the value 2. But as you come at it from the left, it approaches 1.
Since there's not a single, clear, unambiguous value that this function approaches as h approaches 0, the limit is simply not defined at that point. One way to think of this is that when you look at any range of inputs around 0 and consider the corresponding range of outputs, as you shrink that input range, the corresponding outputs don't narrow in on any specific value. Instead, those outputs straddle a range that never shrinks smaller than 1, even as you make that input range as tiny as you could imagine. This perspective of shrinking an input range around the limiting point, and seeing whether or not you're restricted in how much that shrinks the output range, leads to something called the epsilon-delta definition of limits. Now I should tell you, you could argue that this is needlessly heavy duty for an introduction to calculus. Like I said, if you know what the word approach means, you already know what a limit means. There's nothing new on the conceptual level here. But this is an interesting glimpse into the field of real analysis. And it gives you a taste for how mathematicians make the intuitive ideas of calculus a little more airtight and rigorous. You've already seen the main idea here. When a limit exists, you can make this output range as small as you want. But when the limit doesn't exist, that output range cannot get smaller than some particular value, no matter how much you shrink the input range around the limiting input. Let's phrase that same idea a little more precisely, in the context of this example, where the limiting value was 12. Think about any distance away from 12, where for some reason it's common to use the Greek letter epsilon to denote that distance. And the intent here is going to be that this distance epsilon is as small as you want. What it means for the limit to exist is that you will always be able to find a range of inputs around our limiting point, some distance delta around zero, so that any input within delta of zero corresponds to an output within a distance epsilon of 12. And the key point here is that that's true for any epsilon, no matter how small, you'll always be able to find the corresponding delta. In contrast, when a limit does not exist, as in this example here, you can find a sufficiently small epsilon, like 0.4, so that no matter how small you make your range around zero, no matter how tiny delta is, the corresponding range of outputs is just always too big. There is no limiting output where everything is within a distance epsilon of that output. So far, this is all pretty theory heavy, don't you think? Limits being used to formally define the derivative, and then epsilons and deltas being used to rigorously define the limit itself. So let's finish things off here with a trick for actually computing limits. For instance, let's say for some reason you were studying the function sine of pi times x divided by x squared minus 1. Maybe this was modeling some kind of dampened oscillation. When you plot a bunch of points to graph this, it looks pretty continuous. But there's a problematic value at x equals 1. When you plug that in, sine of pi is, well, 0, and the denominator also comes out to 0. So the function is actually not defined at that input, and the graph should really have a hole there. This also happens, by the way, at x equals negative 1, but let's just focus our attention on a single one of these holes for now. The graph certainly does seem to approach a distinct value at that point, wouldn't you say?
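Before chasing that value down, here is a rough computational sketch of the epsilon-delta game described a moment ago, applied to the earlier example whose limit at h equals 0 was 12. The function and helper names are made up for illustration, and checking finitely many sample points is only a sanity check, not a proof:

```python
# f(h) = ((2 + h)^3 - 2^3) / h, defined for every h except 0, with limit 12.
def f(h):
    return ((2 + h) ** 3 - 2 ** 3) / h

# For a given epsilon, search for a delta such that every sampled input
# within delta of 0 (excluding 0 itself) lands within epsilon of the limit.
def find_delta(epsilon, limit=12.0):
    delta = 1.0
    while delta > 1e-12:
        samples = [k * delta / 100 for k in range(-100, 101) if k != 0]
        if all(abs(f(h) - limit) < epsilon for h in samples):
            return delta
        delta /= 2
    return None

for eps in [1.0, 0.1, 0.01]:
    print(f"epsilon = {eps}: delta = {find_delta(eps)}")
```

No matter how small an epsilon you feed in, the search keeps turning up a workable delta, which is exactly what it means for this limit to exist.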
So you might ask, how exactly do you find what output this approaches as x approaches 1, since you can't just plug in 1? Well, one way to approximate it would be to plug in a number that's just really, really close to 1, like 1.0001. Doing that, you'd find that this should be a number around negative 1.57. But is there a way to know precisely what it is? Some systematic process to take an expression like this one that looks like 0 divided by 0 at some input, and ask, what is its limit as x approaches that input? After limits so helpfully let us write the definition of the derivative, derivatives can actually come back here and return the favor to help us evaluate limits. Let me show you what I mean. Here's what the graph of sine of pi times x looks like. And here's what the graph of x squared minus 1 looks like. That's kind of a lot to have up on the screen, but just focus on what's happening around x equals 1. The point here is that sine of pi times x and x squared minus 1 are both 0 at that point. They both cross the x-axis. In the same spirit as plugging in a specific value near 1, like 1.0001, let's zoom in on that point and consider what happens just a tiny nudge dx away from it. The value of sine of pi times x is bumped down, and the size of that nudge, which was caused by the nudge dx to the input, is what we might call d sine of pi x. And from our knowledge of derivatives using the chain rule, that should be around cosine of pi times x times pi times dx. Since the starting value was x equals 1, we plug in x equals 1 to that expression. In other words, the amount that this sine of pi times x graph changes is roughly proportional to dx, with a proportionality constant equal to cosine of pi times pi. And cosine of pi, if we think back to our trig knowledge, is exactly negative 1. So we can write this whole thing as negative pi times dx. Similarly, the value of the x squared minus 1 graph changes by some d x squared minus 1. And taking the derivative, the size of that nudge should be 2x times dx. Again, we were starting at x equals 1, so we plug in x equals 1 to that expression, meaning the size of that output nudge is about 2 times 1 times dx. What this means is that for values of x which are just a tiny nudge dx away from 1, the ratio sine of pi x divided by x squared minus 1 is approximately negative pi times dx divided by 2 times dx. The dx's here cancel out, so what's left is negative pi over 2. And importantly, those approximations get more and more accurate for smaller and smaller choices of dx, right? So this ratio, negative pi over 2, actually tells us the precise limiting value as x approaches 1. And remember, what that means is that the limiting height on our original graph is evidently exactly negative pi over 2. Now what happened there is a little subtle. So I want to go through it again, but this time a little more generally. Instead of these two specific functions, which are both equal to 0 at x equals 1, think of any two functions, f of x and g of x, which are both 0 at some common value, x equals a. The only constraint is that these have to be functions where you're able to take a derivative of them at x equals a, which means that they each basically look like a line when you zoom in close enough to that value. Now even though you can't compute f divided by g at this trouble point, since both of them equal 0, you can ask about this ratio for values of x really, really close to a, the limit as x approaches a.
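As a quick sanity check on that computation, here is a small sketch in Python, assuming nothing beyond the standard math module:

```python
import math

def f(x):  # the numerator
    return math.sin(math.pi * x)

def g(x):  # the denominator
    return x ** 2 - 1

# Approximation: plug in a value just shy of the trouble point x = 1.
print(f(1.0001) / g(1.0001))          # about -1.5707

# Ratio of the derivatives at x = 1, as derived above.
df = math.pi * math.cos(math.pi * 1)  # derivative of sin(pi x) at x = 1 is -pi
dg = 2 * 1                            # derivative of x^2 - 1 at x = 1 is 2
print(df / dg)                        # -pi/2, about -1.5708
```

With that confirmed numerically, back to the general pair of functions f and g.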
And it's helpful to think of those nearby inputs as just a tiny nudge dx away from a. The value of f at that nudge point is approximately its derivative, df over dx, evaluated at a, times dx. Likewise, the value of g at that nudge point is approximately the derivative of g, evaluated at a, times dx. So near that trouble point, the ratio between the outputs of f and g is actually about the same as the derivative of f at a, times dx, divided by the derivative of g at a, times dx. Those dx's cancel out, so the ratio of f and g near a is about the same as the ratio between their derivatives. Because each of those approximations gets more and more accurate for smaller and smaller nudges, this ratio of derivatives gives the precise value for the limit. This is a really handy trick for computing a lot of limits. Whenever you come across some expression that seems to equal 0 divided by 0 when you plug in some particular input, just try taking the derivative of the top and bottom expressions and plugging in that same trouble input. This clever trick is called L'Hôpital's rule. Interestingly, it was actually discovered by Johann Bernoulli, but L'Hôpital was this wealthy dude who essentially paid Bernoulli for the rights to some of his mathematical discoveries. Academia was weird back then, but hey, in a very literal way, it pays to understand these tiny nudges. Now, right now, you might be remembering that the definition of a derivative for a given function comes down to computing the limit of a certain fraction that looks like 0 divided by 0. So you might think that L'Hôpital's rule could give us a handy way to discover new derivative formulas. But that would actually be cheating, since presumably you don't know what the derivative of the numerator here is. When it comes to discovering derivative formulas, something that we've been doing a fair amount in this series, there is no systematic plug-and-chug method. But that's a good thing. Whenever creativity is needed to solve problems like these, it's a good sign that you're doing something real, something that might give you a powerful tool to solve future problems. And speaking of powerful tools, up next I'm going to be talking about what an integral is, as well as the fundamental theorem of calculus. And this is another example of where limits can be used to help give a clear meaning to a pretty delicate idea that flirts with infinity. As you know, most support for this channel comes through Patreon, and the primary perk for patrons is early access to future series like this one, where the next one is going to be on probability. But for those of you who want a more tangible way to flag that you're part of the community, there is also a small 3blue1brown store. Links on the screen and in the description. I'm still debating whether or not to make a preliminary batch of plushie pi creatures. It kind of depends on how many viewers seem interested in the store more generally, but let me know in the comments what other kinds of things you'd like to see in there.
Abstract vector spaces | Chapter 16, Essence of linear algebra
I'd like to revisit a deceptively simple question that I asked in the very first video of this series. What are vectors? Is a two-dimensional vector, for example, fundamentally an arrow on a flat plane that we can describe with coordinates for convenience, or is it fundamentally that pair of real numbers, which is just nicely visualized as an arrow on a flat plane? Or are both of these just manifestations of something deeper? On the one hand, defining vectors as primarily being a list of numbers feels clear-cut and unambiguous. It makes things like four-dimensional vectors or 100-dimensional vectors sound like real, concrete ideas that you can actually work with. Whereas otherwise, an idea like four dimensions is just a vague geometric notion that's difficult to describe without waving your hands a bit. But on the other hand, a common sensation for those who actually work with linear algebra, especially as you get more fluent with changing your basis, is that you're dealing with a space that exists independently from the coordinates that you give it, and that coordinates are actually somewhat arbitrary, depending on what you happen to choose as your basis vectors. Other topics in linear algebra, like determinants and eigenvectors, seem indifferent to your choice of coordinate systems. The determinant tells you how much a transformation scales areas. And eigenvectors are the ones that stay on their own span during a transformation. But both of these properties are inherently spatial, and you can freely change your coordinate system without changing the underlying values of either one. But if vectors are not fundamentally lists of real numbers, and if their underlying essence is something more spatial, that just begs the question of what mathematicians mean when they use a word like space, or spatial. To build up to where this is going, I'd actually like to spend the bulk of this video talking about something which is neither an arrow nor a list of numbers, but also has vector-ish qualities. Functions. You see, there's a sense in which functions are actually just another type of vector. In the same way that you can add two vectors together, there's also a sensible notion for adding two functions, f and g, to get a new function, f plus g. It's one of those things where you kind of already know what it's going to be, but actually phrasing it is a mouthful. The output of this new function at any given input, like negative four, is the sum of the outputs of f and g when you evaluate them each at that same input, negative four. Or more generally, the value of the sum function at any given input x is the sum of the values f of x plus g of x. This is pretty similar to adding vectors coordinate by coordinate. It's just that there are, in a sense, infinitely many coordinates to deal with. Similarly, there's a sensible notion for scaling a function by a real number. It's just scale all of the outputs by that number. And again, this is analogous to scaling a vector coordinate by coordinate, it just feels like there's infinitely many coordinates. Now given that the only thing vectors can really do is get added together or scaled, it feels like we should be able to take the same useful constructs and problem solving techniques of linear algebra that were originally thought about in the context of arrows in space, and apply them to functions as well. For example, there's a perfectly reasonable notion of a linear transformation for functions, something that takes in one function and turns it into another.
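As a quick aside before getting to transformations of functions, the adding and scaling just described is easy to play with in code. This is only an illustrative sketch; the helper names are made up:

```python
# Functions add and scale like vectors: output by output,
# the way coordinates add and scale entry by entry.

def add_functions(f, g):
    # (f + g)(x) is defined as f(x) + g(x) at every input x.
    return lambda x: f(x) + g(x)

def scale_function(c, f):
    # (c * f)(x) scales every output by c.
    return lambda x: c * f(x)

f = lambda x: x ** 2
g = lambda x: 3 * x + 5

h = add_functions(f, g)
print(h(-4))                    # f(-4) + g(-4) = 16 + (-7) = 9
print(scale_function(2, f)(3))  # 2 * f(3) = 2 * 9 = 18
```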
One familiar example comes from calculus, the derivative. It's something which transforms one function into another function. Sometimes in this context you'll hear these called operators instead of transformations, but the meaning is the same. A natural question you might want to ask is what it means for a transformation of functions to be linear. The formal definition of linearity is relatively abstract and symbolically driven compared to the way that I first talked about it in chapter 3 of this series. But the reward of abstractness is that we'll get something general enough to apply to functions as well as arrows. A transformation is linear if it satisfies two properties, commonly called additivity and scaling. Additivity means that if you add two vectors, V and W, then apply a transformation to their sum, you get the same result as if you added the transformed versions of V and W. The scaling property is that when you scale a vector V by some number, then apply the transformation, you get the same ultimate vector as if you scaled the transformed version of V by that same amount. The way you'll often hear this described is that linear transformations preserve the operations of vector addition and scalar multiplication. The idea of gridlines remaining parallel and evenly spaced that I've talked about in past videos is really just an illustration of what these two properties mean in the specific case of points in 2D space. One of the most important consequences of these properties, which makes matrix vector multiplication possible, is that a linear transformation is completely described by where it takes the basis vectors. Since any vector can be expressed by scaling and adding the basis vectors in some way, finding the transformed version of a vector comes down to scaling and adding the transformed versions of the basis vectors in that same way. As you'll see in just a moment, this is as true for functions as it is for arrows. For example, calculus students are always using the fact that the derivative is additive and has the scaling property, even if they haven't heard it phrased that way. If you add two functions, then take the derivative, it's the same as first taking the derivative of each one separately, then adding the result. Similarly, if you scale a function, then take the derivative, it's the same as first taking the derivative, then scaling the result. To really drill in the parallel, let's see what it might look like to describe the derivative with a matrix. This will be a little tricky, since function spaces have a tendency to be infinite dimensional, but I think this exercise is actually quite satisfying. Let's limit ourselves to polynomials, things like x squared plus 3x plus 5, or 4x to the 7th minus 5x squared. Each of the polynomials in our space will only have finitely many terms. But the full space is going to include polynomials with arbitrarily large degree. The first thing we need to do is give coordinates to this space, which requires choosing a basis. Since polynomials are already written down as the sum of scaled powers of the variable x, it's pretty natural to just choose pure powers of x as the basis functions. In other words, our first basis function will be the constant function, b0 of x equals 1. The second basis function will be b1 of x equals x, then b2 of x equals x squared, then b3 of x equals x cubed, and so on. The role that these basis functions serve will be similar to the roles of i hat, j hat, and k hat in the world of vectors as arrows.
Since our polynomials can have arbitrarily large degree, this set of basis functions is infinite. But that's okay. It just means that when we treat our polynomials as vectors, they're going to have infinitely many coordinates. A polynomial like x squared plus 3x plus 5, for example, would be described with the coordinates 5, 3, 1, then infinitely many zeros. You'd read this as saying that it's 5 times the first basis function, plus 3 times that second basis function, plus 1 times the third basis function, and then none of the other basis functions should be added from that point on. The polynomial 4x to the seventh minus 5x squared would have the coordinates 0, 0, negative 5, 0, 0, 0, 0, 4, then an infinite string of zeros. In general, since every individual polynomial has only finitely many terms, its coordinates will be some finite string of numbers with an infinite tail of zeros. In this coordinate system, the derivative is described with an infinite matrix that's mostly full of zeros, but which has the positive integers counting down on this offset diagonal. I'll talk about how you could find this matrix in just a moment, but the best way to get a feel for it is to just watch it in action. Take the coordinates representing the polynomial x cubed plus 5x squared plus 4x plus 5. And put those coordinates on the right of the matrix. The only term that contributes to the first coordinate of the result is 1 times 4, which means the constant term in the result will be 4. This corresponds to the fact that the derivative of 4x is the constant 4. The only term contributing to the second coordinate of the matrix vector product is 2 times 5, which means the coefficient in front of x in the derivative is 10. That one corresponds to the derivative of 5x squared. Similarly, the third coordinate in the matrix vector product comes down to taking 3 times 1. This one corresponds to the derivative of x cubed being 3x squared. And after that, it will be nothing but 0s. What makes this possible is that the derivative is linear. And for those of you who like to pause and ponder, you could construct this matrix by taking the derivative of each basis function and putting the coordinates of the results in each column. So surprisingly, matrix vector multiplication and taking a derivative, which at first seemed like completely different animals, are both just really members of the same family. In fact, most of the concepts I've talked about in this series with respect to vectors as arrows in space, things like the dot product or eigenvectors, have direct analogs in the world of functions, though sometimes they go by different names, things like inner product or eigenfunction. So back to the question of what is a vector. The point I want to make here is that there are lots of vector-ish things in math. As long as you're dealing with a set of objects where there's a reasonable notion of scaling and adding, whether that's a set of arrows in space, lists of numbers, functions, or whatever other crazy thing you choose to define, all of the tools developed in linear algebra regarding vectors, linear transformations, and all that stuff, should be able to apply. Take a moment to imagine yourself right now as a mathematician developing the theory of linear algebra. You want all of the definitions and discoveries of your work to apply to all of the vector-ish things in full generality, not just to one specific case. These sets of vector-ish things, like arrows or lists of numbers or functions, are called vector spaces. 
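For anyone who wants to see that derivative matrix in action, here is a minimal sketch of a finite truncation of it, using numpy. The cutoff degree is arbitrary; the real matrix described above is infinite:

```python
import numpy as np

n = 5  # work with polynomials up to degree n - 1
D = np.zeros((n, n))
for k in range(1, n):
    D[k - 1, k] = k  # derivative of x^k is k * x^(k-1)

# Coordinates of x^3 + 5x^2 + 4x + 5 in the basis 1, x, x^2, ...
# with the constant term first.
p = np.array([5, 4, 5, 1, 0])

print(D @ p)  # [4, 10, 3, 0, 0], the coordinates of 3x^2 + 10x + 4
```

The columns are exactly what the construction above suggests: column k holds the coordinates of the derivative of the k-th basis function. Now, back to vector spaces in general.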
And what you as the mathematician might want to do is say, hey everyone, I don't want to have to think about all the different types of crazy vector spaces that y'all might come up with. So what you do is establish a list of rules that vector addition and scaling have to abide by. These rules are called axioms, and in the modern theory of linear algebra, there are eight axioms that any vector space must satisfy if all of the theory and constructs that we've discovered are going to apply. I'll leave them on the screen here for anyone who wants to pause and ponder, but basically it's just a checklist to make sure that the notions of vector addition and scalar multiplication do the things that you'd expect them to do. These axioms are not so much fundamental rules of nature as they are an interface between you, the mathematician discovering results, and other people who might want to apply those results to new sorts of vector spaces. If, for example, someone defines some crazy type of vector space, like the set of all pi creatures with some definition of adding and scaling pi creatures, these axioms are like a checklist of things that they need to verify about their definitions before they can start applying the results of linear algebra. That way you, as the mathematician, never have to think about all the possible crazy vector spaces people might define. You just have to prove your results in terms of these axioms, so anyone whose definitions satisfy those axioms can happily apply your results, even if you never thought about their situation. As a consequence, you tend to phrase all of your results pretty abstractly, which is to say only in terms of these axioms, rather than centering on a specific type of vector, like arrows in space or functions. For example, this is why just about every textbook you'll find will define linear transformations in terms of additivity and scaling, rather than talking about grid lines remaining parallel and evenly spaced. Even though the latter is more intuitive, and at least in my view, more helpful for first-time learners, even if it is specific to one situation. So the mathematician's answer to what are vectors is to just ignore the question. In the modern theory, the form that vectors take doesn't really matter. Arrows, lists of numbers, functions, pi creatures, really, it can be anything, so long as there is some notion of adding and scaling vectors that follows these rules. It's like asking what the number 3 really is. Whenever it comes up concretely, it's in the context of some triplet of things, but in math, it's treated as an abstraction for all possible triplets of things, and lets you reason about all possible triplets using a single idea. Same goes with vectors, which have many embodiments, but math abstracts them all into a single intangible notion of a vector space. But as anyone watching this series knows, I think it's better to begin reasoning about vectors in a concrete, visualizable setting, like 2D space, with arrows rooted at the origin. But, as you learn more linear algebra, know that these tools apply much more generally, and that this is the underlying reason why textbooks and lectures tend to be phrased, well, abstractly. So with that, folks, I think I'll call it an end to this Essence of linear algebra series. If you've watched and understood the videos, I really do believe that you have a solid foundation in the underlying intuitions of linear algebra.
This is not the same thing as learning the full topic, of course; that's something that can only really come from working through problems. But the learning you do moving forward could be substantially more efficient if you have all the right intuitions in place. So have fun applying those intuitions, and best of luck with your future learning.
Cross products | Chapter 10, Essence of linear algebra
Last video I talked about the dot product, showing both the standard introduction to the topic, as well as a deeper view of how it relates to linear transformations. I'd like to do the same thing for cross products, which also have a standard introduction, along with a deeper understanding in the light of linear transformations, but this time I'm dividing it into two separate videos. Here, I'll try to hit the main points that students are usually shown about the cross product. And in the next video, I'll be showing a view which is less commonly taught, but really satisfying when you learn it. We'll start in two dimensions. If you have two vectors, v and w, think about the parallelogram that they span out. What I mean by that is that if you take a copy of v and move its tail to the tip of w, and you take a copy of w and move its tail to the tip of v, the four vectors now on the screen enclose a certain parallelogram. The cross product of v and w, written with the x-shaped multiplication symbol, is the area of this parallelogram. Well, almost. We also need to consider orientation. Basically, if v is on the right of w, then v cross w is positive and equal to the area of the parallelogram. But if v is on the left of w, then the cross product is negative, namely the negative area of that parallelogram. Notice, this means that order matters. If you swapped v and w, instead taking w cross v, the cross product would become the negative of whatever it was before. The way I always remember the ordering here is that when you take the cross product of the two basis vectors in order, i hat cross j hat, the result should be positive. In fact, the order of your basis vectors is what defines orientation. So, since i hat is on the right of j hat, I remember that v cross w has to be positive whenever v is on the right of w. So, for example, with the vectors shown here, I'll just tell you that the area of that parallelogram is seven. And since v is on the left of w, the cross product should be negative. So, v cross w is negative seven. But of course, you want to be able to compute this without someone telling you the area. This is where the determinant comes in. So, if you didn't see chapter five of this series where I talk about the determinant, now would be a really good time to go take a look. Even if you did see it, but it was a while ago, I'd recommend taking another look just to make sure those ideas are fresh in your mind. For the 2D cross product, v cross w, what you do is you write the coordinates of v as the first column of a matrix, and you take the coordinates of w and make them the second column, then you just compute the determinant. This is because a matrix whose columns represent v and w corresponds with a linear transformation that moves the basis vectors i hat and j hat to v and w. The determinant is all about measuring how areas change due to a transformation. And the prototypical area that we look at is the unit square resting on i hat and j hat. After the transformation, that square gets turned into the parallelogram that we care about. So, the determinant, which generally measures the factor by which areas are changed, gives the area of this parallelogram, since it evolved from a square that started with area one. What's more, if v is on the left of w, it means that orientation was flipped during that transformation, which is what it means for the determinant to be negative. As an example, let's say v has coordinates negative 3, 1, and w has coordinates 2, 1.
The determinant of the matrix with those coordinates as columns is negative 3 times 1, minus 2 times 1, which is negative 5. So evidently, the area of the parallelogram that they define is 5, and since v is on the left of w, it should make sense that this value is negative. As with any new operation you learn, I'd recommend playing around with this notion a bit in your head, just to get kind of an intuitive feel for what the cross product is all about. For example, you might notice that when two vectors are perpendicular, or at least close to being perpendicular, the cross product is larger than it would be if they were pointing in very similar directions. Because the area of that parallelogram is larger when the sides are closer to being perpendicular. Something else you might notice is that if you were to scale up one of those vectors, perhaps multiplying v by 3, then the area of that parallelogram is also scaled up by a factor of 3. So what this means for the operation is that 3v cross w will be exactly 3 times the value of v cross w. Now, even though all of this is a perfectly fine mathematical operation, what I just described is technically not the cross product. The true cross product is something that combines two different 3d vectors to get a new 3d vector. Just as before, we're still going to consider the parallelogram defined by the two vectors that we're crossing together. And the area of this parallelogram is still going to play a big role. To be concrete, let's say that the area is 2.5 for the vectors shown here. But as I said, the cross product is not a number. It's a vector. This new vector's length will be the area of that parallelogram, which in this case is 2.5. And the direction of that new vector is going to be perpendicular to the parallelogram. But which way, right? I mean, there are two possible vectors with length 2.5 that are perpendicular to a given plane. This is where the right hand rule comes in. Point the forefinger of your right hand in the direction of v, then stick out your middle finger in the direction of w. Then, when you point up your thumb, that's the direction of the cross product. For example, let's say that v was a vector with length 2 pointing straight up in the z direction. And w is a vector with length 2 pointing in the pure y direction. The parallelogram that they define in this simple example is actually a square, since they're perpendicular and have the same length. And the area of that square is 4. So their cross product should be a vector with length 4. Using the right hand rule, their cross product should point in the negative x direction. So the cross product of these two vectors is negative 4 times i hat. For more general computations, there is a formula that you could memorize if you wanted. But it's common and easier to instead remember a certain process involving the 3D determinant. Now, this process looks truly strange at first. You write down a 3D matrix where the second and third columns contain the coordinates of v and w. But for that first column, you write the basis vectors i hat, j hat and k hat. Then, you compute the determinant of this matrix. The silliness is probably clear here. What on earth does it mean to put in a vector as the entry of a matrix? Students are often told that this is just a notational trick. When you carry out the computations as if i hat, j hat and k hat were numbers, then you get some linear combination of those basis vectors.
And the vector defined by that linear combination, students are told to just believe, is the unique vector perpendicular to v and w whose magnitude is the area of the appropriate parallelogram and whose direction obeys the right hand rule. And, sure, in some sense, this is just a notational trick. But there is a reason for doing it. It's not just a coincidence that the determinant is once again important. And putting the basis vectors in those slots is not just a random thing to do. To understand where all of this comes from, it helps to use the idea of duality that I introduced in the last video. This concept is a little bit heavy though, so I'm putting it in a separate follow on video for any of you who are curious to learn more. Arguably, it falls outside the essence of linear algebra. The important part here is to know what that cross product vector geometrically represents. So if you want to skip that next video, feel free. But for those of you who are willing to go a bit deeper and who are curious about the connection between this computation and the underlying geometry, the ideas that I'll talk about in the next video are just a really elegant piece of math.
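For anyone who wants to play with these operations concretely, here is a small sketch of both cross products in Python. The 3D formula below is the component form that the determinant trick expands into, written out directly rather than via any library routine:

```python
# 2D cross product: the determinant of the matrix with v and w as columns,
# which is the signed area of the parallelogram they span.
def cross_2d(v, w):
    return v[0] * w[1] - v[1] * w[0]

# 3D cross product: the component formula that the i-hat, j-hat, k-hat
# determinant expansion produces.
def cross_3d(v, w):
    return (v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

print(cross_2d((-3, 1), (2, 1)))       # -5, matching the example above
print(cross_3d((0, 0, 2), (0, 2, 0)))  # (-4, 0, 0), i.e. negative 4 times i hat
```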
Q&A with Grant (3blue1brown), windy walk edition
Who's your favorite mathematician? I always find favorite questions kind of silly, but I will tell you about two different mathematicians that have been on my mind lately. So a month or two ago, I watched this documentary about Claude Shannon called The Bit Player, and then that prompted me to read a little bit more about Shannon in books like The Idea Factory or James Gleick's The Information. And this guy was super interesting. I almost think of him as kind of a mixture between Donald Knuth and Adam Savage. So on the one hand, he's the father of information theory. He wrote this absolutely seminal, super important paper about transmitting information and storing it and encoding it and things like that, that really gave birth to the Information Age and a lot of the foundations for computer science. To give you some indication of just how important this work was, there's this one science fiction novel where all of the years, instead of being measured BC and AD with respect to the birth of Christ, everything is before Shannon or after Shannon. The view is that the foundation of information theory is so pivotal in any kind of civilization that it's worth setting your entire timeline around that point. So that's Shannon, super influential on that one hand. But on the other hand, he was incredibly playful and very willing to kind of pour his life into totally useless toys. So he spent a lot of time building different types of unicycles. He built this flaming trumpet for his son. He built this automatic juggling machine, this mechanical calculator that would do computations in Roman numerals, just wholly, completely useless things done for the purpose of play and nothing else. What I think you have to wonder is if this playfulness is wholly incidental and unrelated to the important work that he did, or if there's something that's necessary about it, right? If it's required that the kind of personality who's coming up with something totally new, like original enough that it is a foundational moment for a new kind of science, almost has to be the kind of personality that's also willing to make gizmos and toys for his own pleasure that other people look at and see as useless, if somewhat fun. Another mathematician I've been thinking about for kind of related reasons is Edward Lorenz, who was maybe not the father of chaos theory, but certainly one of the fathers. And he wasn't even a mathematician, actually. He was a meteorologist, but really he was a mathematician to his core, but wearing the external veneer of a meteorologist. And he's maybe most famous for this system of equations, it's just three variables, three unknowns. That's one of the earliest examples of something called a strange attractor. But it came about when he was studying weather patterns, right, and weather is famously hard to understand, you know, at the extreme, there's a huge number of unknown variables. If you go all the way to one extreme, you could say that every atom in the atmosphere has six degrees of freedom. So you could have this truly ungodly monstrous system of equations that no one could ever hope to analyze. So inevitably you have to do something to simplify things down, for pragmatism's sake. And a lot of people, I think, thought that the reason weather is hard to predict is simply because of the number of variables at play.
And one of the important contributions from Lorenz was to be able to simplify down the unpredictability of it into a surprisingly simple set of equations, to say, hey, the fact that it's hard to predict, when you make some kind of measurement with a little bit of error around it and that error kind of propagates, that's not just because of the number of variables at play. You can have that kind of chaos arising from surprisingly simple circumstances. So in order to do this, right, you have to have the strange mixture again of a kind of pragmatism with a kind of more pure mathematician instinct. And I wonder if this is ever something that a pure mathematician could have done if he wasn't grounded in a problem like weather, where he was doing all of these computational models and really just in the weeds of, you know, convection or whatever else it might be. So in the same way that Shannon kind of represents this contrast between playfulness and pragmatism, in my mind Lorenz sort of represents this contrast between, like, applied science and then pure math. And just as with Shannon, the father of information theory, it doesn't seem like a coincidence that we find, in the father of another science very instrumental to our modern age, chaos theory, this sort of middle ground between those two. Now, obviously there's a lot of selection bias at play here, right, like most people that find themselves somewhere between pure and applied, or somewhere between pragmatic and playful, don't give rise to completely new fields of study. But I do have to wonder if the novelty required to father a new field necessitates breaking the norm and not falling into one clear-cut path. Some of you might be wondering if every question in this Q&A will have me pontificating for many minutes. But I do have a broader point here, which is that I think a number of people watching this channel, especially those on the younger end, are clearly into pure math, right? They're curious about it. I wouldn't be surprised if a lot of them are contemplating becoming mathematicians. And for that sort of person, I kind of want to put out the question of where do you think there's more value? Is it if all of these people with this inclination towards pure math go into that field, and that's where they get to collaborate with a lot of folks who are like-minded, who think like them, they resonate on the same wavelength, maybe they amplify each other's strengths? Or is there more value if each one of them kind of gets dispersed into some completely different field? And each one is not so much a mathematician as a mathematician plus x, right? They're a mathematician plus a builder, plus a meteorologist, plus whatever have you. And they take those instincts of playfulness towards puzzles, or desire to abstract the simplest form of a specific kind of hardness, and basically bring that mathematician instinct to something totally different, right? Which of those worlds has more value? A teenage kid walks up to you and says they hate math. What do you tell or show them? I get the impression that the spirit of this question is for me to answer with some piece of math that's so enticing that even the most ardent of math haters would have to bring about some kind of affection for the subject. But the thing is, if someone comes to you and they admit that they don't like math or that they hate math, I don't think you should show them a piece of math, even if you do want to convert them.
It's a little bit like if, let's say you really love coffee, right? And someone comes to you and they say, I just hate coffee. But you like it a lot and you're kind of this snob and you really want to turn them on to it. The way to do it is not to try to find the world's best cup of coffee and then give it to them. Because no matter what, it's going to taste like dirt. They don't like it. They're not addicted to it at this point. Instead, if you want to turn someone on to it, we have a couple options. One of them can be to first breed necessity, and then from there maybe get an addiction. So let's say it's a student and they need to stay up all night to finish some kind of paper. And so they begrudgingly have to take some caffeine, and after that, a kind of culinary Stockholm syndrome kicks in. And through the addiction, they come to kind of like this substance. That's not the greatest, but that's a way to do it. The analog in the world of math might be trying to find something where math is the drug necessary for someone to accomplish what they want to. You know, maybe they're really into video games, so they want to make their own video game. And often in writing the software for that, you have to use geometry, trigonometry, maybe bits of calculus. It kind of depends on the game that you're building. So you breed the necessity. But in either case, I think the key is to just try to build up familiarity, to be exposed to math in a lot of different contexts, and importantly, for it to not ever be traumatic when that happens. I think the reason a lot of people say that they hate math is because their only exposure to it is tightly linked with a sense of failure, right? It's a hard subject, but importantly, it's a very cumulative subject. So if you miss one little part, it looks like you're failing in all of the later parts, even if you would have been really good at those other parts without that missing link. So instead, if the math comes about in a less judgmental context, something where there's no grading, you know, maybe it's just playing with puzzles with friends, or it's writing that bit of software for the video game, or whatever it is where you're gaining exposure without that trauma, I think after enough time, you're just going to like the subject. Because I think if we're all really honest with ourselves and we look back on why we like the things that we like, it's often because someone in our life liked it. The time we were spending with them brought us to spending time with that thing, and then it just stuck with us. What advice would you give to a math enthusiast suffering from anxiety disorder, clinical depression, and ADHD? I'm not entirely sure how the phrase math enthusiast in the question changes the answer, but I will say that for anything health related, and this goes double for mental health, definitely seek professional help earlier than you think you need to. So don't be shy about finding a therapist or asking your doctor about these kinds of things. One thing I will say as a professional YouTuber, I do think it's probably healthy for people to spend less time on the internet. You know, I always get this uneasy feeling if I hear about someone binge watching all of my videos or something like that. Because on the one hand, I often do measure success in terms of how much time people are watching the videos. It's some indication of how much it's reaching the world, how deeply people want to engage with it and all of that.
But on the other hand, if I think of someone kind of staying up all night to watch YouTube, or even just sitting in their house all day to watch YouTube, no matter what that video is, no matter how enlightening or how educational it seems to be, that's got to be bad for you and unhealthy in comparison to occasionally, you know, going outside or spending time with real people in the world, or, if you want to engage with math, doing it in a more physical, social kind of way. Is Ben, Ben and Blue still a thing? Ah, yes, the podcast with the Bens. No, I don't think it'll continue. I don't think that's a bad thing. I think we had some really good conversations about education in there. I'm not sure how much more we necessarily had to say. And I don't think all projects should live forever. I think there's something kind of nice about doing something a little different, a little experimental, and then being willing to just walk away from it if it seems better to spend time on, say, animated math videos or whatever else your main occupation might be. Favorite podcasts? Alright, so there's three different podcasts where I get actively excited if I see something new in the feed. And it's unfortunate, because each one doesn't upload super regularly. The first one is The Anthropocene Reviewed by John Green. So as many of you probably know, John Green is an excellent writer. In the podcast, he kind of reviews everyday things or aspects of human existence that range from deeply philosophical ones to very mundane ones, like the Taco Bell breakfast menu. It's maybe 70% personal memoir, and you get this view into his own very thoughtful but also very twisted and tortured mind. So, highly recommended. The second, which I think is super popular, so it certainly doesn't need me saying this, is Hardcore History by Dan Carlin. He often likes to go deep into the human experience of certain wars or certain very violent times in human history that shaped the direction of things. And he does a good job just giving a super abundant amount of context, maybe way too much context, which is why these episodes sometimes last as long as six hours and can run to many, many-part series. But sometimes you like that in a podcast. And then maybe foremost, one that I like a lot, and this will just come as no surprise, is the Numberphile podcast. I'm slightly biased because I was on it, but I actually think I'm the least interesting guest there. It's super interesting to hear from different mathematicians what their story is, how they relate to the subject, what motivates them, and of course Brady Haran is a phenomenal interviewer. So it's just perfect for anyone who's into math in even the slightest of ways. I'll have both the responsibility and opportunity to best introduce the world of mathematics to curious and intelligent minds before they are shaped by the antiquated, disempowering, and demotivational education system of today. What would you do? Asking because I will soon be a father. First of all, let's just acknowledge that it's very weird for me to be asked a question that has anything with the flavor of parenting advice, because, well, look at me. I'm 27. What the hell do I know? But one interesting place to look here, if we're asking what would make a child entering the world love math or things related to math, is Richard Feynman's dad. So evidently Feynman's dad was incredibly interested in making him into like a physicist or an engineer or something like that.
And even when young Richard was just a newborn baby, he would paint these interesting patterns that were meant to instill young Richard's mind with a sense of mathematical patterns through the raw exposure. And as he grew up a little bit later, he would give these very deep, thoughtful answers to questions about how the world worked, or why, when you tug at a wagon that has a ball in it, the ball doesn't move. Feynman would tell all of these stories; they were some of his favorites to tell. You know, if I look back at my own childhood, there's definitely a lot of influence from a very attentive, thoughtful father in that respect. I think I said this in a previous Q&A, but I'll just say it again here. I remember these games where he would stack these sugar cubes in interesting geometric arrangements, and I would be asked to count how many there are. And you couldn't just straight up count, because some of them were hidden in certain ways, right? So you were effectively cubing numbers or something like that. And of course, if I got it right, then I would get one of the sugar cubes as this Pavlovian reward. And if you look at the success of someone like Feynman when it comes to problem solving, you have to wonder about that kind of influence. I definitely don't view myself as a great problem solver, but I do love the subject. I have this deep-seated affection for it that probably is not unassociated with the kind of games my dad would play with me. It's not so much that I think, oh, the painting of those patterns really did instill young Richard's mind with a sense of math, or that the answers that his father gave to certain questions were the ones that made him deep and thoughtful later. I think it's just that when you're a parent and you're showing a lot of attention towards something, you're signaling to the kid that that something is important and it's worth thinking about. So all of the signaling that probably came from young Richard Feynman's dad showing this deep attentiveness to questions about the physical world, about mathematical patterns, probably made it such that young Richard would spend a lot of his own time thinking about those things, because kids just pattern match off of their parents. Another thing I might say is try to draw a distinction between school math and math, right? They can just be very separate things, and kind of separating the brand of those two can't hurt. What's something you think could have been discovered long before it was actually discovered? Well, just last week, all of the internet has been very abuzz about a certain result that came from three physicists studying neutrinos, a result about eigenvectors and eigenvalues, which are a crazy fundamental thing. You kind of wouldn't imagine that there are any new things to be discovered about computing eigenvectors or computing eigenvalues, because it's kind of old and it's very, I don't know, routine at this point. But they found what they thought was a new result, and they sent it to Terry Tao, who actually responded, and his initial thought was, this can't be true. It would be in every textbook if it was true. And then within two hours, I think he had found three independent proofs of the thing. And yeah, it's just a different way to compute eigenvectors that was discovered in 2019, even though it totally could have been discovered hundreds of years ago. Can we fix math on Wikipedia? Really serious here.
I constantly go there after your vids for a bit of a deeper dive and never learn anything more, compared to almost any other topic in the natural sciences or physics, where at least I get an outline of where to go next. It's such a shame. So this question kind of reminds me of that classic trope where you've got a girl and she's dating a boy, and you know, he's kind of a bad boy. He does a couple things that are wrong, and that adds to his allure. He's kind of sexy in that way. But she's thinking, oh, I can change him, right? He's flawed, but I can fix him. And everyone in her life is looking and saying, oh, honey, he's not going to change. People don't change. You have to find someone else. In the same way, if I see someone trying to learn math from Wikipedia, and not just use it as a reference, it's like, it's not going to change. Don't try to change it. You've got to find a different source. There are lots of really great blogs that you can go to, and Math Stack Exchange and Quora are great in terms of people trying to explain things in approachable ways. And don't forget about just good old textbooks. In math, more so than in a lot of fields, I think there's a strong contrast between what makes good reference material and what makes good pedagogical material. And as a general rule of thumb (it's not universal, but as a general rule of thumb), things that are single-authored, I believe, are better pedagogically. And I suspect the reason for this is that when you want to explain a topic, often the best route to making it understandable is to start off by being a little bit wrong. You explain kind of a simplified version of something that isn't entirely accurate, but it's easier to get a foothold in. Then once you have that foothold, you slowly carve away what's wrong about it until you end up at what is entirely accurate, but more complicated. But you've taken this path through incorrectness. Now, when you have multiple authors, I think the tendency is that you sort of wipe away and edit away the things that are incorrect. That's the stable equilibrium that you reach. So what you're left with is a source that's entirely factually correct, but it's harder to get a foothold into for that reason. And Wikipedia just represents the extreme of this. But I also think you see it if you look at a textbook that has three or four authors. Again, there are exceptions, but I like that as a rule of thumb. Real quick, I want to tell you about two new items that have been added to the 3blue1brown store for any math enthusiasts. The first one, in the spirit of upping the level of formality on things, is this knot theory-themed tie. So as you can see, the pattern includes a lot of different simple mathematical knots. So almost any knot that you are likely to tie with your tie anyway is going to be topologically equivalent to one of these, unless you just go totally crazy. In sourcing this, we wanted to make sure that it was, you know, a legitimately high quality tie, and I'm really happy with what we found. Then as a supplement to the ties, I also got these vector field socks produced. And what they represent is the phase space of a pendulum, which some of you may know is most naturally represented on a cylinder, hence printing it on a sock. So the whole item is just sort of a subtle nod to that fact. I believe DFTBA is going to do some kind of sale on Black Friday and Cyber Monday, so if you're watching this before then, definitely check it out. And with that, I will see you all in the next, probably much more typical, video.
Pi hiding in prime regularities
This is a video I've been excited to make for a while now. The story here braids together prime numbers, complex numbers, and pi in a very pleasing trio. Quite often in modern math, especially that which flirts with the Riemann zeta function, these three seemingly unrelated objects show up in unison, and I want to give you a little peek at one instance where this happens, one of the few that doesn't require too heavy a technical background. That's not to say that this is easy. In fact, this is probably one of the most intricate videos I've ever done, but the culmination is worth it. What we'll end up with is a formula for pi, a certain alternating infinite sum. This formula is actually written on the mug that I'm drinking coffee from right now as I write this, and a fun but almost certainly apocryphal story is that the beauty of this formula is what inspired Leibniz to quit being a lawyer and instead pursue math. Now, whenever you see pi show up in math, there's always going to be a circle hiding somewhere, sometimes very sneakily. So the goal here is not just to discover this sum, but to really understand the circle hiding behind it. You see, there is another way that you can prove the same result that you and I are going to spend some meaningful time building up to, but with just a few lines of calculus. And this is one of those proofs that leaves you thinking, okay, I suppose that's true, and not really getting a sense for why, or for where the hidden circle is. On the path that you and I will take, though, what you'll see is that the fundamental truth behind this sum and the circle that it hides is a certain regularity in the way that prime numbers behave when you put them inside the complex numbers. To start the story, imagine yourself with nothing more than a pencil, some paper, and a desire to find a formula for computing pi. There are countless ways that you could approach this, but as a broad outline for the plotline here, you'll start by asking how many lattice points of the plane sit inside a big circle. And then that question is going to lead to asking about how to express numbers as the sum of two squares, which in turn is going to lead us to factoring integers inside the complex plane. From there, we'll bring in this special function named chi, which is going to give us a formula for pi that at first seems to involve a crazy complicated pattern dependent on the distribution of primes. But a slight shift in perspective is going to simplify it dramatically and expose the ultimate gold nugget. It's a lot, but good math takes time, and we'll take it step by step. When I say lattice point, what I mean is a point (a, b) on the plane where a and b are both integers, a spot where the grid lines here cross. If you draw a circle centered at the origin, let's say with radius 10, how many lattice points would you guess are inside that circle? Well, there's one lattice point for each unit of area, so the answer should be approximately equal to the area of the circle, pi r squared, which in this case is pi times 10 squared. And if it was a really big circle, like radius 1 million, you would expect this to be a much more accurate estimate, in the sense that the percent error between the estimate pi r squared and the actual count of lattice points should get smaller. What we're going to try to do is find a second way to answer the same question, how many lattice points are inside the circle?
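As a sanity check on that estimate, here's a minimal Python sketch (my own illustration, not something from the video; the function name lattice_points_inside is just a label I chose) that brute-force counts the lattice points inside a circle of radius r and compares the count against pi r squared.

```python
import math

def lattice_points_inside(r):
    """Brute-force count of integer points (a, b) with a^2 + b^2 <= r^2."""
    return sum(1
               for a in range(-r, r + 1)
               for b in range(-r, r + 1)
               if a * a + b * b <= r * r)

for r in [10, 100, 1000]:
    count = lattice_points_inside(r)
    # the ratio count / r^2 should creep toward pi as r grows
    print(r, count, count / r**2, math.pi)
```

By r = 1000 the ratio already agrees with pi to roughly four decimal places, matching the claim that the percent error shrinks as the circle grows.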
Because that can lead to another way to express the area of a circle, and hence another way to express pi. And so you play, and you wonder, and maybe, especially if you've just watched a certain calculus video, you might try looking through every possible ring that a lattice point could sit on. Now if you think about it, for each one of these lattice points (a, b), its distance from the origin is the square root of a squared plus b squared. And since a and b are both integers, a squared plus b squared is also some integer, so you only have to consider rings whose radii are the square roots of some whole number. A radius of zero just gives you that single origin point. If you look at the radius one, that hits four different lattice points. Radius square root of two, well, that also hits four lattice points. Radius square root of three doesn't actually hit anything. Square root of four again hits four lattice points. A radius square root of five actually hits eight lattice points. And what we want is a systematic way to count how many lattice points are on a given one of these rings, a given distance from the origin, and then to tally them all up. And if you pause and try this for a moment, what you'll find is that the pattern seems really chaotic, just very hard to find order in. And that's a good sign that some very interesting math is about to come into play. In fact, as you'll see, this pattern is rooted in the distribution of primes. As an example, let's look at the ring with radius square root of twenty-five. It hits the point (5, 0), since 5 squared plus 0 squared is twenty-five. It also hits (4, 3), since 4 squared plus 3 squared gives twenty-five. And likewise, it hits (3, 4), and also (0, 5). And what's really happening here is that you're counting how many pairs of integers (a, b) have the property that a squared plus b squared equals twenty-five. And looking at the circle, it looks like there's a total of twelve of them. As another example, take a look at the ring with radius square root of eleven. It doesn't hit any lattice points, and that corresponds to the fact that you cannot find two integers whose squares add up to eleven. Try it. Now many times in math, when you see a question that has to do with the 2D plane, it can be surprisingly fruitful to just ask what it looks like when you think of this plane as the set of all complex numbers. So instead of thinking of this lattice point here as the pair of integer coordinates (3, 4), instead think of it as the single complex number 3 plus 4i. That way, another way to think about the sum of the squares of its coordinates, 3 squared plus 4 squared, is to multiply this number by 3 minus 4i. This is called its complex conjugate. It's what you get by reflecting over the real axis, replacing i with negative i. And this might seem like a strange step if you don't have much of a history with complex numbers, but describing this distance as a product can be unexpectedly useful. It turns our question into a factoring problem, which is ultimately why patterns among prime numbers are going to come into play. Algebraically, this relation is straightforward enough to verify. You get a 3 squared, and then the 3 times negative 4i cancels out with the 4i times 3, and then you have negative 4i squared, which, because i squared is negative one, becomes plus 4 squared. This is also quite nice to see geometrically.
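If you want to play with these ring counts yourself, here's a short Python sketch along the same lines; again this is my own illustration, with a made-up function name, not code from the video. It counts the pairs (a, b) with a squared plus b squared equal to n, and also shows the conjugate product landing on 25 for z = 3 + 4i.

```python
def points_on_ring(n):
    """Count lattice points (a, b) with a^2 + b^2 == n, i.e. Gaussian
    integers z = a + bi whose product with their conjugate equals n."""
    limit = int(n ** 0.5) + 1
    return sum(1
               for a in range(-limit, limit + 1)
               for b in range(-limit, limit + 1)
               if a * a + b * b == n)

print(points_on_ring(25))   # 12, e.g. (5,0), (4,3), (3,4), (0,5), ...
print(points_on_ring(11))   # 0: eleven is not a sum of two squares

# the conjugate product picture for 3 + 4i:
z = complex(3, 4)
print(z * z.conjugate())    # (25+0j), which is 3^2 + 4^2
```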
And if you're a little rusty with how complex multiplication works, I do have another video that goes more into detail about why complex multiplication looks the way that it does. The way you might think about a case like this is that the number 3 plus 4i has a magnitude of 5, and some angle off of the horizontal. And what it means to multiply it by 3 minus 4i is to rotate by that same angle in the opposite direction, putting it on the positive real axis, and then to stretch out by a factor of 5, which in this case lands you on the output 25, the square of the magnitude. The collection of all of these lattice points, a plus bi, where a and b are integers, has a special name. They're called the Gaussian integers, named after Martin Sheen. Geometrically, you'll still be asking the same question. How many of these lattice points, Gaussian integers, are a given distance away from the origin, like square root of 25? But we'll be phrasing it in a slightly more algebraic way. How many Gaussian integers have the property that multiplying by their complex conjugate gives you 25? This might seem needlessly complex, but it's the key to understanding the seemingly random pattern for how many lattice points are a given distance away from the origin. To see why, we first need to understand how numbers factor inside the Gaussian integers. As a refresher, among ordinary integers, every number can be factored as some unique collection of prime numbers. For example, 2,250 can be factored as 2 times 3 squared times 5 cubed. And there is no other collection of prime numbers that also multiplies to make 2,250. Unless you let negative numbers into the picture, in which case you could just make some of the primes in this factorization negative. So really, within the integers, factorization is not perfectly unique. It's almost unique, with the exception that you can get a different looking product by multiplying some of the factors by negative 1. The reason I bring that up is that factoring works very similarly inside the Gaussian integers. Some numbers, like 5, can be factored into smaller Gaussian integers, which in this case is 2 plus i times 2 minus i. This Gaussian integer here, 2 plus i, cannot be factored into anything smaller, so we call it a Gaussian prime. Again, this factorization is almost unique. But this time, not only can you multiply each one of those factors by negative 1 to get a factorization that looks different, you can also be extra sneaky and multiply one of these factors by i and then the other one by negative i. This will give you a different way to factor 5 into two distinct Gaussian primes. But other than the things that you can get by multiplying some of these factors by negative 1 or i or negative i, factorization within the Gaussian integers is unique. And if you can figure out how ordinary prime numbers factor inside the Gaussian integers, that will be enough to tell us how any other natural number factors inside these Gaussian integers. And so here, we pull in a crucial and pretty surprising fact. Prime numbers that are 1 above a multiple of 4, like 5 or 13 or 17, can always be factored into exactly two distinct Gaussian primes. This corresponds with the fact that rings with a radius equal to the square root of one of these prime numbers always hit some lattice points. In fact, they always hit exactly 8 lattice points, as you'll see in just a moment.
On the other hand, prime numbers that are 3 above a multiple of 4, like 3 or 7 or 11, cannot be factored further inside the Gaussian integers. Not only are they primes in the normal numbers, but they're also Gaussian primes, unsplittable even when i is in the picture. And this corresponds with the fact that a ring whose radius is the square root of one of those primes will never hit any lattice points. And this pattern right here is the regularity within prime numbers that we're going to ultimately explain. And in a later video, I might explain why on earth this is true, why a prime number's remainder when divided by 4 has anything to do with whether or not it factors inside the Gaussian integers, or, said differently, whether or not it can be expressed as the sum of two squares. But here and now, we'll just have to take it as a given. The prime number 2, by the way, is a little special, because it does factor. You can write it as 1 plus i times 1 minus i. But these two Gaussian primes are a 90 degree rotation away from each other, so you can multiply one of them by i to get the other. And that fact is going to make us want to treat the prime number 2 a little bit differently for where all of this stuff is going, so just keep that in the back of your mind. Remember, our goal here is to count how many lattice points are a given distance away from the origin, and doing this systematically for all distances square root of n can lead us to a formula for pi. And counting the number of lattice points with a given magnitude, like square root of 25, is the same as asking how many Gaussian integers have the special property that multiplying them by their complex conjugate gives you 25. So here's the recipe for finding all Gaussian integers that have this property. Step 1: factor 25, which inside the ordinary integers looks like 5 squared, but since 5 factors even further, as 2 plus i times 2 minus i, 25 breaks down as these 4 Gaussian primes. Step 2: organize these into two different columns, with conjugate pairs sitting right next to each other. Then once you do that, multiply what's in each column, and you'll come out with two different Gaussian integers on the bottom. And because everything on the right is a conjugate of everything on the left, what comes out is going to be a complex conjugate pair, which multiplies to 25. Picking an arbitrary standard, let's say that the product from that left column is the output of our recipe. Now notice, there are three choices for how you can divvy up the primes that can affect this output. Pictured right here, both copies of 2 plus i are in the left column, and that gives us the product 3 plus 4i. You could also have chosen to have only one copy of 2 plus i in this left column, in which case the product would be 5. Or you could have both copies of 2 plus i in that right column, in which case the output of our recipe would have been 3 minus 4i. And those three possible outputs are all different lattice points on a circle with radius square root of 25. But why does this recipe not yet capture all 12 of the lattice points? Remember how I mentioned that a factorization into Gaussian primes can look different if you multiply some of them by i or negative 1 or negative i? In this case, you could write the factorization of 25 differently, maybe splitting up one of those 5s as negative 1 plus 2i times negative 1 minus 2i. And if you do that, running through the same recipe, it can affect the result. You'll get a different product out of that left column.
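This split-versus-unsplittable pattern is easy to spot-check numerically. The sketch below is my own, not from the video: for each small prime it tests whether the prime can be written as a sum of two squares, which for a prime is the same as splitting into a conjugate pair of Gaussian primes.

```python
def is_prime(n):
    """Trial-division primality check, fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_sum_of_two_squares(p):
    """Does p equal a^2 + b^2?  For a prime p, this is the same as asking
    whether p splits as (a + bi)(a - bi) over the Gaussian integers."""
    for a in range(int(p ** 0.5) + 1):
        b = int((p - a * a) ** 0.5)
        if a * a + b * b == p:
            return True
    return False

for p in range(3, 60):
    if is_prime(p):
        print(p, p % 4, is_sum_of_two_squares(p))
# every prime 1 above a multiple of 4 splits;
# every prime 3 above a multiple of 4 does not
```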
But the only effect that this is going to have is to multiply that total output by i, or negative 1, or negative i. So as a final step for our recipe, let's say that you have to make one of four choices. Take that product from the left column and choose to multiply it by 1, i, negative 1, or negative i, corresponding to rotations that are some multiple of 90 degrees. That will account for all 12 different ways of constructing a Gaussian integer whose product with its own conjugate is 25. This process is a little complicated, so I think the best way to get a feel for it is to just try it out with more examples. Say instead we were looking at 125, which is 5 cubed. In that case, we would have four different choices for how to divvy up the prime conjugate pairs into these two columns. You can either have zero copies of 2 plus i in the left column, one copy in there, two copies in there, or all three of them in that left column. Those four choices, multiplied by the final four choices of multiplying the product from the left column by 1, or by i, or negative 1, or negative i, would suggest that there are a total of 16 lattice points a distance square root of 125 away from the origin. And indeed, if you draw that circle out and count, what you'll find is that it hits exactly 16 lattice points. But what if you introduce a factor like 3, which doesn't break down as the product of two conjugate Gaussian primes? Well, that really mucks up the whole system. When you're divvying up the primes between the two columns, there's no way that you can split up this 3. No matter where you put it, it leaves the columns imbalanced. And what that means is that when you take the product of all of the numbers in each column, you're not going to end up with a conjugate pair. So for a number like 3 times 5 cubed, which is 375, there's actually no lattice point that you'll hit, no Gaussian integer whose product with its own conjugate gives you 375. However, if you introduce a second factor of 3, then you have an option. You can throw one 3 in the left column and the other 3 in the right column. Since 3 is its own complex conjugate, this leaves things balanced, in the sense that the product of the left and right columns will indeed be a complex conjugate pair. But it doesn't add any new options. There's still going to be a total of 4 choices for how to divvy up those factors of 5, multiplied by the final 4 choices of multiplying by 1, i, negative 1, or negative i. So just like the square root of 125 circle, this guy is also going to end up hitting exactly 16 lattice points. Let's just sum up where we are. When you're counting up how many lattice points lie on a circle with radius square root of n, the first step is to factor n. And for prime numbers like 5 or 13 or 17, which factor further into a complex conjugate pair of Gaussian primes, the number of choices they give you will always be one more than the exponent that shows up with that factor. On the other hand, for prime factors like 3 or 7 or 11, which are already Gaussian primes and cannot be split: if they show up with an even power, you have one and only one choice for what to do with them, but if it's an odd exponent, you're screwed, and you just have zero choices. And always, no matter what, you have those final 4 choices at the end. By the way, I do think that this process right here is the most complicated part of the video.
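That summary packages into a few lines of Python. The sketch below is my own translation of the recipe, with hypothetical names like lattice_count; note it also treats powers of 2 as contributing exactly one choice, which is only justified in the next part, so take that piece on faith for a moment.

```python
def prime_factorization(n):
    """Map each prime factor of n to its exponent, via trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def lattice_count(n):
    """Lattice points on the ring of radius sqrt(n), per the recipe:
    4 final choices, times one factor per prime in the factorization."""
    total = 4
    for p, e in prime_factorization(n).items():
        if p % 4 == 1:
            total *= e + 1        # e + 1 ways to divvy up the conjugate pair
        elif p % 4 == 3 and e % 2 == 1:
            return 0              # an odd power of an unsplittable prime
        # p = 2, or an even power of a 3-mod-4 prime: exactly one choice
    return total

print(lattice_count(25), lattice_count(125), lattice_count(375))  # 12 16 0
```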
It took me a couple times to think through that, yes, this is a valid way to count lattice points, so don't be shy if you want to pause and scribble things down to get a feel for it. The one last thing to mention about this recipe is how factors of 2 affect the count. If your number is even, then that factor of 2 breaks down as 1 plus i times 1 minus i, so you can divvy up that complex conjugate pair between the two columns. And at first, it might look like this doubles your options, depending on how you choose to place those two Gaussian primes between the columns. However, since multiplying one of these guys by i gives you the other one, when you swap them between the columns, the effect that that has on the output from the left column is to just multiply it by i or by negative i. So that's actually redundant with the final step, where we take the product of this left column and choose to multiply it either by 1, i, negative 1, or negative i. What this means is that a factor of 2, or any power of 2, doesn't actually change the count at all. It doesn't hurt, but it doesn't help. For example, a circle with radius square root of 5 hits 8 lattice points. And if you grow that radius to square root of 10, then you also hit 8 lattice points. And square root of 20 also hits 8 lattice points, as does square root of 40. Factors of 2 just don't make a difference. Now what's about to happen is number theory at its best. We have this complicated recipe telling us how many lattice points sit on a circle with radius square root of n, and it depends on the prime factorization of n. To turn this into something simpler, something we can actually deal with, we're going to exploit the regularity of primes that those which are 1 above a multiple of 4 split into distinct Gaussian prime factors, while those that are 3 above a multiple of 4 cannot be split. To do this, let's introduce a simple function, one which I'll label with the Greek letter chi. For inputs that are 1 above a multiple of 4, the output of chi is just 1. If it takes in an input 3 above a multiple of 4, then the output of chi is negative 1. And then on all even numbers, it gives 0. So if you evaluate chi on the natural numbers, it gives this very nice cyclic pattern, 1, 0, negative 1, 0, and then repeat indefinitely. And this cyclic function chi has a very special property. It's what's called a multiplicative function. If you evaluate it on two different numbers and multiply the results, like chi of 3 times chi of 5, it's the same as if you evaluate chi on the product of those two numbers, in this case chi of 15. Likewise chi of 5 times chi of 5 is equal to chi of 25. And no matter what two natural numbers you put in there, this property will hold. Go ahead, try it if you want. So for our central question of counting lattice points in this way that involves factoring a number, what I'm going to do is write down the number of choices we have but using chi in what at first seems like a much more complicated way, but this has the benefit of treating all prime factors equally. For each prime power, like 5 cubed, what you write down is chi of 1 plus chi of 5 plus chi of 5 squared plus chi of 5 cubed. You add up the value of chi on all the powers of this prime up to the one that shows up inside the factorization. 
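His "go ahead, try it" invites a quick computer check, so here is one, a small sketch of my own with names I chose. It defines chi and verifies the multiplicative property exhaustively over a small range.

```python
import itertools

def chi(n):
    """The cyclic pattern 1, 0, -1, 0, ...: chi(n) is 1 when n is 1 above
    a multiple of 4, -1 when n is 3 above, and 0 on all even numbers."""
    return [0, 1, 0, -1][n % 4]

print([chi(n) for n in range(1, 9)])      # [1, 0, -1, 0, 1, 0, -1, 0]

# multiplicativity: chi(a) * chi(b) == chi(a * b) for every pair a, b
assert all(chi(a) * chi(b) == chi(a * b)
           for a, b in itertools.product(range(1, 200), repeat=2))
print("chi is multiplicative on 1..199")
```

With chi in hand, the prime power sums just described are one-liners, and the next part evaluates them for the example factors.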
In this case, since 5 is 1 above a multiple of 4, all of these are just 1, so this sum comes out to be 4, which reflects the fact that a factor of 5 cubed gives you 4 options for how to divvy up the two Gaussian prime factors between the columns. For a factor like 3 to the fourth, what you write down looks totally similar: chi of 1 plus chi of 3, on and on up to chi of 3 to the fourth. But in this case, since chi of 3 is negative 1, this sum oscillates. It goes 1 minus 1 plus 1 minus 1 plus 1. And if it's an even power, like 4 in this case, the total sum comes out to be 1, which encapsulates the fact that there is only one choice for what to do with those unsplittable 3s. But if it's an odd power, that sum comes out to 0, indicating that you're screwed. You can't place that unsplittable 3. When you do this for a power of 2, what it looks like is 1 plus 0 plus 0 plus 0, on and on, since chi is always 0 on even numbers. And this reflects the fact that a factor of 2 doesn't help, but it doesn't hurt. You always have just one option for what to do with it. And as always, we keep a 4 in front to indicate that final choice of multiplying by 1, i, negative 1, or negative i. We're getting close to the culmination now. Things are starting to look organized, so take a moment, pause and ponder, make sure everything feels good up to this point. Take the number 45 as an example. This guy factors as 3 squared times 5, so the expression for the total number of lattice points is 4 times (chi of 1 plus chi of 3 plus chi of 3 squared) times (chi of 1 plus chi of 5). You can think about this as 4, times the 1 choice for what to do with the 3s, times the 2 choices for how to divvy up the Gaussian prime factors of 5. It might seem like expanding out this sum is really complicated, because it involves all possible combinations of these prime factors. And it kind of is. However, because chi is multiplicative, each one of those combinations corresponds to a divisor of 45. I mean, in this case, what we get is 4 times (chi of 1 plus chi of 3 plus chi of 5 plus chi of 9 plus chi of 15 plus chi of 45). And what you'll notice is that this covers every number that divides evenly into 45, once and only once. And it works like this for any number. There's nothing special about 45. And that, to me, is pretty interesting, and I think wholly unexpected. This question of counting the number of lattice points a distance square root of n away from the origin involves adding up the value of this relatively simple function over all the divisors of n. To bring it all together, remember why we're doing this. The total number of lattice points inside a big circle with radius r should be about pi times r squared. But on the other hand, we can count those same lattice points by looking through all of the numbers n between 0 and r squared, and counting how many lattice points are a distance square root of n from the origin. Let's go ahead and just ignore that origin dot with radius 0. It doesn't really follow the pattern of the rest, and one little dot isn't going to make a difference as we let r grow towards infinity. Now from all of this Gaussian integer and factoring and chi function stuff that we've been doing, the answer for each n looks like adding up the value of chi on every divisor of n, and then multiplying by 4. And for now, let's just take that 4 and put it in the corner and remember to bring it back later. At first, adding up the values for each one of these rows seems super random, right?
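Here's a quick numerical check of that divisor-sum identity, again a sketch of my own with made-up names: for each n it compares 4 times the sum of chi over the divisors of n against a direct brute-force count of lattice points on the ring of radius square root of n.

```python
def chi(n):
    return [0, 1, 0, -1][n % 4]

def count_via_divisors(n):
    """4 times chi summed over every divisor of n."""
    return 4 * sum(chi(d) for d in range(1, n + 1) if n % d == 0)

def points_on_ring(n):
    """Direct count of (a, b) with a^2 + b^2 == n, for comparison."""
    limit = int(n ** 0.5) + 1
    return sum(1 for a in range(-limit, limit + 1)
                 for b in range(-limit, limit + 1)
                 if a * a + b * b == n)

for n in [25, 45, 125, 375, 1125]:
    print(n, count_via_divisors(n), points_on_ring(n))  # the two agree
```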
I mean, numbers with a lot of factors have a lot of divisors, whereas prime numbers will always only have two divisors. So it initially seems like you would have to have perfect knowledge of the distribution of primes to get anything useful out of this. But if instead you organize these into columns, the puzzle starts to fit together. How many numbers between 1 and r squared have 1 as a divisor? Well, all of them. So our sum should include r squared times chi of 1. How many of them have 2 as a divisor? Well, about half of them. So that would account for about r squared over 2, times chi of 2. About a third of these rows have 3 as a divisor, so we can put in r squared divided by 3, times chi of 3. And keep in mind we're being approximate, since r squared might not be perfectly divisible by 2 or by 3, but as r grows towards infinity, this approximation will get better. And when you keep going like this, you get a pretty organized expression for the total number of lattice points. And if you factor out that r squared and then bring back the 4 that needs to be multiplied in, what it means is that the total number of lattice points inside this big circle is approximately 4 times r squared times this sum. And because chi is 0 on every even number, and it oscillates between 1 and negative 1 for odd numbers, this sum looks like 1 minus one third plus one fifth minus one seventh, and so on. And this is exactly what we wanted. What we have here is an alternate expression for the total number of lattice points inside a big circle, which we know should be around pi times r squared. And the bigger r is, the more accurate both of these estimates are, so the percent error between the left hand side and the right hand side can get arbitrarily small. So divide out by that r squared, and this gives us an infinite sum that should converge to pi. And keep in mind, I just think this is really cool, the fact that this sum came out to be so simple, requiring relatively little information to describe, ultimately stems from the regular pattern in how prime numbers factor inside the Gaussian integers. If you're curious, there are two main branches of number theory, algebraic number theory and analytic number theory. Very loosely speaking, the former deals with new number systems, things like these Gaussian integers that you and I looked at, and a lot more. And the latter deals with things like the Riemann zeta function, or its cousins, called L-functions, which involve multiplicative functions like this central character chi from our story. And the path that we just walked is a little glimpse at where those two fields intersect. And both of these are pretty heavy duty fields, with a lot of active research and unsolved problems. So if all this feels like something that takes time to mentally digest, like there are more patterns to be uncovered and understood, it's because it is, and there are.
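And as a closing check, the partial sums of that alternating series really do crawl toward pi, if rather slowly. A tiny sketch of my own:

```python
import math

partial = 0.0
for k in range(100000):
    partial += (-1) ** k / (2 * k + 1)   # 1 - 1/3 + 1/5 - 1/7 + ...

print(4 * partial)   # 3.14158..., closing in on pi
print(math.pi)       # 3.141592653589793
```

The convergence is famously slow, roughly one extra correct digit per tenfold increase in the number of terms, so this is a beautiful formula but a poor way to actually compute pi.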